Measuring the $$ Value of Customer Experience

 Marketing IS (Can Be?) an Experience

Early on I discovered something from the work of leaders in data-based marketing business models: they were always very concerned with post-campaign execution – not only from marketing, but also through product, distribution, and service.  I thought this strange, until I realized they knew something I did not: when you have customer data, you can actually identify and fix negative customer value impacts caused by poor experience.

This means you can directly quantify the value of customer experience, budget for fixing it, and create a financial model that proves out the bottom-line, hard-money profits (or losses) that result from paying attention to customer experience.

And critically, this idea becomes much more important as you move from surface success metrics like conversion and sales down into deep success metrics like company profits.  Frequently, the profit or loss from “marketing” has less to do with the campaigns themselves and more to do with the positive or negative experiences those campaigns cause.

Examples

You might think taking the time to provide special treatment to brand new customers would always encourage engagement and repeat purchase.  You’d be wrong.  Sometimes it works, sometimes it does not, depending on the customer’s context.  Does it surprise you to find out customers often do not want to be “delighted”?

Just outside of “campaigns”, simple changes to product packaging (adding a little copy, for example) can create huge increases in repeat customer purchase.  Closer to operations, applying a little marketing know-how to payment processing or to the front end of the service center can generate significant lift in the profit of marketing campaigns.

And you can bet there are profitable (or not) experience issues to be uncovered in omni-channel, once success is measured at the customer level rather than the channel level.

How to Show Them the Money

If you want to go beyond surface / interface metrics and get down to the hard money benefits hidden in customer experience work, you have to set your measurement approach up correctly.  This takes some effort, but the unusually large benefits proven out at the end of the work more than pay for that effort.  Here are some core measurement ideas you should consider when getting ready for your next project:

1. Controlled Testing – when possible, create a control group of customers who will not be exposed to the new experience, and compare their behavior over time to those who are exposed to the new experience.  This is the most scientifically precise way to measure the true value of customer experience changes.  Not familiar with this idea?  See here.

The behavioral data will provide the hard money value of the change in experience.  You can guess at why the behavior changed, but that’s a rookie mistake that often goes bad (see my own example).  Finding tangible reasons “why” is strongly suggested, so survey the test group before and after the experience change to discover the elements of the new experience driving the increased value.  If you follow the model below, this info can typically be used over and over in future experience optimization projects.
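To make the hard money arithmetic concrete, here’s a minimal sketch of the test-versus-control comparison, assuming you have a per-customer extract with a test / control flag and revenue before and after the change (the file and column names are illustrative, not from any particular system):

```python
import pandas as pd

# Hypothetical per-customer extract: customer_id, group ("test" / "control"),
# revenue_before, revenue_after (equal-length measurement windows)
df = pd.read_csv("experience_test_customers.csv")

# Average change in revenue per customer, by group
summary = (
    df.assign(revenue_change=df["revenue_after"] - df["revenue_before"])
      .groupby("group")["revenue_change"]
      .agg(["mean", "count"])
)

# Hard money value = how much more the exposed customers changed than the control group
lift_per_customer = summary.loc["test", "mean"] - summary.loc["control", "mean"]
total_value = lift_per_customer * summary.loc["test", "count"]

print(summary)
print(f"Incremental value per customer: ${lift_per_customer:,.2f}")
print(f"Total incremental value across the test group: ${total_value:,.2f}")
```

The same frame can then be cut by customer segment to see where the value is concentrated.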

Sometimes the specific business model or type of experience change cannot support the creation of control groups.  That’s OK, we can use surveys to look for response to change – but we want to use these surveys in a specific and highly accountable, bullet-proof way.

2.  Data with Surveys – in some cases, behavioral data may not be available either before or after the experience change.  Or, there’s no ability to track ongoing behavior at all because the experience incident is essentially a 1x event, e.g. hospital visit.  In these cases, surveys can be used to help fill in the blanks.

Please note: Surveying random visitors or customers with no idea “who” they are is not a very good idea, from the measurement (and management) perspective.  To reduce the chance of choosing the wrong path, always tie survey data to the customer record or behavior of the specific customer taking the survey.  If you are not able to do this, it’s unlikely you will be able to provide reliable feedback on hard money metrics.  The customer’s context is extremely important to understanding the results – customer experience is an area where the opinion of the “average customer” often hides all the most important ideas!
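As a sketch of what “tying survey data to the customer record” can look like in practice (assuming hypothetical extracts keyed on a shared customer_id), the join itself is simple:

```python
import pandas as pd

# Hypothetical extracts, both keyed on customer_id
surveys = pd.read_csv("survey_responses.csv")     # customer_id, satisfaction, would_recommend, ...
customers = pd.read_csv("customer_summary.csv")   # customer_id, segment, revenue_12m, tenure_months, ...

# Tie every response to the specific customer's record so answers can be read
# in the context of actual behavior, not as one "average customer" blob
joined = surveys.merge(customers, on="customer_id", how="inner")

# Example read: does satisfaction track with value, by behavioral segment?
print(joined.groupby("segment")[["satisfaction", "revenue_12m"]].mean())
```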

3. Survey Construction – please make sure to review the survey questions for bias – are you asking the questions in a way that might influence outcome?  Perhaps you could subcontract to a survey expert for review if you lack resources in this area.

4. Survey Testing – just because you now have unbiased questions does not mean the questions make sense to the target market.  Please trial / discuss the survey with your target market – do they really understand what the questions are asking of them?

5. Target Selection – administer your survey to specific customers with known profiles and behaviors.  Examples: light, medium, and heavy activity or sales; heaviest-usage product category; source of business / NAICS code.  After the experience change or test is over and the time comes for review, you will be so happy you did this up front, trust me.
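Here’s one way to set that target selection up before the survey goes out, assuming a hypothetical customer summary with trailing 12-month sales; the band cut-offs and sample sizes are purely illustrative:

```python
import pandas as pd

# Hypothetical customer summary: customer_id, sales_12m, category, ...
customers = pd.read_csv("customer_summary.csv")

# Band customers into light / medium / heavy activity before the survey goes out
customers["activity_band"] = pd.qcut(
    customers["sales_12m"].rank(method="first"),
    q=3, labels=["light", "medium", "heavy"]
)

# Draw a fixed sample per band (200 is illustrative) so every profile is represented
sample = (
    customers.groupby("activity_band", observed=True, group_keys=False)
             .apply(lambda band: band.sample(n=min(200, len(band)), random_state=7))
)
print(sample["activity_band"].value_counts())
```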

6. Tracking Over Time – when you look for evidence of benefit, follow the behavior of your different customer segments for a significant time after the change is implemented.  Why?  Changes in experience often produce effects that are minor in the short term, but tend to cascade and produce super-large benefits over longer periods of time.

The definition of “significant time” above depends on the business, but in general it means at least 3 – 5 contact cycles, which might be a few months, or just over a year if you need to eliminate any seasonality effects.  In 1x-event or very long-cycle business models (auto sales, for example) you might have to use before and/or after surveys to generate comparison results.

Tracking is the area where use of control groups makes experience testing quite a bit easier because you will see divergence in the behavior of test and control groups as the tracking period progresses.  When this divergence remains constant or flattens, the experience change effect is likely stable or has ended, and active result monitoring can be put on hold.
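Here’s a minimal sketch of that divergence tracking, assuming a hypothetical monthly extract of revenue per customer by group; the “flattening” threshold is arbitrary and should be tuned to your business:

```python
import pandas as pd

# Hypothetical monthly tracking extract: month, group ("test" / "control"), revenue_per_customer
tracking = pd.read_csv("monthly_tracking.csv")

# Divergence = test minus control, for each tracking period
pivot = tracking.pivot_table(index="month", columns="group", values="revenue_per_customer")
pivot["divergence"] = pivot["test"] - pivot["control"]
pivot["divergence_change"] = pivot["divergence"].diff()

print(pivot)

# If the gap has stopped widening over the last few periods (threshold is arbitrary),
# the experience effect is likely stable and active monitoring can be paused
threshold = 0.01 * pivot["control"].mean()
stable = pivot["divergence_change"].tail(3).abs().lt(threshold).all()
print("Experience effect appears stable:", bool(stable))
```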

Whenever possible, I highly suggest using the control group approach.  Though the set-up is a bit trickier, the end result is more reliable and much less likely to be disputed.

Cross those Silos 

I’m totally cool with taking responsibility for my bad marketing ideas and related planning or execution, but when poor operational or service practices decimate the financial outcome of a marketing campaign, that’s a completely different story.  Sad really, because often these issues are relatively easy to prevent with a little cross-silo collaboration.

This is why I have always thought of customer experience as part of Marketing – I have seen too many examples of great marketing compromised by bad customer experience.  So I always do everything I can to make sure the campaign promises will be delivered on, including a complete review of major programs with service / operations before execution.

Your thoughts or experience on this topic?  Have you ever been the “victim” of operational or service problems crushing the results of a prized marketing effort?  Thinking back, was there any way to prevent or reduce the tragedy that screwed up the campaign?


Is Your Digital Budget Big Enough?

At a high level, 2014 has been a year of questioning the productivity of digital marketing and related measurement of success.  For example, the most frequent C-level complaint about digital is not having a clear understanding of bottom-line digital impact.  For background on this topic, see articles here, here, and here.

I’d guess this general view probably has not been helped by the trade reporting on widespread problems in digital ad delivery and accountability systems, where (depending on who you ask) up to 60% of delivered “impressions” were likely fraudulent in one way or another.  People have commented on this problem for years; why it took so long for the industry as a whole to fess up and start taking action on this is an interesting question!

If the trends above continue to play out, over the next 5 years or so we may expect increasing management focus on more accurately defining the contribution of digital – as long as management thinks digital is important to the future of the business.

If the people running companies are having a hard time determining the value of digital to their business, the next logical thought is marketers / analysts probably need to do a better job demonstrating these linkages, yes?  Along those lines, I think it would be helpful for both digital marketers and marketing analytics folks to spend some time this year thinking about and working through two of the primary issues driving this situation:

1.  Got Causation?  How success is measured

In the early days of digital, many people loved quoting the number of “hits” as a success measure.  It took a surprisingly long time to convince these same people the number of files downloaded during a page view did not predict business success ;)

Today, we’re pretty good at finding actions that correlate with specific business metrics like visits or sales, but as the old saying goes, correlation does not imply causation.

If we move to a more causal and demonstrable success measurement system, one of the first ideas you will encounter, particularly if there are some serious data scientists around, is the idea of incremental impact or lift.  This model is the gold standard for determining cause in much of the scientific community.  Personally, I don’t see why, with all the data we have access to now, this type of testing is not more widely embraced in digital.
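For readers new to the idea, here’s the incremental lift arithmetic in its simplest form, using made-up numbers for an exposed group and a randomly held-out control group:

```python
# Illustrative numbers for an exposed group and a randomly held-out control group
exposed_n, exposed_conversions = 50_000, 1_250
holdout_n, holdout_conversions = 50_000, 1_000

exposed_rate = exposed_conversions / exposed_n    # 2.5%
holdout_rate = holdout_conversions / holdout_n    # 2.0%

# Conversions that would have happened anyway are netted out by the holdout group
incremental_rate = exposed_rate - holdout_rate
incremental_conversions = incremental_rate * exposed_n
lift = incremental_rate / holdout_rate

print(f"Incremental conversion rate: {incremental_rate:.2%}")                             # 0.50%
print(f"Incremental conversions caused by the campaign: {incremental_conversions:,.0f}")  # 250
print(f"Lift versus holdout: {lift:.0%}")                                                 # 25%
```

Only the 250 incremental conversions (not all 1,250) should be credited to the campaign when calculating its contribution to the business.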

Continue reading Is Your Digital Budget Big Enough?


Omni-Channel Cost Shifting

One of the great benefits customer lifecycle programs bring to the party is unearthing cross-divisional or functional profitability opportunities that otherwise would fall into the cracks between units and not be addressed.  What I think most managers in the omni-channel space may not realize (yet) is how significant many of these issues can be.

To provide some context for those purely interested in the marketing side, this idea ties quite closely to the optimizing for worst customers and sales cannibalization discussions, but is more concerned with downstream operational issues and finance.  Cost shifting scenarios will become a lot more common as omni-channel concepts pick up speed.

Shifty Sales OK, Costs Not?

Why is cost shifting important to understand?  Many corporate cultures can easily tolerate sales shifting between channels because of the view that “any sale is good”.  On the ground, this means sourcing sales accurately in an omni-channel environment requires too much effort relative to the perceived benefits to be gained.  Fair enough; some corporate cultures simply believe any sale is a good sale even if they lose money on it!

Cost shifting  tends to be a different story though, because the outcomes show up as budget variances and have to be explained.  In many ways, cost shifting is also easier to measure, because the source is typically simple to capture once the issue surfaces.  And as a cultural issue, people are used to the concept of dealing with budget variances.

Here’s a common case:

Continue reading Omni-Channel Cost Shifting


Does Advertising Success = Business Success?

Digital Analytics / Business Alignment is Getting Better

I recently attended eMetrics Boston and was encouraged to hear a lot of presentations hitting on the idea of tying digital analytics reporting more directly to business outcomes, a topic we cover extensively in the Applying Digital Analytics class I taught after the show.  This same kind of idea has also become more popular lately in streams coming out of the eMetrics conference in London and other conferences.  A good thing, given the most frequent C-Level complaint about digital analytics is not having a clear understanding of bottom-line digital impact (for background on this topic, see articles here, here, and here).

Yes, we’ve largely moved beyond counting Visits, Clicks, Likes and Followers to more meaningful outcome-oriented measures like Conversions, Events, Downloads, Installs and so forth.  No doubt the C-Level put some gentle pressure on Marketing to get more specific about value creation, and analysts were more than happy to oblige!

Is Marketing Math the Same as C-Level Math?

Here’s the next thing we need to think about: the context used to define “success”.

In my experience, achieving a Marketing goal does not necessarily deliver results that C-Level folks would term a success.  And here’s what you need to know: C-Level folks absolutely know the difference between these two types of success and in many cases can translate between the two in their heads using simple business math.

Here’s an example.  Let’s say Marketing presents this campaign as a success story:

Continue reading Does Advertising Success = Business Success?


Digital Customer Analysis Going Mainstream?

Is it possible the mainstream digital marketing space is about to finally move on from a focus on front-end measurement (campaigns, etc.) to creating knowledge around how enterprise value as a whole is created?  And actually enabling action in this area?

Judging by the material coming out of the recent Martech conference in Boston, one would think so.  And it looks to me like I’m not the only one thinking “it’s about time”.

A couple of years ago I lamented:

It’s been very popular among marketing types to talk about “the customer” but seek metrics for affirmation other than those based on or derived from the customer. Digital analysts have followed their lead, and provided Marketers plenty of awareness, engagement, and campaign metrics.  As I’ve said in the past, this is a huge disconnect. Does it make sense (analytically) to have discussions about customer centricity, customer experience, customer service, the social customer, etc. and measure these effects at impression or visit level?

If you’d like to review some commentary on the conference, see a list of 5 posts here.  I found the list of tweets here particularly indicative of Martech’s potential, for example:

Continue reading Digital Customer Analysis Going Mainstream?


Marketing Funnel Not Dead, Using Funnel Model for Attribution Is

It’s become fashionable to declare the “Marketing Funnel Model” dead.

For example, here is a post worth reading on this topic by Rok Hrastnik.  There are some very good points in this post on why using a funnel to attribute media value is really a troubled idea.  I was flagged on this post because it has a quote from me that seems to support Rok’s thesis about the death of the funnel model and the related idea, “Direct Response Measurement is a Wet Dream”.   The quote is from a comment I made on a post by Avinash where we were discussing the value of sequential attribution models:

There are simply limits on what can be “proven” given various constraints, and that’s where experience and a certain amount of gut feel based on knowledge of customer kick in.  If you can’t measure it properly, just say so. So much damage has been done in this area by creating false confidence, especially around the value of sequential attribution models where people sit around and assign gut values to the steps.  Acting on faulty models is worse than having no information at all.

But none of this means the Funnel Model is dead, or that Direct Response Measurement overall is a Wet Dream.  What’s (hopefully) dead is  people using the funnel model inappropriately for tasks it was never designed for, in this case multi-step attribution of media value to goal achievement.  On the other hand, if this specific funnel use case is what Rok was coming after, I agree, because it didn’t make any sense to use a funnel model for this idea in the first place.

Let’s unpack these ideas

Funnel thinking is based on a relatively reliable model of human behavior, AIDA.  This model from human psychology does not specify tools, channels, or media.  It simply says that there is a path to purchase most humans follow.  That is:

A – Attention: (Awareness): attract the attention of the customer
I – Interest:  (Intent) promote advantages and benefits
D – Desire: convince customers the product will satisfy their needs
A – Action: lead customers towards taking action / purchase

Example:  I’m Aware of tons of products I would never buy.  There are lots of products I think are Interesting but I have no Desire for.  There’s a short list of products I Desire but have not Acted on.  The list of products in my head worthy of purchase consideration gets smaller and smaller at each stage of the AIDA model.  This is the funnel.

The AIDA funnel has not changed and it’s not dead.

It’s a model of human behavior, not media consumption.

Continue reading Marketing Funnel Not Dead, Using Funnel Model for Attribution Is


Marketing to Focus on Customer. Analytics?

It’s been very popular among marketing types to talk about “the customer” but seek metrics for affirmation other than those based on or derived from the customer.  Web analysts have followed their lead, and provided Marketers plenty of awareness, engagement, and campaign metrics.  As I’ve said in the past, this is a huge disconnect.  Does it make sense (analytically) to have discussions about customer centricity,  customer experience, customer service, the social customer, etc.  and measure these effects at the impression or visit level?

Is someone who visits or purchases or comments one time really a customer, for the purposes of analyzing “centricity” ideas and concepts?  I think not.  Visit metrics simply don’t work for understanding these customer concepts, because by definition they unfold over time, not as single events.   Add in the fact most web activity is 1x in nature – even buyers – and you begin to realize that analyzing “traffic” yields very little in the way of “customer” insight.

From a Marketing perspective, hey, happy to have the 1x revenue, but these are interactions I’m not really excited about increasing spend on, knowing they will be one-night stands.  This is especially true when you also know re-allocating some of the funds spent on the 90% 1x-ers to the other 10% could double company profits!

If you have followed my writings over the past 12 years, none of the above perspective is new.  What might be changing is this: more people in the online world are beginning to think the same way.

Continue reading Marketing to Focus on Customer. Analytics?


“Missing” Social Media Value

I have no doubt there is some value in social beyond what can be measured, as this has been the case for all marketing since it began ;)  The problem is this value is often situational, not to mention not properly measured on an incremental basis (as you point out).

For example, to small local businesses who do no other form of advertising, there is a huge amount of relative value to using social media versus no advertising at all.  Some advertising is much better than none, and since it’s free, the incremental value created by (properly) using social is huge.

On the other hand, I wonder why social analysis seems to forget that people have to be aware of you to “Like” you in the first place.  Further, it seems unlikely a person would “Like” a brand or product if they have not already experienced it, and are already a fan.  If this is not true, if people “Like” a company even though they have not experienced it (paid to Like?), then the problems with social go way beyond analysis…

But if true, the number of “Likes” doesn’t have as much to do with awareness as it does with the size of the customer base, and is much more aligned with tracking customer issues (retention, loyalty) than anything to do with awareness / acquisition.

Add the fact many companies are running lots of advertising designed to create awareness, and the incremental value of social as a “media” may be close to zero, or at least less than the cost to analyze the true value of it.

And this last, really, is the core of the issue.  It’s simply not possible to measure “all” the value created by any kind of marketing, and there are hugely diminishing returns as you try to capture the last bits.  I think it’s quite possible the “value beyond what can be measured” people are optimistic about is less than the cost of measuring it *if* people keep looking in the awareness / acquisition field.

Folks who want to find this “missing” social value should start doing customer analysis, and look in the “retention / loyalty” area, where the whole idea of social is a natural, rather than a forced, fit.

Has to be There

I find it really interesting that whenever there is a discussion of measuring the value of social media, there’s such a bias towards believing there is value in social beyond what can be properly measured.  See the comments following this post by Avinash for a good example.  Speculation is fine, but the confidence being expressed that a new tool or method will uncover a treasure trove of social media value seems un-scientific (as in scientific method) at best.

I don’t doubt there is some value in social media beyond what can be measured, as this has been the case for all marketing since marketing measurement began.  These measurement problems are not new to social either:  Marketing value created is often situational; it depends on the business model and environment.  What works in one situation may not work in another.

For example:

To small local businesses who do no other form of advertising, there is a huge amount of relative value to using social media versus no advertising at all.  Some advertising is much better than none, and since it’s free, the incremental value created by (properly) using social is huge.  It’s also really easy to measure the impact and true value, since the baseline control is “no advertising”.  Lift, or actual net marketing performance, can be pretty obvious in this case.

On the other hand, many companies are running lots of advertising designed to create awareness, and the incremental value of social as a “media” may be close to zero for these companies, or at least less than the cost to analyze the true value of it.  Possible explanation:  Social events such as “Likes” or comments are simply representations or affirmations of awareness already created by other media, so by themselves, create little value.  In other words, events such as Likes might track the value of other media spending, but may not create much additional marketing value.

Continue reading “Missing” Social Media Value


Defining Behavioral Segments

The following is from the April 2011 Drilling Down Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Also, feel free to leave a comment and I’ll reply.

Want to see the answers to previous questions?  Here’s the blog archive; the pre-blog newsletter archives are here.

Q: I purchased your book and have a few questions you can hopefully help me out with.

A: Thanks for that, and sure!

Q: We have 4 product lines and 2 of them are seasonal, i.e., we have customers that, year in and year out, purchase these items consistently but seasonally, for example every spring and summer.  Then they are dormant for fall and winter.  Should I include these customers along with everyone else when doing an RFM segmentation?

A: Well, it kind of depends on what you will be using the RF(M) model for, and what kinds of marketing programs will be activated by using the scores.  If you know you have seasonal customers and their habit is to buy each year, AND you wish to aim retention or reactivation programs at them, I would be tempted to divide the customer base so that seasonal customers are their own segment.  Then run two RF(M) models – one for the seasonals, and one for everyone else.
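A minimal sketch of that split, assuming a hypothetical per-customer summary; the equal-count quintile scoring shown here is the generic approach, not necessarily the exact method from the book:

```python
import pandas as pd

# Hypothetical per-customer summary: customer_id, is_seasonal (0/1),
# days_since_last_purchase, purchases_12m
customers = pd.read_csv("customer_rf_summary.csv")

def rf_score(df):
    """Generic equal-count quintile Recency / Frequency scoring: 5 = best, 1 = worst."""
    out = df.copy()
    # Fewer days since last purchase is better, so the lowest quintile gets a 5
    out["R"] = pd.qcut(out["days_since_last_purchase"].rank(method="first"),
                       5, labels=[5, 4, 3, 2, 1]).astype(int)
    out["F"] = pd.qcut(out["purchases_12m"].rank(method="first"),
                       5, labels=[1, 2, 3, 4, 5]).astype(int)
    return out

# Score seasonal customers and everyone else separately, so seasonal dormancy
# doesn't drag their scores down relative to year-round buyers
seasonal = rf_score(customers[customers["is_seasonal"] == 1])
everyone_else = rf_score(customers[customers["is_seasonal"] == 0])

scored = pd.concat([seasonal, everyone_else])
print(scored.groupby(["R", "F"])["customer_id"].count())
```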

Q: If I include seasonal customers, and I run RFM say on a monthly basis, these seasonal customers will climb / fall drastically with time depending on the season, so it seems like it may complicate the scoring process.

A: Sure, and you could segment as I said above.  Or, you could run across a longer time frame, say across 2 – 3 years worth of data.  This would “normalize” the two segments into one and take account of the seasonality in the scoring – perhaps be more representative of the business model.  However, the scores would become less sensitive due to the long time frame, so the actions of customers would be less accurately predicted by the model.

Q: Can you provide me with some examples as to how segmentation is carried out?  Let’s say I begin with RFM and all my customers are rated 5-5, 5-4, 4-5 etc.  What are the next steps, do we overlay with other characteristics like age, gender, etc?  Or are the 5-3 etc. our actual segments?

A: This goes back to what you want to use the RF(M) model for.  In the standard usage, each score will have roughly the same number of customers in it; those with higher scores will be more likely to respond to marketing and purchase, those with lower scores less likely.

Continue reading Defining Behavioral Segments


Increase Profit Using Customer State

The following is from the March 2011 Drilling Down Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Also, feel free to leave a comment and I’ll reply.

Want to see the answers to previous questions?  Here’s the blog archive; the pre-blog newsletter archives are here.

Q: We’ve been playing around with Recency / Frequency scoring in our customer email campaigns as described in your book.  To start, we’re targeting best customers who have stopped interacting with us.  I have just completed a piece of analysis that shows after one of these targeted emails:

1. Purchasers increased 22.9%
2. Transactions increased 69%
3. Revenue increased 71%

A: There you go!

Q: My concern is that what I am seeing is merely a seasonal effect – our revenue peaks in July and August.  So what I should have done is use a control group as you described in the book – which is what I am doing for the October Email.

A: Yep, that’s exactly what control groups are for – to strain out the noise of seasonality, other promotions, etc.  But don’t beat yourself up over it, nothing wrong with poking around and trying to figure out where the levers are first.

Q: Two questions:

1.  What statistical test do I use to demonstrate that the observed changes are not down to chance?

2.  How big should my control group be?  Typically our cohort is 500-800 individuals.

A: Good questions…
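The full answers are in the post, but purely as a generic illustration of the kind of approach analysts often reach for on these two questions – a two-proportion z-test for “is the difference real?” and a power calculation for “how big a group do I need?” – here is a sketch with made-up numbers (the counts and rates are illustrative, not from this reader’s data):

```python
from statsmodels.stats.proportion import proportions_ztest, proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# 1. Is the observed difference in purchase rates more than chance?
#    Illustrative counts: purchasers in the mailed group and the control group
purchasers = [86, 61]
group_sizes = [650, 650]
z_stat, p_value = proportions_ztest(purchasers, group_sizes, alternative="larger")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")   # p below 0.05 suggests the lift is not just chance

# 2. How big does each group need to be to reliably detect a given lift?
#    Illustrative: detect 13% vs. 9% purchase rates with 80% power at alpha = 0.05
effect = proportion_effectsize(0.13, 0.09)
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                           power=0.8, alternative="larger")
print(f"Roughly {n_per_group:.0f} customers needed per group")
```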

Continue reading Increase Profit Using Customer State
