Marketing IS (Can Be?) an Experience
Early on I discovered something from the work of leaders in data-based marketing business models: they were always very concerned with post-campaign execution – not only from marketing, but also through product, distribution, and service. I thought this strange, until I realized they knew something I did not: when you have customer data, you can actually identify and fix negative customer value impacts caused by poor experience.
This means you can directly quantify the value of customer experience, budget for fixing it, and create a financial model that proves out the bottom-line, hard-money profit (or loss) that results from paying attention to customer experience.
And critically, this idea becomes much more important as you move from surface success metrics like conversion and sales down into deep success metrics like company profits. Often the profit / loss from “marketing” has less to do with campaigns and more to do with the positive or negative experiences those campaigns create.
You might think taking the time to provide special treatment to brand new customers would always encourage engagement and repeat purchase. You’d be wrong. Sometimes it works, sometimes it doesn’t, depending on the customer’s context. Does it surprise you to find out customers often do not want to be “delighted”?
Just outside of “campaigns”, simple changes to product packaging – adding a little copy, for example – can create huge increases in repeat customer purchase. Closer to operations, applying a little marketing know-how to payment processing or the service-center front end can generate significant lift in the profit of marketing campaigns.
And you can bet there are profitable (or not) experience issues in omni-channel to be uncovered, once success is measured at the customer level rather than the channel level.
How to Show Them the Money
If you want to go beyond surface / interface metrics and get down to the hard-money benefits hidden in customer experience work, you have to set your measurement approach up correctly. This takes some effort, but the unusually large benefits proven out at the end of the work more than pay off the effort. Here are some core measurement ideas to consider when getting ready for your next project:
1. Controlled Testing – when possible, create a control group of customers who will not be exposed to the new experience, and compare their behavior over time to those who are exposed to the new experience. This is the most scientifically precise way to measure the true value of customer experience changes. Not familiar with this idea? See here.
The behavioral data will provide the hard-money value of the change in experience. You can guess at why the behavior changed, but that’s a rookie mistake that often goes bad (see my own example). Finding tangible reasons “why” is strongly suggested, so also survey the test group before and after the experience change to discover the elements of the new experience driving increased value. If you follow the model below, this info can typically be reused in future experience optimization projects.
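To make the control-group arithmetic concrete, here is a minimal sketch of the hard-money comparison, assuming you have per-customer revenue over the tracking period for both groups (all figures and names here are hypothetical):

```python
# Minimal sketch: hard-money value of an experience change via a control group.
# Assumes per-customer revenue over the tracking period; all numbers illustrative.

def incremental_value(test_revenue, control_revenue):
    """Return per-customer lift and total incremental value for the test group."""
    test_avg = sum(test_revenue) / len(test_revenue)
    control_avg = sum(control_revenue) / len(control_revenue)
    lift_per_customer = test_avg - control_avg
    total_incremental = lift_per_customer * len(test_revenue)
    return lift_per_customer, total_incremental

# Revenue per customer over the tracking period
test = [120.0, 95.0, 140.0, 110.0]      # exposed to the new experience
control = [100.0, 90.0, 105.0, 95.0]    # held out

lift, total = incremental_value(test, control)
# lift = 18.75 per customer; total = 75.00 incremental
```

In practice you would also want a significance test (e.g. a two-sample t-test) before claiming the lift is real, but the core idea is exactly this subtraction of control behavior from test behavior.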
Sometimes the specific business model or type of experience change cannot support the creation of control groups. That’s OK, we can use surveys to look for response to change – but we want to use these surveys in a specific and highly accountable, bullet-proof way.
2. Data with Surveys – in some cases, behavioral data may not be available either before or after the experience change. Or, there’s no ability to track ongoing behavior at all because the experience incident is essentially a 1x event, e.g. hospital visit. In these cases, surveys can be used to help fill in the blanks.
Please note: Surveying random visitors or customers with no idea “who” they are is not a very good idea, from the measurement (and management) perspective. To reduce the chance of choosing the wrong path, always tie survey data to the customer record or behavior of the specific customer taking the survey. If you are not able to do this, it’s unlikely you will be able to provide reliable feedback on hard-money metrics. The customer’s context is extremely important to understanding the results – customer experience is an area where the opinion of the “average customer” often hides all the most important ideas!
3. Survey Construction – please make sure to review the survey questions for bias – are you asking the questions in a way that might influence the outcome? If you lack resources in this area, consider subcontracting the review to a survey expert.
4. Survey Testing – just because you now have unbiased questions does not mean the questions make sense to the target market. Please trial / discuss the survey with your target market – do they really understand what the questions are asking of them?
5. Target Selection – administer your survey to specific customers with known profiles and behaviors. Examples: light, medium, and heavy activity or sales; heaviest-usage product category; source of business / NAICS code. After the experience change or test is over and the time comes for review, you will be so happy you did this up front, trust me.
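The up-front tagging in step 5 can be as simple as bucketing customers by past activity before the test begins. A minimal sketch, with purely illustrative thresholds and customer IDs:

```python
# Minimal sketch: tag customers as light / medium / heavy by prior spend,
# so post-test results can be read per segment. Thresholds are illustrative.

def activity_segment(annual_spend, light_max=100, medium_max=500):
    """Bucket a customer by annual spend (hypothetical cutoffs)."""
    if annual_spend <= light_max:
        return "light"
    if annual_spend <= medium_max:
        return "medium"
    return "heavy"

# Hypothetical customers with their prior-year spend
customers = {"A": 80, "B": 350, "C": 1200}
segments = {cid: activity_segment(spend) for cid, spend in customers.items()}
# segments -> {"A": "light", "B": "medium", "C": "heavy"}
```

The point is simply that the segment label is attached before the experience change, so the after-the-fact review can compare light vs. heavy responses without reconstructing history.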
6. Tracking Over Time – when you look for evidence of benefit, follow the behavior of your different customer segments for a significant time after the change is implemented. Why? Changes in experience often produce effects that are minor in the short term but tend to cascade into super-large benefits over longer periods of time.
The definition of “significant time” above depends on the business, but in general at least 3 – 5 contact cycles – which might be a few months, or just over a year to eliminate any seasonality effects. In 1x-event or very long-cycle business models (e.g. auto sales) you might have to use before-and-after surveys to generate comparison results.
Tracking is the area where use of control groups makes experience testing quite a bit easier because you will see divergence in the behavior of test and control groups as the tracking period progresses. When this divergence remains constant or flattens, the experience change effect is likely stable or has ended, and active result monitoring can be put on hold.
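One way to operationalize “divergence remains constant or flattens” is to compare the test/control gap cycle over cycle. A minimal sketch, with hypothetical numbers and an arbitrary tolerance:

```python
# Minimal sketch: watch the test-vs-control gap across contact cycles and
# flag when it stops growing (effect likely stable). All numbers illustrative.

def divergence_flattened(test_by_cycle, control_by_cycle, tolerance=0.05):
    """True if the latest change in the test/control gap is small
    relative to the prior gap (i.e., divergence has flattened)."""
    gaps = [t - c for t, c in zip(test_by_cycle, control_by_cycle)]
    if len(gaps) < 2:
        return False          # need at least two cycles to see a trend
    latest_growth = gaps[-1] - gaps[-2]
    if gaps[-2] == 0:
        return latest_growth == 0
    return abs(latest_growth) <= tolerance * abs(gaps[-2])

# Average revenue per customer in each contact cycle (hypothetical)
test_cycles = [100, 110, 125, 126]     # exposed group
control_cycles = [100, 100, 101, 101]  # holdout group

stable = divergence_flattened(test_cycles, control_cycles)
# stable -> True: the gap grew 0 -> 10 -> 24, then barely moved (24 -> 25)
```

When a check like this starts returning True cycle after cycle, that is the signal in the text above: the effect is likely stable or has ended, and active monitoring can be put on hold.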
Whenever possible, I highly suggest using the control group approach. Though the set-up is a bit trickier, the end result is more reliable and much less likely to be disputed.
Cross those Silos
I’m totally cool with taking responsibility for my bad marketing ideas and related planning or execution, but when poor operational or service practices decimate the financial outcome of a marketing campaign, that’s a completely different story. Sad really, because often these issues are relatively easy to prevent with a little cross-silo collaboration.
This is why I have always thought of customer experience as part of Marketing – I have seen too many examples of great marketing compromised by bad customer experience. So I always do everything I can to make sure the campaign promises will be delivered on, including a complete review of major programs with service / operations before execution.
Your thoughts or experience on this topic? Have you ever been the “victim” of operational or service problems crushing the results of a prized marketing effort? Thinking back, was there any way to prevent or reduce the tragedy that screwed up the campaign?