Measuring Customer Experience ROMI #1: Nice to New Customers

I’m going to preface this piece by saying I don’t really think “Customer Experience Management” is anything different from smart, integrated Marketing and Customer Service. Like Ron, I’m not sure CEM has a future if there isn’t an actionable framework for it, other than to create something for people to talk about, and maybe sell some software…

Whichever direction you believe in, here is an interesting case that makes several points about this area of discussion.

The Nice to New Customers test was conducted at Home Shopping Network in 1994.  The idea came from the annual survey of all customers, which indicated that the “average” customer felt the “new customer experience” was “as expected”.  Given the high percentage of 1x buyers we were experiencing (as do all interactive remote retailers), I thought, “Hmm, maybe if we deliver a customized first purchase experience and process, these new customers will be more likely to make a second purchase”.  Sounds logical, right?  This was a Business SWAT case since it involved Marketing, Customer Service, IT, and Telecommunications, all working together to set it up, determine the metrics, make sure Management understood the impact of the test on existing silo Scorecards, and so on.  In other words, I sold my soul to get this test to happen.

We set up a pretty elaborate test where a random sample of new customers (about 100,000, a solid test group) was shunted to our “best agents” and given a new “Welcome Treatment”.  Instead of the general “get them off the phone as fast as you can” attitude prevalent in the network, these reps had permission to spend as much time with the customer as the customer wanted and generally customize the experience.  There was a lot of role play and monitoring connected to this effort, and the service managers on the project were convinced these new customers were in fact treated to a much better initial experience than the average new customer.  In fact, the customers seemed thrilled.  So far, so good. 

Problem was, this test group of new customers exposed to a better “Customer Experience” ended up generating no incremental sales versus control.  Well, there you go.  We lost a ton of money on this test, a stellar -118% ROMI, because we literally had to pay back customer service out of the marketing budget for the lost productivity in the network due to the test.  Hey, that was the deal I cut to get this test done.  You win some, you lose some.
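For those who like to see the arithmetic, here is a minimal sketch of how a ROMI can land below -100%.  The definition (net incremental profit over campaign cost) is the common one, but the dollar figures below are invented purely for illustration; they are not the actual HSN numbers.

```python
def romi(incremental_profit, campaign_cost):
    # ROMI = (incremental profit - cost) / cost. A -100% ROMI means the
    # spend returned nothing; anything below that means the campaign
    # actively destroyed sales on top of its own cost.
    return (incremental_profit - campaign_cost) / campaign_cost

# Hypothetical figures chosen only to show how -118% can happen: the "cost"
# is the productivity charge-back paid to Customer Service, and incremental
# profit is negative because the test group netted less than control.
campaign_cost = 50_000        # charge-back for lost agent productivity (assumed)
incremental_profit = -9_000   # net sales impact versus control (assumed)

print(f"ROMI: {romi(incremental_profit, campaign_cost):.0%}")  # ROMI: -118%
```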

But it gets worse.  When we started dicing the post-analysis of the test down to behavioral groups based on the details of the first transaction, we found there was actually some incremental sales lift among new customers with “light buyer” initial profiles.  This is good.  Problem was (and you know what is coming, don’t you?), new customers with heavy buyer profiles were negatively impacted, and because the Potential Value of this group was so huge, the losses versus control in this relatively small number of folks far outweighed the gains in light buyers, causing the net effect of the promotion to be negative.
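A toy example makes the mechanics clear.  The numbers below are invented; only the structure mirrors what we found, a small positive lift in a big light-buyer segment swamped by losses in a small but high-value heavy-buyer segment:

```python
# Invented numbers; only the shape of the analysis matches the test:
# per-segment lift versus control, then the net effect across segments.
segments = {
    # profile: (customers, repeat sales per test customer, per control customer)
    "light buyer": (90_000, 22.00, 20.00),   # small positive lift, big group
    "heavy buyer": (10_000, 95.00, 120.00),  # negative lift, huge Potential Value
}

net = 0.0
for profile, (n, test_avg, control_avg) in segments.items():
    lift = n * (test_avg - control_avg)   # incremental sales for this segment
    net += lift
    print(f"{profile}: {lift:+,.0f}")     # light: +180,000  heavy: -250,000

print(f"net effect: {net:+,.0f}")         # net effect: -70,000
```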

Isn’t that a fine kettle of fish?  Being Nice to potential Best Customers killed the test.

When we surveyed these customers in the test after we knew their behavioral profiles (to make sure we knew the behavioral context of their answers), they basically told us this: they were expecting a very operationally efficient transaction and we provided them a customer-centric one.  Cognitively, they were making an impulse purchase and they wanted an impulse transaction, not an empathetic one.  This disconnect caused post-purchase dissonance and reduced intent to purchase.  Using today’s language, we were basically “spamming” them; we were overstepping any Permission we had to engage them at a more personal level.  And this negative effect was most pronounced among new customers with high Potential Value.  In hindsight, knowing what we knew about the psychological profile of Best Buyers, this made all the sense in the world and was an interesting confirmation of the test results.

The CFO, well, he didn’t think this result was so interesting…but he did applaud the idea that we would step up to the plate and actually pay back customer service for the losses related to decreased productivity in the network out of the Marketing budget.  It was the first time anybody had made this kind of cross-silo payment, and it really paved the way for tighter integration between Marketing and Service.

You might consider this test result when evaluating your e-mail contact strategy, at least for new customers.  Are you sure you are generating maximum revenue?  What if the half percent or so that unsubscribe each month are future Best Customers with high Potential Value?  Do you use control groups?  Do you know the answer to this question?
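The only way to answer is a holdout test.  Here is a bare-bones sketch of the setup; the function name, holdout rate, and measurement window are my own assumptions, not a prescription:

```python
import random

def assign_holdout(customer_ids, holdout_rate=0.10, seed=42):
    # Randomly flag a no-mail control group; everyone else gets the program.
    rng = random.Random(seed)
    return {cid: ("control" if rng.random() < holdout_rate else "mailed")
            for cid in customer_ids}

groups = assign_holdout(range(10_000))

# Later: compare long-run revenue per customer between "mailed" and "control",
# diced by Potential Value, before declaring the e-mail program a winner.
```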

Interactive behavior provides a very special backdrop for Marketing and Service; be careful what you ask for. 

I’m not saying if you did this test you would get the same results.  What I am saying is you cannot assume all the stuff you read about “Customer Experience” online is going to work with your customers.  You simply have to test these ideas with real customers and measure the results.  And if you are dealing with interactive customers, keep in mind that “Customer in Control” is something you might not want to mess with.  In other words, sometimes Control is the Experience, particularly if the general Marketing / Brand backdrop is Operational Efficiency.

It’s one thing to start a company saying you are going to deliver some kind of superior Customer Experience and embed this idea in your service delivery model.  We all know these kinds of companies.  It’s a completely different idea to think that you are going to improve the current experience at your company, and that this effort is going to have positive effects for both the customer and the company, because it sounds logical to you.

Lessons learned:

1.  The bottom line lesson here really was about a poorly constructed test based on a faulty customer survey methodology.  Without the customer opinion first tied to an actual behavior, we had no option other than to use the opinion of the “average customer” as a base to act against.  Because of this, the only action we could take was against “all new customers”, and we ended up shooting ourselves in the foot.  Based on the post-test dicing, we later retested and found (surprise, surprise) a program like this could be extremely profitable when we treated targeted new customers differently based on their Potential Value. 

If we had had this behavioral information (the initial Light Buyer / Best Buyer profiles) tied to the survey responses from the beginning, we would have understood these segments were different and designed the test accordingly.  Make sure, if you are going to take some kind of action on a survey, that you first understand a behavior and then survey the people with that behavior (see the sketch after this list).  To do it the other way around, trying to “back into the behavior”, wastes a lot of time and money just in the data gathering and processing itself, never mind in the “re-testing” we had to go through once we knew what was really going on.

2.  It doesn’t always pay off to be Nice to New Customers.  Sometimes they simply want what they expect.
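To make lesson 1 concrete, here is a minimal sketch of the “behavior first, then survey” ordering.  The field names, threshold, and classification rule are all invented for illustration:

```python
# Field names, threshold, and segment rule are invented for illustration.
customers = [
    {"id": 101, "first_order_value": 35.0},
    {"id": 102, "first_order_value": 240.0},
    {"id": 103, "first_order_value": 18.0},
]

def behavioral_segment(customer, heavy_threshold=100.0):
    # Classify on the details of the first transaction (assumed rule).
    return "heavy" if customer["first_order_value"] >= heavy_threshold else "light"

by_segment = {}
for c in customers:
    by_segment.setdefault(behavioral_segment(c), []).append(c["id"])

# Survey each behavioral segment separately, so every opinion arrives with
# its behavioral context, instead of acting on the "average customer".
for segment, ids in by_segment.items():
    print(f"survey sample for {segment} buyers: {ids}")
```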

