Optimizing Mail Drops for Consumables
Drilling Down Newsletter 3/2007

The following is from the March 2007 Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Also, feel free to leave a comment. 

Want to see the answers to previous questions?  The pre-blog newsletter archives are here, “Best Article” reviews here.

Optimizing Mail Drops for Consumables

Please note: The business discussed below is a “continuity business”, where customers consume the product and need to either reorder from the company every few months or seek alternative sources for the product.  In this scenario, the behavior of customers is generally governed by the Latency Metric.

Q: We currently send our customers direct mail every 6.5 weeks.  We have a new VP, and he is asking if that is the optimal spacing of mailings.  Are there any best practices for setting up frequency tests?  If you can shed any light on how to set up such a test, I would greatly appreciate it.

A: Well, do you know how you got to the 6.5 weeks number in the first place?  Somebody must have thought it was a good idea based on some kind of data (I hope)!

Obviously, there is some significant financial risk in simply “moving the drop around” and testing results that way.  You can do it, often by slivering off parts of the drop and dropping them at different times, but there could be a substantial financial penalty for approaching the problem this way – both on the cost and sales sides.  This is especially true when you have a current schedule that seems to be working.

The first thing I would do, if possible, before taking on the risk of messing with the mailing, is to see if you can find a segmentation / frequency scheme that makes more sense from the customer data itself.  Since you also have a web site, there probably is evidence of “natural purchase cycles” that operate outside the mail drop – customers ordering “when and how they want to”.

Can you find evidence that the average purchase cycle is more like 5 weeks or 7 weeks?  How does this differ by product line, or packaging of the product?  Both segmentation by actual customer behavior and segmentation by product line will generally provide increased profits, provided the cost of dropping different mail streams does not overpower the increased sales.

For example, if someone can buy a “90-day supply”, well, 6.5 weeks is a bit  early for the mailing, I’d think.  If they can only buy a 30-day supply, well, it seems to me that 6.5 weeks could be a bit late.  Look to actual  purchase cycles by product line / supply length and see if you can find any patterns in the purchase behavior.

The key to this kind of analysis is to line up all the customers so that the purchase cycles match.  In other words, you need to enforce the same start date.  One way to do this, for example, is to look at all new customers who started in January 2007; of the ones that bought again, when did they purchase – 5 weeks, 6 weeks, 7 weeks out?  What percentage of new starts in January (or any other month) purchased in each of the subsequent weeks?  Be aware that choosing a single month may create results that have a seasonal bias, but I’m not sure that is relevant in a product line like yours.
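
For the spreadsheet-inclined, here is a minimal sketch of that January 2007 cohort drill in Python / pandas.  The orders file and column names (customer_id, order_date) are my assumptions for illustration, not anything from the question.

```python
import pandas as pd

# Hypothetical orders table -- customer_id and order_date are assumed column names.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Each customer's first purchase date defines their cohort.
first = orders.groupby("customer_id")["order_date"].min().rename("first_date")
orders = orders.join(first, on="customer_id")

# New customers who started in January 2007.
jan = orders[orders["first_date"].dt.to_period("M") == pd.Period("2007-01", freq="M")]

# Their second purchase, if any, and how many weeks out it landed.
repeats = jan[jan["order_date"] > jan["first_date"]]
second = repeats.groupby("customer_id")["order_date"].min()
weeks_out = (second - first.loc[second.index]).dt.days // 7

# Percent of the January cohort repurchasing in each week after start.
cohort_size = jan["customer_id"].nunique()
print(weeks_out.value_counts().sort_index() / cohort_size * 100)
```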

A more complex but possibly more accurate way to do this is to “normalize” the start date of all new customers in 2006 and then look at the subsequent purchase patterns – given the same start date, what percent bought again 5 weeks out, 6 weeks out, 7 weeks out?  You can achieve virtually the same thing by taking each month of 2006 and running it through the same drill as the one described above for  January 2007, though it won’t be as accurate.
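
And a similarly hedged sketch of the “normalized” version, pooling every 2006 new customer relative to their own start date (same assumed orders table and column names as above):

```python
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
first = orders.groupby("customer_id")["order_date"].min().rename("first_date")

# Pool every customer whose first purchase fell anywhere in 2006,
# normalizing each one to their own start date.
starts_2006 = first[first.dt.year == 2006]
repeats = orders[orders["customer_id"].isin(starts_2006.index)]
repeats = repeats.join(starts_2006, on="customer_id")
repeats = repeats[repeats["order_date"] > repeats["first_date"]]

second = repeats.groupby("customer_id")["order_date"].min()
weeks_out = (second - starts_2006.loc[second.index]).dt.days // 7

# Share of all 2006 new customers making a second purchase N weeks out.
print(weeks_out.value_counts().sort_index() / len(starts_2006))
```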

Once you have nailed the cycle for new customers, you can move on to see if there is any change in the optimal cycle length as customers age.  My guess is the cycle probably gradually lengthens until the customer defects.  If this is true, it might be worth it to do two mailings with different cycles – one cycle for customers who became new customers in the past (say) 6 months, and another cycle for all other customers.  It’s likely that in this business there is an important behavioral difference between new and current customers that would allow you to deliver a more optimized mailing cycle.

If you can’t drill down into the data first, because you lack either the resources or the time, set up your next drop with flagged segments based on “weeks since last purchase” and look at profit per customer.  You could also back into this if you have good promotional history on your customers.

In other words, if you are going to drop “everybody” at the same time, there must be a segment where for this single drop, the time since last purchase based on arrival of the mail is 5 weeks ago, 6 weeks ago, 7 weeks ago, and so forth.  If you flag these segments before the drop in the database, you should be able to go back and determine sales per customer mailed for each segment.  This will tell you if your timing should be adjusted.  Further, you might divide these time-based segments, if there are enough members in the  segment, along various product lines.
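
A rough sketch of that flag-and-read-back exercise, again with assumed names: a mail_file of customer IDs in the drop, an orders table with an order_amount column, and an invented in-home date and response window.

```python
import pandas as pd

# Assumed in-home date and response window, purely for illustration.
DROP_DATE = pd.Timestamp("2007-04-02")
WINDOW = pd.Timedelta(weeks=4)

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
mailed = pd.read_csv("mail_file.csv")   # one row per mailed customer_id

# Flag each mailed customer by weeks since last purchase as of the drop.
last = orders[orders["order_date"] < DROP_DATE].groupby("customer_id")["order_date"].max()
mailed["weeks_since"] = (DROP_DATE - mailed["customer_id"].map(last)).dt.days // 7

# Sales in the response window, attributed back to each mailed customer.
resp = orders[(orders["order_date"] >= DROP_DATE) &
              (orders["order_date"] < DROP_DATE + WINDOW)]
sales = resp.groupby("customer_id")["order_amount"].sum()
mailed["sales"] = mailed["customer_id"].map(sales).fillna(0)

# Sales per customer mailed, by "weeks since last purchase" segment.
print(mailed.groupby("weeks_since")["sales"].mean())
```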

Then, once you have a handle on the general cyclicality of different segments, you can get to profit per segment by using control groups to measure the lift and profit by segment.
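
The per-segment lift math itself is simple; here is a hedged sketch with invented numbers standing in for one segment’s test and control results, an assumed gross margin, and an assumed cost per piece.

```python
# Invented numbers for one segment; only the structure is the point.
def segment_lift(test_sales, ctrl_sales, margin_rate, cost_per_piece):
    """Incremental sales and profit per customer mailed, versus control."""
    incremental_sales = test_sales - ctrl_sales
    incremental_profit = incremental_sales * margin_rate - cost_per_piece
    return incremental_sales, incremental_profit

# Example: the "6 weeks since last purchase" segment.
lift, profit = segment_lift(test_sales=14.20,     # mailed group, sales per customer
                            ctrl_sales=11.50,     # held-out group, sales per customer
                            margin_rate=0.40,     # assumed gross margin
                            cost_per_piece=0.65)  # assumed cost of the mail piece
print(f"Lift: ${lift:.2f}/customer, incremental profit: ${profit:.2f}/customer")
```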

A careful analysis of the next drop (or as I said, a previous drop if you have good history) should tell you which drop cycle for each product line is optimal.  From there, you have to look at economies of scale and decide if  you can afford that kind of segmentation.  You may find that due to the economies of scale in the mailing, you simply cannot drop 50% of your mail one week and the other 50% the next, for example.  But you might find enough support in your analysis to either justify the current 6.5 week drop as the most efficient, or to move it up or back somewhat.
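
To frame that economies-of-scale question, here is a back-of-the-envelope sketch, with invented costs, of what a split drop would have to earn back per piece to break even:

```python
# Invented costs, purely to frame the break-even question.
base_cost_per_piece = 0.55    # one consolidated drop
split_cost_per_piece = 0.61   # smaller, staggered drops lose print/postage scale
margin_rate = 0.40            # assumed gross margin on incremental sales

extra_cost_per_piece = split_cost_per_piece - base_cost_per_piece
# Incremental sales per piece the better timing must generate to break even:
breakeven_sales_per_piece = extra_cost_per_piece / margin_rate
print(f"Timing gain needed: ${breakeven_sales_per_piece:.2f} per piece mailed")
```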

Another way to approach the “timing problem” relative to economies of scale would be to try “reminder to re-order postcards” instead of mail or catalogs to some members of the group that require special timing considerations.  For example, new customers might not really need a catalog on their first drop, a postcard driving them to the phone or web site to reorder might be enough.

No silver bullets, I’m afraid.  Just good ol’ fashioned sloggin’ through the data ought to get you to where you want to go!

Jim


**** Bob Garfield’s Chaos Scenario 2.0

Chief Branding Officers Take Note…

Most people who read this blog are probably not all that interested in mass media as a marketing vehicle.  But I think just about any Marketing person would benefit from reading this incredibly stark view of the future of the traditional agency / mass media complex in this article over at Ad Age.  Yikes!

I don’t doubt that the cost structure of the mass media complex will have to change, especially on the agency side.  I mean really, you have Google trying to facilitate the purchase of radio and print through a web interface, for crying out loud.  Agencies should really start to push deeper into the corporation and become business strategy consultants.  There are a ton of smart, creative people in the agencies.  Perhaps they could help out with this Deconstruction of Marketing thing.

But the mass media itself?  They will just have to figure out what their place is in the world, and adapt.  I suspect that means leaning more towards direct (drive people to web site or call center) and away from “branding” in the traditional sense.  This has already been happening among the smarter players.  Perhaps we need to lose a major network to “cable only” status in order to funnel more dollars to fewer avails and increase quality.  Just remember, radio was supposed to kill print, FM was supposed to kill AM, network TV was supposed to kill radio, and Cable was supposed to kill network TV.

Quote from article: When Chairman-CEO A.G. Lafley (Procter & Gamble) says, “We need to reinvent the way we market to consumers,” he doesn’t mean, “We need to find a place to amass 30 million people at a time so we can tell them not to squeeze the Charmin.”

Rich, I tell ya.  Very well written and quite funny, at least sitting on this side of the equation… your thoughts?


CRM, Chief Customer Officers, and XXM of the Month

In response to my comments on the potential for Marketing to lose a seat at the strategic table, Curtis Bingham comments on the difference between a Chief Marketing Officer and a Chief Customer Officer.  It’s not that I am opposed to the idea of a CCO; I’m just wondering why one is needed.  I asked the same question about CRM when it came on the scene.  I mean, to me, CRM is Marketing; what would Marketing do if CRM was in charge of the customer relationship?  So then Curtis puts forth this gem:

“In some companies I’ve worked with, the CMO is so myopically focused on outward-bound marketing and “pushing” information on the customers that it takes a CCO to bridge the gap between what marketing hopes customers want and the customer reality.”

And then it hits me.  That’s really what is happening from a macro organizational perspective; it answers the question of “why” people are Deconstructing Marketing.  Current CMO’s can’t do the job I used to know as “Marketing”.

As someone who came from the database marketing side, all my experience has been in industries rich with customer data, and in these industries the CMO is the CCO, performing all those functions, because that is simply the nature of the business: it is all about the customer and always has been.  I think what we are seeing is that as more companies get access to their customer data and want to act on it, the skill sets of the CMO’s in those companies are lacking relative to the financial opportunity presented by having the data.  This conflict results in functions like “CRM” and “CCO” being stripped out of what I know as Marketing and created as new functions to address the new opportunity that “outward focused” Marketers don’t have the skills to address.  Unless, of course, the CMO steps up to the challenge of a data-driven organization and grabs hold of it; otherwise, the CEO simply fills the gap with another position.

And that squares with the idea that database marketing folks would make great Chief Customer Officers – they have both the Marketing skills and the Customer-centric empathy, plus a knowledge of process optimization, all in one package.

Another issue, of course, is one of scale.  HSN was not a huge company at $2 billion or so in sales, and there I managed to handle all the “Customer Centric” functionality as well as the Marketing.  But at the size of a Sun Micro or a Cisco, I suppose a single function like Marketing simply cannot pay enough attention to everything that is going on, so you have to break it up…or do you?  I suppose that depends on the kind of talent you have access to.

Either way, at some level, as companies become more data-driven and therefore more customer-centric, the traditionally trained “outbound CMO’s” are going to have to get with the customer-side program or they will lose a lot of their power.  They will have to, because the financial leverage in customer marketing / analytics / accountability is so huge it’s bound to dwarf anything an “outbound CMO” can come up with.

Plus, the pressure to improve process optimization / accountability is only going to get more powerful as our friends over in IT keep rolling out their favorite XXM (Xxxxx Xxxxx Management) flavor of the month.

This all raises a larger question for me: If the above is true, then is there a market for training “outwardly-focused” CMO’s in the art of customer-centricity?  Or are they simply going to “let go” and cede control to the CCO’s because Customer Marketing is just too hard?

A pithy question we can perhaps discuss at the Don, Ron?


***** 7 Deadly Sins of Performance Management

Been a while since I’ve read an article deserving 5 stars, but this one by Dr. Michael Hammer through the Sloan School at MIT is a “must read” for those interested in the Analytical Culture issue.  Titled (warning, the article is a 10-page PDF) The 7 Deadly Sins of Performance Management [and How to Avoid Them], it is chock full of classic mis-measurement examples and the cultural reasons they happen, along with commentary from four managers who have been successful building the analytical culture at their companies.

The seven sins (links are to examples on this blog) are: Vanity, Provincialism, Narcissism, Laziness, Pettiness, Inanity, and Frivolity.  The author provides a four-step solution to purging the corporation of these sins, which sounds very much like the solution we’ve managed to create surrounding web analytics, and also addresses the Fear of Analytics question.

So here we have the analytical culture problem nicely outlined by a person with substantial credibility (as opposed to outlined by a raving blogger).  We still have the same problem though – actually doing something about it.  The four-step solution provided sounds like the right “words”, but I’m still itchin’ for a bit more “How To” in the answer.

How, specifically, do I “create an organizational culture and value system that encourages the disciplined use of metrics for ongoing performance improvement rather than regard them as threats to be feared or opponents to be vanquished“?

I love it when you talk that way, Doc…

But seriously, of course you need leadership, you need to measure the right things in the right ways, and some reward for changing behavior and accomplishing goals would be nice, but I think it goes deeper than that.  For example, there are fundamental structures in place that conflict with the mission, particularly in Marketing.  Witness the inherent conflict between Periodic and Customer Accounting.

If the entire company is marching to a quarter by quarter drum, and many people are rewarded based on the results of that march, how do you get these people to focus on the end customer-oriented metrics that really matter, and are probably not best measured in the periodic quarterly format? 

Isn’t this conflict with the quarterly financial reporting culture the same reason many companies go private?

Check out the article here.


“About the Blog” as a Post

I had a request to publish my “About the Blog” page as a post so people could comment on it.  Here ya go Jacques.

From the Drilling Down newsletter, 12/2004:

What is the number one characteristic shared by companies who are successful in turning customer data into profits?  The company fosters and supports an analytical culture.

Web analytics and Pay-per-Click Marketing in particular have served to teach many people the basics of applying the scientific method to customer data and marketing – creating actionable reporting, tracking source to outcome, KPI’s, iterative testing, etc.  The web has allowed companies to dip a toe into the acting-on-marketing-data waters at relatively low cost and risk when compared with offline projects.  And many have seen incredible ROI.

I think web analytics could be poised in the future to serve a greater role – teaching people / companies the optimal culture for success using analytics, also at relatively low cost and risk.  It’s going to be much harder to drive this concept, but more rewarding if we as users can make it happen, because today’s web analysts (and maybe analytical apps) could potentially be among tomorrow’s leaders in a data-based, analytics-driven business world.

For example, do you think analyzing / understanding new interactive data streams where the interface is not a browser will be any different, in terms of the culture required to turn interactive customer data into profitable business actions?  I don’t.

Look, a “request” is a request, whether a click, IP phone call connect, cable TV remote button push, verbal command, card swipe, RFID scan, etc.  You’re still asking a computer to do something.  The request has a source, is part of a sequence (path), and has an outcome. 

Analysis of these requests will face challenges and provide potential benefits similar to those provided right now in web site analytics.  This is the beginning of analyzing the interaction of computers, people, and process.  

Without a doubt, no matter what form these requests take, there will be a “log” of some kind to be analyzed.  Usability?  Conversion?  ROI?  These issues are not going to go away, and companies need to develop a culture that properly embraces analyzing and addressing them.  Companies not developing this culture will find themselves continuing to bump along the “drowning in data” road and will never optimize their interactive customer marketing.

As I see it, here’s the “culture” issue in a nutshell: as a company, you have to want to dig into data and really understand your business.  This pre-supposes that you (as a company) believe that understanding the guts of your business through analytics will drive actions that increase profits.  If the company doesn’t generally support this idea, there is no incentive for anyone to pursue it and the company just happily bumps down the road.

Of course most people don’t really relate to the “company”, but to their own division or functional silo.  So you might have manufacturing / engineering groups who live and die through analytics, while marketing is not held to the same standards and thought processes.  This is where the idea of Six Sigma Marketing comes in; it’s a “bridge” of sorts that tries to say (perhaps to the CEO and CFO), “Hey folks, if the engineers can engage in continuous improvement through ongoing analytics, so can the Customer Service silo and the Marketing silo and perhaps others.”

At a higher conceptual level, analytical culture takes root when management makes it known they are not afraid of failure, and want employees not to be afraid of it either.  

Another way to say this is experimentation and testing are encouraged throughout the company.  Failure is a regular occurrence, and is even celebrated because through failure, learning takes place.  Show me a company with no failures or that hides failure and I’ll show you a company that is asleep at the switch, afraid of its shadow, a company soon to be irrelevant to the market it serves.

Hand in hand with accepting failure must be continuous improvement.  Even though failure is embraced as a learning tool, the lesson of the failure both prevents it from happening again and results in new ideas with a higher potential for success.  These twin ideas of embracing failure / continuous improvement are at the heart of every business successful in using analytics to improve profitability.

“Evidence” of a company with the right bones to grow an analytical culture is this: you see the various levels of employees working in cross-functional teams with a common problem-solving mission.  Instead of people in a silo groaning about members from other silos being present at a problem-solving meeting, people are instead asking, “Where is finance, where is customer service?”

The most common place “analytics” live in a company is in Finance with the “Financial Analysts”, who are mostly tasked with analysis related to financial controls and producing financial reports.  If marketing or customer service were willing to expose themselves to the rigor of these analysts, they would undoubtedly be able to improve their business areas.  But that exposure takes substantial guts and confidence in your abilities, and a “culture” that supports a scientific process.

And you can’t engage in this process without analytics; success and failure need to be defined and measured.  The easiest way to encourage this culture to take root is to team a department head with a Financial Analyst familiar with the area.  

Often, you find this finance person already has insightful questions that could lead to improvement, but “never asked” because “it’s not my job”.  And often, to make changes in a business today, you need IT support of some kind.  That’s the basic cross-functional unit – Finance rep, IT rep, and a department head.  

I would also argue that if Marketing has a seat at the table in the strategic, “Voice of the Customer” sense (as opposed to being relegated to Advertising, PR, and Creative), then marketing is part of the core unit.  Then you add other disciplines as needed based on the particular problem you are trying to solve.

If the culture is flexible enough, this can turn into “Business SWAT” where the best and brightest cross-functional teams roam through the company as “consultants”, tackling the hardest business problems, which (surprise) are usually cross-functional in nature.  And “blame” is never on the agenda, it’s about “how can we help you make it better?”  You need a culture that is clear about this idea in order for people to expose themselves to the analytics-based scientific process.  Success and failure are defined by the analytics.

If you think about it, web site management ruled by analytics is a microcosm of this Business SWAT set-up.  You have marketing, finance (ROI component), and technology all working together based on the data.  That’s why I think there is a higher mission for the web analytics area / people; they are building the prototype that can teach companies how to go about measuring, managing, and maximizing a data-driven business.

At the highest level of this culture, managers “demand” these SWAT teams because the success rate and business impact is so high.  As the various departments or functional silos produce wins and losses, capital (budget) flows to where the successes are and away from the failures.  When managers see this happening, they jump on board, because they want the budget flowing their way.  This creates a natural economic supply and demand scheme with a reward system for participation built into the process.

One caution: when the culture gets to this level, the analytics group must be sanitized from the reporting hierarchy.  It can’t report to finance, or marketing, or IT anymore.  It has to be completely independent, which usually means reporting directly to the CEO.  There has to be confidence in the integrity of the results of all testing based on standards.  All the little “pools” of analytical work throughout the company must be gathered into one.

What kind of companies do you see really engaging in this kind of culture right now? Those that for legacy reasons have always had access to their operational and customer data and have been using analytics for years.  For these legacy players, web analytics is a “duh” effort – they get it right out of the box, because it’s more of the same to them.  But many types of businesses have not had this access to data before and web analytics is the first taste they are getting of the power and leverage in the scientific method.  I think this “accountability” disease we’ve created in web analytics and search marketing will continue to spread and infect every business unit.

The longer-term question is, can we flip this model over, can the successful culture of cross-functional approach and continuous improvement used in web analytics be used to create a “duh” moment for other areas of the company?  Will “best practices” and success stories create an environment where people say to the (web?) analytics team, “Hey, can I get some of that over here?”  In other words, will the analytical culture develop?

Methinks there is more going on with web analytics than meets the eye; it’s potentially a platform for the creation of a new business culture, a culture based on the scientific method – Six Sigma Everything.  Sure, it’s awkward and maybe the web is not meaningful enough yet to many companies.  But as we thrash all this out, there is something greater being learned here.

Right now, many CRM projects can’t show ROI because nobody knows what to do with the data, how to turn it into action that improves the business.  Sounds very much like web analytics 5 years ago…and look what we talk about now.  KPI’s, turning data into action.  The analytical culture playing out.

What does this mean for the people currently involved in web analytics?  If I were a young web analytics jockey, I would be preparing for the spread of the analytical culture, and seriously thinking about learning some of the tools traditionally used in offline analytics – the query stuff like Crystal Reports, the higher end stuff like SAS, SPSS, and so on.  Search the web for “CHAID” and “CART” and see if you like what you read about these analytical models.  If this kind of stuff interests you, you are much closer to being a business analyst than you think.  And guess what?  Analysts who can both develop the business case and create the metrics and methods for analysis – like you have to do for a web site – are rare.
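
If you want a quick taste, here is a purely illustrative sketch of a CART-style tree using Python and scikit-learn (a library of my choosing, not something from this post) on a hypothetical customer table with made-up column names:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical customer table and columns, purely for illustration.
df = pd.read_csv("customers.csv")
X = df[["recency_days", "frequency", "monetary"]]   # behavioral predictors
y = df["repurchased"]                               # did they buy again? (0/1)

# A shallow CART-style tree keeps the splits readable.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200)
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```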

It takes a particular mind set, and that mind set is not common.  Most of the people with the right mind set go into the hard sciences, but demand on the soft side of business (marketing, customer service, etc.) is just beginning in our data-driven world.  

On the hard side, (with all apologies to the real engineers out there for the exaggeration) the drug works or it doesn’t, the part fits or it doesn’t.  The development of softer-side marketing and service analytical techniques is always going to be populated with a lot more gray area than there is on the hard side, and it takes a special skill to conceive of and develop the metrics required.  But we should be trying to bring the same analytical rigor to the soft side of business that the hard side has always had to deal with.  The trick is to apply that rigor without damaging the mission.

For example, the whole “fire your unprofitable customers” thing from some factions in CRM.  That’s ridiculous.  What you want to do is identify them and then act appropriately, whether that means controlling their behavior, not spending additional resources on them, or not doing the things that create them in the first place.  That’s the gray showing.  You don’t just hit the “reject button” on a customer.

Customer data is customer data.  It’s all going to end up in one place eventually as the analytical culture spreads, and those with the skills to apply the scientific method across every customer data set are going to be rare and in very high demand.  Don’t spend all your spare time watching the Forensic Files on Court TV.  You’re a business analyst.  Get out there and learn the rest of your craft!

And, please consider doing whatever you can, whenever you can, to spread the analytical culture within your company.  If most of your analytics work involves “online marketing”, reach out to “offline service” or another silo and ask if you can help them with anything.  What’s the call they would like to take less of?  Can you use the web site to make that happen, and prove that it worked?  Can you use the web site to generate offline ROI?

Web analysts, you are the cross-functional prototype.  Please teach others how to optimize the entire business.


Measuring Customer Experience ROMI #2: Lab Store – New Customer Kits

Here’s another Customer Experience kind of test that proves you can generate incremental profit by improving the Experience.  You just have to make sure customers want the experience “improved”.  This example is from the Lab Store and the ROMI on this little program is a real eye popper.

Back in the old days (meaning the 80’s), what I guess is now called WOW was referred to as “surprise and delight”.  Essentially, this 2-step idea works like this: when you surprise the customer, you really get their attention.  If you can get their attention by surprise and delight them at the same time (instead of pissing them off with your surprise), then you are going to have a more loyal customer.  The trick, of course, is to somehow make more money doing it…

New Customer Kits are a very simple way to do this, and in my remote retailing experience, it works every time.  First impressions, in case you didn’t know, are really important – and especially so in remote retailing, where there is no way for the customer to get any tangible “feeling” for the company.  Sure, you have copy on the web site that paints a picture.  But how many times have people read all this wonderful copy only to be screwed when delivered the tangible experience?

The challenge is to design a kit that is relatively inexpensive yet packs an emotional delight.  Lots of people toss extra stuff for the customer in the first order, but that stuff is usually company-centric, for example, “Here is a magnet with our URL on it” or “Here is a catalog of our other products”.  That’s fine, but it’s neither surprising nor delightful.

Here is what makes up a good New Customer Kit, based on years of testing:

1.  A letter or other message from the company that Welcomes the customer, talks about the people and philosophy behind the company, and reinforces any guarantees or promises that are part of the Brand.  This piece must be written carefully, and from a customer-centric point of view.  No “we we” stuff.

2.  A free gift.  This gift must be related to the merchandise or general category being purchased, and must not be discards, seconds, or defective merch.  Giving a new customer something that is dented or discolored is not a gift, it’s an insult.  Giving a new customer something that is promotional (magnet) may be a gift, but it is expected and not particularly delightful.  Giving a new customer a “gift” because they made a first purchase (Buy today and we’ll include a…) might be delightful but sure is not surprising.  Ignore the above cautions at your own peril.

3.  Free Samples, if relevant to the business.  Anything that is consumable and generates repeat purchase is ideal.

Anyway, I suppose you’re expecting some kind of numbers to go along with all the fuzzy-wuzzy “Oh, if we just make their experience better, they will be more loyal” drivel you hear all the time online.  This is the Marketing Productivity Blog, after all, right?  OK, here are the stats on this technique from the Lab Store.  As usual, this promotion was tested versus control (new customers who did not receive a New Customer Kit were the control), and we compared sales activity of both test and control over the next 90 days.  Why 90 days?  Well, if it makes money at 90 days, it sure makes money at 120…

Average cost of New Member Kit (there are several versions) – $.74

Increase in 90-day second purchase rate, test versus control – over 30%

90-day ROMI – 4,891%  ($36.68 in net profit for every $.75 spent)

Surprised and Delighted Customers – Priceless
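
For anyone who wants to check the arithmetic, here is the ROMI calculation implied by those numbers (net profit divided by spend, using the rounded $.75 kit cost quoted above):

```python
# The post's own numbers, with ROMI taken as net profit divided by spend.
kit_cost = 0.75        # rounded cost of the New Customer Kit
net_profit = 36.68     # incremental net profit per kit over 90 days

romi = net_profit / kit_cost * 100
print(f"ROMI: {romi:,.0f}%")   # ~4,891%
```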

Now that the bottom line has been presented, the black box folks simply interested in the “what happens” can skip the next part.  If you want to know why it works and maybe learn something useful you can port elsewhere, read on.

New Customer Kits are a great way to shape Theatre of the Mind. 

What you have with a remote retailing customer is a “theatre of the mind” scenario, much like you have in radio advertising.  Customers can’t see or touch you, so “Cues” become extremely important; if you don’t populate the theatre of the mind for the customer, the customer will go ahead and populate it themselves.  If you want some control over the image of your company people create in their head, you need to be proactive.  Theatre of the mind, folks.  Very powerful stuff. 

Our New Customer Kit generates absolutely tons of “Thank You” e-mails from new customers who want to tell us all about how great the experience was purchasing from the Lab Store.  Now, I think you’d agree that purchasing from a web site isn’t a particularly thrilling experience in any way, but if you really listen (and understand a bit of Consumer Psychology) these customers are not really talking about the web site, or even our company.  

What they really are saying is they are very happy with themselves for making a first purchase from us; our actions have confirmed they made a good decision.  Remember, this is remote retailing.  There is risk to the customer, especially on that first purchase; they have no idea if their expectations based on the web site copy are going to match the reality of delivery.  They are concerned about what might happen – will they be proven smart or dumb for taking this risk?

When we deliver the products they ordered in a timely way we meet expectations.  When we deliver these products carefully packed in a pristine new box packed with fresh blank newspaper, we probably exceed expectations by a bit.  But when these new customers get to the Welcome letter, the free gift, and the samples, we blow out their expectations. 

The picture these new customers had in their mind of our company based on the web site experience is then permanently altered; we’re doing brain surgery for 74 cents a head.

Now, I have a question for you – is this program Marketing or Customer Experience Management?


Measuring Customer Experience ROMI #1: Nice to New Customers

I’m going to preface this piece by saying I don’t really think “Customer Experience Management” is anything different from smart, integrated Marketing and Customer Service.  If there isn’t an actionable framework for it, like Ron, I’m not sure CEM has a future, other than to create something for people to talk about, and maybe sell some software…

Whichever direction you believe in, here is an interesting case that makes several points about this area of discussion.

The Nice to New Customers test was conducted at Home Shopping Network in 1994.  The idea came from the annual survey of all customers that indicated that the “average” customer felt the “new customer experience” was “as expected”.  Given the high percentage of 1x buyers we were experiencing (as all interactive remote retailers do), I thought, “Hmm, maybe if we deliver a customized first purchase experience and process, these new customers will be more likely to make a second purchase”.  Sounds logical, right?  This was a Business SWAT case since it involved Marketing, Customer Service, IT, and Telecommunications, all working together to set it up, determine the metrics, make sure Management understood the impact of the test on existing silo Scorecards, etc.  In other words, I sold my soul to get this test to happen.

We set up a pretty elaborate test where a random sample of new customers (about 100,000, a solid test group) were shunted to our “best agents” and given a new “Welcome Treatment”.  Instead of the general “get them off the phone as fast as you can” attitude prevalent in the network, these reps had permission to spend as much time with the customer as the customer wanted and generally customize the experience.  There was a lot of role play and monitoring connected to this effort, and the service managers on the project were convinced these new customers were in fact treated to a much better initial experience than the average new customer.  In fact, the customers seemed thrilled.  So far, so good. 

Problem was, this test group of new customers exposed to a better “Customer Experience” ended up generating no incremental sales versus control.  Well, there you go.  We lost a ton of money on this test, a stellar -118% ROMI, because we literally had to pay back customer service out of the marketing budget for the lost productivity in the network due to the test.  Hey, that was the deal I cut to get this test done.  You win some, you lose some.
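
For the curious, here is a hedged sketch of how an ROMI below -100% happens.  The dollar figures are invented; only the structure follows the story (control outsold test, and the lost phone productivity was billed to Marketing as the cost of the program).

```python
# Invented numbers, illustrating the mechanics only.
customers = 100_000
test_sales_per_cust = 52.00      # test group, sales over the measurement window
ctrl_sales_per_cust = 53.50      # control group actually outsold the test group
margin_rate = 0.40               # assumed gross margin
chargeback = 50_000              # lost call-center productivity billed to Marketing

incremental_profit = (test_sales_per_cust - ctrl_sales_per_cust) * margin_rate * customers
romi = incremental_profit / chargeback * 100
print(f"ROMI: {romi:.0f}%")      # negative profit over positive spend lands below -100%
```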

But it gets worse.  When we started dicing the post-analysis of the test down to behavioral groups based on the details of the first transaction, we found there was actually some incremental sales lift among new customers with “light buyer” initial profiles.  This is good.  Problem was (and you know what is coming, don’t you?), new customers with heavy buyer profiles were negatively impacted, and because the Potential Value of this group was so huge, the losses versus control in this relatively small number of folks far outweighed the gains in light buyers, causing the net effect of the promotion to be negative.

Isn’t that a fine kettle of fish?  Being Nice to potential Best Customers killed the test.

When we surveyed these customers in the test after we knew their behavioral profiles (to make sure we knew the behavioral context of their answers) they basically told us this: they were expecting a very operationally efficient transaction and we provided them a customer-centric one.  Cognitively, they were making an impulse purchase and they wanted an impulse transaction, not an empathetic one.  This disconnect caused post-purchase dissonance and reduced intent to purchase.  Using today’s language, we were basically “spamming” them; we were overstepping any Permission we had to engage them at a more personal level.  And this negative effect was most pronounced among new customers with high Potential Value.  In hindsight, knowing what we knew about the psychological profile of Best Buyers, this made all the sense in the world and was an interesting confirmation of the test results.

The CFO, well, he didn’t think this result was so interesting…but did applaud the idea that we would step up to the plate and actually pay back customer service for the losses related to decreased productivity in the network out of the Marketing budget.  It was the first time anybody had done this kind of intra-silo payment and really paved the way for tighter integration between Marketing and Service.

You might consider this test result when evaluating your e-mail contact strategy, at least for new customers.  Are you sure you are generating maximum revenue?  What if the half percent or so that unsubscribe each month are future Best Customers with high Potential Value?  Do you use control groups, do you know the answer to this question?

Interactive behavior provides a very special backdrop for Marketing and Service; be careful what you ask for. 

I’m not saying if you did this test you would get the same results.  What I am saying is you cannot assume all the stuff you read about “Customer Experience” online is going to work with your customers.  You simply have to test these ideas with real customers and measure the results.  And if you are dealing with interactive customers, keep in mind that “Customer in Control” is something you might not want to mess with.  In other words, sometimes Control is the Experience, particularly if the general Marketing / Brand backdrop is Operational Efficiency.

It’s one thing to start a company saying you are going to deliver some kind of superior Customer Experience and embed this idea in your service delivery model.  We all know these kinds of companies.  It’s a completely different idea to think that you are going to improve the current experience at your company, and this effort is going to have positive effects for both the customer and the company because it sounds logical to you.

Lessons learned:

1.  The bottom line lesson here really was about a poorly constructed test based on a faulty customer survey methodology.  Without the customer opinion first tied to an actual behavior, we had no option other than to use the opinion of the “average customer” as a base to act against.  Because of this, the only action we could take was against “all new customers”, and we ended up shooting ourselves in the foot.  Based on the post-test dicing, we later retested and found (surprise, surprise) that a program like this could be extremely profitable when we treated targeted new customers differently based on their Potential Value.

If we had this behavioral information (the initial Light Buyer / Best Buyer profiles) tied to the survey responses from the beginning, we would have understood these segments were different and designed the test accordingly.  Make sure if you are going to take some kind of action on a survey, you first understand a behavior and then survey the people with that behavior.  To do it the other way around, trying to “back into the behavior”, wastes a lot of time and money just in the data gathering and processing itself, never mind in the “re-testing” we had to go through once we knew what was really going on.

2.  It doesn’t always pay out to be Nice to New Customers.  Sometimes they simply want what they expect.


*** Community Activism

This article from CRM Magazine both makes fun of the current “Customer Experience” bandwagon and provides a solid suggestion (I think) about how to properly use an online community.  Though it’s not spelled out in detail, I’d assume creating a “democratic online forum where between 300 and 500 customers get to know each other” implies you first understand your customers from a segment or profile perspective, and then intentionally select a representative group or an intentionally skewed group to join the community.  Reason?  When you know who you have making suggestions and comments, you can put these comments in context, try to take action, and measure the results, which I think is something Ron is looking for in the Customer Experience Management debate.   This as opposed to simply hanging a “community” off the side of your web site and taking comments and suggestions from all comers. 

If you can’t put the comments in context (how long have they been a customer, what products do they use, what customer service experiences have they had, etc.) all the comments are barely worth a scan.  I mean really, what if you took an action based on the comments of unprofitable customers that destroyed the value of your business to your most profitable customers?  That would not be a very prudent use of “Community”, would it?  Don’t laugh, I have seen it happen – both online and offline.

Check out the article here.


*** A Formula for Alignment

I’ve written pretty frequently about the Marketing / IT interface.  This article from CIO Magazine provides a firm roadmap on how to integrate and manage a development team consisting of IT and non-IT people.  The really interesting thing going on with this IT / medical team is that some of the clinical folks ended up learning so much about IT processes and techniques that they crossed the border and became IT folks!  That is some career move, and testimony to the successful management of the development team.  Check out the article here.
