Category Archives: Analytical Culture

“About the Blog” as a Post

I had a request to publish my “About the Blog” page as a post so people could comment on it.  Here ya go, Jacques.

From the Drilling Down newsletter, 12/2004:

What is the number one characteristic shared by companies who are successful in turning customer data into profits?  The company fosters and supports an analytical culture.

Web analytics and Pay-per-Click Marketing in particular have served to teach many people the basics of applying the scientific method to customer data and marketing – creating actionable reporting, tracking source to outcome, KPI’s, iterative testing, etc.  The web has allowed companies to dip a toe into the acting-on-marketing-data waters at relatively low cost and risk when compared with offline projects.  And many have seen incredible ROI.

I think web analytics could be poised in the future to serve a greater role – teaching people / companies the optimal culture for success using analytics, also at relatively low cost and risk.  It’s going to be much harder to drive this concept, but more rewarding if, as users, we can make this happen, because today’s web analysts (and maybe analytical apps) could potentially be among tomorrow’s leaders in a data-based, analytics-driven business world.

For example, do you think analyzing / understanding new interactive data streams where the interface is not a browser will be any different, in terms of the culture required to turn interactive customer data into profitable business actions?  I don’t.

Look, a “request” is a request, whether a click, IP phone call connect, cable TV remote button push, verbal command, card swipe, RFID scan, etc.  You’re still asking a computer to do something.  The request has a source, is part of a sequence (path), and has an outcome. 

Analysis of these requests will face challenges and provide potential benefits similar to those provided right now in web site analytics.  This is the beginning of analyzing the interaction of computers, people, and process.  

Without a doubt, no matter what form these requests take, there will be a “log” of some kind to be analyzed.  Usability?  Conversion?  ROI?  These issues are not going to go away, and companies need to develop a culture that properly embraces analyzing and addressing them.  Companies not developing this culture will find themselves continuing to bump along the “drowning in data” road and will never optimize their interactive customer marketing.

As I see it, here’s the “culture” issue in a nutshell: as a company, you have to want to dig into data and really understand your business.  This pre-supposes that you (as a company) believe that understanding the guts of your business through analytics will drive actions that increase profits.  If the company doesn’t generally support this idea, there is no incentive for anyone to pursue it and the company just happily bumps down the road.

Of course most people don’t really relate to the “company”, but to their own division or functional silo.  So you might have manufacturing / engineering groups who live and die through analytics while marketing is not held to the same standards and thought processes.  This is where the idea of Six Sigma Marketing comes in; it’s a “bridge” of sorts that tries to say (perhaps to the CEO and CFO), “Hey folks, if the engineers can engage in continuous improvement through ongoing analytics, so can the Customer Service silo and the Marketing silo and perhaps others.”

At a higher conceptual level, analytical culture takes root when management makes it known they are not afraid of failure, and want employees not to be afraid of it either.  

Another way to say this is experimentation and testing are encouraged throughout the company.  Failure is a regular occurrence, and is even celebrated because through failure, learning takes place.  Show me a company with no failures or that hides failure and I’ll show you a company that is asleep at the switch, afraid of its shadow, a company soon to be irrelevant to the market it serves.

Hand in hand with accepting failure must be continuous improvement.  Failure is embraced as a learning tool, but the lesson of each failure both prevents it from happening again and results in new ideas with a higher potential for success.  These twin ideas of embracing failure / continuous improvement are at the heart of every business successful in using analytics to improve profitability.

“Evidence” of a company with the right bones to grow an analytical culture is this: you see the various levels of employees working in cross-functional teams with a common problem-solving mission.  Instead of people in a silo groaning about members from other silos being present at a problem-solving meeting, people are instead asking, “Where is finance, where is customer service?”

The most common place “analytics” live in a company is in Finance with the “Financial Analysts”, who are mostly tasked with analysis related to financial controls and producing financial reports.  If marketing or customer service were willing to expose themselves to the rigor of these analysts, they would undoubtedly be able to improve their business areas.  But that exposure takes substantial guts and confidence in your abilities, and a “culture” that supports a scientific process.

And you can’t engage in this process without analytics; success and failure need to be defined and measured.  The easiest way to encourage this culture to take root is to team a department head with a Financial Analyst familiar with the area.  

Often, you find this finance person already has insightful questions that could lead to improvement, but “never asked” because “it’s not my job”.  And often, to make changes in a business today, you need IT support of some kind.  That’s the basic cross-functional unit – Finance rep, IT rep, and a department head.  

I would also argue that if Marketing has a seat at the table in the strategic, “Voice of the Customer” sense (as opposed to being relegated to Advertising, PR, and Creative), then marketing is part of the core unit.  Then you add other disciplines as needed based on the particular problem you are trying to solve.

If the culture is flexible enough, this can turn into “Business SWAT”, where the best and brightest cross-functional teams roam through the company as “consultants”, tackling the hardest business problems, which (surprise) are usually cross-functional in nature.  And “blame” is never on the agenda; it’s about “how can we help you make it better?”  You need a culture that is clear about this idea in order for people to expose themselves to the analytics-based scientific process.  Success and failure are defined by the analytics.

If you think about it, web site management ruled by analytics is a microcosm of this Business SWAT set-up.  You have marketing, finance (ROI component), and technology all working together based on the data.  That’s why I think there is a higher mission for the web analytics area / people; they are building the prototype that can teach companies how to go about measuring, managing, and maximizing a data-driven business.

At the highest level of this culture, managers “demand” these SWAT teams because the success rate and business impact is so high.  As the various departments or functional silos produce wins and losses, capital (budget) flows to where the successes are and away from the failures.  When managers see this happening, they jump on board, because they want the budget flowing their way.  This creates a natural economic supply and demand scheme with a reward system for participation built into the process.

One caution: when the culture gets to this level, the analytics group must be insulated from the normal reporting hierarchy.  It can’t report to finance, or marketing, or IT anymore.  It has to be completely independent, which usually means reporting directly to the CEO.  There has to be confidence in the integrity of the results of all testing based on standards.  All the little “pools” of analytical work throughout the company must be gathered into one.

What kind of companies do you see really engaging in this kind of culture right now? Those that for legacy reasons have always had access to their operational and customer data and have been using analytics for years.  For these legacy players, web analytics is a “duh” effort – they get it right out of the box, because it’s more of the same to them.  But many types of businesses have not had this access to data before and web analytics is the first taste they are getting of the power and leverage in the scientific method.  I think this “accountability” disease we’ve created in web analytics and search marketing will continue to spread and infect every business unit.

The longer-term question is, can we flip this model over?  Can the successful culture of cross-functional approach and continuous improvement used in web analytics be used to create a “duh” moment for other areas of the company?  Will “best practices” and success stories create an environment where people say to the (web?) analytics team, “Hey, can I get some of that over here?”  In other words, will the analytical culture develop?

Methinks there is more going on with web analytics than meets the eye; it’s potentially a platform for the creation of a new business culture, a culture based on the scientific method – Six Sigma Everything.  Sure, it’s awkward and maybe the web is not meaningful enough yet to many companies.  But as we thrash all this out, there is something greater being learned here.

Right now, many CRM projects can’t show ROI because nobody knows what to do with the data, how to turn it into action that improves the business.  Sounds very much like web analytics 5 years ago…and look what we talk about now.  KPI’s, turning data into action.  The analytical culture playing out.

What does this mean for the people currently involved in web analytics?  If I were a young web analytics jockey, I would be preparing for the spread of the analytical culture, and seriously thinking about learning some of the tools traditionally used in offline analytics – the query stuff like Crystal Reports, the higher end stuff like SAS, SPSS, and so on.  Search the web for “CHAID” and “CART” and see if you like what you read about these analytical models.  If this kind of stuff interests you, you are much closer to being a business analyst than you think.  And guess what?  Analysts who can both develop the business case and create the metrics and methods for analysis – like you have to do for a web site – are rare.
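If you want a quick feel for what a CART-style model looks like before committing to SAS or SPSS, here is a minimal sketch in Python using scikit-learn (whose decision trees implement CART).  The customer records, column names, and cutoffs below are entirely made up for illustration; the point is the shape of the analysis: behavioral inputs in, readable segmentation rules out.

# A minimal sketch, assuming pandas and scikit-learn are available; the data
# below is hypothetical, purely for illustration.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical customer records: recency (days since last purchase), frequency,
# monetary value, and whether the customer ever made a repeat purchase.
customers = pd.DataFrame({
    "recency":   [5, 40, 12, 90, 3, 60, 25, 7],
    "frequency": [4, 1, 3, 1, 6, 2, 1, 5],
    "monetary":  [220, 35, 150, 20, 400, 80, 45, 310],
    "repeat":    [1, 0, 1, 0, 1, 0, 0, 1],
})

X = customers[["recency", "frequency", "monetary"]]
y = customers["repeat"]

# Keep the tree shallow so the splits read like a CHAID / CART segment report.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

The printed splits are a toy-scale version of the segment trees the offline tools produce: rules you can hand to a marketer, not just a score.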

It takes a particular mind set, and that mind set is not common.  Most of the people with the right mind set go into the hard sciences, but demand on the soft side of business (marketing, customer service, etc.) is just beginning in our data-driven world.  

On the hard side (with all apologies to the real engineers out there for the exaggeration), the drug works or it doesn’t, the part fits or it doesn’t.  The development of softer-side marketing and service analytical techniques is always going to be populated with a lot more gray area than there is on the hard side, and it takes a special skill to conceive of and develop the metrics required.  But we should be trying to bring the same analytical rigor to the soft side of business that the hard side has always had to deal with.  The trick is to apply that rigor without damaging the mission.

For example, the whole “fire your unprofitable customers” thing from some factions in CRM.  That’s ridiculous.  What you want to do is identify them and then act appropriately, whether that means controlling their behavior, not spending additional resources on them, or not doing the things that create them in the first place.  That’s the gray showing.  You don’t just hit the “reject button” on a customer.

Customer data is customer data.  It’s all going to end up in one place eventually as the analytical culture spreads, and those with the skills to apply the scientific method across every customer data set are going to be rare and in very high demand.  Don’t spend all your spare time watching the Forensic Files on Court TV.  You’re a business analyst.  Get out there and learn the rest of your craft!

And, please consider doing whatever you can, whenever you can, to spread the analytical culture within your company.  If most of your analytics work involves “online marketing”, reach out to “offline service” or another silo and ask if you can help them with anything.  What’s the call they would like to take less of?  Can you use the web site to make that happen – and prove that it worked?  Can you use the web site to generate offline ROI?

Web analysts, you are the cross-functional prototype.  Please teach others how to optimize the entire business.

Measuring Customer Experience ROMI #1: Nice to New Customers

I’m going to preface this piece by saying I don’t really think “Customer Experience Management” is anything different from smart, integrated Marketing and Customer Service.  Like Ron, I’m not sure CEM has a future if there isn’t an actionable framework for it, other than to create something for people to talk about, and maybe sell some software…

Whichever direction you believe in, here is an interesting case that makes several points about this area of discussion.

The Nice to New Customers test was conducted at Home Shopping Network in 1994.  The idea came from the annual survey of all customers, which indicated that the “average” customer felt the “new customer experience” was “as expected”.  Given the high percentage of 1x buyers we were experiencing (as all interactive remote retailers do), I thought, “Hmm, maybe if we deliver a customized first purchase experience and process, these new customers will be more likely to make a second purchase”.  Sounds logical, right?  This was a Business SWAT case since it involved Marketing, Customer Service, IT, and Telecommunications, all working together to set it up, determine the metrics, make sure Management understood the impact of the test on existing silo Scorecards, etc.  In other words, I sold my soul to get this test to happen.

We set up a pretty elaborate test where a random sample of new customers (about 100,000, a solid test group) were shunted to our “best agents” and given a new “Welcome Treatment”.  Instead of the general “get them off the phone as fast as you can” attitude prevalent in the network, these reps had permission to spend as much time with the customer as the customer wanted and generally customize the experience.  There was a lot of role play and monitoring connected to this effort, and the service managers on the project were convinced these new customers were in fact treated to a much better initial experience than the average new customer.  In fact, the customers seemed thrilled.  So far, so good. 

Problem was, this test group of new customers exposed to a better “Customer Experience” ended up generating no incremental sales versus control.  Well, there you go.  We lost a ton of money on this test, a stellar -118% ROMI, because we literally had to pay back customer service out of the marketing budget for the lost productivity in the network due to the test.  Hey, that was the deal I cut to get this test done.  You win some, you lose some.
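To make the -118% concrete: the post doesn’t give the underlying dollar figures, so the numbers below are invented, but they show how a common ROMI calculation drops below -100% when a test produces negative incremental margin and Marketing also reimburses another silo for its costs.

# Assumed, made-up inputs purely to illustrate the arithmetic; these are not
# the actual Home Shopping Network figures.
incremental_margin = -9_000     # test vs. control gross margin (negative overall)
payback_to_service = 50_000     # lost-productivity reimbursement paid by Marketing
marketing_spend = payback_to_service

# One common ROMI definition: (incremental margin - marketing spend) / spend
romi = (incremental_margin - marketing_spend) / marketing_spend
print(f"ROMI: {romi:.0%}")      # prints "ROMI: -118%" with these assumed inputs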

But it gets worse.  When we started dicing the post-analysis of the test down to behavioral groups based on the details of the first transaction, we found there was actually some incremental sales lift among new customers with “light buyer” initial profiles.  This is good.  Problem was (and you know what is coming, don’t you?), new customers with heavy buyer profiles were negatively impacted, and because the Potential Value of this group was so huge, the losses versus control in this relatively small number of folks far outweighed the gains in light buyers, causing the net effect of the promotion to be negative.
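Here is a sketch of that kind of post-test dicing, in case it helps: compare test vs. control sales per new customer within each initial-profile segment, then weight by segment size to get the net effect.  The segment names and figures below are invented; they just reproduce the pattern described above, a small gain in light buyers swamped by a loss in heavy buyers.

import pandas as pd

# Hypothetical test results by initial-profile segment (not the real data).
results = pd.DataFrame({
    "segment":   ["light buyer", "light buyer", "heavy buyer", "heavy buyer"],
    "group":     ["test", "control", "test", "control"],
    "customers": [90_000, 90_000, 10_000, 10_000],
    "sales":     [2_300_000, 2_200_000, 2_900_000, 3_300_000],
})

results["sales_per_customer"] = results["sales"] / results["customers"]
per_seg = results.pivot(index="segment", columns="group", values="sales_per_customer")
per_seg["lift_per_customer"] = per_seg["test"] - per_seg["control"]

# Weight per-customer lift by the number of test customers in each segment.
test_sizes = results[results["group"] == "test"].set_index("segment")["customers"]
per_seg["net_effect"] = per_seg["lift_per_customer"] * test_sizes

print(per_seg[["lift_per_customer", "net_effect"]])
print("Net incremental sales:", per_seg["net_effect"].sum())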

Isn’t that a fine kettle of fish?  Being Nice to potential Best Customers killed the test.

When we surveyed these customers in the test after we knew their behavioral profiles (to make sure we knew the behavioral context of their answers) they basically told us this: they were expecting a very operationally efficient transaction and we provided them a customer-centric one.  Cognitively, they were making an impulse purchase and they wanted an impulse transaction, not an empathetic one.  This disconnect caused post-purchase dissonance and reduced intent to purchase.  Using today’s language, we were basically “spamming” them; we were overstepping any Permission we had to engage them at a more personal level.  And this negative effect was most pronounced among new customers with high Potential Value.  In hindsight, knowing what we knew about the psychological profile of Best Buyers, this made all the sense in the world and was an interesting confirmation of the test results.

The CFO, well, he didn’t think this result was so interesting…but did applaud the idea that we would step up to the plate and actually pay back customer service for the losses related to decreased productivity in the network out of the Marketing budget.  It was the first time anybody had done this kind of cross-silo payment and it really paved the way for tighter integration between Marketing and Service.

You might consider this test result when evaluating your e-mail contact strategy, at least for new customers.  Are you sure you are generating maximum revenue?  What if the half percent or so that unsubscribe each month are future Best Customers with high Potential Value?  Do you use control groups?  Do you know the answer to this question?

Interactive behavior provides a very special backdrop for Marketing and Service; be careful what you ask for. 

I’m not saying if you did this test you would get the same results.  What I am saying is you cannot assume all the stuff you read about “Customer Experience” online is going to work with your customers.  You simply have to test these ideas with real customers and measure the results.  And if you are dealing with interactive customers, keep in mind that “Customer in Control” is something you might not want to mess with.  In other words, sometimes Control is the Experience, particularly if the general Marketing / Brand backdrop is Operational Efficiency.

It’s one thing to start a company saying you are going to deliver some kind of superior Customer Experience and embed this idea in your service delivery model.  We all know these kinds of companies.  It’s a completely different idea to think that you are going to improve the current experience at your company, and this effort is going to have positive effects for both the customer and the company because it sounds logical to you.

Lessons learned:

1.  The bottom line lesson here really was about a poorly constructed test based on a faulty customer survey methodology.  Without the customer opinion first tied to an actual behavior, we had no option other than to use the opinion of the “average customer” as a base to act against.  Because of this, the only action we could take was against “all new customers”, and we ended up shooting ourselves in the foot.  Based on the post-test dicing, we later retested and found (surprise, surprise) a program like this could be extremely profitable when we treated targeted new customers differently based on their Potential Value.

If we had this behavioral information (the initial Light Buyer / Best Buyer profiles) tied to the survey responses from the beginning, we would have understood these segments were different and designed the test accordingly.  Make sure if you are going to take some kind of action on a survey, you first understand a behavior and then survey the people with that behavior.  To do it the other way around, trying to “back into the behavior”, wastes a lot of time and money just in the data gathering and processing itself, never mind in the “re-testing” we had to go through once we knew what was really going on.

2.  It doesn’t always pay to be Nice to New Customers.  Sometimes they simply want what they expect.

Lab Store: Automating Worst Practices

The news that Omniture has acquired Touch Clarity is shaking up the world of web analytics a bit.  Machine automation has always been a very sexy sell for software companies.  The problem is people think it’s a magic bullet and often end up using these tools to their disadvantage because they do not have the experience to really understand how to use the tools properly.  Then they get caught in the trap of Reporting versus Analysis.

Here is a real world example from the Lab Store.  I am constantly fighting the Google AdWords A/B split testing algorithm for rotating ads.  Google almost always picks the wrong ad to run more frequently, so I have to force it to run 50 / 50 in order to get accurate results.  How do I know Google is picking the wrong ad?  Because I have seen thousands of such tests, online and off, and I have a “feel” for these things based on my background in Database Marketing, Consumer Behavior and Psychology.  In each case where Google has picked one ad over another, and where I have then forced it to run the ads 50 / 50, it turns out I was right – Google picked the ad that generated the least profit per dollar of PPC spend as “best” and demoted the more profitable ad until it was not running at all.

Why does this happen?  Because Google isn’t smart enough to understand the complexity of the customer behavior in the Lab Store – and it can’t be, given the number of clients it has.  If you have done a lot of this kind of testing, you know that often the campaign with the highest response rate generates the lowest quality customers.  While these campaigns were running, I could see that the visitors generated by the campaigns Google picked as “best” were actually inferior to the visitors generated by the campaigns Google demoted, using a variety of metrics other than conversion (primarily Recency).  In other words, I was able to predict Google was doing the wrong thing by looking at the Customer LifeCycle. When I forced Google to run the ads 50 / 50 to give the demoted ads a chance, I was proven right – the campaigns Google demoted had a 90-day ROMI averaging 2.1 times higher than the campaigns Google promoted.
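A rough sketch of the kind of check described above, with invented numbers: instead of trusting the rotation algorithm’s short-term winner, score each ad on downstream value per PPC dollar (say, 90-day gross margin from the customers it produced) after forcing an even split.  The figures are hypothetical; they just show how the ad with the higher conversion rate can still be the less profitable one.

import pandas as pd

# Hypothetical results after forcing a 50 / 50 rotation (not the Lab Store data).
ads = pd.DataFrame({
    "ad":           ["Ad A (promoted)", "Ad B (demoted)"],
    "clicks":       [2_000, 2_000],
    "ppc_spend":    [900.0, 900.0],
    "orders":       [80, 60],
    "margin_90day": [1_400.0, 2_600.0],   # gross margin from those buyers over 90 days
})

ads["conversion_rate"] = ads["orders"] / ads["clicks"]
ads["romi_90day"] = (ads["margin_90day"] - ads["ppc_spend"]) / ads["ppc_spend"]

# Ad A "wins" on conversion rate, but Ad B earns far more per PPC dollar.
print(ads[["ad", "conversion_rate", "romi_90day"]])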

Look, I know these are software companies and their sole purpose in life is to create the next big thing and sell their software into it.  That’s fine, and frankly, I hope they are successful in doing it, because it will create a tremendous amount of business down the road for database marketing consultants as “machine optimization” hits the wall and companies need to be rescued from the results of it.  Just like they had to be rescued from demographic clustering in the 80’s and data mining in the 90’s.

People are always looking for the easy way out, and it ends up costing them more in the long run because they don’t really understand what the tool does and does not do. Perhaps that is simply the state of Marketing today.  So be it…

If you are an analyst and you see a black-box test result that simply does not make any sense based on your past experience, I encourage you to question the result, find a way to test it outside the system.  Learn why, because this kind of incident usually will lead to a shattering of some myth or bias you will be most happy to fully understand!