Another eMetrics (Toronto) has passed and I have to say this: Web Analysts and Marketers proved once again they are up to the task of continuously improving the Productivity of their efforts!
At the same time (and as I expressed during the sessions on the analytical culture), I fear that many in the web analyst community are becoming very “inwardly focused”. They tend to talk more among themselves about the pennies they are making or saving while tripping over the dollars that are right there to be had if they reached out to other analytical disciplines in the company or measurement community.
Many among us knew this was a danger from our BI experiences. If all you ever do is talk to each other about new shiny objects, your contribution to the business effort can suffer. BI struggles every day with this weight, the challenge of being labeled “really smart but irrelevant”. I don’t think we want this to happen to WA.
So with this backdrop, some of the conversations I heard at eMetrics Toronto about certain measurement practices were disturbing. For example, it seems very few people are measuring their customer contact efforts properly, and in time this lack of analytical rigor is going to damage the WA effort for all practitioners.
In the rest of the Marketing Measurement world outside Web Analytics, the fundamental measurement concept is not Response. Response is, in fact, practically irrelevant. This is because people appear to “respond” to campaigns when in fact the campaign was just a coincidence – these “responders” would have taken action anyway.
The rest of the Marketing Measurement world acknowledges this “coincidence effect” and uses the far more rigorous concept of Lift – the incremental response directly attributable to the campaign.
If you are one of those peeps who is constantly talking about “Pull Marketing” and the power of Interactivity and Social and all that, you cannot say in the same breath you think your Marketing campaigns actually generate all the response you are claiming. If there was ever a reason to think analytically in terms of Lift instead of Response, Interactive tops the damn list; you can’t have it both ways.
And how does one measure Lift?
By using control groups – a random sample of people targeted for the campaign who are held back and do not receive the campaign. Lift is measured by looking at the incremental response of the targeted population over the control response.
In other words, if 2% of the control group ends up taking the desired action, and the “response” of the targeted group is 2.5%, the campaign is credited with driving 0.5% Lift – not a 2.5% response.
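The arithmetic above can be sketched in a few lines. This is just an illustration of the idea, not any standard tool; the function name and the group sizes are mine:

```python
def lift(target_responders, target_size, control_responders, control_size):
    """Incremental response attributable to the campaign, in percentage points."""
    target_rate = target_responders / target_size      # e.g. 2.5% "response"
    control_rate = control_responders / control_size   # e.g. 2.0% would-have-anyway
    return (target_rate - control_rate) * 100          # the campaign's real credit

# The example from the text: 2.5% targeted response vs. a 2% control baseline
# (assuming 10,000 people in each group).
print(round(lift(250, 10000, 200, 10000), 2))  # 0.5 points of Lift, not 2.5
```

Note that the campaign gets credit only for the difference between the two groups; the 2% baseline would have happened with no campaign at all.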
While most easily used in contacting known populations, you can create control groups any number of ways. For example, mass media campaigns are often tested for Lift geographically, comparing “no media” markets to the markets where the media is running. This is how Marketing Mix models are built. Online, you could do the same with PPC, banners, just about anything.
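For the known-population case, the mechanics are simple: randomly hold back part of the target list before the campaign goes out. A minimal sketch, assuming a list of customer IDs and an illustrative 10% holdout (the helper name, fraction, and seed are my choices, not a prescribed method):

```python
import random

def split_holdout(customer_ids, holdout_frac=0.1, seed=7):
    """Randomly split a campaign target list into targeted and control groups.

    The control group is held back and does NOT receive the campaign;
    its later response is the baseline for measuring Lift.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(customer_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * holdout_frac)
    control, targeted = ids[:cut], ids[cut:]
    return targeted, control

targeted, control = split_holdout(range(10000))
# Send the campaign only to `targeted`; later, compare the two groups'
# response rates to get Lift.
```

Because the split is random, the two groups are statistically alike in every way except receipt of the campaign, which is what makes the difference in response attributable to the campaign alone.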
My point is this: the rest of the world measures Marketing success using control groups. In fact, the business world measures a lot of things using “variance to control” models. So if WA wants to be taken seriously, this practice of measuring Lift rather than Response has to come into play as a best practice.
You will be surprised how much more seriously your analytical cases will be taken when you use controls. Finance people in particular are very used to the concept of “variance analysis”. When you show Finance people two identical groups, one who received the campaign and the other who did not, and claim the influence of the campaign to be the “Lift” or difference between the two groups, Finance people just nod and say “I get that”. No challenge, self-evident.
This is a wonderful thing, you know, having Finance people really understand Marketing Measurement. It leads to much bigger budgets.
Now, will introducing (requiring if you are a manager?) control groups and Lift Measurement be popular with the people whose campaigns you analyze? Probably not to start, I’d guess. Because their results are going to be different. Could be much better than they ever dreamed. But could be worse.
And this is what leads to my 2nd point about the analytical culture: It is not the job of an analyst to be popular. It is not the job of an analyst to “support” an effort, Marketing or otherwise. The job of an analyst is to seek the truth – an ongoing process, with “better truths” exposed as you move forward.
Those of you who were around when we moved the WA industry from Hits to Visits know what I’m talking about. This was not a particularly fun or popular exercise, but it had to be done, because Visits were a “better truth”. People moving WA practices from log files to tags are often faced with a similar problem of explaining a better truth. It’s not easy, but it’s the right thing to do.
Why should you care about this Lift issue?
At some point, a boss, someone in Finance, or a BI person is going to require that you prove the effectiveness of online marketing campaigns using a control group. And when the control group tells you the campaigns are not nearly as effective as you thought they were, well, there’s going to be a little bit of a problem.
So, as I have done at many an eMetrics Summit, on this blog, and elsewhere, I strongly encourage you to start exploring the use of controls and the results they produce so you are ready for that day.
The Lift approach to measurement will change your mind about a lot of things you may now take for granted because they are part of the echo chamber you listen to all the time in the blogosphere. From folks who wouldn’t know a control group if it bit them on the arse, I might add. This is the problem with WA being so “inwardly focused” as a group – the crowd is often not as wise as you think they are.
For example, Measuring Lift will change your opinion about:
* Buying #1 rank PPC ads when you have great Organic exposure
* Profitability of Shopping Cart recapture programs
* Level of discount required to generate a certain sales volume
to just name a few of the sacred cows floating around.
Now, let me just add that as an analyst, it’s not your job to decide whether a marketing program that loses money should be continued. There are perhaps several reasons this might be OK, e.g. “We lose money on that PPC campaign but it delivers Branding, so we’re fine with that”.
Whatever. The analytical truth is to know how much that “PPC Branding” really costs. So decisions can be made to (for example) buy the same Branding impact at a much lower cost.
Your job is to properly measure campaigns and deliver an accurate analysis so whoever makes these decisions has all the facts – good, bad, and ugly. Otherwise, it will be your fault the proper decision on how these programs should be executed was not made.
So, if you have to save this little exercise in the truth until your next job, I can buy that. But please, when you interview for or walk into that next job, express the appropriate level of surprise and dismay:
“Really, you don’t use control groups yet? Too bad, let’s start today.”
The Web Analytics community thanks you in advance.