Culture of Control (Groups)

This post is part of a series on control groups. The first post is here, and a list of all posts in the series is here.

There are a couple of analytical culture issues I’d like to touch on regarding the use of control groups. Control groups are the gold standard in customer marketing campaign measurement, and at some point, you will be asked to use them. Heck, you might even get fired for not using them – think of a new boss coming in.

Despite all this, the most obvious stumbling block is that you will take a small hit on the revenue line because you’re not dropping the campaign to the control group. I can hear it now: “But Jim, I can’t afford to take a hit on revenue.”

My answer to this is always the same, “You can’t afford not to take the hit, because you absolutely do not know what your true revenue generation is.” Imagine being in the position of dramatically understating or overstating the true incremental revenue generated by your campaigns – sometimes for years and years. This is not a pretty picture when it has to be explained. Personally, I like to avoid that kind of thing!

So I’m just saying, you might want to mess around with control groups a bit before using them gets forced on you. Controls are a “best practice”, and I don’t know of anyone who can really defend not using best practices. If your company has a BI group, it’s only a matter of time before somebody over there forces the use of controls.

So how do you deal with the revenue hit? Like much of analytics, it’s all about explaining what you are doing and why. Instead of “gross sales”, the campaign focus becomes “sales per customer” – customer-centric, if you will. You are moving to a more customer-focused measurement system. The goal is lift – improvement in performance, Marketing Productivity. The tiny loss in sales from the control group is simply a cost of measuring customer marketing properly.
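To make the “sales per customer” framing concrete, here’s a minimal sketch of the calculation. All group sizes and sales totals below are hypothetical, purely to show the shape of the math, not figures from any real campaign.

```python
# Minimal sketch: campaign lift measured per customer against a random holdout.
# All numbers are hypothetical, for illustration only.

def sales_per_customer(total_sales, n_customers):
    return total_sales / n_customers

# Treated group: received the campaign
treated_spc = sales_per_customer(total_sales=500_000, n_customers=95_000)

# Control group: randomly held out, received nothing
control_spc = sales_per_customer(total_sales=24_000, n_customers=5_000)

lift_per_customer = treated_spc - control_spc     # incremental $ per treated customer
incremental_revenue = lift_per_customer * 95_000  # what the campaign actually generated
relative_lift = lift_per_customer / control_spc   # lift as a percentage

print(f"Sales per treated customer: ${treated_spc:.2f}")
print(f"Sales per control customer: ${control_spc:.2f}")
print(f"Incremental revenue:        ${incremental_revenue:,.0f}")
print(f"Relative lift:              {relative_lift:.1%}")
```

The point is simply that once the control tells you what customers would have spent anyway, the campaign gets judged on the difference, not on gross sales.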

And trust me, the insights you will get from using controls will be mind-blowing. You will begin to really understand customer behavior, and that’s the first step to creating truly game-changing customer marketing campaigns.

For example, the increase in sales attributed to your campaigns once you start using controls will often dwarf the sales lost by not marketing to the control group – by a factor of 10 or more. So while you are worrying about dropping half a percentage point of campaign revenue by holding out a control, you are leaving a 5% increase in corrected revenue attribution on the table.
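Here’s a rough back-of-the-envelope sketch of that trade-off. Every number in it is invented to match the “half a percent versus 5%” shape of the argument above – the holdout size, the incremental rate, and the attribution figures are all assumptions, not real campaign data.

```python
# Hypothetical numbers illustrating the holdout cost vs. attribution correction trade-off.
campaign_revenue = 1_000_000   # revenue from the treated audience (made up)
holdout_fraction = 0.05        # 5% of the audience held out as a control (assumption)

# The revenue "lost" by not mailing the control is only the incremental part --
# control customers still spend what they would have spent anyway.
assumed_incremental_rate = 0.10   # assume ~10% of campaign revenue is truly incremental
holdout_cost = campaign_revenue * holdout_fraction * assumed_incremental_rate
print(f"Revenue given up to the holdout: ~${holdout_cost:,.0f} "
      f"({holdout_cost / campaign_revenue:.1%} of campaign revenue)")

# Meanwhile, a no-control attribution method can misstate incremental revenue
# by far more than that (hypothetical figures).
naive_attribution = 250_000     # what a no-control method credits the campaign
control_attribution = 300_000   # what the control-group method shows
correction = control_attribution - naive_attribution
print(f"Attribution correction from using controls: ${correction:,.0f} "
      f"({correction / campaign_revenue:.1%} of campaign revenue)")
```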

How’s that math working for ya?

Yes, this change will probably be about as painful as explaining to management why you are moving from measuring hits to measuring page views, but that’s life in analytics. When there is a better way to measure something, you should embrace it – and teach those around you why it makes more sense to measure that way.

More on the cultural issues of using control groups in the next post.

What about you? Have you faced this “revenue drop” issue with control groups? How did you handle it?

10 thoughts on “Culture of Control (Groups)”

  1. When challenged on the ‘revenue drop’ question I usually tell people about real cases in which the net impact of campaigns (normally retention campaigns) is actually negative (i.e. they do more harm than good). Ironically, in those cases, a control group saves money. Without a control group, you just don’t know.

    OK, so I admit that such cases are rare. But it’s not at all rare for response-model targeting to direct mailings to people who would have bought anyway, and if the mailing is incentivized, this again can lead to a real drop in revenue.

    Using control groups really should be a complete no-brainer.

  2. Thanks for the comment Nick.

    I think the case you describe is not rare; I think it happens all the time with best customers. It’s just that people don’t segment out their best customers and use controls for them, so they can’t see this happen.

    Frankly, I was trying to stay away from this case because:

    1. I have covered it several times before, and
    2. it’s complex to explain to people,

    in favor of just trying to get some folks interested in using control groups.

    Speaking of difficult concepts to explain, the notion that a “lack of marketing” can actually increase sales is another one that’s hard for most folks to swallow, but I’ve seen it many times before. Thanks for stopping in, and I’ll be following your blog… By the way, those cartoons are a riot!

    http://scientificmarketer.com/2007/09/controls.html

  3. I know what you mean, Jim.

    The cartoons, as you may have guessed, evolved from my own frustration at how subtle some of these concepts are, so now I’m trying to break them down into one-concept-per-four-box-cartoon chunks. And it’s still hard!

    The quote I always use about control groups comes from Robert McNamara, of all people (who was talking, of course, in a different context): “We have to find a way of making the important measurable, instead of making the measurable important.” That’s control groups.

  4. I guess you’re not expecting your cartoons to make it into the Sunday papers…maybe DM News would pick them up?

    Of course, the paradox of the cartoons as a learning tool is the more you know about this topic, the funnier the cartoons are! Have you had any luck with them from an educational perspective?

  5. I hadn’t really thought about the idea of anyone else publishing the cartoons, but it’s a thought. I might investigate: thanks.

    I think the cartoons are working at least a bit as an educational tool. They definitely help people outside the industry to understand what I do, but I think they also help people who know more to keep key distinctions in mind, and if people have read them, you can kind of refer back to them to make a point. Time will tell.

    …And I just love the idea that as I learn more I might get to find them funnier :-)

  6. Speaking for a few people here at work, Nick: myself (a Unix sysadmin), our Oracle DBA, and one of the ColdFusion developers all had a very appreciative chuckle at the Uninformation cartoon.

    I’ve been holding off on reading this series of yours, Jim, until I can devote the time to really understand it. After a quick skim of some of Nick’s cartoons, I’m regretting the missed opportunity to increase my “laughing out loud” quotient. :-)

    Cheers! (and thanks!)
    – Steve

  7. I occasionally run into groups that want to maintain a control group that is never, ever sent email – a control group that is kept pristine. The idea is to use this control group constantly and consistently. Aside from the practical difficulty of ensuring that these customers are still representative (and have a pulse), this concept seems to have a few flaws, but I can’t characterize the fundamental problem for my associates in the business. Can you comment on the relative merits of (a) randomly selecting a control group from the current customer population for every campaign, and (b) re-using the same, initially randomized control group for an extended period of time (e.g., a year)?

    Please do NOT use my real name or company.

  8. Don

    This is a fairly common concept. There’s not really an agreed name for such long-term controls, but I tend to call them fallows.

    Having a fallow group is useful if what you want to do is understand the impact of an entire marketing program. It’s particularly useful if you introduce a new way of doing things and want to compare it to business as usual, or if you just want to prove that marketing has a positive impact (assuming it does…)

    However, it is NOT a substitute for a regular campaign control group. The whole idea of campaign controls is that the ONLY systematic difference between the group in the campaign and the controls is the campaign. If you try to use fallows as a campaign control, you can’t isolate the impact of your particular campaign, so you can’t say what’s due to that campaign and what’s due to all the other stuff they missed out on.

    Does that help?

    Regards

    Nick

  9. It’s about time frame, really.

    If you’re looking to measure lift for a single campaign, you should always pull a control group.

    If you’re looking to measure lift for a series of campaigns to the same targets, you can pull one control group for the series, and optionally one for each campaign in the series.

    If you’re measuring very long-term effects – for example, the first 5 years of a loyalty program – then you’d want to use the same control group for the entire span, and perhaps for the life of the program. But this control is really only valid for the original sample it was taken from; there needs to be a realization that the validity of this control for the entire customer population tends to expire over time as the customers in it “age”.

    You could argue a control taken in Year 1 is still valid in Year 3, but strictly speaking, it’s only valid against the set of Year 1 customers it was pulled from. So while you could still compare customers of the Year 1 vintage to this Year 1 control, if you have a 40% annual attrition rate, by Year 3 this Year 1 vintage is becoming a pretty small percentage of the overall customer population (a rough numeric sketch of this appears after the comments).

    Perhaps you can assume that Year 2 and Year 3 customers “are the same” as Year 1, but chances are they are not – the Year 1 customers tend to be early adopters and so have a different profile – especially in a pure online kind of environment.

    So to be sure of what you’re getting for lift, I would create new controls every year or two and track them all. This approach not only checks for variations in customer quality and program effectiveness for every year of the program, but also allows a prediction of the program value based on any changes in customer mix.

    Make sense? Thanks for the question, a very good one!

    Update: Whoops! Looks like Nick had it covered already…
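To put rough numbers on the vintage point above, here’s a minimal sketch assuming a hypothetical 40% annual attrition rate and a constant intake of 100,000 new customers per year – both figures are made up purely for illustration.

```python
# Rough sketch of the "vintage" math discussed in the comment above.
# Assumes a hypothetical 40% annual attrition rate and a steady intake of
# 100,000 new customers per year; all numbers are illustrative only.

attrition = 0.40
new_per_year = 100_000
years = 3

retained_year1 = new_per_year  # the Year 1 vintage the control was pulled from
total = 0
for year in range(1, years + 1):
    # Surviving members of all prior vintages, plus this year's intake
    total = total * (1 - attrition) + new_per_year
    if year > 1:
        retained_year1 *= (1 - attrition)

print(f"Year 1 vintage still active after {years} years: {retained_year1:,.0f}")
print(f"Total active customer base: {total:,.0f}")
print(f"Year 1 vintage as share of base: {retained_year1 / total:.1%}")
```

Under these made-up assumptions, the Year 1 vintage shrinks to well under a fifth of the active base by Year 3 – which is why a Year 1 control says less and less about the customer population as a whole.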
