Monthly Archives: August 2007

*** Call it E-RFM

By way of Multichannel Merchant, we have this article: Call it E-RFM by Ken Magill.  His idea will probably be a stretch for many in the e-mail space, but it’s a great example of what I was talking about in How to do Customer Marketing testing.

The gist of the article is that you can reduce spam complaints and better manage sender reputation by anticipating which segments of subscribers are going to click the spam button.  Yes, anticipate.  You know, predict?

For some reason, online marketers don’t seem to be into the prediction thing – or at least are unwilling to fess up to it.  Test, measure, test, measure: web analytics is mostly about history, as opposed to predicting the future.  How about predict, measure, predict, measure?  Same thing, only much more powerful – if you can guess what customers will do before they do it, you have real marketing power.  Perhaps this is why folks don’t talk about it much…

The prediction model discussed, RFM, is one of the most durable and flexible models in the entire BI quiver.  As Arthur Middleton Hughes says in this article, “There isn’t a predictive model in the world that doesn’t have RFM inside of it”. 

And the RFM model is free.  You don’t even have to hire a statistician!

The RFM model sometimes gets a bad rap because people use it with very little imagination, simply reproducing the basic catalog model from the 1950s instead of understanding the guts of it and using it in new ways.  This Call it E-RFM article is a good example of how to use RFM in a new way; a broader explanation of using modified RFM for e-mail is here.
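If you want to see how simple the guts really are, here’s a minimal sketch of classic quintile-based RFM scoring in Python.  Everything in it – the transaction fields, the dates, the 1-to-5 scoring scheme – is my own illustrative setup, not something from the articles:

```python
from datetime import date

# Hypothetical transaction log: (customer_id, order_date, order_value).
transactions = [
    ("c1", date(2007, 8, 1), 120.0),
    ("c1", date(2007, 6, 15), 80.0),
    ("c2", date(2006, 11, 3), 45.0),
    ("c3", date(2007, 7, 28), 300.0),
    ("c3", date(2007, 5, 2), 150.0),
    ("c3", date(2007, 2, 9), 90.0),
]
today = date(2007, 8, 31)

# Roll transactions up to one (recency, frequency, monetary) row per customer.
rfm = {}
for cust, when, value in transactions:
    days_ago = (today - when).days
    rec, freq, mon = rfm.get(cust, (days_ago, 0, 0.0))
    rfm[cust] = (min(rec, days_ago), freq + 1, mon + value)

def quintile_scores(values, reverse=False):
    """Rank values and map each to a score of 5 (best) down to 1 (worst)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    scores = [0] * len(values)
    for rank, i in enumerate(order):
        scores[i] = 5 - (5 * rank) // len(values)
    return scores

custs = list(rfm)
r = quintile_scores([rfm[c][0] for c in custs])                # fewer days since purchase is better
f = quintile_scores([rfm[c][1] for c in custs], reverse=True)  # more orders is better
m = quintile_scores([rfm[c][2] for c in custs], reverse=True)  # more spend is better

for c, rs, fs, ms in zip(custs, r, f, m):
    print(c, f"RFM={rs}{fs}{ms}")  # e.g. "544" = recent, frequent, solid spender
```

A “555” is a best customer on all three dimensions; a “111” is probably already gone.  The new ways of using it come from what you attach the scores to – spam complaints instead of purchases, for example.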

Those of you interested in how to really take advantage of the new Webtrends Score product should pay attention to this prediction area, because “Potential Value” – a prediction – is absolutely fundamental to optimizing a Score model.  You could use Score to predict which segments are most likely to click the spam button.  And then you could test, track, and fine-tune those predictions until you get them right.  Sounds like fun, huh?  Does to me, anyway…

But you don’t need something like Score to predict likelihood to click the spam button; sending an e-mail every week for 3 years to somebody who never clicks through should be a rough indication…
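In code, that rough indication might look something like this – a hypothetical sketch, with subscriber fields and thresholds I made up for illustration:

```python
from typing import Optional
from dataclasses import dataclass

@dataclass
class Subscriber:
    email: str
    sends: int                       # total e-mails sent to this address
    days_since_click: Optional[int]  # None = never clicked through

# Hypothetical engagement data.
subscribers = [
    Subscriber("a@example.com", sends=156, days_since_click=None),  # ~3 years of weekly sends, zero clicks
    Subscriber("b@example.com", sends=40,  days_since_click=12),
    Subscriber("c@example.com", sends=90,  days_since_click=400),
]

def likely_complainer(s: Subscriber, min_sends: int = 50, stale_days: int = 180) -> bool:
    """Crude prediction: heavily mailed, long-disengaged subscribers are the spam-button risks."""
    disengaged = s.days_since_click is None or s.days_since_click > stale_days
    return s.sends >= min_sends and disengaged

at_risk = [s.email for s in subscribers if likely_complainer(s)]
print(at_risk)  # ['a@example.com', 'c@example.com'] – suppress or re-permission these segments
```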

So, do you use predictive models in your work?  Why or why not? 

If you don’t use prediction, is it because coming up with a great campaign for a prediction is the problem?  Or because nobody really cares about customer marketing, it’s all about customer acquisition?

An overview of the Potential Value idea is here, or for a more comprehensive version including marketing direction on what to do with the results, get the PDF.

*** How to do Customer Marketing testing

“We don’t need testing. We know what works.”

“If you do no testing at all, no one will complain.”

OK, the title of this article by way of DM News is actually How to do direct marketing testing, but I figured some folks who should read it might not with “direct marketing” in the title.

Arthur Middleton Hughes is one of the great educators in database marketing, and this article hits on several issues that are very well known in the offline customer marketing business but known by few folks in online practice: control groups, half-life effects, best customer segmentation, effects of promotion beyond the campaign.

He also briefly addresses a problem I run into all the time.  Things are “going great”, so we don’t need to test.  Underlying this statement is frequently a very weird emotion peculiar to many online operations, especially when I talk about control groups.  It’s the “what if we find out our results are not as good as management thinks” problem.

In other words, the “not broke, why fix it” issue.

Not sure why this occurs so much more online than offline, though it is probably just an issue of undeveloped analytical culture.  Why else would people be afraid of failure, if failure is truly embraced as a learning experience?

Perhaps a culture problem:  Testing is OK as long as it doesn’t rock the boat too much, doesn’t push the edge of knowledge out too far, is safe and sterile and won’t result in any quantum leaps in knowledge.  “Safe testing” only.

Perhaps an idea problem:  The testing culture is fine, but has become too robotic – no really new ideas; people don’t know of any high-impact, meaningful tests to conduct?

What’s going on where you work?

*** Research for Press Release

I think one of the reasons “research” has become so lax in design and execution is this idea of doing research to drive a press release and news coverage.  Reliable, actionable research is expensive, and if all you really want to do is gin up a bunch of press, why be scientific about it?  Why pay for rigor?  After all, your company is not going to use the research to take action; it’s research for press release.

So here are a few less scientific but more specific ideas to keep in mind when looking at a press release / news story about the latest “research”, ranked in order of saving you time.  In other words, if you run into a problem with the research at a certain level, don’t bother to look down to the next level – you’re done with your assessment.

Press about Research is Not Research – it’s really a mistake to make any kind of important decision on research without seeing the original source documentation.  For lots of reasons, the press accounts of research output can be selectively blind to the facts of the study. 

If there is no way to access the source research document, I would simply ignore the press account of the research.  Trust me, if the subject / company really had the goods on the topic, they would make the research document available – why wouldn’t they?  Then, if / when you get to the research source document, run the numbers a bit for yourself to see if they square with the press reports.  If not, you still may learn something – just not what the press report on the research was telling you!

Source of Sample – make sure you understand where the sample came from, and assess the reliability of that source.  Avoid trusting any source where survey participants are “paid to play”.  This PTP “research” is often called a Focus Group, and though you can learn something in terms of language and feelings and so forth from a Focus Group, I would never make a strategic decision based on a non-scientific exercise like a Focus Group.

Go ahead and howl about this last statement, Marketers; I’m not going to argue the fine points of it here, but those who wish to post on this topic either way, go ahead.  Please be Less Scientific or More Specific than usual, depending on whether you are a Scientist or a Marketer.

For a very topical and probably to some folks quite important example of this “source” problem, see Poor Study Results Drive Ad Research Foundation Initiative.  If you want a focus group, do a focus group.  But don’t refer to it as “research” in a scientific way.

Size of Sample – there certainly is a lot of discussion about sample sizes and statistical significance and so forth in web analytics now that those folks have started to enter the more advanced worlds of test design.  Does it surprise you the same holds true for research?  Shouldn’t – it’s just math (I can feel the stat folks shudder.  Take it easy, relax).

Without going all math on this, let’s say someone does a survey of their customers.  The survey was “e-mailed to 8,000 customers” and they get 100 responses.   I don’t need to calculate anything to understand the sample is probably not representative of the whole, especially given the methodology of “e-mailed our customers”.  Not that a sample of 100 out of 8,000 is bad, but the way it was sourced is questionable.

What you want to see is something more like “we took a random sample of our customers and 100 interviews were conducted”.  It’s the math thing again.  Responders, by definition, are a biased sample, probably more of a focus group.  This statement is not always true, but is true often enough that you want to verify the responders are representative.  Again, check the research documentation.

OK Jim, so how can political surveys be accurate when they only use 300 or so folks to represent millions of households?  The answer is simple.  They don’t e-mail a bunch of customers or pop up surveys on a web site.  They design and execute their research according to established scientific principles.  Stated another way, they know exactly and specifically who they are talking to.  That’s because they want the research to be precise and predictive.

How do you know when a survey has been designed and executed properly?  Typically, a confidence interval is stated, as in “results have a margin of error of +/- 5%”.  This generally means you can trust the design and execution of the survey, because you can’t get this information without a truly scientific design (Note to self: watch for “fake confidence level info” to be included with future “research for press release” reporting).
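For the curious, here’s the math behind a statement like that, sketched in Python – the standard margin-of-error formula for a proportion, assuming a truly random sample and worst-case p = 0.5.  Note the formula says nothing about responder bias; it only holds when the sample really is random, which is the whole point above:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, given a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 300, 1000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# n=100:  +/- 9.8%  -- the "100 responses" survey above, even if it were unbiased
# n=300:  +/- 5.7%  -- roughly the political-poll example
# n=1000: +/- 3.1%
```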

More rules for interpreting research