Monthly Archives: October 2007

What’s the Frequency?

Jim answers questions from fellow Drillers
(More questions with answers here, Work Overview here, Index of concepts here)


Q: I ordered your book and have been looking at it as I have a client who wants me to do some RFM reporting for them.

A: Well, thanks for that!

Q: They are an online shoe shop that also mails out catalogues at present. They have order history for customers going back to 2005 and believe that by doing an RFM analysis they can work out which customers are dead and should be dropped, etc. I understand Recency and have done this.

A: OK, that’s a great start…

Q: But on Frequency there appears to be a lot of conflicting information – one book I read says you should calculate it as an average over a time period, while others calculate it over the entire lifecycle of a client.

A: You can do it either way, the ultimate answer is of course to test both ways and see which works better for this client.

Q: Based on the client base and the fact that the catalogues are seasonal, my client reckons a customer may make a purchase decision every 6 months. My client is concerned that if I go by total purchases, someone who was really buying lots, say two years ago, but now buys nothing could appear high up the Frequency ranking compared to a newer buyer who has bought a few pairs – who would actually be a better customer, as they’re more Recent. Do I make sense, or am I totally wrong?

A: Absolutely make sense. If you are scoring with RFM though, since the “R” is first, that means in the case above, the “newer buyer who has bought a few pairs” customer will get a higher score than the “buying lots say two years ago but now buys nothing” customer.

So in terms of score, RFM self-adjusts for this case. The “Recent average” modification you are talking about just makes this adjustment more severe.  Other than testing whether the “Recent average” or “Lifetime” Frequency method is better for this client, let’s think about it for a minute and see what we get.

The Recent average Frequency approach basically enhances the Recency component of the RFM model by downgrading Frequency behavior out further in the past. Given the model already has a strong Recency component, this “flattens” the model and makes it more of a “sure thing” – the more Recent folks get yet even higher scores.

What you trade off for this emphasis on more recent customers is the chance to reactivate lapsed Best customers who could purchase if approached.  In other words, the “LifeTime Frequency” version is a bit riskier, but it also has more long-term financial reward. Follow?
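To make the “R comes first” point above concrete, here’s a minimal sketch of a banded RFM score. The customers, dates, and score cutoffs are all invented for illustration – in practice you’d set the cutoffs from quintiles on the full customer file – but it shows how putting Recency in the leading digit self-adjusts for the lapsed heavy buyer:

```python
from datetime import date

# Hypothetical order histories (all dates invented for illustration)
orders = {
    "lapsed_heavy": [date(2005, m, 1) for m in range(1, 13)],  # 12 orders, all back in 2005
    "recent_light": [date(2007, 7, 1), date(2007, 9, 15)],     # only 2 orders, but both recent
}
today = date(2007, 10, 1)

def rfm_score(purchases):
    """Score Recency and Frequency 1-5 (5 = best) with simple bands;
    R is the leading digit of the combined score, so it dominates."""
    days_since = (today - max(purchases)).days
    r = 5 if days_since <= 90 else 3 if days_since <= 365 else 1
    f = 5 if len(purchases) >= 10 else 3 if len(purchases) >= 3 else 1
    return r * 10 + f

scores = {name: rfm_score(p) for name, p in orders.items()}
# recent_light scores 51, lapsed_heavy scores 15: the Recent buyer ranks higher
```

The “Recent average Frequency” filter would go one step further and simply not count the 2005 orders at all when computing `f`.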

So then we think about the customer. It sounds like the “make a purchase decision every 6 months” idea is a guess as opposed to analysis.  You could go to the database and get an answer to this question – what is the average time between purchases (Latency), say for heavy, medium, and light buyers?  That would give you some idea of a Recency threshold for each group, where to mail customers lapsed longer than this threshold gets increasingly risky, and you could use this threshold to choose parameters for your period of time for Frequency analysis.
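Pulling that Latency number out of the order history is straightforward – here’s a sketch with made-up customers and dates (in practice you’d run this across the whole file and then average within your heavy / medium / light groups):

```python
from datetime import date
from statistics import mean

# Hypothetical purchase histories (customers and dates invented for illustration)
history = {
    "cust_a": [date(2006, 1, 5), date(2006, 6, 20), date(2007, 1, 10)],
    "cust_b": [date(2007, 2, 1), date(2007, 3, 1), date(2007, 4, 2)],
}

def avg_latency_days(purchases):
    """Average number of days between consecutive purchases (needs 2+ orders)."""
    dates = sorted(purchases)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return mean(gaps)

latencies = {cust: avg_latency_days(dates) for cust, dates in history.items()}
# cust_b averages 30 days between orders; cust_a averages 185
```

A customer lapsed well beyond their group’s average Latency is exactly the “increasingly risky to mail” case, and that average also gives you a defensible window for the Recent Frequency calculation.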

Also, we have the fact these buyers are (I’m guessing) primarily online generated.  This means they probably have shorter LifeCycles than catalog-generated buyers, which would argue for downplaying Frequency that occurred before the average threshold found above and elevating Recency.

So here is what I would do. Given the client is already pre-disposed to the “Recent Frequency” filter on the RFM model, that this filter will generally lower financial risk, and that these buyers were online generated, go with the filter for your scoring.

Then, after the scoring, if you find you will in fact exclude High Frequency / non-Recent buyers, take the best of that excluded group – Highest Frequency / Most Recent – and drop them a test mailing to make sure fiddling with the RFM model / filtering this way isn’t leaving money on the table.

If possible, you might check this lapsed Frequent group before mailing for reasons why they stopped buying – is there a common category or manufacturer purchased, did they have service problems, etc. – to further refine the list and creative. Keep the segment small but load it up if you can, throw “the book” at them – free shipping, etc.

And see what happens. If you get minimal response, then you know they’re dead.

The bottom line is this: all models are general statements about behavior that benefit from being tweaked based on knowledge of the target groups. That’s why there are so many “versions” of RFM out there – people twist and adapt the basic model to fit known traits in the target populations, or to better fit their business model.

Since it’s early in the game for you folks and due to the online nature of the customer generation, it’s worth being cautious. At the same time, you want to make sure you don’t leave any knowledge (or money!) on the table. So you drop a little test to the “Distant Frequents” that is “loaded” up / precisely targeted and if you get nothing, then you have your answer as to which version of the model is likely to work better.

Short story: I could not convince management at Home Shopping Network that a certain customer segment they were wasting a lot of resources on – namely brand name buyers of small electronics like radar detectors – was really worth very little to the company. So I came up with an (unapproved) test that would cost very little money but prove the point.

I took a small random sample of these folks and sent them a $100 coupon – no restrictions, good on anything. I kept the quantity down so if redemption was huge, I would not cause major financial damage.

With this coupon, the population could buy any of about 50% of the items we showed on the network completely free, except for shipping and handling.

Not one response.

End of management discussion on value of this segment.

If you can, drop a small test out to those Distant Frequents and see what you get. They might surprise you…

Good luck!

Jim

Get the book at Booklocker.com

Find Out Specifically What is in the Book

Learn Customer Marketing Concepts and Metrics (site article list)

Download the first 9 chapters of the Drilling Down book: PDF 

Web Data: Randomly Erratically Variably Unpredictably Incomplete?

So there I am at the eMetrics Summit, sitting with WAA President Richard Foley, who also has the impressive title of World Wide Product Manager and Strategist for SAS Institute. He asks me what I’m going to talk about for my “Guru” (hate that word) session with Avinash and John Q and I respond with the Accuracy versus Precision thing. You know, that web analytics folks are generally far too obsessed with Accuracy when the data is really too “dirty” to support that obsession.

Well, don’t you know, (and this is 90 minutes before the Guru gig, but I have a Track presentation first), Richard responds, “Web Data isn’t dirty, it’s some of the cleanest data around.”

Hmmm, I think.  This has to be another one of those Marketing / Technology Interface things.  Clearly a semantic rift of some kind.  But he’s a SAS guy, so there must be substance behind this statement!

So we spend the next half hour or so Drilling Down into the meat of the issue.  Turns out none of his analysts would call web data “dirty” because it’s created by machines, don’t you know.  No mistakes.  Data is “clean”.  You haven’t seen dirty data until you start looking at human keystroke input, for example. Think large call centers.  Or how about botched data integration projects. Millions of records with various fields incomplete or truncated.  That’s dirty data.

Dirty, from both an Operational and Marketing perspective, you see.  But web server logs, they might be dirty from a Marketing perspective, but they’re not dirty from an Operational perspective.  They just are what they are; super-clean records of what the server did or the tag read or the sniffer sniffed.

OK, I’m with Richard on this idea, having seen some horrendously dirty data in my time by his definition.  So what do we call web data, if it’s clean?  Even a 404 Error isn’t really “dirty”, right?  It sure is dirty from a customer / user perspective; but from an already widely-used Operational / BI definition, it’s not dirty, it just “is”. 

So how do we get to this idea of all the problems with web data that can lead an analyst down the wrong track if they focus so much on Accuracy they never get Precision?  You know, cookie deletion, network serving errors, crashing browsers, multiple users of a single machine, single users of multiple machines, tabbed browsing, etc. etc. etc.? What do we call that kind of data, if not dirty?

We start going through all the lingo, like trying on different sets of clothes, looking for something that fits.  What other kinds of data are like web data?  What is the precise nature of the “problem” with web data?  We finally arrive at the notion of Incomplete that seems to fit pretty well.  It’s not that the data is dirty, it simply is often “not there” for the end user or analyst, as in missing a cookie, or serving a page that is never rendered in the browser, or a tag that never gets to execute properly.

But that’s not quite it, we decide, because there has been a solution for “incomplete” data around a long time – modeling.  As long as you can get a set of reliable data, you can interpolate or “fill in” the missing data, right?  Like is often done with geo-demographic modeling?

There’s a word, we think – “reliable”.  Web data is certainly not reliable, but that’s not quite it.  Why is it not reliable?

Well, because at a fundamental level, the incompleteness is Random, so it cannot be modeled very well.

And there we have it. 

Web data is not dirty, it is Randomly Incomplete.  A label that works for both the Marketing and Technology folks at the same time.  A beautiful thing, don’t you think?  A great example of being a little “less scientific” on the Technical side and a little “more specific” on the Marketing side, I think.  We wrastled it to the ground.

So I rush off to change the phrase “data is dirty” in my Guru presentation to “data is Randomly Incomplete”.  The panel is right after my Track presentation, so I rush up on stage with Avinash and John Q. We’re late so Avinash starts right away; we don’t even have time to mention to each other what we will be presenting.

Avinash is riffing on Creating a Data Driven Boss and his Rule #2 is:

Embrace Incompleteness

Yikes.  That’s some coincidence, don’t you think?

But more importantly, do you think web data is dirty, Randomly Incomplete, or some other definition?  Because if there are no objections, I’m moving from “dirty” to “Randomly Incomplete” – at least when I talk with BI folks!

On the eMetrics / Marketing Optimization Summit

I had to bolt the Summit a day early to speak at the Direct Marketing Association annual conference in Chicago.  Too bad, the conference was humming and there was a ton of great content along with the usual great people.

The most interesting trend going on (for me, remember I favor a behavioral approach to marketing, online and off) is the killing off of e-mail subs once they become unresponsive.  The most excellent Jay Allen from Cutter and Buck kills them off at 6 months because he simply gets more pain than gain from mailing them – basically zero response and lots of spam complaints after 6 months dormant.  Reputation management, don’t you know. 

Hard to figure out why more people don’t do this, but I have a good guess – folks simply can’t (or don’t) segment behaviorally so they can’t really see where the sales come from.  If they could, they’d kill off the “haven’t opened in 6 months” subs too.  These e-mail “purge” practices are simply a manifestation of the reality of Engagement – there is a time-based predictive element that tells you when it is over. 

The smartest marketers will realize they can predict this degradation of the relationship and take action before it is too late – in other words, before 6 months of no opens.  Check with your (offline?) BI folks for any patterns that might be useful in managing these LifeCycles, hopefully they have seen these patterns before.  Use segmentation; source of customer is highly predictive of these patterns, as is entry / first content and first purchase product.
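As a concrete version of the purge rule above, here’s a minimal sketch that flags subscribers with no opens in the past 6 months (the addresses, dates, and 180-day threshold are invented for illustration – Jay’s and Rachel’s actual rules will vary by list):

```python
from datetime import date, timedelta

# Hypothetical subscriber list: address -> date of last e-mail open (None = never opened)
last_open = {
    "a@example.com": date(2007, 9, 20),
    "b@example.com": date(2007, 2, 1),
    "c@example.com": None,
}
today = date(2007, 10, 1)
threshold = timedelta(days=180)  # the "6 months of no opens" dormancy rule

def is_dormant(opened):
    """A sub is dormant if they never opened, or their last open is past the threshold."""
    return opened is None or (today - opened) > threshold

purge = sorted(addr for addr, opened in last_open.items() if is_dormant(opened))
# purge candidates: b@ and c@ (suppress them, or try one win-back attempt first)
```

The smarter move described above is simply to run this with a shorter threshold – say 90 or 120 days – and treat those flagged subs as a win-back segment before they go fully dormant.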

Beware: the average LifeCycles of interactive relationships are typically quite short compared with offline. For example, catalogs can get decent ROI mailing all the way out to customers who have been dormant for 2 years. In TV shopping, we considered folks dormant at about 6 months. Online, the majority of the value is generated in the first 3 months. Put another way, in catalog you get a 20 / 80 Pareto – 20% of customers generating 80% of the value. In TV shopping, it’s more like 10 / 90. Online, 5 / 95.

In the end, this behavioral knowledge ties directly to the “customer experience” idea so many people comment about in vague prose but never quantify.  You have sales people, products, procedures, and business rules that create customers likely to defect.

Sure, you have online customers that stick.  But the percentage of those that stick is smaller, and since they generate huge sales volume, it’s incredibly important to pay attention to what they are doing behaviorally.  You can predict when they will defect by the parameters mentioned above; isn’t it your responsibility to take action on this knowledge?

For the Brand folks out there, Rachel Scotto from Sony Pictures also kills off her e-mail subs after 6 months of no opens, a rule that varies a bit with the type of list and topic (movie, TV show, etc.) For her, Brand is everything and she simply does not want the negative experience of unwanted e-mails to tarnish the Brand.  If someone demonstrates through their behavior they are no longer interested, then why continue to send them e-mails?  Good question.  Brand folks, please respond.

Jay also had a great shopping cart recovery example. They e-mail folks who abandon carts with a simple, subtle message featuring the product and no discount – and get fabulous response. The folks sending discounts in this kind of program really need to do some controlled testing – they are giving away the store.

I’ve had a lot of positive feedback on my Summit presentations and I thank you for that.  Feel free to leave any comments or questions.

That’s it on the eMetrics / Marketing Optimization Summit from me.  Between WAA stuff and speaking / travel logistics I did not get to see many presentations, but the ones I did see demonstrated significant progress in grasping and leveraging visitor behavior.