Tag Archives: Analytical Culture

Difference between RF(M) Scores & LifeCycle Grids?

Jim answers questions from fellow Drillers
(More questions with answers here, Work Overview here, Index of concepts here)

Topic Overview

Hi again folks, Jim Novo here.

Both RF(M) scoring and Lifecycle Grids use the same key predictive metrics – Recency and Frequency. So what’s the difference? RFM is a predictive “snapshot” at a specific point in time; LifeCycle Grids are more like a “movie” designed to be predictive over different periods of time. Another way to think of this: RFM is tactical, LifeCycle Grids are strategic.

You dig? Let’s Drill …


Q:  We’re a telecom company trying to get a handle on customer churn and defection, so we can come up with some programs that will hopefully extend customer participation.  We live in the no-contract space, offering a service that’s an add-on to wireless phone service, so we don’t have a good indicator as to when the customer relationship might end.

A:  Ah, yes.  Your business model is “built for churn”, as I said on my blog the other day.  The behavior then is more like retail, where customers make independent decisions in an ongoing way, deciding again and again whether to purchase.

Q:  I think your LifeCycle Grids method will best show what is happening to our customers.  If we use this method, there doesn’t seem to be any reason to do the RF scoring, as customers are just going into cells based on where they fall in the Recency and Frequency spectrum.  Is that correct?  Is there any real difference between RF scoring and the LifeCycle Grids approach?

A:  You are partially correct; they are two versions of the same idea – both score customers using Recency and Frequency.  Traditional RF(M) scoring, where customers are ranked against each other, is a “relative” scoring method used primarily for campaigns – it is tactical, a resource-allocation model.
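To make the difference concrete, here’s a minimal sketch in Python / pandas.  Everything in it – the data, the column names, the bin edges – is hypothetical and just illustrates the mechanics: RF scores rank customers against each other, so a “5” means something different every time you re-score, while LifeCycle Grid cells are fixed definitions, so the counts in each cell can be compared period over period – the “movie” view.

```python
import pandas as pd

# Hypothetical customer summary (illustrative data only)
customers = pd.DataFrame({
    "customer_id":  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "recency_days": [5, 30, 90, 200, 12, 45, 400, 7, 150, 60],
    "frequency":    [12, 3, 5, 1, 8, 2, 1, 20, 4, 6],
})

# Relative RF scoring (tactical): quintile ranks against the current file.
# 5 = best.  Ranking first keeps qcut from choking on tied values.
customers["R_score"] = pd.qcut(
    customers["recency_days"].rank(method="first", ascending=False),
    5, labels=[1, 2, 3, 4, 5]).astype(int)
customers["F_score"] = pd.qcut(
    customers["frequency"].rank(method="first"),
    5, labels=[1, 2, 3, 4, 5]).astype(int)

# LifeCycle Grid (strategic): fixed, absolute cells whose definitions do
# not change from period to period (these bin edges are made up).
customers["grid_R"] = pd.cut(customers["recency_days"],
                             [0, 30, 90, 180, float("inf")],
                             labels=["0-30d", "31-90d", "91-180d", "180d+"])
customers["grid_F"] = pd.cut(customers["frequency"],
                             [0, 1, 3, 10, float("inf")],
                             labels=["1", "2-3", "4-10", "10+"])

# Customer counts per grid cell; re-run each month and watch customers
# migrate between cells – that movement is the story.
print(pd.crosstab(customers["grid_F"], customers["grid_R"]))
```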

Continue reading Difference between RF(M) Scores & LifeCycle Grids?

Is Your Digital Budget Big Enough?

At a high level, 2014 has been a year of questioning the productivity of digital marketing and the related measurement of success.  For example, the most frequent C-level complaint about digital is not having a clear understanding of bottom-line digital impact.  For background on this topic, see articles here, here, and here.

I’d guess this general view has not been helped by trade reporting on widespread problems in digital ad delivery and accountability systems, where (depending on who you ask) up to 60% of delivered “impressions” were likely fraudulent in one way or another.  People have commented on this problem for years; why it took the industry as a whole so long to fess up and start taking action is an interesting question!

If the trends above continue to play out, over the next 5 years or so we may expect increasing management focus on more accurately defining the contribution of digital – as long as management thinks digital is important to the future of the business.

If the people running companies are having a hard time determining the value of digital to their business, the next logical thought is that marketers and analysts probably need to do a better job demonstrating these linkages, yes?  Along those lines, I think it would be helpful for both digital marketers and marketing analytics folks to spend some time this year thinking about and working through two of the primary issues driving this situation:

1.  Got Causation?  How success is measured

In the early days of digital, many people loved quoting the number of “hits” as a success measure.  It took a surprisingly long time to convince these same people that the number of files downloaded during a page view did not predict business success ;)

Today, we’re pretty good at finding actions that correlate with specific business metrics like visits or sales, but as the old saying goes, correlation does not imply causation.

If we move to a more causal and demonstrable success measurement system, one of the first ideas you will encounter, particularly if there are some serious data scientists around, is the idea of incremental impact, or lift.  This kind of controlled testing is the gold standard for determining cause in much of the scientific community.  Personally, with all the data we have access to now, I don’t see why this type of testing is not more widely embraced in digital.
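The arithmetic behind lift is not exotic.  Here’s a minimal sketch with made-up numbers from a hypothetical holdout test – randomly hold back part of the audience, run the campaign against the rest, and compare:

```python
import math

# Hypothetical holdout test: customers randomly split before the campaign.
treated_n, treated_conv = 10_000, 420   # saw the campaign; conversions
control_n, control_conv = 10_000, 350   # holdout group; conversions

p_t = treated_conv / treated_n          # treated conversion rate
p_c = control_conv / control_n          # baseline (control) conversion rate

incremental_rate = p_t - p_c            # conversions the campaign caused
lift = incremental_rate / p_c           # relative lift vs. doing nothing

# Two-proportion z-test: is the difference bigger than random noise?
p_pool = (treated_conv + control_conv) / (treated_n + control_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
z = incremental_rate / se

print(f"Treated: {p_t:.2%}  Control: {p_c:.2%}  Lift: {lift:.1%}")
print(f"~{incremental_rate * treated_n:.0f} incremental conversions")
print(f"z = {z:.2f}  (|z| > 1.96 is roughly significant at 95% confidence)")
```

The control group is what turns a correlation into a defensible causal claim – it estimates what would have happened with no campaign at all.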

Continue reading Is Your Digital Budget Big Enough?

Does Advertising Success = Business Success?

Digital Analytics / Business Alignment is Getting Better

I recently attended eMetrics Boston and was encouraged to hear a lot of presentations hitting on the idea of tying digital analytics reporting more directly to business outcomes, a topic we cover extensively in the Applying Digital Analytics class I taught after the show.  The same idea has also been more popular lately in the streams coming out of eMetrics London and other conferences.  A good thing, given the most frequent C-Level complaint about digital analytics is not having a clear understanding of bottom-line digital impact (for background on this topic, see articles here, here, and here).

Yes, we’ve largely moved beyond counting Visits, Clicks, Likes and Followers to more meaningful outcome-oriented measures like Conversions, Events, Downloads, Installs and so forth.  No doubt the C-Level put some gentle pressure on Marketing to get more specific about value creation, and analysts were more than happy to oblige!

Is Marketing Math the Same as C-Level Math?

Here’s the next thing we need to think about: the context used to define “success”.

In my experience, achieving a Marketing goal does not necessarily deliver results that C-Level folks would term a success.  And here’s what you need to know: C-Level folks absolutely know the difference between these two types of success and in many cases can translate between the two in their heads using simple business math.
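Before we get to the example, here’s a quick sketch of that head-math with invented numbers – nothing below comes from a real campaign:

```python
# Hypothetical campaign, purely illustrative numbers.
campaign_revenue = 500_000   # revenue Marketing attributes to the campaign
campaign_cost    = 100_000   # media + creative spend

# Marketing math: return on ad spend.
roas = campaign_revenue / campaign_cost
print(f"Marketing view: ROAS = {roas:.1f}x – looks like a win")

# C-Level math: what actually hits the bottom line.
gross_margin      = 0.40     # revenue is not profit
incremental_share = 0.50     # suppose a holdout test shows half the sales
                             # would have happened anyway
incremental_profit = (campaign_revenue * incremental_share * gross_margin
                      - campaign_cost)
print(f"C-Level view: incremental profit = ${incremental_profit:,.0f}")
```

Same campaign, two very different verdicts: a 5x ROAS for Marketing, break-even for the C-Suite.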

Here’s an example.  Let’s say Marketing presents this campaign as a success story:

Continue reading Does Advertising Success = Business Success?