Category Archives: DataBase Marketing

Messaging for Engagement

Or Behavioral Messaging, as we used to call it. 

Much has been written about Measuring Engagement, but once you measure it, then what do you do with this information?  Most folks know the idea driving the Engagement Movement is to make your messaging more Relevant, but how do you implement?  Perhaps you can find the triggers with a behavioral measurement, but then what do you say?

This is the part Marketing folks typically get wrong on the execution side.  They might have a nice behavioral segmentation, but then crush the value of that hard analytical work by sending a demographically-oriented message, often because that is really all they know how to do.  So as an analyst, how do you raise this issue or effect change?

Marketing messaging can be a complex topic, but there are some baseline ideas you can use.  Start here, then do what you do best – analyze the results, test, repeat.

You want to think of customers as being in different “states” or “stages” along an engagement continuum.  For example:

  • Engaged – highly positive on company, very willing to interact – Highest Potential Value
  • Apathetic – don’t really care one way or the other, will interact when prompted – Medium Potential Value
  • Detached – not really interested, don’t think they need product or service anymore – Lowest Potential Value

Please note that none of these states have anything to do with demographics – they are about emotions.  The messaging should relate to visitor / customer experience as expressed through behavior, not age and income.
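As a sketch, these states can be assigned straight from behavior.  The 90- and 180-day thresholds here are illustrative assumptions, not values from the post; in practice you would derive your own cut-offs from your response data:

```python
# Sketch: assigning an engagement state from behavior, not demographics.
# The 90 / 180 day thresholds are illustrative assumptions only.

def engagement_state(days_since_last_action: int) -> str:
    """Map days since the customer's last action (purchase, click,
    visit) to an engagement state on the continuum."""
    if days_since_last_action <= 90:
        return "Engaged"      # highest potential value
    elif days_since_last_action <= 180:
        return "Apathetic"    # medium potential value
    else:
        return "Detached"     # lowest potential value

print(engagement_state(30))   # → Engaged
print(engagement_state(400))  # → Detached
```

The single input here is deliberate: the state is driven entirely by what the customer did and when, with no demographic fields at all.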

These states are in flux and you can affect state by using the appropriate message based on the behavioral analysis.  Customers generally all start out being Engaged (which is why a New Customer Kit works so well), then drop down through the stages.  The rate of this drop generally depends on the product / service experience – the Customer LifeCycle.

Generically, this approach sets up what is known as “right message, to the right person, at the right time” or trigger-based messaging.  Just think about your own experience interacting with different companies; for each company, you could probably select the state you are in right now!

OK, so for each state there is an appropriate message approach:

Engaged – Kiss Messaging: We think you are the best.  Really.  We’d like to do something special for you – give you higher levels of service, create a special club for you, thank you profusely with free gifts.  Marketing Note: be creative, and avoid discounting to this group.  Save the discounts for the next two stages.

Apathetic – Date Messaging: We’re not real clear where we stand with you, so we’re going to be exploratory, test different ideas and see where the relationship stands.  Perhaps we can get you to be Engaged again?  In terms of ROI, this group has the highest incremental potential.  Example: this is where loyalty programs derive the most payback.

Detached – Bribe Messaging: You’re not really into this relationship, and we know that.  So we are simply going to make very strong offers to you and try to get you to respond.  A few of you might even become Engaged again.

Can you see how sending a generic message to all of these groups is sub-optimal?  Can you see how sending an Engaged message to the Detached group would probably generate a belly laugh as opposed to a response?  You’ve received this mis-messaged stuff before, right?  You basically hate the company for screwing you, and then they send you a lovey-dovey Kiss message.  Makes you want to scream; you think, “Man, they are clueless!” and now you dislike the company even more.

Combine this messaging approach with a classic behavioral analysis, and you now have a strategy and tactic map.  For example, you know the longer it has been since someone purchased, clicked, opened, visited etc, the less likely they are to engage in that activity again.  Here’s the behavioral analysis with the messaging overlay:

[Chart: likelihood of response by Months Since Last Contact, with the Kiss / Date / Bribe messaging overlay]

Please note “Months Since Last Contact” means the customer taking action and contacting you in some way (purchase, click), not the fact that you have tried to contact them!

So does this make sense?  Those most likely to respond are messaged as Engaged – as is proper in terms of the relationship (left side of chart).  As they become less likely to respond, you should change the tone of your communication to fit the relationship up to a point, where quite frankly you should take a clue from the eMetrics Summit and not message them any more at all (right side of chart).
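A minimal sketch of the chart’s overlay logic: months since the customer’s last action mapped to a message approach.  The month boundaries are hypothetical; your own response curve dictates where Kiss becomes Date, Date becomes Bribe, and Bribe becomes silence:

```python
# Sketch of the messaging overlay on the behavioral analysis.
# Month boundaries are hypothetical assumptions, not from the post.

def message_approach(months_since_last_contact: int) -> str:
    """Map months since the customer's last self-initiated action
    (purchase, click, visit) to a messaging approach."""
    if months_since_last_contact <= 3:
        return "Kiss"        # Engaged: thank-yous, special treatment, no discounts
    elif months_since_last_contact <= 9:
        return "Date"        # Apathetic: exploratory offers, loyalty programs
    elif months_since_last_contact <= 18:
        return "Bribe"       # Detached: strongest offers
    else:
        return "No message"  # past the point where messaging pays back

print(message_approach(2))   # → Kiss
print(message_approach(24))  # → No message
```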

Example Campaign for the Engaged: At HSN, I came up with the idea of creating some kind of “Holiday Ornament” we could send to Engaged customers.  If the idea worked (meaning it generated incremental profit), we could do it as an annual thing; we could put the year on the ornament and create a “collectible” feel, which is the right idea for this audience.  No discount – just a “Thank You” message “for one of our best customers” and “Here’s a gift for you”.

These snowflake ornaments were about $1.20 in the mail (laser cut card stock) and generated about $5 in 90-day incremental profit per household with the Engaged, test versus control.  Why?  Good ‘ol Surprise and Delight, I would bet.
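Using the figures above (~$1.20 in-the-mail cost, ~$5 in 90-day incremental profit per household, test versus control), a back-of-envelope payback calculation looks like this; the 10,000-household mailing size is a made-up example:

```python
# Back-of-envelope payback on the ornament campaign, using the
# per-household figures from the post. Mailing size is hypothetical.

mail_cost = 1.20            # cost per household, in the mail
incremental_profit = 5.00   # 90-day incremental profit per household, test vs. control

households = 10_000         # hypothetical mailing size
net = households * (incremental_profit - mail_cost)
print(f"Net 90-day payback: ${net:,.2f}")  # → Net 90-day payback: $38,000.00
```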

We had some test cells running to see how far we could take this, and as expected, the profitability dropped off dramatically based on how Engaged the customer was.  If the customer was even minimally dis-engaged – no purchase for over 120 days – there was very little effect. 

Interactivity cuts both ways; it’s great when customers are Engaged, but once the relationship starts to degrade, folks can move on very quickly emotionally.  That’s why it is so important to track this stuff – so you can predict when your audience is dis-engaging and do something about it.

Data, Analysis, Insight

Poor BI; still struggling with broader adoption – as outlined by Ron in the post Four BS BI Trends (And One Good One).  So Gartner identifies BI as the “number one technology issue for 2007” then immediately pulls out this old chestnut as BI Trend #1: There’s so much data, but too little insight.

Sigh.

Then I get this comment by Ron Patiro asking: Besides simply not being actionable, what are some of the common pitfalls and tangles of metrics that analysts get themselves into in the pursuit of engagement?

These two ideas are closely related.  The “common pitfalls and tangles of metrics” are often the reason people get a “so much data, but too little insight” experience.  Let’s explore these issues a bit.

The primary reason you get a “so much data, but too little insight” situation – if you have an analyst to work with the data – is indeed the actionable analysis problem, as Ron P. points out.  But, there are at least 3 versions of the actionable analysis problem, one obvious and two not so obvious:

  • Producing analysis that isn’t actionable at all
  • Producing analysis that is valid but too complex to be actionable, and
  • Failing to act correctly on a valid and easy-to-understand analysis

And often, I find the Root Cause of these three problems (to answer Ron P’s question) to be faulty segmentation logic.  This condition in turn is often born of a situation many web analysts are familiar with by now: No Clear Objective.  But let’s leave the segmentation discussion for later and examine each of the three cases above.

One cause of the “too much data, no insight” experience is producing analysis that isn’t actionable at all; it’s literally worthless and cannot be acted upon.  This is the most common vision of the “actionable analysis problem” but probably not the one causing the majority of the negative outcomes.  Analysis can be “actionable” from the analyst’s perspective, but not the business perspective.  And if no actual business action takes place, no real insight is gained.

In my experience, people spend an incredible amount of time analyzing things that will never create impact.  Even if the analysis produces something that looks actionable, often the execution is impractical or financially irrelevant and so is not acted upon.  Just because you can “find a pattern” does not mean the business can do anything productive with that pattern.  Randomly “mining for gold” is one of the biggest time wasters around, and why people are often dissatisfied with the result they get from black box data mining projects.  You have to start with an actual business problem of some kind, preferably one that if solved, will increase sales or reduce costs, or no action will be taken.  Otherwise, you have simply created more data to add to the “too much data” side of the problem.

The bottom line for this slice of the problem: The intent and result of the analysis might be actionable, but unless there is a clear business case for acting, you have just contributed to the actionable analysis problem.  In other words, there is a difference between an analysis being “actionable” and having people actually act on it.

The 2nd slice of the “too much data, no insight” problem occurs when the analysis is too complex.  In Marketing at least, complexity introduces error, and probably more importantly, hinders the explanation of the analysis to people who might take action and gain insight.  If a Marketing person can’t understand the analysis, how are they going to formulate a campaign or program to address the problem, never mind get budget to act on the analysis?  Please note I’m talking about the analysis, not solving the problem itself.  Often, an elegantly simple analysis uncovers a problem that will be quite complex to solve.  These are two different issues.

In fact, I would go as far as to say the more complex the problem is to be solved, the more elegantly simple the analysis needs to be.  The reason is this: the most complex Marketing / Customer problems are usually cross-functional in nature, and to drive success in a cross-functional project, you need rock-simple analysis that galvanizes the team without a lot of second-guessing on the value of a successful outcome.

The bottom line for this slice of the problem: An analysis might be correct and even actionable, but too complex to be acted on.  Complexity opens the analysis up to (often accurate) disbelief in the conclusion, action never takes place, so insight is lost.

The 3rd “too much data, no insight” problem is failure to translate a valid and easy to understand analysis into the correct action.  Here, we are finally moving out of the analytics side of the problem (delivering actionable analysis) and into the Business side.

Why is there failure to act correctly?  I’d submit to you it goes back to the Deconstruction of Marketing – most marketing folks simply don’t understand what to do with “people” as opposed to “Reach and Frequency”.  In other words, they can’t conceptualize how to act successfully against the individual or behavioral segment level as opposed to the nameless, faceless demographic level.

In my opinion, this is the primary reason why demographics are so overused in customer analysis, especially online – the marketing folks simply can’t get out of that box; it’s where the “actionability” starts for them.  The problem with this thought process, as has been pointed out, is that demographics often have little to do with behavior.  Behavior predicts behavior; demographics are mostly coincidental.  Yet the analyst, looking to produce a successful project, often will allow themselves to be dragged into endless demographic segmentation that is primarily a waste of time (unless you are a media site and sell demos) and leads to false conclusions, which lead to failed or inconsistent implementation.

The bottom line for this slice of the problem: the analysis identified a problem or opportunity, but in the end, the execution against the analysis was flawed and ultimately delivered poor or no real insight.  By the way, I think this third form of failure to deliver insight is the most common – much more common than most people think.  Why?  It’s the hidden one, the one that’s not so obvious and much easier to push under the table.

So there you have it.  Three versions of the “actionable analysis” problem that lead directly to the “so much data, but too little insight” issue.  I think #3 is probably the most prevalent; a lot of analysis “fails” not because of poor analysis, but poor execution against the analysis.

What do you think?  Have you delivered a clearly actionable analysis, one that is capable of real business impact, only to have the execution against the analysis botched?

Perhaps more importantly, were you able to do anything about the botched execution?  Were you able to turn it around?  How did you make that happen?

Or, is execution not really your problem – if Marketing (or whoever) screws it up, then they screw it up?

What’s the Frequency?

Jim answers questions from fellow Drillers
(More questions with answers here, Work Overview here, Index of concepts here)


Q: I ordered your book and have been looking at it as I have a client who wants me to do some RFM reporting for them.

A: Well, thanks for that!

Q: They are an online shoe shop that also sends out catalogues via the mail at present.  They have order history going back to 2005 for clients and believe that by doing an RFM analysis they can work out which customers are dead and should be dropped, etc.  I understand Recency and have done this.

A: OK, that’s a great start…

Q: But on frequency there appears to be lots of conflicting information – one book I read says you should do it over a time period as an average and others do it over the entire lifecycle of a client.

A: You can do it either way; the ultimate answer, of course, is to test both ways and see which works better for this client.

Q: Based on the client base and the fact that the catalogues are seasonal, my client reckons a customer may make a purchase decision every 6 months.  My client is concerned that if I go by total purchases, someone who was buying lots, say, two years ago but now buys nothing could appear high up the Frequency ranking compared to a newer buyer who has bought a few pairs, who would actually be a better client as they’re more Recent.  Do I make sense or am I totally wrong?

A: Absolutely make sense. If you are scoring with RFM though, since the “R” is first, that means in the case above, the “newer buyer who has bought a few pairs” customer will get a higher score than the “buying lots say two years ago but now buys nothing” customer.

So in terms of score, RFM self-adjusts for this case. The “Recent average” modification you are talking about just makes this adjustment more severe.  Other than testing whether the “Recent average” or “Lifetime” Frequency method is better for this client, let’s think about it for a minute and see what we get.

The Recent average Frequency approach basically enhances the Recency component of the RFM model by downgrading Frequency behavior out further in the past. Given the model already has a strong Recency component, this “flattens” the model and makes it more of a “sure thing” – the more Recent folks get yet even higher scores.

What you trade off for this emphasis on more recent customers is the chance to reactivate lapsed Best customers who could purchase if approached.  In other words, the “LifeTime Frequency” version is a bit riskier, but it also has more long-term financial reward. Follow?

So then we think about the customer. It sounds like the “make a purchase decision every 6 months” idea is a guess as opposed to analysis.  You could go to the database and get an answer to this question – what is the average time between purchases (Latency), say for heavy, medium, and light buyers?  That would give you some idea of a Recency threshold for each group, where to mail customers lapsed longer than this threshold gets increasingly risky, and you could use this threshold to choose parameters for your period of time for Frequency analysis.

Also, we have the fact these buyers are (I’m guessing) primarily online generated.  This means they probably have shorter LifeCycles than catalog-generated buyers, which would argue for downplaying Frequency that occurred before the average threshold found above and elevating Recency.

So here is what I would do.  Given the client is already predisposed to the “Recent Frequency” filter on the RFM model, that this filter will generally lower financial risk, and that these buyers were online generated, go with the filter for your scoring.

Then, after the scoring, if you find you will in fact exclude High Frequency / non-Recent buyers, take the best of that excluded group – Highest Frequency / Most Recent – and drop them a test mailing to make sure fiddling with the RFM model / filtering this way isn’t leaving money on the table.

If possible, you might check this lapsed Frequent group before mailing for reasons why they stopped buying – is there a common category or manufacturer purchased, did they have service problems, etc. – to further refine the list and creative.  Keep the segment small but load it up if you can; throw “the book” at them – free shipping, etc.

And see what happens.  If you get minimal response, then you know they’re dead.

The bottom line is this: all models are general statements about behavior that benefit from being tweaked based on knowledge of the target groups.  That’s why there are so many “versions” of RFM out there – people twist and adapt the basic model to fit known traits in the target populations, or to better fit their business model.

Since it’s early in the game for you folks and due to the online nature of the customer generation, it’s worth being cautious. At the same time, you want to make sure you don’t leave any knowledge (or money!) on the table. So you drop a little test to the “Distant Frequents” that is “loaded” up / precisely targeted and if you get nothing, then you have your answer as to which version of the model is likely to work better.

Short story: I could not convince management at Home Shopping Network that a certain customer segment they were wasting a lot of resources on – namely brand name buyers of small electronics like radar detectors – was really worth very little to the company. So I came up with an (unapproved) test that would cost very little money but prove the point.

I took a small random sample of these folks and sent them a $100 coupon – no restrictions, good on anything. I kept the quantity down so if redemption was huge, I would not cause major financial damage.

With this coupon, the population could buy any of about 50% of the items we showed on the network completely free, except for shipping and handling.

Not one response.

End of management discussion on value of this segment.

If you can, drop a small test out to those Distant Frequents and see what you get. They might surprise you…

Good luck!

Jim

Get the book at Booklocker.com

Find Out Specifically What is in the Book

Learn Customer Marketing Concepts and Metrics (site article list)

Download the first 9 chapters of the Drilling Down book: PDF