Peak Engagement (Band 5)

Optimizing Individual Communications

Where Band 4 optimizes general communications like newsletters, Band 5 is all about hyper-targeted communications to individuals.  We’re talking mostly about special-circumstance stuff here, more exotic ideas that may fall outside what you traditionally think of as “Marketing”.

If Band 4 is the “Air Cover”, Band 5 is Special Ops (see Band Model).

In Band 5, you basically have algorithms of various kinds “sniffing” the databases, looking for special situations with exceedingly high ROMI.  Often, these ideas deal with high-value customers who appear to be dis-Engaging; many of these scenarios relate to Marketing, Service, or Product in one way or another.

The tactical Marketing idea is this: you have all these sequenced communications in Bands 1 – 4, and some of them can be customized down to a certain level, but you are still dealing with fairly broad segments.  There are certain situations where you want to reach out at the micro-segment or individual level, situations that can’t be handled by the “air cover” media, no matter how customized or personalized it is.

Our largest program in Band 5 was FIPS – the Future Intent to Purchase Score.  Small audience, large impact; the ROMI on this thing was unbelievable.  To give you an idea of scale, the mass customization Magazine dropped to about 2.5 million customers.  The FIPS drop usually ran about 100,000 pieces during the cycle between Magazine drops.

Yet, FIPS generated more incremental profit per drop cycle than the Magazine, which dropped all at once.  FIPS dropped when the customer was “ready”.  More on this idea below.

FIPS was based on a hand-built multiple regression model.  The data used to build the model came out of exotic ideas we came up with and tested, based on results from the Magazine and a fundamental understanding of Interactive behavior.  Lots of these special-situation tests failed financially, but they ultimately provided the data needed to build the FIPS model.

The personalization data we used for the Magazine was driven by an Engagement model based solely on purchase behavior.  FIPS added lots of other behavioral data points.  One of the most mind-blowing, and ultimately most revealing, ideas had to do with the use of Tootie – the Interactive Voice Response Unit (VRU).
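
To make the general shape concrete, here is a minimal sketch of a hand-built multiple regression score over behavioral inputs.  Every feature name, weight, and cutoff below is a made-up placeholder for illustration, not the actual FIPS model.

```python
# Minimal sketch of a FIPS-style score: a hand-built multiple regression
# over behavioral inputs. Every feature name, weight, and cutoff here is a
# hypothetical placeholder, NOT the actual HSN model.

HYPOTHETICAL_WEIGHTS = {
    "recency_weeks": -0.40,     # weeks since last order (more = worse)
    "frequency_90d": 0.25,      # orders in the last 90 days
    "vru_share_trend": 1.10,    # change in share of orders placed via the VRU (more on this below)
    "return_rate": -0.60,       # returns as a share of orders
}
INTERCEPT = 1.5
TRIGGER_CUTOFF = 1.0            # below this score, the customer enters the FIPS drop


def fips_style_score(customer: dict) -> float:
    """Linear score: intercept + sum of (weight * behavioral input)."""
    return INTERCEPT + sum(
        weight * customer.get(feature, 0.0)
        for feature, weight in HYPOTHETICAL_WEIGHTS.items()
    )


if __name__ == "__main__":
    customers = {
        "engaged":      {"recency_weeks": 1, "frequency_90d": 5, "vru_share_trend": 0.10, "return_rate": 0.05},
        "dis-engaging": {"recency_weeks": 2, "frequency_90d": 4, "vru_share_trend": -0.60, "return_rate": 0.30},
    }
    for label, c in customers.items():
        score = fips_style_score(c)
        flag = "FIPS drop" if score < TRIGGER_CUTOFF else "no action"
        print(f"{label}: score={score:.2f} -> {flag}")
```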

Tootie began life ignored by our best customers – the same folks who specifically asked us for “a way to order without waiting on hold for a Net Rep”.  In other words, these customers craved more control of the Interactive system we offered them.

After Marketing decided to go in and Optimize what those smart engineers had built, Tootie rapidly gained support from our best customers.  The “personality” of the VRU was so popular she would get Holiday cards and gifts from customers!

What we found buried in the FIPS model was this: changes in use of the VRU predicted eventual customer defection by defining a tremendously important concept – Peak Engagement.

Here’s a simple explanation of this part of the FIPS model: if the percentage of orders placed using Tootie was rising, the customer was Engaged, and accelerating.  If the percentage of orders placed using Tootie was falling, this best customer – even though she had been purchasing Frequently and purchased Recently – was dis-Engaging, and on the way to Defection.  For those of you who might care, the actual calculation was more like a “Rate of Change” idea.
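
For the mechanically inclined, a rough sketch of the Rate of Change trigger on the VRU share of orders might look like the following; the two comparison windows and the 10-point drop threshold are illustrative assumptions, not the actual FIPS calculation.

```python
# Sketch of the "Rate of Change" trigger on VRU (Tootie) order share.
# Window contents and the 10-point drop threshold are illustrative
# assumptions, not the actual FIPS calculation.

def vru_share(orders: list[dict]) -> float:
    """Percentage of orders in the list that were placed through the VRU."""
    if not orders:
        return 0.0
    return 100.0 * sum(o["via_vru"] for o in orders) / len(orders)


def peak_engagement_trigger(recent: list[dict], prior: list[dict],
                            drop_points: float = 10.0) -> bool:
    """True if VRU share fell by more than `drop_points` between the two
    periods, even though the customer is still actively ordering."""
    still_active = len(recent) > 0
    return still_active and (vru_share(prior) - vru_share(recent)) > drop_points


if __name__ == "__main__":
    prior = [{"via_vru": True}] * 8 + [{"via_vru": False}] * 2    # 80% via Tootie
    recent = [{"via_vru": True}] * 4 + [{"via_vru": False}] * 6   # 40% via Tootie
    print(peak_engagement_trigger(recent, prior))  # True: share fell 40 points
```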

Think about this for a second.  The customer, by all “normal” ways you might view behavior, was “Active” and still purchasing.  But there was a change in the associated behavior which predicted the customer was in Defection mode, that Engagement had Peaked.  It signaled the customer was beginning to dis-Engage even though the customer was still Active.  And it turned out that Peak Engagement was the absolute highest ROMI opportunity for a Marketing intervention, a classic “TripWire” or trigger for taking Action.

For the digital analysts out there, this idea would be similar to a visitor with a history of posting comments or reviews every week who continues to visit at the same rate, but all of a sudden posts less frequently, say every other week.  FIPS would look at this change and call it a “trigger”, predicting the visitor’s post frequency would keep dropping, and then pretty soon, the visitor would simply stop visiting.
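
Again purely as a sketch, the web version amounts to comparing two rates over a window; the window length and thresholds here are made up for illustration.

```python
# Sketch of the web analog: visit rate holds roughly steady while the rate of
# the key Action (posting) falls sharply. Window length and thresholds are
# illustrative assumptions only.

def per_week(count: int, weeks: int) -> float:
    return count / weeks if weeks else 0.0


def posting_trigger(visits_now: int, posts_now: int,
                    visits_before: int, posts_before: int,
                    weeks: int = 4) -> bool:
    """True when visits stay within 80% of the prior rate but posting drops
    to half (or less) of the prior rate."""
    visits_steady = per_week(visits_now, weeks) >= 0.8 * per_week(visits_before, weeks)
    posting_falling = per_week(posts_now, weeks) <= 0.5 * per_week(posts_before, weeks)
    return visits_steady and posting_falling


# Visitor still shows up 8 times a month but posts 2x instead of 4x: trigger.
print(posting_trigger(visits_now=8, posts_now=2, visits_before=8, posts_before=4))  # True
```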

The question is, if you knew this scenario was going to play out for a whole segment of Buyers, Pokers, Frienders, Posters, Reviewers, or whatever your site’s primary Action is, what would you do?  How would you go about redesigning or adding new functionality?  How would you treat them differently, and what would you say?

For those of you who are into surveys, after this analysis, at least you would know specifically who to get feedback from.  Nothing like knowing exactly who they are behaviorally to get to Root Cause.

The first time we heard about this Peak Engagement idea, everybody in the room (about 15 analysts / Marketers) gasped out loud, followed by a chorus of “NO WAY!” (this was the mid-’90s, remember).  The question, as with all great behavioral analysis, was “why?”  What explains the relationship between Tootie use and Peak Engagement?

We took customers who tripped Peak Engagement and ran their history backwards to look for clues.  We came to a perfectly mundane conclusion: when we could find something tangible, the root cause of this change in Tootie behavior was often what we called Friction.  In today’s language, you would probably call these causal events “customer experience issues”.

Oooh, call the Chief Customer Officer

In particular, for you segmentation fans, in order of magnitude:

1.  Customer bought a certain product or from a certain category.  If you followed this track down deeply enough, you usually found poor quality, misleading copy, high return rates, etc.

2.  Customer participated in a Marketing campaign.  Typically these were run by well-meaning Affiliates, but not always; some were from other divisions of the company, or were mistakes we made.

3.  What appeared to be poor service experiences, probably related to #1 above, that could have or should have been handled in a more appropriate way, given the value of the customer.

The customer’s response, the “signal” they gave that Friction had entered their relationship with us, was a decline in direct interactivity with the system – a declining rate of Tootie use.  They were giving up control over the system they had initially chosen to control.  In other words, it wasn’t as much of a pleasure to be in control anymore; Peak Engagement with the Interactive system had passed.

These customers, upon experiencing Friction, decided they would rather use a live rep (and probably wait on hold) than the Interactive interface to place orders.  Something fundamental had changed; the customer was slipping out of Interactive mode into a more catalog-like relationship.  They were moving from a lean-forward, participatory experience to a lean-back, detached experience.  What’s worse, this change in VRU usage created a more frustrating customer experience (being on hold) than they were used to with Tootie.  So now they were in a “Friction Loop”, creating more and more frustration.

Peak Engagement – the beginning of the end of the relationship.

Customers expressed this idea in their own words as “not as much fun anymore to shop”.  In most cases, they continued to shop, but with declining momentum and ultimate defection.  We had failed them, broken the Interactive bond.  Now we were just a “store”, like any other store or catalog.  The thrill of Interactivity was gone.

This effect is what I tried to describe back when the Engagement discussion started as the difference between Physical and Emotional Engagement.  There is a real difference.  The fact that someone is taking Actions (Physical Engagement) does not mean they are Emotionally Engaged, that they are deriving increasing benefits from the Interactions they Engage in.

This analysis drove the realization that Interactivity was truly a different thing from a Marketing perspective; it had a different emotional set surrounding it.  This is the idea that an Interactive system creates “Pull”, a desire above and beyond the utility it provides.  When this emotional layer is stripped away, you’re left with the utility of the experience.  And the utility of the experience – Physical Engagement – is often not enough to maintain a relationship.

Marketing Productivity Conclusion: in an Interactive system, from a customer Marketing perspective, dis-Engagement is more important to measure than Engagement.  Engaged customers, well, they just truck along quite nicely all by themselves; the Interactivity itself is the Marketing, the Pull.  It’s when they start to dis-Engage from that system that you need to take Action; that’s the highest ROMI Tripwire.

Based on the above, the creative idea for the FIPS mailer was very simple – Recognition.  There were lots of versions, but the format was the same.  Plain white envelope with the HSN logo on the outside.  Short, very carefully written, one-page letter.  Primary message idea:

“We just wanted to take the time to thank you for being one of our very best customers, and let you know we appreciate your business.  If we can be of service in any way, please let us know”.

As an aside, when a product or category was clearly the root cause of Friction, we turned this “evidence” over to the Business SWAT team, along with our calculation of lost customer value – nice touch, huh?

Understand, as is often true in database marketing, it’s not the creative that’s most important here – it’s the timing.  The right person, at exactly the right time.  With an appropriate message.  That same message anywhere else in the LifeCycle would not have been nearly as effective at “Pulling” these people back into the system.

In case you are wondering, yes, of course Customer Service knew all about this program.  In fact, they helped design the program, since the logical conclusion was some of these customers – if they had not given up on us completely – were going to call.  And call they did.  We even managed to get a flag into the rep screens so when a rep pulled up the customer, they knew the customer was in “FIPS Mode”.

Just to be crystal clear, not all of the FIPS customers had some kind of service issue.  But if they did have one, we wanted to try and resolve it in a manner appropriate to the customer’s value.

You just can’t do this kind of thing with a “calendar drop” mentality; it won’t work.  You have to drop according to where each customer is in their LifeCycle – regardless of date.  In a really large operation, that might mean you drop almost every single day of the year.  When the customer is ready, you have to drop – it’s a Tripwire thing.

FIPS was a completely automated, “lights-out” program – nobody had to touch it once it was built, except for occasional model tuning.  When the customer told us she was ready through her behavior, the mail went out automatically, directly from the lettershop with all the other customer service communications.
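
For the curious, a lights-out program like this boils down to a small daily batch job.  The field names, suppression logic, and lettershop hand-off in this sketch are hypothetical placeholders; the point is simply that once built, nothing here needs a human.

```python
# Minimal sketch of a "lights-out" daily trigger job. Field names, the
# suppression set, and the lettershop hand-off are hypothetical; the point
# is that nothing here needs a human once the model and the letter exist.

import csv
import datetime


def already_mailed_recently(customer_id: str, suppression: set[str]) -> bool:
    """Suppress anyone already mailed this cycle (suppression list assumed)."""
    return customer_id in suppression


def run_daily_fips_drop(customers: list[dict], suppression: set[str],
                        outfile: str) -> int:
    """Select customers whose behavior tripped the trigger today and write
    the mail file the lettershop picks up with the other service mailings."""
    selected = [
        c for c in customers
        if c["fips_triggered"] and not already_mailed_recently(c["id"], suppression)
    ]
    with open(outfile, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "address"])
        writer.writeheader()
        for c in selected:
            writer.writerow({k: c[k] for k in ("id", "name", "address")})
    return len(selected)


if __name__ == "__main__":
    today = datetime.date.today().isoformat()
    customers = [
        {"id": "1001", "name": "A. Smith", "address": "1 Main St", "fips_triggered": True},
        {"id": "1002", "name": "B. Jones", "address": "2 Oak Ave", "fips_triggered": False},
    ]
    count = run_daily_fips_drop(customers, suppression=set(), outfile=f"fips_drop_{today}.csv")
    print(f"{count} letters queued for the lettershop")
```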

The right customer, at the right time, with the right message, all completely automated.  Next time you hear that phrase and think “that’s a bunch of crap, it will never happen”, remember the FIPS example.  It not only works, it works far better than you can imagine.

Any questions on this?  Anyone done similar stuff they can talk about?

(A post-by-post index of this Marketing Bands Series is here.)

9 thoughts on “Peak Engagement (Band 5)”

  1. I think my brain is about to explode. :-)
    Talk about shoving me well and truly outside my comfort zone!

    What I find most… fascinating is the honest, and hence *hard*, appraisal that “you” were somehow failing the customer. Bad products, misleading copy, etc.
    And presumably act on fixing that accordingly.

    Another pearler of an article Jim!

    Cheers!
    – Steve

  2. Steve – Yes, well, it may take some brains exploding to get there…

    The first step to doing any of this is to redefine what Marketing is and what Marketing people are responsible for. It’s rarely the customer’s fault, in my view. People just don’t know how or where to look for Root Cause.

    I keep saying that the web site team is a prototype for this kind of business success, and I really think it is – testing oriented, looking for Root Cause, Six-Sigma / cross-functional SWAT, Failure is a Learning Experience, all of that. You see the same type of work I provided in this example with a lot of the great web teams.

    The question is: when will they cross over and do the Optimize thing for the Enterprise? That’s a Business SWAT team I want to have working on the whole damn thing.

    If they have to end up working for the Chief Customer Officer because Marketing “doesn’t get it”, so be it…but that would be a shame, in my opinion.

  3. Jim,
    Thanks for giving us such an up close and personal look into the HSN world of marketing. Truly insightful and very clearly communicated.
    There’s a lot to think about here and I may have more questions later.

    For now though, I’m kind of curious about what the beginning and end state looked like in terms of the proportion of the total marketing budget that was allocated to push and pull. How much of a re-allocation did you go through?
    Thanks again,
    Moe

  4. Moe – glad you’re finding value in this series!

    That’s an awesome question and I can answer it a couple of ways.

    If we’re talking strictly about the Marketing (including Mail Order) budget of about $15 million, prior to the Optimization, 50% of that was spent on Band 1, 5% in Band 2, none in Band 3, 25% in Band 4, none in Band 5, and 20% in Bands 6 – 8. This analysis ignores outlier years; one early year we spent $50 million on testing mass media alone that didn’t work, and I excluded that spend for a fair comparison.

    After the Optimization, it was 3% in Band 1, 7% in Band 2, none in Band 3, 75% in Band 4, 10% in Band 5, and 5% in Bands 6 – 8. It’s worth noting here that the ROMI in Band 5 was much higher than in Band 4, but the volume was lower. We were constantly looking to reallocate funds out of Band 4 into Band 5, but you have to find the opportunities first – test, test, test.

    The shift in spend above, from the “Front End” (Bands 1 – 2) to the “Back End” (Bands 4 – 5) is why this topic is so important.

    Interactivity, the Pull, really changes where you should focus spend. Dollar for dollar, Interactivity is great for acquiring customers but tends to suck at retaining them, hence the shift in the allocations above after Optimization. It’s worth saying that this is somewhat the opposite of the traditional Mail Order (Push) model, where dollar for dollar acquiring customers is very costly but keeping them is not as challenging, all else equal. For example, in a pure web customer database you will often see 70% – 80% 1x buyers; in a pure catalog database it’s more like 40% – 50%, both stats depending of course on how good the Marketers are :O.

    Now, if you are talking about the “company” budget, then you get a different picture because of all the Infrastructure costs related to Band 3. I have no idea how many hours were spent on the development of the systems to support Optimizing the Interface, but a lot of IT work went into that. A lot of Research & Analysis work went into optimizing Bands 4 and 5. Both efforts were primarily labor cost.

    Sometimes, as in the case of Nice to New Customers, Marketing felt so strongly about the idea we paid for the costs incurred by other departments to execute (and I was ultimately proven wrong on this one – another example of why Interactivity is different). We also engaged in various kinds of internal resource Barter, especially with IT, to get things done.

    Not included in Band 2 above are Affiliate fees, because we had contractual obligations to pay commissions on sales for cable distribution. This was a non-controllable variable cost, so I didn’t include it above; it could not be Optimized. However, many Marketing programs done with these Affiliates could be Optimized, and those costs are included above. This particular Affiliate model doesn’t correlate well with a web scenario, unless the web model relies solely on affiliates or platforms like Amazon or eBay to make sales, in which case you have a pretty similar situation.

    Did that answer your question?

  5. Howdy,
    I’m getting to this as I’m responding to one of your TheFutureOf comments.
    Very impressive, Jim. Truly and sincerely. Did you folks use any heuristic modeling? Based solely on what you’ve written here I wonder if that could have saved a few cycles.
    Also (and offered tongue-in-cheek) doesn’t this post resile your comments elsewhere about simplicity? I mean, I believe I can follow what you’ve written and I love the recurrent research modeling (especially the statements about some tests failing financially but leading to joy elsewhere, ahem) and the comments of others here indicate some might argue what is described isn’t simple.
    Last thought; how small a world would this method work with, do you think? Lots of effects that become hidden due to scale become monsters with smaller data sets. Just curious.
    And thanks for the excellent post.

  6. Welcome, Joseph!

    Heuristic modeling? Not to my knowledge, but I didn’t build the models. My job was to interpret the results and translate them into Marketing action.

    I don’t know how to answer about Simplicity; this is definitely the most complex “Band” due to the need for modeling – if you take it to this level.

    However, what I think people often miss is that you can get about 80% of the way to what is represented here with MUCH simpler models. After all, the FIPS model was the culmination of about 6 years’ worth of test activity, most of it done with very simple behavioral models.

    In fact, it was the extensive data set created by using the simple models (Recency – Frequency) that facilitated the creation of the FIPS model. We already knew from prior testing that Recency was hugely predictive for Interactive. So FIPS was really about “just how good can we get at this?”
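
    Purely as an illustration (the thresholds are made up, not the ones we actually used), a simple Recency-Frequency flag is nothing more than a couple of comparisons:

```python
# Purely illustrative Recency-Frequency flag; the 30-day / 3-order
# thresholds are made up, not the ones actually used.

from datetime import date


def rf_flag(last_order: date, orders_last_90d: int, today: date) -> str:
    recency_days = (today - last_order).days
    if recency_days <= 30 and orders_last_90d >= 3:
        return "best"          # recent AND frequent
    if recency_days > 90:
        return "lapsing"       # not recent, regardless of frequency
    return "average"


print(rf_flag(date(2007, 5, 1), 4, today=date(2007, 5, 20)))  # "best"
```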

    On the last regarding “world size”, I think the question is above my pay grade, if I understand it. My understanding has always been that the advanced predictive models (FIPS) work much better with larger data sets. If you’re talking about simple models like Recency, they work well with small data sets. So my impression was the more advanced the model, the more data it needed, with machine intelligence / data mining requiring the largest data sets of all to produce actionable insight.

    But like I said, I don’t actually create any models above the RF level, which requires only simple counting. I just translate the model output into money!
