Measuring Desirability

Why do we want to do a 2-Step acquisition?  Because the conversion rate per dollar of media spend is going to be higher.  It's the Online equivalent of the difference between buying single words and buying phrases in PPC.  The former generates a lot of traffic, but the latter gets higher conversion and is much more Productive.

In other words, a 2-step customer comes into the Relationship with higher Potential Value and higher Momentum.  And that’s important, because it means you spend less in Marketing over the longer term as the customer will, on average, keep interacting for a longer time.

If you’re not sure what that all means, perhaps it will become clearer as we dissect Desirability (Satisfaction), the last component of the AIDAS model.  Here’s the core issue:

Offline, we know people come back to Brands or Businesses “by themselves” because they like the Product or Experience.  We also do Advertising to these same people, as well as those less likely to come back or not likely to come back at all.

So how do we know what percent of the resulting activity is due to people just coming back because they enjoy the business, and how much is due to the Advertising?  How do you calculate ROI?

A very difficult task.  Even if you could identify the "likelies", you generally can't exclude them from offline media.  So this whole issue of "likelihood to come back" has been completely ignored offline, because there's no way to act on it.

Online, and in much of Offline Database Marketing, we don’t have this problem.  It’s a pretty straightforward and common analytical task.

We can measure quite accurately how much of "coming back" is from Advertising and how much is from "Experience", or the more global concept of what Forrester calls Desirability – the fact that the customer simply enjoys interacting with the business and wants to interact again.  And online, we can target specific individuals with specific messages based on their likelihood to come back.

But, most people in Online marketing are not acting on this intelligence or targeting capability; they’re ignoring the idea largely because it didn’t matter offline.  Are these the same people that keep saying “Interactivity is Different”?

I hope not, because they’re certainly not acting like it is!

Why should this concept of “likelihood to come back” really matter to Online Marketers?  Because it is much, much more powerful than you think it is.  Orders of magnitude larger.  However, once you screw up, the downside is also quite powerful – “not likely to come back”.  This brings up two important and powerful areas to consider:

1.  Over-spending to get people to come back who would have come back anyway
2.  Under-spending to get people to come back who are less likely or unlikely to come back

In most cases, you will find the budget mis-allocated in this way.  To optimize, you will want to reallocate budget from #1 into #2.
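To see why the reallocation pays, run the arithmetic on incremental response per dollar.  Here is a minimal sketch, assuming you already have treated and control response rates per segment (the test described below produces exactly these numbers); every segment name, rate, and spend figure is hypothetical:

```python
# Hypothetical per-segment results: (audience size, treated response rate,
# control response rate, media spend in dollars). Not real data.
segments = [
    ("likely_to_return", 100_000, 0.120, 0.110, 60_000),
    ("less_likely",       80_000, 0.050, 0.030, 15_000),
    ("unlikely",          50_000, 0.015, 0.005,  5_000),
]

for name, audience, treated, control, spend in segments:
    # Conversions the spend actually caused = audience * (treated - control);
    # the control rate is what would have happened "by themselves" (Pull).
    incremental = audience * (treated - control)
    print(f"{name:17s} incremental={incremental:7.0f}  per dollar={incremental / spend:.4f}")

# If incremental response per dollar is lowest where spend is highest, you are
# paying people who would have come back anyway (#1); move that budget to the
# segments where each dollar causes more response (#2).
```

In this made-up example the "likely to return" segment soaks up most of the budget but each dollar there causes the least incremental response – the mis-allocation described above.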

Online, there is a powerful “Pull” that brings people back, over and over – without needing to provide incentives or begging them.  This Pull is the very fabric of Interactivity.

What’s more, you can measure this Pull quite precisely and take action where appropriate.  Here is how:

1.  If you don’t try anything else new this year, do a controlled test with your e-mail program.  This is the simplest, most direct way to prove to people you’re not (I’m not?) crazy about how powerful this Pull idea is.  Please do not use whatever demo / product segmentation you normally use with e-mail for this test.  If you want to analyze this Pull behavior, you have to segment using behavior.

Most of the big e-mail vendors can do this for you; tell them you want to do a "Recency Test with 30-day segments and a Control Group for each segment".  The most universal "last interaction" (the base for Recency) for many folks will be "last open".  You could also use "last click-through", but of course you will have a smaller active base.  If you're in commerce, use "last purchase date" if you can, since that is what really matters.  Just send whatever your default creative is, so you keep a baseline against prior campaigns.  You will probably end up with response falling off steadily as Recency ages; a rough sketch of the setup follows.
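If you want to see the mechanics, here is a minimal sketch of building the segments yourself; the file name, column names, and the 10 percent holdout are illustrative assumptions, not a prescription:

```python
# A minimal sketch: 30-day Recency segments with a control group in each.
# "customers.csv" and the column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("customers.csv", parse_dates=["last_open"])  # or last purchase date

days_since = (pd.Timestamp.today().normalize() - df["last_open"]).dt.days
df["recency_segment"] = (days_since // 30).clip(upper=12)  # 30-day buckets, capped at 360+

# Randomly hold out ~10% of EACH segment as a control group that gets no mailing.
rng = np.random.default_rng(42)
df["is_control"] = rng.random(len(df)) < 0.10

mail_list = df[~df["is_control"]]          # send the default creative to these
print(df.groupby(["recency_segment", "is_control"]).size().unstack(fill_value=0))
```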

If you want to know more about these ideas or set the test up yourself, there are detailed explanations in this series and this series.  Questions?  Just comment below.

2.  Perhaps more importantly, you can measure the decline of Pull, the absence of Pull, and act on that as well.  Pull is your measurement of Desirability.  Where you find lack of Pull, you will find un-Desirable experiences you can take action on.  The same test data gives you the measurement, as sketched below.
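Reading the results is the measurement itself: the control group's return rate is, by definition, Pull (they came back with no mailing), and the gap between mailed and control is what Advertising added.  A minimal sketch, continuing the hypothetical table above and assuming a "returned" flag recorded after the measurement window closes:

```python
# Pull = control-group return rate; ad lift = mailed rate minus control rate.
# "returned" is an assumed 0/1 flag set once the measurement window closes.
rates = df.groupby(["recency_segment", "is_control"])["returned"].mean().unstack()
rates.columns = ["mailed_rate", "pull_rate"]   # is_control == False, True
rates["ad_lift"] = rates["mailed_rate"] - rates["pull_rate"]
print(rates)  # a pull_rate that sinks as Recency ages is Desirability declining
```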

Now, a lot of people talk about being “customer-centric” and customer experience and all that.  Makes perfect sense, and has made sense since probably the first barter transactions, right?

What you don’t hear people talk about is how to measure the profitability of a customer experience or Desirability effort.  How to identify Desirability problems – even if the customer doesn’t say a word about them.  How to isolate and fix these Desirability problems.  And how to measure the increased profitability directly attributable to fixing these Desirability problems.  Wouldn’t you like to identify these un-Desirability problems before they go Social on you?  Why be reactive when you can be proactive?

That would be a pretty neat trick, don’t you think?

Here’s how you do it.

Once you have proven how powerful this Pull (come back by themselves) concept is with your own data – and it is especially powerful among your best, most Engaged customers (is that a surprise to you?) – start asking why, for other groups, Pull is declining or absent.  What is the commonality among visitors or customers with the lowest "likelihood to come back", where Pull is declining or absent?

Here’s what you will find:

a.  They bought the same product or products
b.  They bought products from the same vendor or category
c.  They responded to the same campaign, or arrived as traffic from the same source
d.  They talked to the same salesperson or service agent
e.  They were formally Engaged with the same kind of content

and on and on.  Behavioral segments.

Visitors or customers who “did the same thing”.

Basically, you will find out where Desirability is lacking – literally, what you are doing every day in Sales, Marketing / Product, Service, or Operations to drive away customers and prospects.  A sketch of that hunt follows.
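Here is a minimal sketch, assuming the same kind of customer table with a returned flag plus whatever behavioral attributes you capture; all the column names are hypothetical:

```python
# For each behavior, find the groups with the lowest likelihood to come back.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical: returned flag + behavior columns

for attribute in ["product", "campaign_source", "service_agent", "content_type"]:
    rates = df.groupby(attribute)["returned"].agg(rate="mean", n="size")
    worst = rates[rates["n"] >= 100].nsmallest(5, "rate")  # skip tiny groups
    print(f"\nLowest likelihood to come back, by {attribute}:")
    print(worst)
```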

And then you can decide what you are going to do about it.  That’s a whole other challenge I will address in the next post.

Your feedback and questions are appreciated.

One thought on "Measuring Desirability"

  1. Enjoyed reading this blog post and wanted to comment on one other way of testing mis-allocated budgets, related to your point about marketers over-spending on consumers who would have come back to the advertiser's brand anyway versus under-spending on consumers who are less likely or unlikely to come back. In addition to using an email vendor to accomplish a test, you can accomplish the same test with your display media budgets and paid search budgets.

    We have a few clients that have accomplished this via 'attribution analysis', or display cookie-based analysis. These companies looked at multiple marketing touchpoints across media channels to get a more accurate understanding of how various multi-channel activities ultimately affect the final click (or conversion). By analyzing marketing campaign data (including cookie-level raw data) across all websites and other media channels, building a unified consumer data warehouse, and applying specific business-driven algorithms, they learned the following insights:

    1. Organizations must consider multiple channels. CPA (and sometimes CTR) doesn't provide accurate data for optimizing campaign spend. Because CPA is calculated based on the last impression, our clients were not giving appropriate credit to other channels and the influence these channels have on consumer behavior. Adjusting attribution and media optimization based on the new data led to a 12.5 percent increase in ROI on media spend for one of our clients.

    2. Assigning higher budgets to certain publisher websites based on historically low CPAs, without understanding the bandwidth of these sites, can lead to wasted money and missed opportunities that could be captured by spreading the spend across niche online publisher sites.

    3. Cookie data analysis is a much more effective way to set frequency caps on media placements. Most advertisers set frequency caps on media placements based on the initial planning data they receive from third-party sources such as comScore or A.C. Nielsen. For our clients, cookie data analysis changed the optimal frequency for each campaign, site, placement and creative. Impressions were being wasted, and in some cases a few more impressions would have generated a substantially larger number of new conversions. By placing the correct frequency caps, they gained efficiencies of about 6-9 percent of media spend in the first few months.

    In summary, via these tests, our clients learned that they achieved significant marketing campaign conversion improvement when taking into consideration the “sphere of influence” of multiple consumer touchpoints. Focusing on and measuring the final conversion click is simply not enough.

    Alan Osetek
    Visual IQ
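To make the last-impression problem in the comment concrete, here is a minimal sketch contrasting last-click credit with a simple linear multi-touch split over the same converting paths.  It is an illustration only, not Visual IQ's actual algorithm, and the channels and paths are made up:

```python
# Compare last-click attribution with a linear multi-touch split.
from collections import defaultdict

# Hypothetical converting paths, oldest touch first.
paths = [
    ["display", "search"],
    ["email", "display", "search"],
    ["search"],
    ["display", "email", "search"],
]

last_click = defaultdict(float)
linear = defaultdict(float)
for path in paths:
    last_click[path[-1]] += 1.0             # all credit to the final touch
    for channel in path:
        linear[channel] += 1.0 / len(path)  # credit spread across the path

print("last-click:", dict(last_click))  # search appears to drive everything
print("linear:    ", dict(linear))      # display and email recover their influence
```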
