Monthly Archives: April 2007

The Analyst’s Enigma

Some of you might know that I produced the Web Analytics Association’s continuing education courses on Web Analytics (with Co-Chair Raquel Collins and a ton of great contributors).  One part of this project I am very proud of is the 4th Course, Creating and Managing the Analytical Business Culture.  This course was a real “stretch” goal because initially, the Education Committee thought there simply would not be enough material to create an entire course on this subject.  But we did it, and more importantly, the students really love it, because it addresses the “politics of analytics” they face every day.

One of the exercises in this 4th course is called The Analyst’s Enigma.  It always generates a lot of intense discussion, and without giving away too much of the actual exercise, I thought you might like to either comment on how you would handle this situation or share your own similar war stories and how you handled them.

By the way, it’s based on an actual situation you may well have experienced (or will in the future) as web analytics folks spread their wings and bring their business optimization skills to the rest of the company. Here goes:

The Analyst’s Enigma

You are the manager of web analytics at a large public company.  The “web stuff” in the company started out in IT, so for legacy reasons you are part of IT, but your internal client is really the marketing department, which is now in charge of the web site.  About 95% of your time and effort is spent working with marketing to optimize the web site, a project that has been very successful.

Over time, your success with the web site optimization process and as an analyst has been recognized in the company, and you occasionally get requests to do “problem solving” sessions with other parts of the company.  Typically, you run these projects through the same analytical process you used with the web site: Define objectives, create / get buy off on KPIs, measure baselines, develop ideas for testing, test and measure the results of those tests.

Recently, you were working on a project for customer service, trying to develop / improve KPI’s for the measurement of performance in the call center, which has a very rich data set.  This data is surprisingly similar in many ways to the traffic data set from the web site.  A phone call is very much like a visit; it has a duration, it typically has a number of steps like a web site funnel, and the steps end with accomplishing or not accomplishing a goal.  The call center is trying to evolve from relying on simple metrics that score only “efficiency” to a KPI that better balances efficiency and a good customer experience.

Your web site work has lately focused on a new campaign that the marketing folks are very proud of.  It’s blowing the doors off anything they have ever done in terms of response, thanks in large part to your analytical work on promotions.  The success of this program is widely known throughout the company, as is your role in the success.

On a Friday afternoon, you find yourself with some free time to devote to the customer service KPI project.  While examining the call center data set, a remarkable possibility presents itself.

Using a new, experimental KPI you have developed that balances efficiency and customer experience in the call center, it appears that every time marketing drops this new, highly successful campaign, there is a dramatic negative spike in this new call center KPI.  The correlation between the marketing campaign drop and the negative spike in the call center KPI is extremely high, leaving no doubt in your mind that there is a causal relationship between the two events.  There is always some negative impact on the call center when a campaign drops, but nothing like the magnitude of the impact caused by this most successful campaign.  Nothing about the campaign execution – for example, the volume of the drop – would lead one to conclude it should cause problems in the call center.
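(For the analysts reading along: the correlation check behind that last paragraph is straightforward.  Here’s a minimal sketch in Python, assuming a hypothetical daily series of the experimental call center KPI and a list of campaign drop dates – all the names and data shapes below are mine, purely for illustration, not part of the course exercise.)

```python
# Minimal sketch: does the experimental call center KPI dip on campaign drop
# days (plus a short lag)?  The data shapes and names are hypothetical.
import pandas as pd

def drop_impact(kpi: pd.Series, drop_dates: pd.DatetimeIndex,
                lag_days: int = 2) -> dict:
    """kpi: daily KPI values indexed by date.  Compares drop-day windows
    against all other days and returns a simple correlation against the
    drop-day indicator."""
    in_window = kpi.index.to_series().apply(
        lambda day: any(0 <= (day - drop).days <= lag_days for drop in drop_dates))
    return {
        "kpi_on_drop_days": kpi[in_window].mean(),
        "kpi_on_other_days": kpi[~in_window].mean(),
        "correlation": kpi.corr(in_window.astype(float)),
    }
```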

What would your next steps be?  What should you do with this knowledge that (as far as you know) only you possess?  The topics below might be worth touching on:

Topic 1.  You work for IT, and your main internal client is marketing.  The customer service analysis is a side project.  What responsibility do you have for resolving the apparent conflict between optimizing marketing and optimizing the call center?  Is it your responsibility to try and “optimize the company” across all the business units by providing this kind of information?

Topic 2.  One alternative would be to alter the new KPI (which you feel is very, very good) so that it masks the effect of the marketing campaign on the call center.  This would certainly reduce potential internal conflict, but it would also result in a weaker, less trustworthy KPI for the call center.  Would you consider this route?  And if your main client (Marketing) suggested you “tweak the customer service KPI a little bit to help us out on this,” what would you say?

Topic 3.  What kind of action plan can you imagine for trying to resolve the apparent conflict between the success of the campaign and the performance of the call center?  Would you call a meeting first or speak privately to some folks and discuss a potential meeting?  Who would you speak to 1st, 2nd, and 3rd given your ties to IT, marketing, and customer service?  Who would be invited to the first meeting?

Looking forward to hearing your ideas on this or similar situations you have faced.  If you’re relating a real analytical culture war story, you might think about changing the names to protect the innocent!

And / Or, if you’d like to share your story interactively with the Course 4 students in their Café (chat), we’d love to have you as a guest.  FYI, the students are primarily adults who are already working with web analytics as part of their job and now need to upgrade their skills.  Let me know if you are interested in sharing your story with them - you can use the “Email Jim” link below to contact me.

Banners versus Search

Alan quotes a Fred Wilson post on the “return of the banner” as a significant force due to Google’s DoubleClick purchase. 

I had pretty much the opposite reaction – this is a chance for Google to prove what banners are really worth and replace a lot of that banner inventory with more targeted avails, aka AdSense or some variant based on DoubleClick tracking data.

For example, I think the much touted “view-through” metric that really helped out the banner business is up for grabs here.  The unresolved problem (to my knowledge) with tracking view-through is the lack of cross-cookie tracking.

Let’s say you are in search mode: you search and arrive at a site that has banners.  Even though you really were scanning the text on the page and ignoring the banners, you are counted as being “exposed” to the banners.  You continue searching and land at the site those same banners link to, and complete an action.

The banners will get credit for the “view through” on this action, even though you were searching and / or clicking on PPC ads.  To make matters worse, you will probably also credit SEO or PPC for the conversion – so you’re double-counting.

If you are Google and you own DoubleClick, you can reconcile and sequence all this activity (at least when Google is the search engine being used) and figure out what the real value of a banner is.  Branding value aside, of course… ;)
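To make the double-counting concrete, here’s a small hypothetical sketch of one user’s reconciled event stream – the kind of view only a party that sees both the search side and the banner side could build.  The events, names, and attribution rule below are purely illustrative, not anything DoubleClick or Google actually exposes.

```python
# One user's event stream, reconciled across search and the banner network.
# Purely illustrative - not a real ad-serving or analytics API.
from datetime import datetime

events = [
    ("2007-04-20 10:01", "search_click", "blue widgets"),     # PPC click
    ("2007-04-20 10:02", "banner_view",  "widgetco_728x90"),  # passive exposure
    ("2007-04-20 10:05", "search_click", "widgetco"),         # second PPC click
    ("2007-04-20 10:09", "conversion",   "order_1234"),
]

# Siloed attribution: the banner network sees view -> conversion and claims a
# "view-through", while the search reports claim the same conversion.
banner_credit = any(kind == "banner_view" for _, kind, _ in events[:-1])
search_credit = any(kind == "search_click" for _, kind, _ in events[:-1])
print(banner_credit and search_credit)   # True -> one order, counted twice

# Sequenced attribution: the last *active* event (a click) before the
# conversion gets the credit; the passive banner view is discounted.
clicks = [e for e in events[:-1] if e[1] == "search_click"]
last_active = max(clicks, key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M"))
print(last_active)   # the 10:05 search click gets the credit, not the banner view
```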

Would you be surprised if the true value of a banner ad is a lot closer to an AdSense avail than an AdWords avail?  I wouldn’t be; in fact, I bet banners are worth less than AdSense avails – at least for generating conversions.

I guess there will always be Branding folks who buy impressions and perceive value in them, without any further measurement.  You could measure the success of this tactic using Engagement with the Brand site – overall Engagement should rise, no banner click required.  Failing any improvement in Engagement, you could always say “the benefits all accrue offline” and be done with it.

For those interested in a more technical discussion of this view-through tracking issue, try here.

Recency Defines Engagement: Visitors

The Measuring Engagement series starts here.  For a clickable index of the 5 part Measuring Engagement series, look here. 

Last time we addressed the topic of measuring Engagement – and attributing actual Value to it – we were looking at visitors generated by various campaigns.  Here is what the Frequency (average number of visits) and Recency (average days since last visit) look like in a web analytics interface:

[Chart: “Initial Campaign” – Frequency and Recency by campaign, as shown in a web analytics interface]

And here is what the Campaigns, numbered 1 – 16, look like in the Current Value / Potential Value Map:

Quadrant 1 contains campaigns generating visitors with both high Current Value and high Potential Value – these are the campaigns deserving more investment because the visitors created generate highest value to the company now, and have the highest likelihood to generate more value in the future (are the most Engaged).  If you’d like to know more about what metrics drive the Map and how it was created, see here.
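For readers who like to see the mechanics, here’s a minimal sketch in Python of how the numbers behind the Map could be produced – assuming a hypothetical visit log with visitor_id, campaign (or any other grouping), and visit_date columns.  The column names and the median-based cut lines are mine for illustration only; your web analytics tool or warehouse will differ.

```python
# Minimal sketch of the Current Value / Potential Value Map from a visit log.
# Hypothetical columns: visitor_id, <group_col> (e.g. campaign), visit_date.
import pandas as pd

def value_map(visits: pd.DataFrame, group_col: str,
              as_of: pd.Timestamp) -> pd.DataFrame:
    """Average Frequency (visits per visitor) and Recency (days since last
    visit) per group, plus a quadrant label."""
    per_visitor = (visits.groupby([group_col, "visitor_id"])
                         .agg(visit_count=("visit_date", "count"),
                              last_visit=("visit_date", "max"))
                         .reset_index())
    per_visitor["days_since"] = (as_of - per_visitor["last_visit"]).dt.days

    summary = (per_visitor.groupby(group_col)
                          .agg(avg_frequency=("visit_count", "mean"),
                               avg_recency=("days_since", "mean")))

    # Illustrative cut lines at the medians: high Frequency = high Current
    # Value, low days-since-last-visit = high Potential Value (Engagement).
    high_cur = summary["avg_frequency"] >= summary["avg_frequency"].median()
    high_pot = summary["avg_recency"] <= summary["avg_recency"].median()

    summary["quadrant"] = 4                        # low Current, low Potential
    summary.loc[high_cur & high_pot, "quadrant"] = 1
    summary.loc[~high_cur & high_pot, "quadrant"] = 2
    summary.loc[high_cur & ~high_pot, "quadrant"] = 3
    return summary
```

Called as value_map(visits, "campaign", as_of=some_date), each campaign gets an average Frequency, an average Recency, and a quadrant – the same structure as the Map above.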

Beyond Campaigns, how else can we use the Current Value / Potential Value Map?

Search Phrases

One of the more interesting uses is looking at search phrases as the “campaigns”.  Search marketers, especially PPC folks, are often victims of initial conversion rate-itis, where campaigns are managed and funded based on a short-term conversion rate.  To be fair, often this is a systems integration problem more than anything else – there simply is not enough “visibility” in the out weeks to determine if longer-term conversion to final goal is occurring.  This is common where there is not a clean integration between web analytics and the back-end commerce system, for example.

Using the Customer Value Map with search phrases provides you with a way to infer future conversion and balance out some of the decision making on short-term conversion.  If you know a certain search phrase is generating visitors who visit Frequently and are still Recent in their visit behavior (Quadrant 1), you can infer this phrase is going to be more profitable than a phrase generating visitors who end up in Quadrant 4.  For an example of this idea in action, see here.

Likewise, let’s say you’ve optimized the heck out of all PPC campaigns as far as copy, landing page navigation, etc. and still have a number of phrases that are “breaking even” on an ROI basis.  But some of these break-even campaigns consistently deliver visitors who end up in Quadrant 1.  The last campaigns I would kill are the ones delivering visitors who end up in Quadrant 1, since these visitors have the highest Potential Value.  Kill Quadrant 4’s first, then 3’s, then 2’s to see if you can get where you need to go in the overall ROMI mix.  Then do anything you can (including fishing through databases / logs manually, if need be) to find out whether those Quadrant 1’s are really not paying out – I’d bet something is missing, a break in the logic / code somewhere that is not giving credit where credit is due.
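If you want to see that step as code, the same hypothetical value_map sketch from above works here – assuming each visit row also carries the referring search phrase (again, my illustrative column name).  Something like:

```python
# Hypothetical continuation of the value_map sketch above: among break-even
# PPC phrases, kill Quadrant 4 first, then 3, then 2, and keep Quadrant 1.
break_even = ["blue widgets", "cheap widgets"]   # illustrative phrase list
phrase_map = value_map(visits, "search_phrase", as_of=pd.Timestamp("2007-04-30"))
kill_order = (phrase_map[phrase_map.index.isin(break_even)]
              .sort_values("quadrant", ascending=False))   # Quadrant 4 at the top
```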

Navigation / Functionality

Before we get into this area, let’s step back a minute for a global thought. 

This Retention / Engagement analysis stuff may seem a bit strange to you, and if it does, this is probably the reason: what is most important to measure in this area is what does not happen. 

Think about it.  This is not what you are used to in web analytics (or most other transactional analysis) – you are always focusing on what did happen.  How many visitors, clicks, conversions, etc. happened?  But I ask you this: in terms of Objective / Action, where would you want to take action in the Engagement area, where would the highest payout be?  Right.  Not with the Visitors who are already Engaged, but with those who are becoming less Engaged – where something is not happening.

Keep that in mind as we go through the next section…

Has this ever happened to you?  Your revenue KPI’s start sinking, gradually at first, and then at an increasing rate.  You run around trying to figure out what the problem is – campaigns, changes in natural ranking, competitor activity, whatever.  You’re pulling your hair out because it doesn’t make any sense – everything is tracking “normal”, right?  No changes in the past few days, or even weeks?  Right.  So, what the heck is going on?

Understanding the Volume of traffic by segment to your site is a given.  But what happens to visitor Value segments after their first visit cycle is important as well.  I can’t tell you how many times I have seen people screw themselves over the longer run because they are tracking / optimizing for Current Value rather than both Current and Potential Value.  This is a particularly important idea when you are testing new navigation / functionality and content or products, because it’s not only Campaigns that determine the long-term quality of visitors, but also the site itself.

Here’s an example.  Let’s say you have a simple value segmentation of visitors during the past 12 months that divides the Current Value of Visitors into 2 groups – Frequency over 50 Visits and under 50 Visits.  Further, you divide Potential Value (Engagement) into 2 groups – Recency of Visit within 2 months and over 2 Months ago.  You end up with a 2 x 2 Visitor Value Map that looks something like this, with the percentage of the 12 month visitor base listed in each Quadrant:

(Analysts: This simple data set, the first time you present it, may cause some rapid heartbeats.  Trust me, most every site looks about like this – the majority of Visitors are in Quadrant 4; they have only visited a few times and have not been back lately.  What’s a few rapid heartbeats among friends anyway??  Gulp…  Hey, you’re an analyst, you’re used to this kind of thing!)

In the chart above, we see 10% of your Visitors are in Q1 (Quadrant 1) – at least 50 visits, Last Visit within 2 Months.  These are the 10% of your Visitors who probably drive the majority of your revenue, the “rocket fuel” visitors.  Q3 is where former best Visitors end up – they have high Frequency / Current Value but have abandoned visiting the site.  If you’re not clear how time since Last Visit date correlates to site abandonment, see here.
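If you want to build this 2 x 2 map yourself, here’s a minimal sketch using the same hypothetical visit-log shape as before (visitor_id and visit_date columns); the 50-visit and 2-month cut lines come straight from the example above.

```python
# Minimal sketch of the 2 x 2 Visitor Value Map: percent of the visitor base
# in each quadrant.  Assumes a hypothetical 12-month visit log with
# visitor_id and visit_date columns.
import pandas as pd

def visitor_value_map(visits: pd.DataFrame, as_of: pd.Timestamp) -> pd.Series:
    """Current Value = Frequency (50+ visits vs. under 50);
    Potential Value = Recency (last visit within ~2 months vs. longer ago)."""
    v = (visits.groupby("visitor_id")
               .agg(frequency=("visit_date", "count"),
                    last_visit=("visit_date", "max")))
    frequent = v["frequency"] >= 50
    recent = (as_of - v["last_visit"]).dt.days <= 60    # roughly 2 months

    quadrant = pd.Series(4, index=v.index)              # infrequent, not recent
    quadrant[frequent & recent] = 1                     # "rocket fuel" visitors
    quadrant[~frequent & recent] = 2                    # up-and-coming
    quadrant[frequent & ~recent] = 3                    # former best, defecting
    return quadrant.value_counts(normalize=True).reindex([1, 2, 3, 4],
                                                          fill_value=0) * 100
```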

Now, let’s say you make a major change in navigation on the site.  Traffic flow to the site remains the same; all the same campaigns are running and everything seems normal.  Hopefully, conversion even goes up (that’s why you redesigned the nav, right?) 

A couple of months later, all of a sudden your revenue per visitor or visit metrics start to slip. 

Thankfully, you have been keeping track of the Percentage of Visitors in each Quadrant of your Customer Value Map over time (phew!) – I wonder what that looks like?  Here is what you find:

The Quadrant 1 Visitor segment (Top Graph, dark line) is shrinking; it has dropped from 10% of the visitor base to 6% or so over a 7 month period.  Doesn’t sound like much, right?  That is, until you remember that these Quad 1 rocket fuel visitors are responsible for a very significant portion of your revenue.  This means, of course, that your revenue per visitor follows the shrinking Quad 1 population right down the curve, as shown in the Bottom graph above.
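That trend chart is easy to keep running once the quadrant assignment exists.  Continuing the hypothetical visitor_value_map sketch from above (and its assumed visits data frame), recomputing the quadrant shares at each month-end over a trailing 12-month window gives you the series plotted in the Top graph:

```python
# Hypothetical continuation of the visitor_value_map sketch: quadrant share
# of the visitor base at each month-end, over a trailing 12-month window.
month_ends = pd.date_range("2006-10-31", "2007-04-30", freq="M")   # 7 points
shares = {}
for month_end in month_ends:
    window = visits[(visits["visit_date"] > month_end - pd.DateOffset(months=12)) &
                    (visits["visit_date"] <= month_end)]
    shares[month_end] = visitor_value_map(window, as_of=month_end)

trend = pd.DataFrame(shares).T      # rows = month-ends, columns = quadrants 1-4
print(trend[1])                     # the Quadrant 1 "rocket fuel" share over time
```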

Think about it.  In terms of gross numbers on the site, you would hardly notice a change like this in any of the “did happen” metrics.  Traffic and conversion, traffic and conversion, all just chugging along, right?  But this change in a small yet powerful group of Visitors significantly affects your Revenue KPI’s – because something did not happen.

Where are these Quad 1 visitors going?  Well, they are becoming dormant – they are moving into Quad 3 – high Frequency but poor Recency (Engagement).  It’s really the only place they can go; most can’t move to Q2 or Q4 because they have high Current Value as they start to move.  So as the population of Q1 shrinks, the population of Q3 rises, as seen in the Top chart.

What you are seeing in the chart above is a tangible visual representation of Best Visitor defection – visits not happening among most Valuable Visitors – that is hard to dispute.  Can you say Engagement Dashboard?

Then why is this happening?  I’d bet on the navigation change.  The problem is, of course, that unless you have a chart like the one above, it will be difficult to prove this idea to anybody, since the drop in the revenue KPI’s lagged the navigation change by such a long time, and all else remains consistent.

The fact is, you changed your “product” – the web site.  For some reason, the site simply does not generate or retain high value Quad 1 visitors like it used to.  Perhaps you pissed off the current Quad 1 Visitors with your changes.  Maybe the parts of the site that create new Quad 1 visitors are now buried in the new navigation, so up-and-coming Best Visitors (Quadrant 2) never find these high value creation areas. 

Did you bury sections of the site considered “low volume” in the navigation?  Better check that idea, because the low volume areas (uniquely targeted areas?) often create the highest value visitors.  You can check on this by running a Current Value / Potential Value Visitor Map for each Content Group – hopefully, before you make any changes to the web site!
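As a closing code note: if each visit row also carries a content group (say, the content group of the entry page – again, a hypothetical column of mine), the value_map sketch from earlier produces that per-Content-Group view in one call:

```python
# Hypothetical usage of the earlier value_map sketch, one row per content group.
content_map = value_map(visits, "content_group", as_of=pd.Timestamp("2007-04-30"))
print(content_map.sort_values("quadrant"))   # low-volume groups may sit in Quadrant 1
```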

Next time we visit this topic, we will look at Customers – those good folks who actually pay money to support a web operation.  If your web analytics tool does not support Visitor Frequency and Recency, you can still use the same Current Value / Potential Value model to manage Engagement through your customer database.

As always, your comments and questions appreciated…

The next post in this series on Measuring Engagement is here.