Monthly Archives: August 2007

Marketing / Technology Interface

I’m a marketing person who, in one way or another, has been tangled up with the technical / engineering world all of my professional life.  Cable Television, TV Shopping, Wireless, Internet.  I have always been dealing with brand new business models with no historical reference, swimming in data I had to make sense of, and working with engineering folks as the people who “make things happen”.

I have also been really fortunate to work with many Ph.D. level statisticians who had the patience to answer all my questions about higher level modeling and explain things to me in a language I could understand.

Because of this history, I’ve been a long-time student of the “intersection” between Marketing and Technology.  I’ve in effect become a “translator” in many ways – taking ideas from each side and converting them into the language of the other side.  Distilling the complexity of Technology down to the “actionable” for Marketing, while converting the gray world of Marketing into the White / Black – On / Off world of Technology.

With no offense to either side, to generate some kind of tangible progress, sometimes you just have to strip out all the crap from both sides to get to the core value proposition of working together.  You have to start somewhere.  Then you can build out from there.

And so I try with posts like Will Work for Data to define this intersection for others, to help both sides understand each other, and it’s tough, especially with an unknown audience varying widely in their knowledge of either side.  I try to create a “middle” both sides can understand.

Marketing folks are in the middle of a giant struggle right now with the whole accountability thing.  But it’s not so much accountability itself, because many of the best marketers have always been accountable in one way or another.  No, it’s the granularity of the accountability that is the issue; the movement from accountability defined at the “impression” and “audience” level to accountability at the “action” and “individual” level.

Here’s the challenge for Marketers: the data is different.  Impression and audience are defined by demographics, but action and individual are defined by behavior.

Perhaps this will “translate” poorly, but the Technology parallel would be folks who have built a skill set around a certain programming language and then are told that language is now obsolete.  This is extremely disruptive when you have spent 20 years understanding your craft from a particular perspective.

So here’s what we need to do to make this work.  We have to find common ground.  This will mean being a little “less scientific” on the Technical side and a little “more specific” on the Marketing side.  And then we work down through all of this to the core.

This is the same struggle web analytics folks deal with every day, but due to the early work of many writing on this topic, the web analysts were always urged to connect analysis to business outcome.  Many are getting pretty good at it; they don’t suffer the “too much science” problems their peers in marketing research seem to run up against.

But web analytics is just a microcosm of the whole Analytical Enterprise, which may or may not be Competing on Analytics at this time (background info at this link), but is probably headed in this direction.

I submit it’s a bit early to teach most Marketing folks about statistical significance, about what types of data sets CHAID works best with, the difference between Nearest Neighbor and Clustering models, and so forth.  We can always get there after we reach the core understanding.

Right now, what we need to do is figure out how to get to the core. 

I think where I might take this is to propose some fundamental rules of understanding and see if we can get both Marketers and Analysts to understand and agree on them.

You up for that?

Hit and Run Research

I’ve just scrapped a long and detailed post on the topic of how survey results are reported and used, particularly in the promotion of online marketing topics.  But I’m thinking maybe everybody else is hip to this topic, and I’m just old. 

Seems like a lot of folks use “surveys” specifically to generate press; it’s like a formula now.  And when you look at the methodology – on the rare occasions it is exposed – it often looks like crap.

I mean really, an online survey of a bunch of your customers or newsletter subscribers is fine, but then it gets reported like “research”.  If it’s research, how come there is never a “margin of error” reported?  Or a comparison of the folks who answered versus the folks who did not, or any discussion of the potential bias in the sample?
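Just for fun, here is the kind of back-of-the-envelope math that almost never accompanies these press-release surveys.  This is a generic sketch of margin of error for a simple random sample; the response count and the percentage are made up, and of course a self-selected online panel isn’t a simple random sample in the first place – which is sort of the point.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a reported proportion from a simple
    random sample, at roughly 95% confidence (z = 1.96).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Example: 300 newsletter subscribers respond, 60% answer "yes"
print(round(margin_of_error(0.60, 300) * 100, 1))  # about 5.5 percentage points
```

If a survey can’t tell you even this much, calling it “research” is a stretch.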

WebTrends Score

Is “Engagement” Physical or Emotional?

From the WebTrends Press release:

“WebTrends Score is a patented technology solution that evaluates visitors’ online behavior by quantifiably measuring the level of engagement or interest they have in content, products and services. By establishing rules that assign values to specific visit and visitor activities, marketers can go beyond conversion to evaluate the success of their efforts using realized and potential customer value”.  Bold in the last sentence is mine.

Two value dimensions – “realized”, which is history, and “potential”, which is predicted, future value.

This is great work, because now we have a more accessible way to test what is important to the execution of High ROI web site efforts – historical data or predictive data.  And I’m willing to bet anybody the answer will be the same one database marketing folks have been discovering for years: predictive data.  The leverage you gain from prediction far outstrips the leverage you gain from understanding the past.  Once you have a prediction, you can then inform this prediction with a historical view, providing context for the execution against the prediction.

To many folks, Score will be an unbelievable geekfest of historical tracking capability.  “Look at all the ways we can assign value to visitors, create scores, rank them, trigger content based on the scores”, etc.  Sure, it’s a much easier-to-work-with execution of the basic Content Groups idea, plus advanced Frequency analysis.  That’s important; easier is always good.  Broader applicability to all kinds of events is good too.  But they’re still events, and they are still history.
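To make the “assign values to specific visit and visitor activities” idea concrete, here is a toy sketch of rules-based historical scoring.  To be clear, this is my own illustration, not how WebTrends actually implements Score; the event names and point values are invented.

```python
# Hypothetical rule weights: points assigned to specific visitor activities.
# These events and values are made up for illustration only.
RULES = {
    "viewed_product": 2,
    "used_site_search": 1,
    "watched_demo": 5,
    "downloaded_whitepaper": 8,
    "started_checkout": 10,
}

def realized_score(events):
    """Sum rule-based points over a visitor's historical events.

    This is purely backward-looking: it tells you what the visitor
    did, not whether the visitor will ever come back.
    """
    return sum(RULES.get(event, 0) for event in events)

print(realized_score(["viewed_product", "watched_demo", "started_checkout"]))  # 17
```

Rank visitors by a number like this and you have a tidy history report.  But notice what it measures: events that already happened.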

Can I ask you something?  Why have web analytics folks over the years always thought it was important to segment out New from Returning visitors?  Right, because the behavior of Returning visitors is often quite different from New visitors.  And it’s worth knowing this, because Returning visitors are good, right?  If they are coming back, they must be happy with the site, “engaged” with the site, don’t you know.  So it follows these visitors have higher value and should be tracked as a unique segment, because they are worth paying attention to.  They have high historical value from repeated visits, and an implied likelihood to come back – which means they have value in the future.

This is a prediction.  A Repeat visitor is more likely to come back to the site than a New visitor.  Simple.

So, let’s take a New Visitor who interacts with a wide variety of site components, or spends a long time on the site, or both.  Compare this visitor with a Returning Visitor who interacts with the same wide variety of site components, or spends the same time on the site, or both.  Which is more valuable to the company, do you think?

Both visitors have the same “realized” or historical value.  If you stop there with the analysis, you don’t have anything actionable.  But if you toss in that one visitor is New and the other is Returning, all of a sudden you have an actionable difference.  One has higher “potential” value, value in the future, because a Returning visitor is more likely to come back than a New visitor.  Given a dollar to spend, and betting on where you would get the highest ROI, which segment would you invest in – the New visitor or the Returning visitor – since both have the same behavior on the site?

The answer is this: it depends on whether you care about building a business with legs under it. 

If you just want to churn through New visitors and don’t care if they come back, you invest in the New visitor.  You don’t change your marketing, content, or navigation to create satisfaction and repeat visits.  You focus on the physical engagement with the site.  There are plenty of examples around of how that works out in the end, though nobody seems to think it will ever happen to their site.

Or you invest in the Returning visitor who is building the business, who is providing a forward revenue stream after the first visit, who is suggesting the site to others, blogging about it, etc.  The visitor who is truly engaged with the site on an emotional level.
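If you want to put rough numbers on that choice, here is one way to think about it.  The behavior score and return probabilities below are invented for illustration – this is not a WebTrends formula – but the structure is the point: identical on-site behavior, weighted by the likelihood of a return visit, produces very different “potential” values.

```python
def expected_future_value(behavior_score, p_return):
    """A crude 'potential value' estimate: the historical behavior score
    discounted by the probability the visitor ever comes back.
    """
    return behavior_score * p_return

# Two visitors with identical behavior on the site...
same_behavior_score = 17

# ...but different assumed return probabilities (made-up numbers)
new_visitor = expected_future_value(same_behavior_score, p_return=0.15)
returning_visitor = expected_future_value(same_behavior_score, p_return=0.55)

print(new_visitor, returning_visitor)  # 2.55 vs. 9.35: same history, different potential
```

Same realized value, very different expected payback on that dollar.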

Here’s a suggestion: before we spend years creating really elegant reporting on history only to realize that history is the 2nd most useful dimension of the 2 value dimensions provided in WebTrends Score, I would like to remind folks of a couple of things:

1.  Just because I thrash around your site and interact with a lot of elements doesn’t mean I am happy with your site; indeed, it could mean the opposite – I can’t find what I am looking for, I hate the interface, etc.  This “thrash problem” is in fact the same argument often used against duration as a measure of engagement – “just because I spent a long time on the site doesn’t mean I am engaged” – for some of the same reasons above and others.

So why don’t we just agree that neither “time spent” nor “elements visited” is really a good measure of engagement?  That “history” is not really relevant to engagement?  It’s very relevant to the value of the visitor – its “realized” value, value in the past.  But by itself, it’s not very predictive of anything.  At best it says “visitor used to be engaged”.

After all, it’s history.  And history means “used to be”.

Put another way, I have 2 visitors who were both very active on the site, interacting with all the cool features.  Last visit of one was yesterday, last visit of the other was 6 months ago.  Which visitor is more engaged, has higher potential value to the company in the future?
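Here is that same idea in plain arithmetic.  The decay function and the half-life are mine, purely for illustration – the point is simply that identical historical activity tells you very different things depending on how long ago it happened.

```python
def recency_weighted(score, days_since_last_visit, half_life_days=30):
    """Decay a historical activity score by recency: the longer it has been
    since the last visit, the less that history says about future engagement.
    """
    return score * 0.5 ** (days_since_last_visit / half_life_days)

print(round(recency_weighted(17, 1), 1))    # last visit yesterday: ~16.6
print(round(recency_weighted(17, 180), 3))  # last visit 6 months ago: ~0.266
```

Same history, wildly different odds the visitor is still engaged.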

2.  Just remember that you can’t trigger Score-based profiles on a visitor who doesn’t come back.  This has always been the soft underbelly of on-site personalization; it’s a complete waste of resources customizing all kinds of trigger-based scenarios for visitors who never come back.  So if you can predict the likelihood of a return visit, you can optimize the system to address visitors with the highest potential value.

It’s true that “visitor engagement is not something you can measure using only a stopwatch”.  But it’s also true that you can’t measure engagement based on the number or breadth of events.  That’s a very shallow view of engagement, unless you are in a business where you simply don’t care if visitors ever come back.

If you have such a business model, WebTrends Score allows you to base your engagement metrics on historical event Frequency, the physical interaction with the site.  For everyone else, Score also allows you to base your engagement metrics on potential / emotional value, a prediction of the value of the visitor to the business in the future, and then execute strategy based on the historical context (Frequency) of the visitor.

It’s good there is a choice, as it’s clear some sites will prefer historical “realized” value over potential value as the primary measure of engagement.  These sites will embrace viewing history as more important than predicting the future.

Personally, I find management folks to be more interested in understanding the future than understanding the past.

What do you think?  If the above makes sense to you, let me know.  If you think it’s irrelevant or misguided, tell me why.

Bonus: If you are dealing with multi-channel or multi-system customer analysis, tell me how much easier it would be if you could summarize the value of the web component with these two variables – realized value and potential value – and then represent the web value of a customer with a 2-digit score in the offline customer record.
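Here is roughly what I have in mind, sketched out.  The scaling and the inputs are mine and purely illustrative – the idea is simply to collapse the web view of the customer into one “realized” digit and one “potential” digit, and carry that pair into the offline record.

```python
def two_digit_web_score(realized, potential, max_realized=100.0, max_potential=1.0):
    """Collapse web value into a 2-digit code: first digit is realized
    (historical) value on a 0-9 scale, second digit is potential
    (predicted future) value on a 0-9 scale.
    """
    r = min(9, int(realized / max_realized * 10))
    p = min(9, int(potential / max_potential * 10))
    return f"{r}{p}"

# e.g. realized value of 62 (out of 100) and a 0.55 predicted return probability
print(two_digit_web_score(62, 0.55))  # "65" -> drop into the offline customer record
```

Two digits, and the offline database folks instantly know both what the customer has done on the web and what that customer is likely to be worth there going forward.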

How many of you multi-database folks are already doing something like this with the web data?  C’mon, give it up…we won’t tell!