On Engagement

I’ve had some bad luck connecting to the web lately while trying to catch up on blog posts as the latest trip winds down.

The panel on Engagement at the WebTrends customer meeting was a lot of fun; if forced to sum it up in a phrase, “productive friction” comes closest.

Based on comments from the audience, the panel was quite useful for vetting some of the ideas floating around out there and for answering their burning question: “Am I missing something here?  Why should I care about this engagement thing?”

This in itself is an interesting issue: the audience generally perceives “engagement” as yet another buzzword of the week that, like most buzzwords, is simply a new label for stuff most of them deal with all the time, namely customer service and retention – or customer “experience” if you prefer last week’s buzzword.  That insight came from the well-lubricated crowd at the party after the panel, so please take it with the appropriate grain of salt.  Do people say what they really think after a few drinks?  Or were they just tired of talking about web analytics all day?

Some of the more interesting discussion among the panelists actually took place right before and after the panel, when we had a chance to first really explain our positions and then challenge each other to defend them.  Great conversation.

For what it’s worth, here’s a breakdown of what I thought I heard being said.  My perception and reality may of course be different and I encourage participants to correct any misperceptions I may have had!

Andy Beal – as the only “generalist” on the panel, I think Andy was a bit steamrolled by the hard core “get the facts” thing web analytics folks do.  He maintained web analytics could measure only one area of customer engagement with a company (the web), and that you would never get the full picture of engagement because some of it is unmeasurable.  Probably true in a strict sense, though I bet there’s a lot that can be measured on the web through customer conversations and so forth.  However, we left this “can’t be measured” question to simmer, because the rest of the panel and the audience wanted to talk about web analytics so that was what we were going to do.

Anil Batra / Myself – I’ll go out on a limb and say our positions were very similar; I’m sure Anil will chime in.  Basically, the formula is this:

The difference between Measuring Activity and Measuring Engagement is Prediction.

In other words, when you start using the word Engagement, you are implying “expected” activity in the future, with this expectation or likelihood being valued or scored with a prediction of some kind.  Activity without an implication of continuity is simply Activity; it’s history and stands alone.  Same stuff web analytics has always done, nothing new.
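
To make the distinction concrete, here’s a minimal sketch – in Python, with entirely made-up field names and weights, not anyone’s actual model – of the difference between reporting Activity and scoring Engagement as a prediction:

```python
# Illustrative sketch only: fields and weights are invented for this post.
import math
from dataclasses import dataclass

@dataclass
class VisitorHistory:
    visits_last_30d: int
    pages_per_visit: float
    days_since_last_visit: int

def activity_report(h: VisitorHistory) -> dict:
    """Measuring Activity: describes what already happened, full stop."""
    return {"visits": h.visits_last_30d, "pages_per_visit": h.pages_per_visit}

def engagement_score(h: VisitorHistory) -> float:
    """Measuring Engagement: a toy predicted likelihood of future activity.

    Recency and frequency feed a hand-tuned logistic guess; a real model
    would be fit to historical outcome data, not eyeballed like this.
    """
    z = (0.3 * h.visits_last_30d
         + 0.1 * h.pages_per_visit
         - 0.2 * h.days_since_last_visit)
    return 1 / (1 + math.exp(-z))  # probability-like score in (0, 1)

h = VisitorHistory(visits_last_30d=5, pages_per_visit=3.2, days_since_last_visit=2)
print(activity_report(h))   # history: stands alone
print(engagement_score(h))  # implies expected future activity
```

Only the second function earns the word Engagement, because it makes a claim about the future.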

Jim Sterne – Jim was a bit more global in his thinking as you might expect, and seemed to be concerned more about how Engagement fits into the greater Marketing picture rather than looking to hang parameters on it.  How Engagement is related to Customer experience and Brand, how it does or does not turn into Loyalty, and so forth.

Gary Angel / Manoj Jasra – not sure either of these fine folks fully buy into the “prediction” requirement Anil and I support, though they might be talked into it.  Gary and I had a long conversation, which included June Dershewitz, after the panel, where we traded examples and generally wrestled over what I would call the “advertising / duration conundrum”.

I maintain advertising is an outlier in this discussion, which is strange since those folks basically started this whole engagement thing and stoked the fire hard with the Duration variable that got web analytics folks in general so pissed off.  Not sure Gary or Manoj will ever accept Duration in any form as a measure of Engagement, whereas I maintain that if you isolate Advertising as a unique conversation, it makes a lot of sense.  The reality of buying online display ads is that you need an absolute standard or the networks and buying process fall apart; you simply cannot look at a unique Engagement metric for every site or the buy would never get done.  So you hold your nose, say Duration is important to advertising as a metric, and do the deal.

In other words, there is a huge difference between being Engaged with a site and being Engaged with an ad on the same site.  These are two completely different ideas and unless you believe that Engagement with a site always spills over to Engagement with the ads on the site (I do not) then these two ideas deserve two different treatments.

June wanted to get into it all over again at the eMetrics Summit…feel free to post your comments here June!

3 thoughts on “On Engagement”

  1. Hi Jim,

    I saw you speak at eMetrics and I definitely count you among the top speakers who presented meaningful and cogent arguments at this conference (I won’t name the others here but it was a very close race and I don’t intend to rank any of you within this group).

    In short, I agree with the argument that you can’t measure engagement in the way the vendors forecast. Their presentation showed a very glossy image which, in my opinion, draws a very “long bow” given the extreme volatility in the data used in web analytics. The case presented related to the process of purchasing a car. During the research process, the prospective buyer’s website activities were scored according to an arbitrary scale and their interests recorded and reported based on this activity. Based on the presentation, it seemed that these specific interests could be provided in some way to the sales team on the floor when this person arrived at the showroom, which would better enable the sales staff to close the deal.

    Take the relatively simple example of someone goofing off at work and surfing the web for a new car, then going home to talk to their partner about going to the dealer. A plausible and, I suspect, relatively common event. An alternative and equally plausible scenario is where two or more users of the same computer (e.g. a couple or family) all have separate user accounts in Windows XP (Outlook uses this to set up the email accounts) or a similar operating system. In each case their browsers will record separate cookies, and hence their behaviour is recorded independently even though they would be the same purchasing unit.

    In both scenarios presented above, the whole proposed system would fall apart unless it used alternative factors to correlate each of these events – which is unlikely, based on my knowledge of the product. The optimist would say that these factors, combined with the problem of cookie deletion, have a relatively small and aberrant effect. Personally, as a sceptic, I see this as significant potential noise that will artificially skew the data in a significant way. As such I saw the presentation as little more than a very big sales pitch for a nice idea.

    This is not to say that such a technique is completely without merit, but rather that the message being sent – that we could have such a detailed view of the customer and their purchase intentions – is misleading. It is not possible, given the current state of the Internet, to reliably measure this type of information in any significant manner.

    In any case, a decent salesperson would learn most of what the proposed solution offers within the first 5 to 10 minutes of conversation with the customer on the sales floor. I suspect that any car sales staff not doing this would have relatively short careers. At best this gives the sales staff a conversation starter, which is good, but at what cost?

    Engagement has similar problems. Without full details of intent, how can we possibly contextualise the activities of an unidentifiable visitor? Sure, we can take hints from their activities such as the search queries they used, pages viewed, etc.; however, with aggregated data it is difficult, nay, impossible to decipher their intent. To infer that high page views plus high time on site = engagement is simplistic at best. I have an article at http://www.panalysis.com/web_analytics_time_on_site.php which presents an (albeit still simplistic, but less “dumb”) approach to deciphering time on site and page views per visitor. I don’t dare label this engagement, but rather a means of categorising behaviours based on limited variables.
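
    To illustrate the kind of categorisation I mean – a toy sketch with invented thresholds, not the method from the article above:

    ```python
    # Bucket visits by time-on-site AND pages-per-visit instead of treating
    # either alone as "engagement". Thresholds are made up for the example.
    def categorise_visit(time_on_site_sec: float, page_views: int) -> str:
        deep = time_on_site_sec >= 180   # assumed cut-off: 3+ minutes
        broad = page_views >= 5          # assumed cut-off: 5+ pages
        if deep and broad:
            return "explorer"   # long visit across many pages
        if deep:
            return "reader"     # long visit, few pages (e.g. one article)
        if broad:
            return "scanner"    # flipping quickly through pages
        return "bouncer"        # short and shallow

    for t, p in [(240, 8), (300, 2), (45, 9), (20, 1)]:
        print(t, p, categorise_visit(t, p))
    ```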

    Website analytics without segmentation is a very dumb idea in my opinion. To summarise in the words of William Blake – “To generalise is to be an idiot” – and that tends to be what we do unless we drill down to the specifics; otherwise we waste the opportunity of web analytics. Unlike other CRM, data mining or market research analysis, we are looking at the whole potential population of a website regardless of whether they have the capacity and willingness to purchase, register, …, and this fundamentally pollutes the data.

    You had some very interesting points and I look forward to hearing more from you in the future.

  2. Hi Rod,

    To begin, thanks for attending the conference last week and for checking out my session on scoring. Too bad we didn’t have a chance to chat on this topic at length face-to-face.

    Regarding your comments above, I don’t want to confuse my trite and perhaps flawed examples, or the constrained format of the presentation, with the value of the concept. That said, technology and terminology aside, I hope most can agree on this: past behavior is a lead indicator of future behavior, whether you’re talking about web site traffic or personal eating habits. Lead indicators are just that, indicators. Within a population a given member may deviate from the indicators, but as a whole the segment should follow what those indicators predict if 1) the model is well designed and 2) the technology upon which the model is implemented properly executes the intent of the model and is sufficiently lacking in undermining flaws (e.g. the shared home PC).

    My first attempts to design WebTrends Score began with trying to devise a composite (uber) metric of other metrics that today are commonly associated with “engagement”. Long story short, it was a miserable failure. Aside from being a mathematical abomination, it wasn’t tailorable to the unique characteristics of a given web site and, worse, wasn’t a lead indicator of anything … except potentially more engagement. To be useful, I felt that scores had to be lead indicators of future events (ideally) or indicators of visitor interest (or should I say engagement) in something specific and tangible, from which one can apply segmentation and targeting activities (at a minimum). While I acknowledge the usual collection of features I wish we could have gotten into the first release, I think the mission was accomplished: scoring allows you to isolate segments of visitors based on common score values, which themselves are measures of engagement in one or more specific subject matters, and take action specific to those visitors.
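
    For readers who want the shape of that idea in code, here is a minimal sketch of per-subject scoring – purely illustrative, with invented events and weights, and not WebTrends Score’s actual implementation:

    ```python
    # Toy per-subject scoring: event weights would be configured per site.
    from collections import defaultdict

    EVENT_WEIGHTS = {                      # hypothetical model
        ("view", "suv"): 1,
        ("compare", "suv"): 3,
        ("request_quote", "suv"): 10,
        ("view", "sedan"): 1,
    }

    def score_visitor(events):
        """Accumulate a score per subject matter from an event stream."""
        scores = defaultdict(int)
        for action, subject in events:
            scores[subject] += EVENT_WEIGHTS.get((action, subject), 0)
        return dict(scores)

    def segment(visitors, subject, threshold):
        """Isolate visitors whose score in one subject clears a threshold."""
        return [vid for vid, events in visitors.items()
                if score_visitor(events).get(subject, 0) >= threshold]

    visitors = {
        "a": [("view", "suv"), ("compare", "suv"), ("request_quote", "suv")],
        "b": [("view", "sedan")],
    }
    print(segment(visitors, "suv", threshold=5))  # -> ['a']
    ```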

    Regarding some of the other specific comments, I concur that a showroom salesperson could more effectively collect the data about a person face-to-face. Let’s chalk this up to a bad example that I’ll work to fix. Scoring is a means to qualify visitors and move them along the sales/conversion funnel (assuming such applies to your business model). Once they arrive at the proverbial showroom floor, its job is done. Put another way, scoring is no longer needed once it has taken the individual to the point where you can engage her/him in a 1:1 manner. Instead, scoring is far more useful in turning your 1-to-many targeting activities into 1-to-few activities in an automated fashion, and in a manner that is far more insightful and quantifiable than merely looking at their time on the site or the last product they viewed.

    Regarding the shared computer, I certainly acknowledge the issue. But I don’t believe it undermines the usefulness of scoring. The worst case scenario is that our multiple-personality visitor exhibits behaviors that thwart our ability to place him/her/them in a specific segment. But can we agree that this constitutes the exception case? Who knows, maybe someday we’ll have the inferencing logic to split apart the interests of our conjoined visitor. We have the data. Hmmm…

    Barry Parshall
    Director, Product Management
    WebTrends

  3. Hi Barry,

    Thanks for your thoughtful response, and yes it was a shame that I didn’t get to speak to you personally.

    Based on my understanding of the situation, the WebTrends scoring process relies upon these foundational principles:

    1) That visitor behaviour is a predictor of future behaviour and purchase intention,

    2) That visitor behaviour beyond a single session is a key component of understanding visitor behaviour over time, and

    3) That the information available in the visitor transactions made on a website is sufficient to infer what the visitor’s intent was during these sessions.

    If these principles are not accurate, please correct me.

    In the case of principle 1, you will have no argument from me. Much work has been done by far brighter minds than mine on this topic and the data mining industry is based on this belief.

    In the case of principle 2, I do not believe that the underlying data is sufficiently consistent, reliable and cohesive to warrant this assumption. I base this conclusion on the available evidence (which is thin, I agree) that visitors delete cookies far more often than web site owners would like to believe. The Red Eye Report white paper (http://www.redeye.com/bestpractice/white_papers.php) estimated that cookie-based counting overestimated visitors by a factor of 2.3. A similar but less convincing study by ComScore (http://www.comscore.com/blog/2007/06/comscores_cookie_deletion_whit.html) drew similar conclusions. I am not yet aware of any formal academic or more systematic and detailed study into this area and would be pleased to hear of one.

    If either of these conclusions is even close to being a true reflection of the current situation, then a miscount factor of 2.3 or worse is, in my opinion, fundamentally at odds with foundational principle 2. If so, the principle is undermined to the point where little if any sensible data can be taken from methods based on it.
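
    To put that factor in concrete terms (the figures below are hypothetical; only the 2.3 comes from the Red Eye paper):

    ```python
    # If cookie-based counting overstates uniques by ~2.3x, a site reporting
    # 230,000 unique visitors may really have closer to 100,000 people.
    cookie_uniques = 230_000          # hypothetical reported figure
    overcount_factor = 2.3            # Red Eye estimate cited above
    print(f"~{cookie_uniques / overcount_factor:,.0f} actual visitors")
    ```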

    Looking at the behaviour that I see in myself and my peers with regards to cookies and general website usage, I tend to think that the Red Eye Report may be closer to the truth than we would like to believe.

    I would like to put out a challenge to all major web analytics vendors: sponsor a university or similarly independent body to conduct formal, peer-reviewed research into this effect with a sufficiently large body of website owners, to test whether the premises of the Red Eye white paper hold true in a larger study. For such a study to add value to the industry and resolve the question, it must take into account the rate at which cookies are deleted or not accepted, as well as the time period within which they are deleted.

    In the case of principle 3, this is debatable; however, I do believe that some information regarding intent can be inferred in a limited number of cases. It can only be inferred where an originating keyword from a search engine exists and is matched with views of multiple pages of content on a website, or where an “internal” search query from the website’s own search engine states the visitor’s intention to locate certain information. It is far, far more difficult to infer this from clickstream data, and hence I see this as of limited value.
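
    As a rough illustration of how narrow that window is – a toy rule with invented topic labels, not any real product’s logic:

    ```python
    # Infer intent only when a referring (or internal) search query lines up
    # with several pages viewed on the same topic; otherwise give up.
    def infer_intent(search_query, page_topics, min_matches=3):
        """page_topics: one topic label per page viewed in the session."""
        if not search_query:
            return None                  # clickstream alone: too ambiguous
        terms = set(search_query.lower().split())
        hits = [t for t in page_topics if t.lower() in terms]
        return hits[0] if len(hits) >= min_matches else None

    print(infer_intent("hybrid suv reviews", ["suv", "suv", "suv", "sedan"]))  # suv
    print(infer_intent(None, ["suv", "suv", "suv"]))                           # None
    ```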

    I don’t intend to post again about this topic on this blog and encourage you to post any response to the WAA forum on Yahoo for this topic.

    Best

    Rod Jacka
