Category Archives: Analytical Culture

Tortured Data – and Analysts

Fear and Loathing in WA

You may recall I wrote last year about the explicit or implicit pressure put on Analysts to “torture the data” until the analysis produces a favorable outcome.  In a piece called Analyze, Not Justify, I described how, by my count, roughly half of the analysts in a large conference room admitted to receiving this kind of pressure at one time or another.

Since then, I have been on something of a personal mission to unearth more about this situation.  And it seems like the problem is getting worse, not better.

I have a theory about why this situation might be worsening.

Companies that were early to adopt web analytics were likely to already have a proper analytical culture.  You can’t put pressure on an analyst to torture data in a company with this kind of culture – the analyst simply will not sit still for it.  The incident will be reported to senior management, and the source of the “pressure” fired.  That’s all there is to it.

However, what we could be seeing now is this: as #measure adoption expands, the tools land in more companies that lack a proper analytical culture, so incidents of pressure to torture the data expand along with it.  And not just pressure to torture, but pressure to conceal, as I heard from several web analysts recently.

Continue reading Tortured Data – and Analysts


Control Groups in Small Populations

The following is from the January 2010 Drilling Down Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Also, feel free to leave a comment and I’ll reply.

Want to see the answers to previous questions?  Here’s the blog archive; the pre-blog newsletter archives are here.

Q: Thank you for your recent article about Control Groups.  Our organization launched an online distance learning program this past August, and I’ve just completed some student behavior analysis for this past semester.

Using weekly RF-Scores based on how Recently and how Frequently they’ve logged in to courses within the previous three weeks, I’m able to assess their “Risk Level” – how likely they are to stop using the program.  We had a percentage who discontinued the program, and in retrospect, their login behavior – and changes in that behavior – gave strong indication they were having trouble before they stopped using the program completely.

A: Fantastic!  I have spoken with numerous online educators about this application of Recency / Frequency modeling, as well as with online research subscription services, which use a similar behavioral model.  All reported great results predicting student / subscriber defection rates.
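For readers who want to try something similar, here is a minimal sketch of this kind of weekly Recency / Frequency scoring in Python.  The column names, cut-offs, and risk labels are illustrative assumptions, not the scoring rules used by the questioner:

    # Minimal sketch of weekly RF scoring from login data (pandas assumed).
    # Column names, cut-offs, and risk labels are illustrative only.
    from datetime import date
    import pandas as pd

    def weekly_rf_scores(logins: pd.DataFrame, as_of: date) -> pd.DataFrame:
        """logins: one row per login, columns ['student_id', 'login_date']."""
        as_of = pd.Timestamp(as_of)
        logins = logins.assign(login_date=pd.to_datetime(logins["login_date"]))
        window = logins[(logins["login_date"] > as_of - pd.Timedelta(weeks=3)) &
                        (logins["login_date"] <= as_of)]
        grouped = window.groupby("student_id")["login_date"]
        scores = pd.DataFrame({
            "recency_days": (as_of - grouped.max()).dt.days,  # days since last login
            "frequency": grouped.count(),                     # logins in the 3-week window
        })
        # Crude illustrative risk rule: the quieter the student, the higher the risk.
        # Students with no logins at all in the window won't appear here; treat
        # them as the highest-risk group of all.
        scores["risk_level"] = "low"
        scores.loc[scores["recency_days"] > 7, "risk_level"] = "medium"
        scores.loc[(scores["recency_days"] > 14) | (scores["frequency"] <= 1),
                   "risk_level"] = "high"
        return scores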

Q: I’m preparing to propose a program for the upcoming semester where we contact students by email and / or phone when their login behavior indicates they’re having trouble.  My hope is that by proactively contacting these students, we can resolve issues or provide assistance before things escalate to the point where they defect completely.

A: Absolutely, the yield (% students / revenue retained) on a project like this should be excellent.  Plus, you will end up learning a lot about “why”, which will lead to better executions of the “potential dropout” program the more you test it.
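Since this whole discussion is about control groups, here is a bare-bones sketch of how the yield of such a “potential dropout” program could be measured against a random holdout.  The function names and the 20% holdout fraction are assumptions for illustration, not a recommendation:

    import random

    def split_holdout(at_risk_students, holdout_fraction=0.2, seed=42):
        """Randomly set aside a control group that receives no outreach."""
        rng = random.Random(seed)
        students = list(at_risk_students)
        rng.shuffle(students)
        cut = int(len(students) * holdout_fraction)
        return students[cut:], students[:cut]  # (contact group, control group)

    def retention_lift(retained_contacted, n_contacted, retained_control, n_control):
        """Yield of the program: retention rate among contacted students minus
        the retention rate among the untouched control group."""
        return retained_contacted / n_contacted - retained_control / n_control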

Continue reading Control Groups in Small Populations


Relational vs. Transactional

The following is from the September 2009 Drilling Down Newsletter (original title:  Customer Retention for Restaurants).  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Also, feel free to leave a comment.

Want to see the answers to previous questions?  Here’s the blog archive; the pre-blog newsletter archives are here.

Q:  I am hoping you can help answer a question for our team.  By way of introduction, I am the CEO of XXXX.  We are a specialty retailer / restaurant of gourmet pizza, salads and sandwiches.  We would like to know restaurant industry averages (pizza industry if possible) for customer retention – what percentage of customers who have ordered once from a particular restaurant order from them a second time?  I am hoping that with your years of expertise in harnessing data, you may be able to assist us with this question.  Look forward to hearing from you.

A:  Unfortunately, in those said years of experience, I have found little hard information on customer retention rates in QSR and restaurants in general (if anyone has data, please leave it in the Comments).  It’s just the nature of the business: what little hard data is collected is rarely stored in a way that allows aggregation at the customer level.  The high percentage of cash transactions doesn’t help matters much; there’s a lot of data simply missing.

Over the years you sometimes see data leak out from tests of loyalty programs, and of course clients sometimes have anecdotal or survey data, but this is not much help in getting to a “true” retention rate.  More often than not you discover serious biases in the way the data was collected, so at best you have a biased view of a narrow segment.  Often what you get is a notion of retention among best customers, or among customers willing to sign up for a loyalty card, but not all customers.  And the large “middle” group of customers is where all the Marketing leverage is.

What to do about this predicament?  

There are really two issues in your question: the idea of using industry benchmarks when analyzing customer performance, and the measurement of retention in restaurants.
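For what it’s worth, the retention calculation itself is simple once you have order data at the customer level; the hard part, as described above, is getting that data in the first place.  A minimal sketch in Python, with assumed column names:

    import pandas as pd

    def second_order_rate(orders: pd.DataFrame) -> float:
        """orders: one row per order, columns ['customer_id', 'order_date'].
        Returns the share of customers who ever ordered a second time."""
        orders_per_customer = orders.groupby("customer_id")["order_date"].count()
        # In practice, restrict this to a cohort of first-time buyers old enough
        # to have had a fair chance to return, or the rate will be biased low.
        return float((orders_per_customer >= 2).mean())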

Continue reading Relational vs. Transactional


Norms of Reciprocity

Social Marketing Doesn’t Rely on Social Media

Do you believe human beings share certain fundamental traits that define “being human”?

If so, do you believe that human beings tend to behave in certain ways under certain circumstances?

If so, do you then believe since human behavior has these tendencies, it can often be predicted?

If so, then do you think perhaps the study of Psychology and Sociology might provide you some clues to creating successful businesses, campaigns, products, and services?  While your friends and competitors are all iterating their way into oblivion?

On the web, time and time again, we see the same themes repeating.  Yet with each introduction of a new technology, these themes tend to be treated like a new discovery, even though the theme has been well established in the past.

Norms of Reciprocity is a constant human theme.  You may know the expression of these norms as “Sharing”.  Web old-timers will probably recognize this idea as “Give, then Take” from the I-Sales discussion list as early as 1995.  In various forms, this theme goes back to the beginning of human history, all the way back to the handshake and other greeting gestures.  This same theme is embedded in countless Religions all over the world: “Do unto others as you would have them do unto you”.  At least a couple of centuries old, this idea.

Norms of Reciprocity simply means this: when you do something nice for a human being – help them in some way – they tend to feel Gratitude towards “the doer” and tend to do something nice back.  Gratitude drives the desire to Reciprocate, because it’s just what humans do; it’s normal, a “norm”.

Norms of Reciprocity.

Continue reading Norms of Reciprocity


Analyze, Not Justify

Does this issue affect the Web Analytics Maturity Model?

A conference call with a Potential Client last week jogged my memory on a couple of events that happened during the flurry of Web Analytics conferences this Spring.  Here’s a portion of the call…

PC: “We’ve tried proving the profitability of our Marketing efforts and can’t seem to get the numbers working correctly.  So Jim, what we’d like you to do is take all this data we have, and justify the Marketing decisions we’ve made by proving out the ROI.”

Jim: “I’m sorry, did you say justify?  To me, justify means “find a way to prove it works”.  Is that what you are asking me to do?  Wouldn’t it be more beneficial to analyze the results, and then optimize your Marketing based on these results?”

PC: “Jim, around here we’re pretty clear our Marketing works, and Management knows this.  But Finance is asking for some backup, some numbers to justify the spend, not to analyze it.  We don’t need analysis, we need your ‘expert credibility’ to help us out with this.”

Jim: “I see,” thinking this is not a job I’m going to enjoy.  It’s the old ‘buy an outside expert’ routine, which I detest.

PC: “Jim, the team is united behind this mission, are you on board?”

Jim: “Well, perhaps I could be on board, as long as what you want is an analysis, which may also justify the decisions you have made.  But it might not, so I just want to be clear on what…”

PC: “You know what, Jim?  I don’t feel we’re going to have a fit here; I’m getting that you’re not a team player.  Thanks for your time.”  CLICK

Sigh.  I’m actually grateful they hung up; I really dislike explaining to people why I won’t work with them.

Continue reading Analyze, Not Justify


eMetrics “ShootOuts” We’d Like to See

I was in Vancouver for a presentation to CAUCE [kay-yoose, thanks Raquel] and was able to grab a quick dinner with fellow WAA BaseCamp stakeholders Andrea Hadley, Raquel Collins, and Braden Hoeppner.  We’re rolling out a new 2-day format for BaseCamp and got to talking about web analytics education in general.

We started talking about audience segmentation and content at the eMetrics Summit, and specifically the “shootout” format from the old days.  You know, 10 vendors on the stage at the same time taking questions from the audience.  Those sessions were educational and hilarious at the same time, as the vendors side-swiped each other on topics like accuracy, how visitors are counted, cookie structures, and so forth.

But that was back when the technology was in flux, and that issue has since settled down a lot.  Braden brought up the idea of bringing back the “shootout” format, but focused on the business side this time.  You know, get some practitioners, vendors, and consultants up on stage and have them thrash out stuff like:

1.  Attribution – does it really make sense to even bother with attribution at the impression / click level when there is often not a strong correlation to profit?  I mean, just because someone sees or clicks on an ad does not mean the ad had a positive effect; in fact, it may have had a negative effect.  Why not go straight to action or profit attribution, instead of using creative accounting?
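To make the contrast concrete, here is a rough sketch in Python – the paths, margins, and ad costs are entirely made up for illustration – of how last-click credit can look healthy while the profit view, net of ad cost, tells a different story:

    # Made-up conversion paths: (ordered list of touches, order margin in $).
    from collections import defaultdict

    paths = [
        (["display", "email", "paid_search"], 40.0),
        (["paid_search"], 25.0),
        (["display", "display"], 10.0),
    ]
    ad_cost = {"display": 30.0, "email": 2.0, "paid_search": 20.0}  # spend per channel, $

    # Last-click attribution: all credit goes to the final touch before the order.
    last_click_credit = defaultdict(float)
    for touches, margin in paths:
        last_click_credit[touches[-1]] += margin

    # Profit view: the same credit, net of what the channel actually cost.
    channels = set(last_click_credit) | set(ad_cost)
    profit_view = {ch: last_click_credit[ch] - ad_cost.get(ch, 0.0) for ch in channels}

    print(dict(last_click_credit))  # click-level credit looks respectable everywhere
    print(profit_view)              # net of cost, display is losing money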

Continue reading eMetrics “ShootOuts” We’d Like to See


Got Discount Proneness?

Discount Proneness is what happens when you “teach” customers to expect discounts.  Over time, they won’t buy unless you send them a discount.  They wait for it, expect it.  Unraveling this behavior is a very painful process you do not want to experience.

The latest shiny object where Discount Proneness comes into play is the “shopping cart recapture” program.  Mark my words: if it is not happening already, these programs are teaching customers to “Add to Cart” and then abandon it, waiting for an e-mail with a discount to “recapture” the sale – a sale that, for many of the people receiving the e-mail, would have taken place anyway.

The best way to measure this effect is to use a Control Group.

When people talk about programs like this (for example, in the Yahoo analytics group), what I hear is “the faster you send the e-mail, the higher the response rate you get”.

That, my friends, is pretty much a guarantee that a majority of the people receiving that e-mail would have bought anyway.  Hold out a random sample of the population and prove it to yourself.  There is a best, most profitable time to send such an e-mail, and that time will be revealed to you using a controlled test.  The correct timing is almost certainly not within 24 or even 48 hours.

That is, if you care about Profits over Sales, and trust me, somebody at your company does.  They just have not told you yet!

When you give away margin you did not have to give away to make a sale, that is a cost.  Unless you include that cost in your campaign analysis, you are not reflecting the true financial performance of the campaigns you are running.  If you are an analyst, that’s a problem.
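Here is a bare-bones sketch of that accounting against a random holdout.  The function and every number in the example call are invented for illustration only:

    # Incremental profit of a cart-recapture mailing, measured against a holdout.
    def incremental_profit(mailed_buyers, mailed_size, avg_margin, discount,
                           holdout_buyers, holdout_size):
        mailed_rate = mailed_buyers / mailed_size
        holdout_rate = holdout_buyers / holdout_size   # the would-have-bought-anyway rate
        incremental_orders = (mailed_rate - holdout_rate) * mailed_size
        # The discount goes to every buyer in the mailed group, incremental or not.
        discount_cost = mailed_buyers * discount
        return incremental_orders * avg_margin - discount_cost

    # e.g. 500 of 10,000 mailed customers buy vs. 400 of 10,000 held out:
    # only ~100 orders are truly incremental, yet all 500 buyers get the discount.
    print(incremental_profit(500, 10_000, avg_margin=20.0, discount=5.0,
                             holdout_buyers=400, holdout_size=10_000))   # -500.0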

If you are using cart recapture campaigns, please do a controlled test sooner rather than later.  Because once your customers have Discount Proneness, it will be very painful to fix.

Continue reading Got Discount Proneness?


Marketing Science (Journal)

As I said in the Heavy Lifting post, I think the Web Analytics community is becoming increasingly insular and should be paying more attention to what is going on outside the echo chamber in Marketing Measurement.  I also think the next major leaps forward in #wa are likely to come from examining best practices in other areas of Marketing Measurement and figuring out how they apply to the web.

For example, did you even know there is a peer-reviewed journal called Marketing Science, which calls itself “the premier journal focusing on empirical and theoretical quantitative research in marketing”?

Whoa, say what?

This journal is published by the Institute for Operations Research and the Management Sciences (INFORMS), and the articles are the work of premier researchers in visitor and customer behavior from the best-known institutions around the world.  In case you didn’t know, “peer-reviewed” means a group of these researchers (not including the authors, of course) has to agree that what you say in your article follows logically from the data, and that any testing you carried out adhered to the most stringent protocols – sampling, stats, test construction, all of it.

And, most mind-blowing of all, they show you the actual math right in the article – the data, variables, formulas, graphs – that leads to the conclusions of each study.  You know, like this:

Continue reading Marketing Science (Journal)


Marketing Jump Ball

Marketing Accountability.

Brand is what you do, not what you say

Marketing Alignment.

Here are 3 free webinars you might want to take advantage of.  You might not agree with these opinions, but hey, it’s a good idea to get out of the echo chamber once in a while, don’t you think?  Try these online sessions for a little brain stretching:

———

Moving Marketing From “The Money Spenders” to The Money MAKERS
April 15, 2009  noon ET    Jonathan Salem Baskin, Jim Sterne, Jim Novo

With only 10% of marketing executives perceived as strategic and influential by the C-suite, there’s clearly a crisis of confidence.  I’ve mentioned Jonathan’s blog and book before, and here’s a chance to hear a bit of the inside story.  You’ll learn how to exceed the expectations of both C-suite executives and customers, neutralize political feuds by organizing cross-departmentally, and stop thinking like a reporter and start acting like an advisor.

———

Everything They’ve Told You About Marketing Is Wrong
April 21, 2009   1pm ET  Ron Shevlin

Are you sick and tired of reading the same old blah, blah, blah from the so-called marketing experts who just tell you stuff you already know?  Then you need to attend this session as the grumpy old man cuts through the morass of bad advice and introduces you to the must-dos in the new world of marketing.  I know Ron personally (as in offline), and even if you disagree with him, you will be entertained.

——— 

What Online Marketers Can Teach Offline Colleagues (and vice versa)
May 19, 2009  noon ET     Kevin Hillstrom, Akin Arikan, and Jim Novo

A WAA event, open to both members and non-members.  Web analysts are not the first to grapple with multiple channels.  Traditional marketers have always had to illuminate customer behavior across stores, call centers, direct mail, and so on.  So, rather than reinventing the wheel in each camp, what proven methods can the two camps teach each other?  Three different but aligned approaches to solving the multichannel puzzle – there should be something for everyone here.

———

Take your brain out for some exercise, will ya?

 


Sales or Profits?

Seems the previous post (Best Seller Gone Bad) really hit home for people; perhaps we should drill into  this a bit.  So:

1.  Is the impact of your work evaluated against Sales or Profits?  (example)

2.  Do you think this evaluation approach is correct for your job and company?  Why? 

3.  Would you change this evaluation method if you could?

4.  What is holding you back from trying to make this change?

Personally, I always choose Profits if I can; the leverage is so much higher than with Sales.  It’s much easier to generate $5 in Profits than $5 in Sales for any given $1 in budget, because there is generally so much waste in the Marketing system.
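One way to read that leverage claim, with an assumed 40% margin (purely an illustrative figure, not a number from any client engagement): a dollar of wasted spend you eliminate falls straight to Profit, while a dollar of incremental Sales only contributes its margin.

    # Illustrative arithmetic only; the 40% margin is an assumed figure.
    margin = 0.40
    profit_from_new_sales = 5.00 * margin  # $5 in new Sales nets about $2.00 in Profit
    profit_from_cut_waste = 5.00           # $5 of wasted spend eliminated is $5.00 of Profit
    print(profit_from_new_sales, profit_from_cut_waste)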

Update: OK, how about answering this question – when your work performance is evaluated, what percentage of the measurement is based on qualitative factors, and what percentage on quantitative factors?
