Monthly Archives: August 2007

*** Customer-Centric IT Wins

Yes, I know for many marketing folks this seems to be an oxymoron, but the fact is that Marketers – especially those with some understanding of business process and the IT world – can influence the direction of IT and generate genuine customer-centric wins.  This in turn makes all your marketing efforts more productive.

Web analysts, this is the kind of work you will be supporting with analysis in 5 years…it’s just a much larger version of optimizing a web site, isn’t it?  And in many ways, a lot more fun…

Requires a different mindset?  Sure, it’s not buying media or developing creative or analyzing response.  But these are the kinds of projects Marketing folks (especially data-driven ones) should be championing, providing the customer models IT can base a plan on and forge ahead with.

Here are 3 great examples, case studies from CIO Magazine:

Washington Mutual  – a classic example of cross-functional teams looking at “how we sell” versus “how they buy” barriers; reminds me a lot of the Check Shredding Example.  I wonder how many online Marketing folks at banks have asked “why do we need signature cards?” in the past 5 years – what is the Root Cause?  Ron, make sure you check this one out, especially given your post – what do you think?

Best Buy – the offline retail version of “people who bought this also bought that”.  I’m sure this one will sound simple to many folks – all except those working in offline retail analysis and store logistics, that is.  A tough, messy business to optimize and even small wins are remarkable.

Hilton Hotels – another seeming no-brainer, just let people order online.  But not just any people, we’re talking about event / conference planners ordering meeting rooms, food and beverage, A/V, etc., not to mention guest rooms for thousands of people.  This is not a small deal on the infrastructure side, with plenty of politics to go around.

Check the cases out here, and let me know what you think.


Marketing Attribution Models

Interesting article in MultiChannel Merchant about sourcing sales across catalog and online using fractional allocation models.  I’m pretty sure “allocation” and “attribution” are really different concepts, though they seem to be used interchangeably right now.  Let’s just say that from reading the article, allocation sounds more like a gut-feel thing, while attribution, from my experience, implies the use of a mathematical model of some kind.

I know Kevin rails against a lot of the so-called matchback analysis done in catalog and I have to agree with him; that practice is a whole lot more like allocation than attribution in my book, particularly when it is pretty easy to measure the real source from a lift in demand perspective by using control groups.  Take a random sample of the catalog target group, exclude it from the mailing, and compare the purchase behavior in this group with those customers who get the catalog over some time period.  That should give you an idea of what the incremental (not cannibalized) demand from catalog is – just look at gross sales per customer.  We did this at HSN for every promotion, since the TV was “always on” and creating demand by itself.

So does a web site.
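
As a rough sketch of that holdout arithmetic (the numbers and group sizes below are made up, not HSN figures), the comparison boils down to gross sales per customer in the mailed group versus the holdout group over the same window:

```python
# Incremental demand from a catalog mailing, measured with a holdout control group.
# All numbers below are hypothetical, for illustration only.

def sales_per_customer(total_sales, group_size):
    """Gross sales per customer over the measurement window."""
    return total_sales / group_size

# Mailed group received the catalog; the control group was randomly held out of the mailing.
mailed_group_size = 50_000
mailed = sales_per_customer(total_sales=310_000, group_size=mailed_group_size)   # $6.20 per customer
control = sales_per_customer(total_sales=55_000, group_size=10_000)              # $5.50 per customer

# The difference is the incremental (not cannibalized) demand created by the catalog.
incremental_per_customer = mailed - control
incremental_demand = incremental_per_customer * mailed_group_size

print(f"Mailed:  ${mailed:.2f} per customer")
print(f"Control: ${control:.2f} per customer")
print(f"Incremental demand from the catalog: ${incremental_demand:,.0f}")
```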

Just because someone was mailed a catalog and then at some point later on ordered from a web site does not mean they ordered because they received the catalog; heck, you don’t even know for sure if they even received the catalog – as anyone who has used seeded lists knows.  And just because someone was exposed to an ad online doesn’t mean the ad had anything to do with a subsequent online order – even if you believe in view-through.

Anyway, I see lots of people doing what I would call allocation rather than attribution in the web analytics space, and when Jacques Warren asked me about this topic the other day, I decided it might make a good post.

You have to understand this same discussion has been going on for at least 25 years in the offline world, so there is a ton of history in terms of best practices and real experience behind the approach many folks favor.  And there is a twist to the online version I don’t think many folks are considering.  So for what it’s worth, here’s my take…

For most folks, the simplest and most reliable way to attribute demand is to choose either first campaign or last campaign and stick to it.  The words simplest and reliable were chosen very specifically.  For the very few folks who have the right people, the right tools, and the right data, it is possible to build mathematically precise attribution models.  The word precise was also chosen specifically.   I will go into more detail on these choices below after some background.

Choosing first or last campaign for attribution does not ignore the effects of other campaigns; it simply recognizes that you cannot measure these effects accurately, and that creating any “allocation model” will be an exercise in navel gazing.

Unfortunately, a lot of this kind of thing goes on in web analytics – instead of admitting something can’t be measured accurately, folks substitute a “model” which is worse than admitting the accuracy problem, because now you are saying you have a “measurement” when you don’t.  People sit around with a web analytics report, and say, “Well, the visitor saw the PPC ad, then they did an organic search, then they saw a banner, so we will give 1/3 of the sales credit to each” or worse, “we will allocate the credit for sales based on what we spend on each exposure”.

This approach is worse than having no model at all, because I often see these models used improperly – for example, to “justify budget” – if you allocate a share of responsibility for outcome to PPC, then you get to keep a budget that would otherwise be “optimized” away.  A similar argument is being made by a few of the folks in the MultiChannel Merchant article above to justify catalog spend.

This is nuts, in my opinion.

I believe the core analytical culture problem at work here (if you are interested) is this:

Difference between Accuracy and Precision
http://en.wikipedia.org/wiki/Accuracy

I’d argue that given a choice, it’s more important to be precise than accurate – reproducibility is more important (especially to management) than getting the exact number right.  Reproducibility is, after all, at the core of the scientific testing method, isn’t it?  If you can’t repeat the test and get the same results, you don’t have a valid hypothesis.

And given the data stream web analytics folks are working with – among the dirtiest data around in terms of accuracy – then why would people spend so much time trying to build an “accurate” model?  Better to be precise – always using first campaign or last campaign – than to create the illusion of accuracy with an allocation model that is largely made up from thin air.

When I make the statement above, I’m excluding a team of Ph.D. level statisticians with the best tools and data scrubbing developing the model, though I suspect only a handful of companies doing these models actually fit that description.  For the vast majority of companies, the principle of Occam’s Razor rules here; what I want is reliability and stability; every time I do X, I get Y – even if I don’t know exactly (accurately) how I get Y from X.

Ask yourself if that level of accuracy really matters – if every time I put in $1 I get back $3, over and over, does it matter specifically and totally accurately exactly how that happens?

Whether to use first or last campaign is a matter of philosophy / culture and not one of measurement.  If you believe that in general, the visitor / customer mindset is created by exposure to or interaction with the first campaign, and that without this favorable context none of the subsequent campaigns would be very effective, then use first campaign.

This is generally my view and the view of many offline direct marketing folks I know.  Here is why.  The real “leverage” in acquisition campaigns is the first campaign – the first campaign has the hardest job, if you will.  So if you are going to optimize, the biggest bang for the buck is in optimizing the first campaign: get it wrong, and all the rest of the campaigns are negatively affected.  This is the “leverage” part of the idea; on any campaign other than first, you can’t make a statement like this.  So it follows that every campaign should be optimized as “first campaign”, since you don’t normally control which campaign will be seen first.

Some believe that the sale or visit would not have occurred if the last campaign was not effective, and all other campaigns are just “prep” for that campaign to be successful.  Perhaps true, but it doesn’t fit my model of the world – unless you know that first campaign sucks.  If you know that, then why wouldn’t you fix it or kill it, for heaven’s sake?
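
To make the contrast concrete, here is a minimal sketch – hypothetical touchpoint data, not tied to any particular analytics tool – of first-campaign and last-campaign attribution next to the kind of even-split allocation described above:

```python
from collections import defaultdict

# Hypothetical touchpoint sequences: campaigns seen before each order, in order of exposure.
orders = [
    {"order_value": 90.0,  "touches": ["ppc", "organic", "banner"]},
    {"order_value": 120.0, "touches": ["email", "ppc"]},
    {"order_value": 60.0,  "touches": ["banner"]},
]

def attribute(orders, method="first"):
    """Credit demand to campaigns by first touch, last touch, or an even fractional split."""
    credit = defaultdict(float)
    for order in orders:
        touches, value = order["touches"], order["order_value"]
        if method == "first":
            credit[touches[0]] += value
        elif method == "last":
            credit[touches[-1]] += value
        elif method == "fractional":
            # the "give 1/3 of the credit to each" style of allocation
            for touch in touches:
                credit[touch] += value / len(touches)
    return dict(credit)

for method in ("first", "last", "fractional"):
    print(method, attribute(orders, method))
```

The fractional method is included only to show how different the answers can be; it is the approach I’m arguing against, not a recommendation.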

All of the above said, if you have the chops, the data, and the tools, you can produce attribution models that will provide direction on “weighting” the effect of different campaigns.  These “marketing mix” models are used all the time offline, and are usually the product of high level statistical models.   By the way, they’re not generally accurate, but they are precise.  I do X, I get Y.

You can produce a similar kind of information through very tight testing using control groups, but that’s not much help for acquisition because you usually can’t get your hands on a good control group.  So for acquisition you are left with trying to sync time periods and doing sequential or layered testing.

For example, in June we are going to kill all the AdSense advertising and see what happens to our AdWords advertising – what happens to impressions, CTR, conversion, etc.  Then in July we will kick AdSense on again and see what happens to the same variables, along with tracking as best we can any overlapping exposures.

Then given this info, we decide about allocation using the human brain and database marketing experience.

This approach is not accurate, but I’d rather be precise and “directionally right” than accurate and absolutely wrong, if you know what I mean.  This test approach should give you directional results if executed correctly – the spread for the AdSense OFF / ON test results should be healthy, and you should be able to repeat the test result with some consistency.
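
A minimal sketch of reading that OFF / ON test, assuming the monthly AdWords metrics have been pulled into a couple of dictionaries (the numbers are made up):

```python
# Hypothetical monthly AdWords metrics for the AdSense OFF (June) and ON (July) periods.
june_adsense_off = {"impressions": 1_200_000, "ctr": 0.021, "conversion_rate": 0.034}
july_adsense_on  = {"impressions": 1_150_000, "ctr": 0.026, "conversion_rate": 0.031}

def spread(off_period, on_period):
    """Percent change in each metric when AdSense is switched back on."""
    return {
        metric: (on_period[metric] - off_period[metric]) / off_period[metric]
        for metric in off_period
    }

for metric, change in spread(june_adsense_off, july_adsense_on).items():
    print(f"{metric}: {change:+.1%} with AdSense ON vs. OFF")

# A healthy, repeatable spread in these numbers is the "directional" signal;
# re-run the OFF / ON cycle and check that the direction holds before acting on it.
```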

Bottom line – it doesn’t really matter exactly what is happening, does it?  Do you need an accurate accounting of the individual effects of each campaign in a multiple campaign sequence?  No.  What you need is a precise (reliable and reproducible) way to understand the final outcome of the marketing mix.

Even if you think you have an accurate accounting of the various campaign contributions, what makes you think you can get that with data as dirty as web data is?  Despite the attempt at accuracy, all you have to do is think through cookies, multiple computers, systems issues, and web architecture itself to understand that after all that work, you still don’t have an accurate result.

Hopefully it is more precise than simply using first campaign.

Thoughts from you on this topic?  I know there are at least two “marketing mix” folks on the feed…


Top 10 (IT) Projects in ’07

Interesting to see what our friends in IT are working on this year:

I would not have guessed Business Analytics / Intelligence was the Number 3 priority.  Good thing for analysts: someone is going to have to make sense of all that data and provide concrete direction…which seems to be the part hanging people up these days, not the analysis.  I imagine this BI activity has a significant role in driving success for #1 – BPM.

Full article with spending estimates and more details at Innovations Magazine here.

 


One (Customer) Number

Ron’s post Why Do Marketers Test? reminded me of an incident that keeps repeating itself.

The presentation I do as part of the Web Analytics BaseCamp includes a section on the importance of measuring marketing success at the customer level as opposed to the campaign level.  Then I get this question: “If you were to measure just one ‘customer number’, what would that be?”

Putting aside all the reasons why measuring one customer metric is a faulty approach for the moment, I reply “Percent Active”, meaning:

What percent of customers have initiated some kind of transaction with you in the past 12 months, or 24 months if you are highly seasonal?  Higher percentage is better.

Initiated being the key concept.  Just because someone is “balance active” or is receiving a statement doesn’t mean they are “Active”, or if you prefer, “Engaged”.  And for some businesses, for example utilities or help desks, a lower percentage will be better – the lower the percentage of customers who have initiated a trouble call or a billing problem, the better.  “Transaction” can be most anything – define it for your business.  What generates profit or cost for you?  That’s a good place to start, among other things like inquiries and so forth.  Adjust for your business, keep it simple.

If you don’t sell anything, consider shortening the 12 month window.  If you are a highly interactive business and depend on that interactivity as a business model (MySpace, Facebook) consider using 3 months.
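
Here is a minimal sketch of the calculation, assuming a simple list of customer records with the date of the last transaction each customer initiated (the field names and dates are made up):

```python
from datetime import date, timedelta

# Hypothetical customer records: id and date of the last transaction *they* initiated.
customers = [
    {"id": 1, "last_initiated": date(2007, 7, 14)},
    {"id": 2, "last_initiated": date(2006, 2, 3)},
    {"id": 3, "last_initiated": date(2007, 1, 22)},
    {"id": 4, "last_initiated": None},               # never initiated anything
]

def percent_active(customers, as_of=date(2007, 8, 1), window_days=365):
    """Share of customers who initiated a transaction inside the window.
    12 months by default; widen to 24 for highly seasonal businesses,
    shorten to ~3 for interactivity-driven ones."""
    cutoff = as_of - timedelta(days=window_days)
    active = sum(
        1 for c in customers
        if c["last_initiated"] is not None and c["last_initiated"] >= cutoff
    )
    return active / len(customers)

print(f"Percent Active: {percent_active(customers):.0%}")
```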

It is truly amazing to me how many folks don’t know what this number is for their business.  And often, truly shocking to them when they find out what the number is.  I have seen their faces.

This number is so simple to calculate and track, and simple to measure success against, so why don’t people have it?  It’s a very powerful predictor of the future health of a business.  It’s like a searchlight showing you the way, giving you the heads-up when things are not right in customer land.  All this crap about being customer-centric, and not one number to fly by – it’s really pretty sad.

All I can conclude is folks simply don’t want to know what the number is.  Am I wrong? 

Why don’t you know this number for your business, or why doesn’t your boss care about this number?  I want to hear all the excuses and have a list of them right here so we can refer to them in the future!


PRIZM Clusters Not as Predictive as Behavior

The following is from the August 2007 Drilling Down Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Also, feel free to leave a comment. 

Want to see the answers to previous questions?  The pre-blog newsletter archives are here, “Best Article” reviews here.

PRIZM Clusters Not as Predictive as Behavior

Q:  I am on an interesting project (and my first DB Mktg one): the client has a large loyalty program, and loves his PRIZM clusters.  However, when I told him a little more about Recency and suggested that we spread all members across Recency segments, he was surprised to see that his PRIZM segments were not a predictive indicator at all!

A:  Yes, and here is something many people don’t realize about PRIZM and other geo-demo programs, including census-driven ones.  They were developed for site location – where should I put my Burger King, where should I put my mall?  They are incredibly useful for this.  However, think about all the sample size discussions for web analytics in the Yahoo Web Analytics Group related to A/B testing, and now imagine what your PRIZM cluster looks like.

In most cases, you are talking about 1 or maybe 2 records in a geo location – what is the likelihood these households reflect the overall “label” of the PRIZM cluster?  Combine this with the fact that for customer analysis, demographics are generally descriptive or suggestive but not nearly as predictive as behavior, and you have a bit of a mess.

Here’s a test for you.  It only requires rough knowledge of your neighbors, so should not be very difficult (for most people!)

1.  What is your “demographic”?
2.  If you were to walk around the block and knock on doors, how many households would you find that are “in your demographic”?

Right.  Maybe a handful, unless you live in a brand new housing development or other special situation.  Now think about walking your zip code, or walking out 10 blocks or so from your house in any direction, and knocking on doors.  Do you find most of these people are in the same demographic as you are?  Did you ever find the “cluster average” neighbor?

We certainly know from web analytics that dealing with “averages” can be very dangerous indeed.  So too with taking a demographic “average” of a zip or other area and tying it to a specific household.  The model falls apart at the household level of granularity.

So now what do you think of all those websites and services that claim to know demographics based on a zip code they captured?

Now, if you think about an e-commerce database, with most records being one of a very few in a zip or cluster, you can see how the cluster demos would really break down at the household level.

Again, nothing wrong with using these geo-demo programs for what they were intended to be used for.  When you are looking for a mall location or doing urban planning they can be very helpful.  But the match rates at the individual household level are poor.

Couple this with the fact that e-commerce folks are usually looking for behavior from customers, and the fact that demographics are not generally predictive of behavior by themselves, and you have yourself an analytical stew.

Better than nothing?  Absolutely, and for customer acquisition, sometimes all you can get.  Best you can be?  Not if you have the behavioral records of customers.  In fact, what we often see is a skew in the demographics being called “predictive” when the underlying behaviorals are driving action.

In other words, let’s say a series of campaigns generates buyers with a particular demo skew.  A high percentage of these Recent responders then respond to the next promotion.  If you look just at the demos, you would see a trend and declare the demos are “predictive” of response, even though they are incidental to the underlying Recency behavior.
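
A tiny simulation of that effect – under made-up assumptions where response is driven purely by Recency and the demographic merely skews toward recent buyers – shows how the demo can look “predictive” when it is just along for the ride:

```python
import random

random.seed(42)

# Hypothetical: response is driven only by Recency; the demo just happens to skew recent.
def make_customer():
    recent = random.random() < 0.3                            # 30% bought recently
    in_demo = random.random() < (0.6 if recent else 0.3)      # demo skews toward recent buyers
    responds = random.random() < (0.12 if recent else 0.02)   # Recency drives response
    return recent, in_demo, responds

customers = [make_customer() for _ in range(100_000)]

def response_rate(rows):
    return sum(1 for *_, responded in rows if responded) / len(rows)

in_demo     = [c for c in customers if c[1]]
not_in_demo = [c for c in customers if not c[1]]
recent      = [c for c in customers if c[0]]
not_recent  = [c for c in customers if not c[0]]

print(f"In demo:     {response_rate(in_demo):.1%}")   # looks 'predictive'...
print(f"Not in demo: {response_rate(not_in_demo):.1%}")
print(f"Recent:      {response_rate(recent):.1%}")    # ...but Recency is doing the work
print(f"Not recent:  {response_rate(not_recent):.1%}")
```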

I suspect something like this was going on with your client.  Not looking at behavior, over time the client becomes convinced that the PRIZM clusters are predictive, when they are simply coincident with the greater power of the behavioral metrics.  Given the client has behavioral data, that should be the first line of segmentation.

Q:  After reading you for some years, I now understand how one must be very careful with psycho-demographics.

A:  Well, at least one person is listening!  And now you have seen how this works right before your very own eyes.

I think this situation is really a function of Marketers in general being “brought up” in the world of branding / customer acquisition.  Most Marketers come up through the ranks “buying media” or some other marketing activity that focuses on demographics to describe the customer.  And most of the college courses and reading material available focus on this function, so even the IT-oriented folks in online marketing end up learning that demographics are really important.  And they can be, when you don’t know anything about your target.

Then the world flips upside down on you, and now people are looking at customer marketing, and that’s a whole different ballgame.  The desired outcome is “action” that can be measured and the “individual” is the source of that outcome, as opposed to “impressions” and “audience”.  

If your tried and true weapon of choice for targeting has always been demographics, that is what you reach for as you enter the customer marketing battle.  Problem is, it’s just not the best weapon for that particular marketing engagement.


More Tips on Evaluating Research

To continue with this previous post…other things to look for when evaluating research:

Discontinuous Sample – I don’t know if there is a scientific word for this (experts, go ahead and comment if so), but what I am referring to here is the idea of setting out the parameters of a sample and then sneaking in a subset of the sample where the original parameters are no longer true.  This is extremely popular in press about research.

Example:  A statement is made at the beginning of the press release regarding the population surveyed.  Then, without blinking an eye, they start to talk about the participants, leaving you to believe the composition of participants reflects the original population.  In most cases, this is nuts, especially when you are talking about sending an e-mail to 8000 customers and 100 answer the survey. 

Sometimes it works the other way: they will slip in something like, “50% of the participants said the main focus of their business was an e-commerce site”, which does not in any way imply that 50% of the population (4,000 of 8,000) are in the e-commerce business.  Similarly, if you knew what percent of the 8,000 were in the e-commerce business, then you could get some feeling for whether the participant group of 100 was biased towards e-commerce or not.

Especially in press releases, watch out for these closely-worded and often intentional sleights of hand describing the actual segments of participants.  They are often written using language that can be defended as a “misunderstanding”, and often you can find the true composition of participants in the source documentation to prove your point.

The response to your digging and questioning of the company putting out the research will likely be something like, “the press misunderstood the study”, but at least you will know what the real definitions of the segments are.

Get the Questions – if a piece of research really seems to be important to your company and you are considering purchasing it, make sure the full report contains all the research questions.

I can’t tell you how many times I have matched up the survey data with the sequencing and language of the questions and found bias built right into the survey.  Creating (and administering, for that matter) survey questions and sequencing them is a scientific endeavor all by itself.  There are known pitfalls and ways to do it correctly, and people who do research for a living understand all of this.  It’s very easy to get this part of the exercise wrong and it can fundamentally affect the survey results.

So, in summary, go ahead and “do research” by e-mailing customers or popping up questionnaires, or read about research in the press, but realize there is a whole lot more going on in statistically significant, actionable research than meets the eye, and most of the stuff you read in the press is nothing more than a Focus Group.

Not that there is anything inherently wrong with a Focus Group, as long as you realize that is what you have.


*** Call it E-RFM

By way of Multichannel Merchant, we have this article: Call it E-RFM, by Ken Magill.

His idea will probably be a stretch for many in the e-mail space, but it’s a great example of what I was talking about in How to do Customer Marketing testing.

The gist of the article is you can reduce spam complaints and better manage reputation by anticipating which segments of subscribers are going to click the spam button.  Yes, anticipate.  You know, predict?

For some reason, online marketers seem like they are not really into the prediction thing – or at least are unwilling to fess up to it.  Test, measure, test, measure – web analytics is mostly about history, as opposed to predicting the future.  How about predict, measure, predict, measure?  Same thing, only much more powerful – if you can guess what customers will do before they do it, you have real marketing power.  Perhaps this is why folks don’t talk about it much…

The prediction model discussed, RFM, is one of the most durable and flexible models in the entire BI quiver.  As Arthur Middleton Hughes says in this article, “There isn’t a predictive model in the world that doesn’t have RFM inside of it”. 

And the RFM model is free.  You don’t even have to hire a statistician!

The RFM model sometimes gets a bad rap because people use it with very little imagination, simply reproducing the basic catalog model from the 1950’s, instead of understanding the guts of it and using it in new ways.  This Call it E-RFM article is a good example of how to use RFM in a new way; a broader explanation of using modified RFM for e-mail is here.
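
As a minimal sketch of the basic idea, adapted to the e-mail case with hypothetical subscriber fields (recency of last click, click frequency, revenue), here is classic quintile scoring with a simple flag for the segments most likely to hit the spam button:

```python
# Classic RFM quintile scoring, sketched for the e-mail case with hypothetical subscriber data.
subscribers = [
    {"id": i, "days_since_last_click": d, "clicks_12mo": f, "revenue_12mo": m}
    for i, (d, f, m) in enumerate([
        (5, 14, 320.0), (40, 6, 80.0), (200, 1, 15.0),
        (700, 0, 0.0), (12, 9, 150.0), (365, 2, 30.0),
    ])
]

def quintile_score(values, higher_is_better=True):
    """Rank values into 1-5 scores; 5 = best quintile."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=higher_is_better)
    scores = [0] * len(values)
    for rank, idx in enumerate(order):
        scores[idx] = 5 - (rank * 5) // len(values)
    return scores

r = quintile_score([s["days_since_last_click"] for s in subscribers], higher_is_better=False)
f = quintile_score([s["clicks_12mo"] for s in subscribers])
m = quintile_score([s["revenue_12mo"] for s in subscribers])

for s, rs, fs, ms in zip(subscribers, r, f, m):
    # Low Recency / Frequency cells are the ones most likely to hit the spam button.
    at_risk = rs <= 2 and fs <= 2
    flag = "  <- suppress or slow down mailings" if at_risk else ""
    print(f"subscriber {s['id']}: RFM cell {rs}{fs}{ms}{flag}")
```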

Those of you interested in how to really take advantage of the new Webtrends Score product should pay attention to this prediction area, because “Potential Value” – a prediction – is absolutely fundamental to optimizing a Score model.  You could use Score to predict which segments are most likely to click the spam button.  And then you could test, track, and fine-tune those predictions until you get them right.  Sounds like fun, huh?  Does to me, anyway…

But you don’t need something like Score to predict likelihood to click the spam button; sending an e-mail every week for 3 years to somebody who never clicks through should be a rough indication…

So, do you use predictive models in your work?  Why or why not? 

If you don’t use prediction, is it because coming up with a great campaign for a prediction is the problem?  Or because nobody really cares about customer marketing, it’s all about customer acquisition?

An overview of the Potential Value idea is here, or for a more comprehensive version including marketing direction on what to do with the results, get the PDF.


*** How to do Customer Marketing testing

“We don’t need testing. We know what works.”

“If you do no testing at all, no one will complain.”

OK, the title of this article by way of DM News is actually How to do direct marketing testing, but I figured some folks who should read it might not with “direct marketing” in the title.

Arthur Middleton Hughes is one of the great educators in database marketing, and this article hits on several issues that are very well known in the offline customer marketing business but practiced by few folks online: control groups, half-life effects, best customer segmentation, effects of promotion beyond the campaign.

He also briefly addresses a problem I run into all the time.  Things are “going great”, so we don’t need to test.  Underlying this statement is frequently a very weird emotion peculiar to many online operations, especially when I talk about control groups.  It’s the “what if we find out our results are not as good as management thinks” problem.

In other words, the “not broke, why fix it” issue.

Not sure why this occurs so much with online when compared with offline, though it probably is simply an issue of undeveloped analytical culture.  Why else would people be afraid of failure, if failure is truly embraced as a learning experience?

Perhaps a culture problem:  Testing is OK as long as it doesn’t rock the boat too much, doesn’t push the edge of knowledge out too far, is safe and sterile and won’t result in any quantum leaps in knowledge.  “Safe testing” only.

Perhaps an idea problem:  The testing culture is fine, but it has become too robotic – no really new ideas, and people don’t know of any high-impact, meaningful tests to conduct?

What’s going on where you work?


Research for Press Release

I think one of the reasons “research” has become so lax in design and execution is this idea of doing research to drive a press release and news coverage.  Reliable, actionable research is expensive, and if all you really want to do is gin out a bunch of press, why be scientific about it?  Why pay for rigor?  After all, your company is not going to use the research to take action, it’s research for press release.

So here are a few less scientific but more specific ideas to keep in mind when looking at a press release / news story about the latest “research”, ranked in the order that will save you the most time.  In other words, if you run into a problem with the research at a certain level, don’t bother to look down to the next level – you’re done with your assessment.

Press about Research is Not Research – it’s really a mistake to make any kind of important decision on research without seeing the original source documentation.  For lots of reasons, the press accounts of research output can be selectively blind to the facts of the study. 

If there is no way to access the source research document, I would simply ignore the press account of the research.  Trust me, if the subject / company really had the goods on the topic, they would make the research document available – why wouldn’t they?  Then if / when you get to the research source document, run the numbers a bit for yourself to see if they square with the press reports.  If not, you still may learn something – just not what the press report on the research was telling you!

Source of Sample – make sure you understand where the sample came from, and assess the reliability of that source.  Avoid trusting any source where survey participants are “paid to play”.  This PTP “research” is often called a Focus Group and though you can learn something in terms of language and feelings and so forth from a Focus Group, I would never make a strategic decision based on a non-scientific exercise like a Focus Group. 

Go ahead and howl about this last statement, Marketers.  I’m not going to argue the fine points of it here, but those who wish to post on this topic either way, go ahead.  Please be Less Scientific or More Specific than usual, depending on whether you are a Scientist or a Marketer.

For a very topical and probably to some folks quite important example of this “source” problem, see Poor Study Results Drive Ad Research Foundation Initiative.  If you want a focus group, do a focus group.  But don’t refer to it as “research” in a scientific way.

Size of Sample – there certainly is a lot of discussion about sample sizes and statistical significance and so forth in web analytics now that those folks have started to enter the more advanced worlds of test design.  Does it surprise you the same holds true for research?  It shouldn’t; it’s just math (I can feel the stat folks shudder.  Take it easy, relax).

Without going all math on this, let’s say someone does a survey of their customers.  The survey was “e-mailed to 8,000 customers” and they got 100 responses to the survey.  I don’t need to calculate anything to understand the sample is probably not representative of the whole, especially given the methodology of “e-mailed our customers”.  Not that a sample of 100 out of 8,000 is bad, but the way it was sourced is questionable.

What you want to see is something more like “we took a random sample of our customers and 100 interviews were conducted”.  It’s the math thing again.  Responders, by definition, are a biased sample, probably more of a focus group.  This statement is not always true, but is true often enough that you want to verify the responders are representative.  Again, check the research documentation.

OK Jim, so how can political surveys be accurate when they only use 300 or so folks to represent millions of households?  The answer is simple.  They don’t email a bunch of customers or pop-up surveys on a web site.  They design and execute their research according to established scientific principles.  Stated another way, they know exactly and specifically who they are talking to.  That’s because they want the research to be precise and predictive.

How do you know when a survey has been designed and executed properly?  Typically, a confidence interval is stated, as in “results have a margin of error of ±5%”.  This generally means you can trust the design and execution of the survey, because you can’t get this information without a truly scientific design (Note to self: watch for “fake confidence level info” to be included with future “research for press release” reporting).
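
For the curious, here is the back-of-the-envelope arithmetic behind that kind of statement – a sketch of the standard margin-of-error formula for a proportion (worst case p = 0.5, with a finite population correction), applied to the 100-out-of-8,000 example above and to the ~300-person political samples:

```python
import math

def margin_of_error(n, population=None, p=0.5, z=1.96):
    """95% margin of error for a proportion, worst case p=0.5,
    with an optional finite population correction."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if population:
        moe *= math.sqrt((population - n) / (population - 1))  # finite population correction
    return moe

# 100 responders out of a population of 8,000 customers:
print(f"n=100:  ±{margin_of_error(100, population=8_000):.1%}")   # roughly ±10%
# The ~300-person political samples mentioned above:
print(f"n=300:  ±{margin_of_error(300):.1%}")                     # roughly ±6%

# None of this helps if the 100 are self-selected responders rather than a random sample;
# the formula assumes a properly drawn probability sample.
```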

More rules for interpreting research


Marketing / Technology Interface

I’m a marketing person who in one way or another has been tangled up with the technical / engineering world all of my professional life.  Cable Television, TV Shopping, Wireless, Internet.  I have always been dealing with brand new business models with no historical reference, swimming in data to make sense of, and working with engineering folks as the people who “make things happen”.

I have also been really fortunate to work with many Ph.D. level statisticians who had the patience to answer all my questions about higher level modeling and explain things to me in a language I could understand.

Because of this history, I’ve been a long-time student of the “intersection” between Marketing and Technology.  I’ve in effect become a “translator” in many ways – taking ideas from each side and converting them into the language of the other side.  Distilling the complexity of Technology down to the “actionable” for Marketing, while converting the gray world of Marketing into the White / Black – On / Off world for Technology.

With no offense to either side, to generate some kind of tangible progress, sometimes you just have to strip out all the crap from both sides to get to the core value proposition of working together.  You have to start somewhere.  Then you can build out from there.

And so I try with posts like Will Work for Data to define this intersection for others, to help both sides understand each other, and it’s tough, especially with an unknown audience varying widely in their knowledge of either side.  I try to create a “middle” both sides can understand.

Marketing folks are in the middle of a giant struggle right now with the whole accountability thing.  But it’s not so much accountability itself, because many of the best marketers have always been accountable in one way or another.  No, it’s the granularity of the accountability that is the issue; the movement from accountability defined at the “impression” and “audience” level to accountability at the “action” and “individual” level.

Here’s the challenge for Marketers: the data is different.  Impression and audience are defined by demographics but response and individual are defined by behavior.

Perhaps this will “translate” poorly, but the Technology parallel would be folks who have built a skill set around a certain programming language and then are told that language is now obsolete.  This is extremely disruptive when you have spent 20 years understanding your craft from a particular perspective.

So here’s what we need to do to make this work.  We have to find common ground.  This will mean being a little “less scientific” on the Technical side and a little “more specific” on the Marketing side.  And we work down through all this to the core.

This is the same struggle web analytics folks deal with every day, but due to the early work of many writing on this topic, the web analysts were always urged to connect analysis to business outcome.  Many are getting pretty good at it; they don’t suffer the “too much science” problems their peers in marketing research seem to run up against.

But web analytics is just a microcosm of the whole Analytical Enterprise, which may or may not be Competing on Analytics (background info at this link) at this time, but is probably headed in this direction.

I submit it’s a bit early to teach most Marketing folks about statistical significance, about what types of data sets CHAID works best with, the difference between Nearest Neighbor and Clustering models, and so forth.  We can always get there after we reach the core understanding.

Right now, what we need to do is figure out how to get to the core. 

I think where I might take this is to propose some fundamental rules of understanding and see if we can get both Marketers and Analysts to understand and agree on them.

You up for that?
