I’ve been trying to get a few things straight in my head about the way people Learn Online Marketing.
Will you help me?
Here’s a Premise:
The Online Marketing world seems to repeat the same mistakes over and over; it’s almost like every new generation of technology is a clean slate and somehow people expect an approach that was flawed in a previous generation won’t be flawed this time.
Sure, technology changes, but the fundamentals of human behavior are much more difficult to change. So you would expect there to be some constants, right?
For example, putting a high value on “quantity” of activity (remember Hits?) when every past generation has found that “quality” ends up as a more important metric.
When people talk about MySpace, they talk about how many millions of accounts there are. People forget the many companies that have fallen on the sword of the “total accounts” number in the past.
What you really want to know is how many accounts are active (say, any activity within the past 3 months), and whether the percent active is rising or falling. This one simple metric is a fabulous predictor of the health of an online business – from the very earliest days of interactivity right up until now.
The quantity of accounts doesn’t drive revenue generating activity - it’s the quality of the accounts. Quantity just drives costs.
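As a sketch of what the percent-active metric looks like in practice (the account records, names, and the 3-month window below are hypothetical, just to make the calculation concrete):

```python
from datetime import date, timedelta

# Hypothetical account records: (account_id, date of last activity).
accounts = [
    ("a1", date(2008, 8, 15)),
    ("a2", date(2008, 1, 2)),   # dormant
    ("a3", date(2008, 9, 1)),
    ("a4", date(2007, 11, 20)), # dormant
]

def percent_active(accounts, as_of, window_days=90):
    """Share of accounts with any activity in the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    active = sum(1 for _, last in accounts if last >= cutoff)
    return active / len(accounts)

print(percent_active(accounts, date(2008, 9, 10)))  # 0.5
```

Track this number over successive periods; whether it rises or falls tells you more about the health of the business than the raw account count ever will.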
So, why does every source from the media to bloggers to conferences ignore this? Why doesn’t anybody challenge the value of “we have this many accounts” every time it comes out of the mouth of a company spokesperson?
In other words, despite the “testing” mentality online, people seem to continually ignore the results of the past, like it’s different this time – and every time.
Question: Why does this happen? Is it because:
1. The Teaching is failing – books, conferences, courses, blogs, newsletters, etc. just are not conveying the correct principles. “Group Think” in the blogosphere might be making this condition worse.
2. The Learning is failing – people simply don’t want to rely on the lessons of the past and want to experience every new platform as a blank space with no constraints.
3. Other – Your reasons? Or problems with the Premise?
Please help me sort this out!
Here’s an example from the prior comments that may be helpful.
I don’t expect someone with 2 years experience to be as “smart” as someone with 10 years experience. What I might expect is a person with 2 years experience would have the fundamentals right e.g. Visits are preferred to Hits as a base measurement of site traffic.
So, while I think most people understand this particular idea of Hits versus Visits now, what I’m really asking is how did Hits take the fall, from a Teaching / Learning perspective? And why has this not happened with other fundamental ideas, like quality versus quantity?
What was the Teaching / Learning process that ejected Hits? What were the dynamics surrounding this process? Why / How is it different now?
Here are some possible ideas:
1. The “crowd” was much smaller then, so each voice was louder and an idea like Hits being meaningful was soundly thrashed across the entire community. There was no opportunity for the Hits idea to be “passed on” as part of the conversation.
2. (With a nod to Jacques from previous comments) There is something “commercial” that is crowding out true Teaching, an economic reward system of some kind.
For example, blogs that purport to Teach are more interested in page views than Teaching, so the content is flawed. Speakers at conferences are perhaps doing the same thing, speaking the “popular” meme and keeping real knowledge to themselves (if they have it). There actually may be economic benefit to “Teachers” in misleading people or keeping them in the dark; the “Wisdom of Crowds” has become (perhaps intentionally, probably unintentionally) corrupt.
Other ideas? Does the audience just insist on making all these mistakes again themselves, or perhaps the “crowd of loudest voices” is just teaching the wrong stuff?
Hi Jim,
You ask a very hard question.
I am a sociologist by training, and someone who worked quite a lot in epistemology back then. You are basically referring, I think, to how *we* learn as a group/community. For example, can the scientific model teach us anything about how we build knowledge? Maybe, but at the same time, I think we are in a field where economic incentives (for “teachers”, maybe, like myself, but certainly for vendors) may skew the process.
Does this mean we keep repeating/reproducing the same knowledge, and false/basic assumptions, because of those interests? That would be quite a strong statement, since many vendors, to their credit, have been out there in the wilderness for years, trying to convince the market to adopt better practices (MVT testing, anyone?).
Does this mean that Marketing does not belong to the same type of knowledge corpus as science? One would believe that with all the *proofs* we (with web analytics in mind here) produce, there would be a strong base of hard facts, refutable through verification, etc. Is it then an execution problem, i.e. the empirical application of that knowledge is so hard that we keep applying only the very basic stuff, if at all, and stay in a theoretical loop of sexy marketing “principles” that are the bread and butter of speakers, consultants, and vendors?
Again, a very hard and fascinating question to which I have no answer. If you can recommend any good readings in marketing epistemology, I am buying!
Jacques – Now that’s a lot closer to what I’m really talking about! I obviously lack the correct language / knowledge base to communicate these concepts very effectively.
Definition: Epistemology primarily addresses the following questions: “What is knowledge?”, “How is knowledge acquired?”, “What do people know?”, “How do we know what we know?” (Wikipedia)
That is really what I am asking about, and the potential “failure” of the model we are using. Indeed, “we” as a group / community.
Sometimes I think it has something to do with the Tech / Marketing Interface. Something like “Hits versus Visits” is pretty easy to define technically, and easy to choose the “correct” one. Quality versus Quantity, perhaps not so much – even if you have the long trail of evidence.
Sometimes I think it has to do with “Authority”, or lack thereof, in the blogosphere. Given all the “sources”, and given you can’t really trust any of them, you collect opinions and see where they reinforce each other, the “Crowd”. The problem with that model is the self-replicating process that is the blogosphere: mistruths spread very rapidly as “truth”, with no effective way to challenge them.
I’m just trying to put together a simple model of this in my head. I often see (what I think are) very similar issues in the publishing world. For example, fact-checking, as in offline editors (usually) do and online editors don’t. What’s more important to the community, speed or accuracy?
Why care about this? Well, perhaps there’s a business model lurking in all this somewhere, or maybe a book on marketing epistemology?
Jim – Marketing Epistemology; that definitely sounds like something I would love to buy a book about.
In fact, I have been torturing myself over those questions: what do I really know, since my clients pay me good money for what I tell them? What are the principles on which I base those “truths” I sell? What are the processes involved in the production of that knowledge? Are they pure abstractions (in a cartesian sense), or empirical models that seem efficient? What is the influence of my peers?
The lack of real principles or models to follow is something I have thought about a lot; I think these exist but perhaps need better translations or affirmations from a wider group to become respected. Goodness knows I have tried to push a few of those.
Clearly, if people are operating against a defective model, it’s no wonder they keep making the same mistakes. Perhaps I’ll take another run at it.
Or, perhaps I will write the 4,578th blog post on Google Chrome and get some page views…
In any competitive industry, all it takes is one player using the larger metric to force all competitors to do the same- regardless of the metric’s accuracy or worth.
Take banking for example. Pretend that all banks play nice and report their # active accounts metric; and that regardless of actual number of total accounts, they all have roughly the same number of active accounts.
Bank A- 1M accounts, 90% active rate = 900K active accounts
Bank B- 900K accounts, 100% active rate = 900K active accounts
Bank C- 2M accounts, 45% active rate = 900K active accounts
Using your suggested metric, the banks are all equivalent.
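The arithmetic above can be sketched in a few lines; the bank names and figures come straight from the example, and the point is that the headline “total accounts” number varies by more than 2x while the active count is identical:

```python
# Total accounts and active rate for each hypothetical bank.
banks = {
    "Bank A": (1_000_000, 0.90),
    "Bank B": (900_000, 1.00),
    "Bank C": (2_000_000, 0.45),
}

# Active accounts = total accounts x active rate.
active = {name: round(total * rate) for name, (total, rate) in banks.items()}
print(active)  # all three banks net out to 900,000 active accounts
```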
But now Bank C decides to buck the trend and claim they have 2M total accounts; they’re tired of just being equals. The typical street analyst is going to hear “2M” compared to 900K and not notice the verbal footnote that delineates active versus gross accounts. So Bank C all of a sudden becomes the darling of the banking industry, no matter that they have a crummy active rate.
Banks A and B have no choice but to follow suit- no CEO is going to take one for the team and report a number that sounds any lower than he has to.
And thus we end up in the problematic scenario you describe. While you and your readers may be diligent about specifics, the market and the people who ultimately drive strategy off of metrics (C-levels) are not always so eagle-eyed. I’ve often heard, “get me the biggest number you can get, I don’t care how you get to it”.
Neither the teaching nor the learning are failing, in fact quite the opposite. From those before me I learned how to produce a useful metric and also how to produce a metric that makes my execs happy. Sometimes the two are not the same. And I will pass those same skills on to those who come after me with equal success.
Thanks for expanding the conversation, AW.
Are we getting to the root cause here? It’s the street analysts who are clueless, and by feeding their flawed notions to the media and VC’s, end up perpetuating myths which result in Repeating the Past. After all, funding is funding, as long as it’s someone else’s money, I guess.
Then the bloggers pick up these same notions and go along for the page view ride. Since most bloggers just repeat what they hear elsewhere, you get a nice hype stream going. And then a lot of people who don’t know any better assume all of this “knowledge” is meaningful.
Not that any of the above is new, I mean, we all know about hype.
But this scenario does call into question the “Wisdom of Crowds”, doesn’t it? And the related question of source “Authority”?
I have to say, not all street analysts are clueless. Can’t find a link, but I remember a day (1999?) one asked this question during the Amazon conference call:
What percent of your best customers are 12-month active?
First time anybody asked this question of a dot-com. Based on the answer, stock price was cut in half that day. Apparently, none of the other analysts realized the best customer counts (based on Frequency) Amazon used to dazzle people with in the early days might include people who had stopped buying.
That analyst was a catalog analyst, someone who understood there is a dramatic difference between KPI’s for the “mass” model and KPI’s for the “personal” model.
Wonder which one is right for the web?
Jim,
First, I think that the reason we are often doomed to repeat the past in this way is that most people who do not have a skewed view are in no position to bring their ideas to fruition. Right now, I know the way that we should be doing a lot of things, and unfortunately, there are the HIPPOS out there that want it done their way. And as we all know, what the HIPPO wants, the HIPPO usually gets, regardless of whether or not we know better ourselves.
Secondly, the people who were formerly the little people will eventually turn into the HIPPOS and exert their influence on those below them, thereby repeating the cycle of not learning from past experience.
The problem is also that what often should be learned from past experience is that we often fail to learn from past experience. Confusing, but true.
Lack of innovation in thinking is also a big contributor. Most people spend no time whatsoever thinking about how to do things better or differently. I try to spend one hour every day just creating a mind map to think up new things and new ideas. Most people get so wrapped up in task completion, that they don’t really take time to simply think and come up with new ideas.
Jason, I appreciate your participation in this complex but important topic. Actually, I am sort of astounded I have received any comments at all!
I was discussing this topic with one of my analytical mentors yesterday, he’s actually older than I am (and I turn 50 this week).
We went at it from all the angles – is it an age thing? Is it a background thing? Is it a money thing? Is it a culture thing?
One of the more powerful ideas (at least I think) he came up with is similar to one of yours: “people just don’t analyze failure anymore”, with a possible root cause of “there’s no money in analyzing failure”. Said another way, when something fails, people just walk away. There’s no interest in figuring out why it failed.
I have to say this is pretty disturbing idea on several levels and I’m presently trying to wrap my head around the implications.
I often see a micro version of this attitude in campaign and site optimization. When a test cell fails, online marketers / analysts are typically not very interested in figuring out why.
Their attitude is very much, “Why bother? It didn’t work”. But Offline, figuring out why a test failed is more important than figuring out why a test worked, because you learn stuff that prevents future failures.
This is mostly a “hypothesis” problem, I think. If you don’t have a concrete reason why something should work before the test, if you’re just testing “blindly”, then there is nothing to be curious about when the test fails, I guess. Me, I’d be more curious about a failure, because if the test failed, my hypothesis was (probably) wrong and further analysis is required to figure out where I made my mistakes and learn from them. If the test worked, my hypothesis was (probably) correct and I don’t need to know anything else. Given the choice of figuring out the “why” of a success versus a failure, I’d choose the failure every time.
For anybody who is still following this fairly academic thread, I’d appreciate any comments / ideas you may have on the following thoughts Re: “people don’t analyze failure” because:
Background:
Offline marketing has a long tradition of analyzing failure as a learning process; is it possible that online marketing folks, who tend to have a technology background, have a different learning mode or tradition that doesn’t include failure analysis? I mean, it’s not that online folks don’t recognize failure and iterate away from it, but they often don’t stop to analyze why the failure is occurring – they simply make changes and try again.
It’s not clear to me if this situation is born more from a task perspective or a knowledge perspective. Perhaps Marketing failure analysis is typically not part of the task. Or, the right kind of knowledge to enable successful Marketing failure analysis (Psychology / Sociology) is simply not present. For educators, this is a critical distinction and worth investigating.
Also, as Jason implied above, lack of failure analysis may be more of a business culture issue; there’s only so much time in the day and if your boss isn’t interested in analyzing Marketing failure (because they can’t?) then it probably does not get done.
Sector:
Technology folks and online marketers seem to be endlessly fascinated by anything “new”. In fact, “new” drives the entire ecosystem, from software development to the endless spin cycle through FUD and consulting, advertising and publishing. There’s no point in analyzing failure because by definition, failure is “old”, so it literally doesn’t matter. On to the next new idea, please.
Culture / Age:
To care about analyzing failure, a person probably has to:
1. Be devoted to the company they work for
2. Stick around long enough for the analysis to deliver benefit
Online marketers / analysts – as long as they are not significant owners of the company – tend to be younger and have a different view of work & company than myself or my older friend. So instead of analyzing failure because they’re going to stick around and try to fix the failures (which would be my inclination), people just move on. It’s easier / faster / more lucrative just to move on than try to unwind a clusterf**k.
Dumb Money:
Related to / possibly root cause of the Culture issue, there seems to be an endless list of people who will invest in ideas that have already failed / spend money on advertising that is worthless. The ecosystem is severely distorted by this money.
For example, VC’s will invest in an idea that failed twice before even though it was fundamentally a bad idea from a Marketing / User / Advertising perspective, but a new “platform” is supposed to fix that (and it never does). The bloggers / press / trades / shows chase these ideas around not because they are really new or even worth serious consideration, but to generate content for selling ads / services. The whole system feeds on itself and a lot of people make money, so there’s really no reason for analyzing failure or even bringing it up – nobody really cares anyway.
Example: Paying audience members to view advertising does not work for the advertiser, never has, never will, online or offline. The fundamentals of this approach are broken, the Intent and Desire created are ego-centric (getting paid) and the ads are meaningless, just a mechanism to get paid. Yet we have seen “get paid to visit web sites”, “get paid to read e-mail”, “get paid to click on ads”, and none of them worked, those companies collapsed. I hear “get paid to read feeds” and “get paid to read social newsfeeds” are next. Do you think these “new” ideas will work – for the advertiser paying dear money, that is?
OK, end of my proposed reasons why online marketers / analysts don’t analyze failure. Any of the above make sense to you? Other ideas? Anything from the sociologists out there?
To be clear, I don’t really think Repeating the Past comes from any one of the above, but from some strange brew that involves various elements of the above combined.
Comments on any of the above?
Hmm, this conversation is not getting any simpler; definitely more interesting, though.
Interesting addition, Jim. This Repeating of the Past could be a mix of all the factors you describe. I like “Dumb Money”: isn’t it fascinating how greed can rename stuff in hope of a quick buck?
More seriously, one could wonder whether the corporation is really a learning environment (who talks about Knowledge Management anymore?). Business does not necessarily structure knowledge like, say, Academia, i.e. formalized process for idea validation, peer review, consensus, until the next paradigm, etc. Although there *are* highly learning companies out there.
Maybe this is because corporations do not exist to make knowledge, but to create financial value (even though one can be very much the source of the other). Capital is allocated, resources are mobilized, and returns have got to show up quickly. Add to this the very important human factor, such as climbing the corporate ladder, and you now have individuals more concerned with getting promoted rapidly (the above-mentioned HIPPOS) through quick wins/quick fixes than with acquiring knowledge for future generations. And if you have the freedom of a big budget, or flexible technology (cf. the Web), you can just go shotgun and try a lot of stuff in the hope that something will work big enough.
Well, I’m sorry I am not taking this discussion a lot further. I really think, however, that you are onto something very relevant that deserves a lot of thinking. I hope you will keep us posted on how your ideas develop.
Congratulations on the big Five-O.
Keep the comments coming Jacques (and others), you are very much guiding me on this walk through the Online Marketing Teaching / Learning wilderness.