Monthly Archives: October 2007

What’s the Frequency?

The following is from the October 2007 Drilling Down Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask.  Also, feel free to leave a comment.

Want to see the answers to previous questions?  The pre-blog newsletter archives are here, “Best Article” reviews here.

Q:  I ordered your book and have been looking at it as I have a client who wants me to do some RFM reporting for them.

A:  Well, thanks for that!

Q:  They are an online shoe shop that at present also sends out catalogues by mail.  They have order history going back to 2005 and believe that by doing an RFM analysis they can work out which customers are dead and should be dropped, etc.  I understand Recency and have done this.

A:  OK, that’s a great start…

Q:  But on Frequency there appears to be a lot of conflicting information – one book I read says you should do it over a time period as an average, and others do it over the entire lifecycle of a client.

A:  You can do it either way; the ultimate answer, of course, is to test both ways and see which works better for this client.

Q:  Based on the client base and the fact that the catalogues are seasonal, my client reckons a customer may make a purchase decision every 6 months.  My client is concerned that if I go by total purchases, someone who was buying lots say two years ago but now buys nothing could appear high up the Frequency ranking compared with a newer buyer who has bought a few pairs, who would actually be a better customer because they’re more Recent.  Do I make sense or am I totally wrong?

A:  You make absolute sense.  If you are scoring with RFM, though, since the “R” comes first, that means in the case above the “newer buyer who has bought a few pairs” customer will get a higher score than the “buying lots two years ago but now buys nothing” customer.

So in terms of score, RFM self-adjusts for this case.  The “Recent average” modification you are talking about just makes this adjustment more severe.  Other than testing whether the “Recent average” or “Lifetime” Frequency method is better for this client, let’s think about it for a minute and see what we get.

The Recent average Frequency approach basically enhances the Recency component of the RFM model by downgrading Frequency behavior further out in the past.  Given the model already has a strong Recency component, this “flattens” the model and makes it more of a “sure thing” – the more Recent folks get even higher scores.

What you trade off for this emphasis on more recent customers is the chance to reactivate lapsed Best customers who could purchase if approached.  In other words, the “LifeTime Frequency” version is a bit riskier, but it also has more long-term financial reward.  Follow?
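To make the two versions concrete, here is a minimal sketch of simple quintile RFM scoring in Python.  Everything in it is hypothetical – the orders table with customer_id, order_date, and amount columns is assumed, not anything from this client’s database.  Because “R” is the leading digit of the score, the recent light buyer naturally outranks the lapsed heavy buyer, and the optional window parameter applies the “Recent Frequency” filter:

    import pandas as pd

    def rfm_scores(orders, asof, freq_window_days=None):
        # Score customers 111-555; "R" is the leading digit, so Recency dominates
        if freq_window_days is None:
            freq_orders = orders  # "Lifetime" Frequency
        else:
            # "Recent" Frequency: only count orders inside the window
            cutoff = asof - pd.Timedelta(days=freq_window_days)
            freq_orders = orders[orders["order_date"] >= cutoff]

        g = orders.groupby("customer_id")
        summary = pd.DataFrame({
            "recency_days": (asof - g["order_date"].max()).dt.days,
            "monetary": g["amount"].sum(),
        })
        summary["frequency"] = (freq_orders.groupby("customer_id")
                                .size().reindex(summary.index, fill_value=0))

        # Quintile scores, 5 = best; fewer days since last order is better
        summary["R"] = pd.qcut(summary["recency_days"].rank(method="first"),
                               5, labels=[5, 4, 3, 2, 1])
        summary["F"] = pd.qcut(summary["frequency"].rank(method="first"),
                               5, labels=[1, 2, 3, 4, 5])
        summary["M"] = pd.qcut(summary["monetary"].rank(method="first"),
                               5, labels=[1, 2, 3, 4, 5])

        # A Recent light buyer outranks a lapsed heavy buyer in this score
        summary["RFM"] = (summary["R"].astype(int) * 100 +
                          summary["F"].astype(int) * 10 +
                          summary["M"].astype(int))
        return summary.sort_values("RFM", ascending=False)

Scoring the file twice – once with freq_window_days=None (“Lifetime”) and once with, say, 365 (“Recent”) – and testing a mailing against each is exactly the comparison described above.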

So then we think about the customer.  It sounds like the “make a purchase decision every 6 months” idea is a guess as opposed to analysis.  You could go to the database and get an answer to this question – what is the average time between purchases (Latency), say, for heavy, medium, and light buyers?  That would give you some idea of a Recency threshold for each group, where mailing customers lapsed longer than this threshold gets increasingly risky, and you could use this threshold to choose parameters for your period of time for Frequency analysis.
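Here is the same kind of sketch for the Latency question, again assuming the hypothetical orders table above.  It computes the average days between purchases for light, medium, and heavy buyers; customers lapsed well beyond their tier’s average gap are the increasingly risky ones to mail:

    import pandas as pd

    def latency_by_tier(orders):
        orders = orders.sort_values(["customer_id", "order_date"])
        # Days between consecutive purchases, per customer
        gaps = orders.groupby("customer_id")["order_date"].diff().dt.days
        per_customer = pd.DataFrame({
            "avg_gap_days": gaps.groupby(orders["customer_id"]).mean(),
            "n_orders": orders.groupby("customer_id").size(),
        }).dropna()  # one-time buyers have no purchase-to-purchase gap

        # Tier buyers into light / medium / heavy by order count
        per_customer["tier"] = pd.qcut(
            per_customer["n_orders"].rank(method="first"),
            3, labels=["light", "medium", "heavy"])
        return per_customer.groupby("tier")["avg_gap_days"].mean()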

Also, we have the fact these buyers are (I’m guessing) primarily online generated.  This means they probably have shorter LifeCycles than catalog-generated buyers, which would argue for downplaying Frequency that occurred before the average threshold found above and elevating Recency.

So here is what I would do.  Given the client is already predisposed to the “Recent Frequency” filter on the RFM model, that this filter will generally lower financial risk, and that these buyers were online generated, go with the filter for your scoring.

Then, after the scoring, if you find you will in fact exclude High Frequency / non-Recent buyers, take the best of that excluded group – Highest Frequency / Most Recent – and drop them a test mailing to make sure fiddling with the RFM model / filtering this way isn’t leaving money on the table.

If possible, you might check this lapsed Frequent group before mailing for reasons why they stopped buying – is there a common category or manufacturer purchased, did they have service problems, etc. – to further refine the list and creative.  Keep the segment small but load it up if you can; throw “the book” at them – Free shipping, etc.

And see what happens.  If you get minimal response, then you know they’re dead.

The bottom line is this: all models are general statements about behavior that benefit from being tweaked based on knowledge of the target groups.  That’s why there are so many “versions” of RFM out there – people twist and adapt the basic model to fit known traits in the target populations, or to better fit their business model.

Since it’s early in the game for you folks and due to the online nature of the customer generation, it’s worth being cautious.  At the same time, you want to make sure you don’t leave any knowledge (or money!) on the table.  So you drop a little test to the “Distant Frequents” that is “loaded” up / precisely targeted and if you get nothing, then you have your answer as to which version of the model is likely to work better.

Short story: I could not convince management at Home Shopping Network that a certain customer segment they were wasting a lot of resources on – namely brand name buyers of small electronics like radar detectors – was really worth very little to the company.  So I came up with an (unapproved) test that would cost very little money but prove the point. 

I took a small random sample of these folks and sent them a $100 coupon – no restrictions, good on anything. I kept the quantity down so if redemption was huge, I would not cause major financial damage.

With this coupon, the population could buy any of about 50% of the items we showed on the network completely free, except for shipping and handling.

Not one response.

End of management discussion on value of this segment.

If you can, drop a small test out to those Distant Frequents and see what you get.  They might surprise you…

Good luck!

Jim


From Failing to Thriving

Looks like 1-to-1 Magazine has decided to unlock some of their archives, maybe releasing them to search after the next issue is published?  Who knows, but there was an important article on Failure published last month that is worth a read.

One of the major challenges the Analytical Culture faces is Fear of Failure; it’s just so uncool to fail in many companies today.  Yet some of the most spectacular wins often come after spectacular failures, and we have to teach managers that without Failure, there is no Learning Process.  Do it like they do at 3M and IBM, using the real stories of how failure went unpunished and was ultimately rewarded. 

You want the Analytics to free people, not have them seek out least common denominator “safe harbors” that have (perceived) immunity from failure.  I’m not sure many folks get how important this cultural issue is; if you don’t address it, the Analytics can actually make you worse off as people avoid risk by satisficing.

Check out the article here; more from me on this topic here.


Web Data: Randomly Erratically Variably Unpredictably Incomplete?

So there I am at the eMetrics Summit, sitting with WAA President Richard Foley who also has the impressive title of World Wide Product Manager and Strategist for SAS Institute.  He asks me what I’m going to talk about for my “Guru” (hate that word) session with Avinash and John Q and I respond with the Accuracy versus Precision thing.  You know, that web analytics folks are generally far too obsessed with Accuracy when the data is really too “dirty” to support that obsession.

Well, don’t you know (and this is 90 minutes before the Guru gig, and I have a Track presentation first), Richard responds, “Web Data isn’t dirty, it’s some of the cleanest data around.”

Hmmm, I think.  This has to be another one of those Marketing / Technology Interface things.  Clearly a semantic rift of some kind.  But he’s a SAS guy, so there must be substance behind this statement!

So we spend the next half hour or so Drilling Down into the meat of the issue.  Turns out none of his analysts would call web data “dirty” because it’s created by machines, don’t you know.  No mistakes.  Data is “clean”.  You haven’t seen dirty data until you start looking at human keystroke input, for example.  Think large call centers.  Or how about botched data integration projects.  Millions of records with various fields incomplete or truncated.  That’s dirty data.

Dirty, from both an Operational and Marketing perspective, you see.  But web server logs, they might be dirty from a Marketing perspective, but they’re not dirty from an Operational perspective.  They just are what they are; super-clean records of what the server did or the tag read or the sniffer sniffed.

OK, I’m with Richard on this idea, having seen some horrendously dirty data in my time by his definition.  So what do we call web data, if it’s clean?  Even a 404 Error isn’t really “dirty”, right?  It sure is dirty from a customer / user perspective; but from an already widely-used Operational / BI definition, it’s not dirty, it just “is”.  

So how do we get to this idea of all the problems with web data that can lead an analyst down the wrong track if they focus so much on Accuracy they never get Precision?  You know, cookie deletion, network serving errors, crashing browsers, multiple users of a single machine, single users of multiple machines, tabbed browsing, etc. etc. etc.?

What do we call that kind of data, if not dirty?

We start going through all the lingo, like trying on different sets of clothes, looking for something that fits.  What other kinds of data are like web data?  What is the precise nature of the “problem” with web data?  We finally arrive at the notion of Incomplete that seems to fit pretty well.  It’s not that the data is dirty, it simply is often “not there” for the end user or analyst, as in missing a cookie, or serving a page that is never rendered in the browser, or a tag that never gets to execute properly.

But that’s not quite it, we decide, because there has been a solution for “incomplete” data around a long time – modeling.  As long as you can get a set of reliable data, you can interpolate or “fill in” the missing data, right?  Like is often done with geo-demographic modeling?

There’s a word, we think – “reliable”.  Web data is certainly not reliable, but that’s not quite it.  Why is it not reliable?

Well, because at a fundamental level, the incompleteness is Random, so it cannot be modeled very well.

And there we have it. 

Web data is not dirty, it is Randomly Incomplete.  A label that works for both the Marketing and Technology folks at the same time.  A beautiful thing, don’t you think?  A great example of being a little “less scientific” on the Technical side and a little “more specific” on the Marketing side, I think.  We wrastled it to the ground.

So I rush off to change the phrase “data is dirty” in my Guru presentation to “data is Randomly Incomplete”.  The panel is right after my Track presentation, so I rush up on stage with Avinash and John Q.  We’re late so Avinash starts right away; we don’t even have time to mention to each other what we will be presenting.

Avinash is riffing on Creating a Data Driven Boss and his Rule #2 is:

Embrace Incompleteness

Yikes.  That’s some coincidence, don’t you think?

But more importantly, do you think web data is dirty, Randomly Incomplete, or some other definition?  Because if there are no objections, I’m moving from “dirty” to “Randomly Incomplete” – at least when I talk with BI folks!


On the eMetrics / Marketing Optimization Summit

I had to bolt the Summit a day early to speak at the Direct Marketing Association annual conference in Chicago.  Too bad; the conference was humming and there was a ton of great content along with the usual great people.

The most interesting trend going on (for me, remember I favor a behavioral approach to marketing, online and off) is the killing off of e-mail subs once they become unresponsive.  The most excellent Jay Allen from Cutter and Buck kills them off at 6 months because he simply gets more pain than gain from mailing them – basically zero response and lots of spam complaints after 6 months dormant.  Reputation management, don’t you know. 

Hard to figure out why more people don’t do this, but I have a good guess – folks simply can’t (or don’t) segment behaviorally so they can’t really see where the sales come from.  If they could, they’d kill off the “haven’t opened in 6 months” subs too.  These e-mail “purge” practices are simply a manifestation of the reality of Engagement – there is a time-based predictive element that tells you when it is over. 

The smartest marketers will realize they can predict this degradation of the relationship and take action before it is too late – in other words, before 6 months of no opens.  Check with your (offline?) BI folks for any patterns that might be useful in managing these LifeCycles, hopefully they have seen these patterns before.  Use segmentation; source of customer is highly predictive of these patterns, as is entry / first content and first purchase product.
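As a sketch of what that flagging might look like – the subs table with last_open_date and source columns is hypothetical, and the 4-month warning threshold is just an example – you can mark both the subs to purge and the ones to hit with a reactivation effort before it’s too late, then report the rates by customer source:

    import pandas as pd

    def flag_lapsing_subs(subs, asof, warn_months=4, purge_months=6):
        days_quiet = (asof - subs["last_open_date"]).dt.days
        subs = subs.assign(
            # Past the purge threshold: more pain than gain in mailing
            purge=days_quiet >= purge_months * 30,
            # Approaching the threshold: act before the relationship is over
            reactivate=(days_quiet >= warn_months * 30) &
                       (days_quiet < purge_months * 30),
        )
        # Source of customer is highly predictive of these patterns
        return subs.groupby("source")[["purge", "reactivate"]].mean()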

Beware: the average LifeCycle of an interactive relationship is typically quite short compared with offline.  For example, catalogs can get decent ROI mailing all the way out to customers who have been dormant for 2 years.  In TV shopping, we considered folks dormant at about 6 months.  Online, the majority of the value is generated in the first 3 months.  Put another way, in catalog you get the classic 80 / 20 Pareto (80% of the value from 20% of the customers).  In TV shopping, it’s more like 90 / 10.  Online, 95 / 5.

In the end, this behavioral knowledge ties directly to the “customer experience” idea so many people comment about in vague prose but never quantify.  You have sales people, products, procedures, and business rules that create customers likely to defect.

Sure, you have online customers that stick.  But the percentage of those that stick is smaller, and since they generate huge sales volume, it’s incredibly important to pay attention to what they are doing behaviorally.  You can predict when they will defect by the parameters mentioned above; isn’t it your responsibility to take action on this knowledge?

For the Brand folks out there, Rachel Scotto from Sony Pictures also kills off her e-mail subs after 6 months of no opens, a rule that varies a bit with the type of list and topic (movie, TV show, etc.).  For her, Brand is everything and she simply does not want the negative experience of unwanted e-mails to tarnish the Brand.  If someone demonstrates through their behavior they are no longer interested, then why continue to send them e-mails?  Good question.  Brand folks, please respond.

Jay also had a great shopping cart recovery example.  They e-mail folks that abandon carts with a simple, subtle message featuring the product and no discount – and get fabulous response.  The folks sending discounts in this kind of program really need to do some controlled testing – they are giving away the store.

I’ve had a lot of positive feedback on my Summit presentations and I thank you for that.  Feel free to leave any comments or questions.

That’s it on the eMetrics / Marketing Optimization Summit from me.  Between WAA stuff and speaking / travel logistics I did not get to see many presentations, but the ones I did see demonstrated significant progress in grasping and leveraging visitor behavior.


On Engagement

I’ve had some bad luck with connecting to the web lately, trying to catch up on blog posts as the latest trip winds down.

The panel on Engagement at the WebTrends customer meeting was a lot of fun, probably best summed up as “productive friction” if forced to describe it with a phrase.

Based on comments from the audience, the panel was quite useful in terms of vetting some of the ideas floating around out there and answering their burning question, “Am I missing something here?  Why should I care about this engagement thing?”

This in itself is an interesting issue: generally, the audience perceives “engagement” as yet another buzzword of the week that, like most buzzwords, is simply another word for stuff most of the audience deals with all the time, namely customer service and retention – or customer “experience” if you prefer last week’s buzzword.  This was the insight I gained from the well-lubricated crowd at the party after the panel, so please take that fact into account as well.  Do people tend to say what they really think after a few drinks?  Or were they just tired of talking about web analytics the whole day?

Some of the more interesting discussion among the panelists actually took place right before and after the panel, when we had a chance to really first explain our positions and then challenge each other to defend them.  Great conversation.

For what it’s worth, here’s a breakdown of what I thought I heard being said.  My perception and reality may of course be different and I encourage participants to correct any misperceptions I may have had…

Andy Beal – as the only “generalist” on the panel, I think Andy was a bit steamrolled by the hard core “get the facts” thing web analytics folks do.  He maintained web analytics could measure only one area of customer engagement with a company (the web), and that you would never get the full picture of engagement because some of it is unmeasurable.  Probably true in a strict sense, though I bet there’s a lot that can be measured on the web through customer conversations and so forth.  However, we left this “can’t be measured” question to simmer, because the rest of the panel and the audience wanted to talk about web analytics so that was what we were going to do.

Anil Batra / Myself – I’ll go out on a limb and say our positions were very similar; I’m sure Anil will chime in.  Basically, the formula is this:

The difference between Measuring Activity and Measuring Engagement is Prediction.

In other words, when you start using the word Engagement, you are implying “expected” activity in the future, with this expectation or likelihood being valued or scored with a prediction of some kind.  Activity without an implication of continuity is simply Activity, it’s history and stands alone.  Same stuff web analytics has always done, nothing new.
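A toy illustration of the distinction we were drawing, with made-up visit histories: both visitors below have identical Activity, but a decay-weighted score – one simple stand-in for a prediction of expected future activity – separates the engaged visitor from the lapsed one:

    def activity(visit_days_ago):
        return len(visit_days_ago)  # pure history; no implication of continuity

    def engagement(visit_days_ago, half_life_days=30.0):
        # Decay past visits: recent activity implies expected future activity
        return sum(0.5 ** (d / half_life_days) for d in visit_days_ago)

    print(activity([1, 3, 7]), round(engagement([1, 3, 7]), 2))          # 3 2.76
    print(activity([60, 90, 120]), round(engagement([60, 90, 120]), 2))  # 3 0.44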

Jim Sterne – Jim was a bit more global in his thinking as you might expect, and seemed to be concerned more about how Engagement fits into the greater Marketing picture rather than looking to hang parameters on it.  How Engagement is related to Customer experience and Brand, how it does or does not turn into Loyalty, and so forth.

Gary Angel / Manoj Jasra – not sure either of these fine folks fully buys into the “prediction” requirement Anil and I support, though they might be talked into it.  Gary and I had a long conversation after the panel, which included June Dershewitz, where we traded examples and generally wrestled over what I would call the “advertising / duration conundrum”.

I maintain advertising is an outlier in this discussion, which is strange since those folks basically started this whole engagement thing and stoked the fire hard with the Duration variable that got web analytics folks in general so pissed off.  Not sure Gary or Manoj will ever accept Duration in any form as a measure of Engagement, where I maintain that if you isolate Advertising as a unique conversation, it makes a lot of sense.  The reality of buying online display ads is you need an absolute standard or the networks and buying process absolutely fall apart; you simply cannot look at a unique Engagement metric for every site or the buy would never get done.  So you hold your nose, say Duration is important to advertising as a metric, and do the deal.

In other words, there is a huge difference between being Engaged with a site and being Engaged with an ad on the same site.  These are two completely different ideas and unless you believe that Engagement with a site always spills over to Engagement with the ads on the site (I do not) then these two ideas deserve two different treatments.

June wanted to get into it all over again at the eMetrics Summit…feel free to post your comments here, June!


Airline Load Management by Search Results

From Vegas, Baby.

Scandinavian Airlines presents a case study today on how they are using Organic search results captured through the web site to forecast load management issues on the airline and help optimize the revenue management system. 

They track run rates on destination-oriented search phrases and noticed a correlation between spikes in destination search and sold out planes to those destinations a couple of weeks later.  Just to be clear, we’re not talking about on-site search or booking-engine data here but Organic search phrases coming in from search-driven visits.
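For the analysts out there, a minimal sketch of how you might verify that kind of lead-lag relationship, assuming two hypothetical daily series (destination search visits and bookings) indexed by date:

    import pandas as pd

    def best_lag(search, bookings, max_lag_days=28):
        # Correlate today's search volume with bookings N days later
        corrs = {lag: search.shift(lag).corr(bookings)
                 for lag in range(max_lag_days + 1)}
        best = max(corrs, key=lambda lag: corrs[lag])
        return best, corrs[best]

A peak around a 14-day lag would be exactly the “sold out planes a couple of weeks later” pattern.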

Of course, the web folks going in to pitch this idea of actually scheduling planes based on search results faced an uphill battle, just like the folks at Ford did with their production tweak suggestions based on visits to the web site car configurator.

But at some point, the repeated correlation over time (and the pain of money left on the table) could not be denied, and now destination search volumes are used as an input to the revenue management system to improve yield.

Kudos to Massimo Pascotto and team at Scandinavian Airlines for hanging in there, forcing the issue, and winning the metrics battle with the revenue management folks!
 


Lab Store: Web Merchandising

This is a bit of a rant against robotic thinking, best practices, and testing as the savior of all things web.  It comes after so many recent conversations with people at all levels of web analytics who are infatuated with the idea that robots / software and “best practices” are the answer to everything in web marketing.

To be clear, I don’t have anything against the poor robots or testing – it’s the people using them.

All the way back in 2000, Bryan Eisenberg and I wrote the Marketer’s Guide to E-Metrics – 22 Benchmarks because nobody was measuring or testing anything, and that was silly, especially when it was so easy to do.  Now, it seems web analytics has taken that mantra and run all the way to the other side with it – testing is Strategy, and Marketing is whatever the robots say it should be after the tests are done.

Yes, web marketing seems to be going IT-centric again.  Worked out well last time, didn’t it?

Here’s the bottom line: I have no doubt you can improve a faulty execution with a lot of multivariate testing, but the real question is this: if the execution is Strategically flawed, will you ever get where it is you want to go? 

I think not.

I’m sure you are convinced your Strategy is on target, based on conventional web commerce wisdom.  The following is a bit of unconventional web commerce wisdom for you to consider when you sit down around the table with your robots.

——–

The Lab Store – my wife’s pure online commerce business where I am Chief Product Assembler and also do a lot of marketing testing on the customer base – services the exotic pet customer.

It’s a very odd experience going to the pet trade shows for this biz to review merchandise and make purchases, on many levels.  The root of this oddness can be summed up this way: we buy narrow and deep, and most everybody else in the pet business – which means retail stores, and many online stores – buys broad and shallow.

We work with one of the largest pet supplies distributors on the East Coast.  At their show, we get a bit of a discount if we place orders directly with the vendors, which are then managed by the distributor.

As we place our order with this one vendor, he asks, “Did you know this order is nearly 40% of the entire annual volume we do on these SKU’s with the distributor?”  We chuckle, hearing this all the time.  “Yea, well we do sell a lot of them” is basically the only thing we can say.

Another common conversation goes something like this: “Are you sure you want that many of this SKU?  No offense, but this is one of our slowest moving products, and I just wanted to be sure the quantity was correct.”  And our response is always something like, “Really?  That’s one of our best sellers, it’s a great product.”

Narrow and deep.  We only sell what the customer buys – a little trick I learned at HSN (not sure how they do it now).

Kind of makes sense though, doesn’t it?  “Customer-centric”, as they say.  And we are not afraid to completely re-build / re-brand any product we think has potential but has simply not been marketed correctly.  Or to take a “poor selling” product and change the intended use of it, turning it into a best seller. 

In fact, we routinely rip off all the packaging a product comes with and create our own packaging and new name for the product.  Any online retailer who has done a great job marketing a product only to find it appearing in a competitor’s store at a lower price should understand exactly why we do this.  We absolutely love this kind of product.

Multivariate testing can improve the sales of almost any product, but can it turn a dog into a best-seller by completely rethinking it?  Nope, sorry.  Are there any “best practices” a human can follow to repackage a product successfully?

What, are you kidding?

Most pet stores stock a broad range of SKU’s and buy only a few units deep on each.  We buy only a few SKU’s and buy them as deep as makes economic sense – based on volume discounts, weight-to-value ratio (freight cost from the distributor), storage considerations (is the product large relative to its value), and so forth.

In other words, everything we do in the Lab Store is really based not on Sales, but on Productivity – how can we generate the greatest amount of profit for the least amount of time, money, and effort?  I realize this approach does not square with conventional wisdom, but the Objective of the store was to replace 1 income (my wife’s) with the least amount of effort possible.  If that is the Objective, then the Strategy is Productivity, not a focus on Sales.

For example, we turn our entire inventory 21.8 times a year.  I’m pretty sure most small (micro?) online retailers in our category ($1 – $5 million in annual sales) don’t care about that stat, but I’m also sure a few of the offliners out there are feeling their jaws hit the desk.  Most of them turn at 5 – 6 times, with the really good ones at 10 – 12.  This stat is one of the most important in retail; it’s an “inventory productivity” thing.  And it also points out the economic difference between narrow-deep and broad-shallow merchandising strategies.
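For anyone who hasn’t run into the stat: inventory turns are just annual cost of goods sold divided by average inventory value, both at cost.  A quick sketch with made-up numbers shows why narrow-and-deep turns so much faster than broad-and-shallow:

    def inventory_turns(annual_cogs, avg_inventory_value):
        # Standard retail formula: COGS / average inventory (both at cost)
        return annual_cogs / avg_inventory_value

    # Hypothetical numbers: same annual volume, very different stock levels
    print(inventory_turns(1_090_000, 50_000))   # 21.8 - narrow and deep
    print(inventory_turns(1_090_000, 200_000))  # 5.45 - broad and shallow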

I know this is going to sound insane to a lot of small online retailers, but in our online store, you do not find a lot of variety, and this is intentional.  What you find is the single very best product for each need a customer has.  And almost all products except commodities are priced that way – as the super-premium product in the category.  We carry the commodity stuff not because we want to, but because customers want access to it when they order from us.  It’s a Service decision, not a Product decision.

When customers ask, “Why don’t you have more variety?” we simply tell them we don’t see a need to offer anything but the best product for each need they have. “But don’t you have any cheaper ones?”

Notice, “variety” here is a code word for price.

“No, we don’t have cheaper ones.  You can find cheaper versions on eBay.  Or try a shopping search engine, if you are shopping only on price.  If you want products we have personally tested, are vet-certified for the particular exotic pet you are dealing with, and are absolutely guaranteed to satisfy your needs, we welcome your purchase.”

As a result, we clearly narrow the ability to attract a wide audience.  But we don’t want a wide audience.  We want an audience and a business we can easily defend against the constant price wars that are a reality of the web.  We knew that would be the evolution, and designed the business that way.  We want a Productive audience, one with high demand for the best, and a low Sales to Service ratio. 

Do you know your Sales to Service ratio (orders / service inquiries) and how to optimize it?  Do your robots?
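If you want to track it, the ratio is as simple as it sounds – the monthly counts here are hypothetical; a falling trend means service load is growing faster than sales:

    def sales_to_service(orders, service_inquiries):
        # Orders per service inquiry; higher is more Productive
        return orders / service_inquiries

    print(sales_to_service(900, 60))  # 15.0 orders per inquiry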

If we did an on-site survey, I’m sure a lot of casual visitors would complain the store “lacks variety” and is “over-priced”.  That we’re not being customer-centric, don’t you know.  But we are, for the customer we want – she wants a high degree of quality, professional one-to-one advice, extremely fast and accurate execution, all with no hassles.  The rest of these high maintenance, high variable cost “customers” who are buying single items on price and suck the life out of the business if you let them can go to hell.  Really.  Those shoppers looking for value, which we deliver through aggressive product bundling and flat rate shipping, find it in our store.  And those are the customers we want.

It takes nearly as much effort to process, pick, pack, ship, and service a $40 order as it does a $140 order.  Given that, we prefer to drive higher value orders, and all our marketing is set up to do just that.  We actively discourage low value orders by using flat-rate shipping.  It’s that Productivity thing again; it’s the Strategy, and the store was built from the beginning with that idea in mind. 

Please sir, can you multivariate test that idea for me?

For example, we don’t have a search engine on the site, because we want to force (sorry, I mean “encourage”) customers to look at all our products, and not to cherry-pick the product they originally came to buy.  We specifically and intentionally designed the navigation that way.  And since we have less than 80 products by design, it’s easy for customers to review every product we have very quickly. 

The end idea is ease of use by the customer.  We do it by having fewer products and really smart navigation, not by substituting technology to fix a broken execution.

I know this also sounds insane given “best practices”, but you have to realize that a lot of these “best practices” tests related to on-site search were done on sites with terrible navigation, screwed up product assortments, and lousy merchandising.  In that case, I’m pretty sure a search engine increases conversion.  In our case, a search engine did not increase conversion, but it surely did lower Average Order Value.

C’mon, do you think I didn’t test it?  Productivity again.

Can we get a multivariate test to confirm search improves conversion on poorly merchandised web sites?  Or can we just look at the site and know it will be true because the nav sucks?

One of the things we do very aggressively is cross-merchandise, bundle, and package.  We do it precisely and intentionally within the navigation, which is why a search engine doesn’t help us.  Our approach is not an automated system; it’s a carefully considered marketing decision based on known behaviors.  People who buy this will be interested in that.  We don’t need a computer to do that for us; all we need is intimate knowledge of the customer and some merchandising savvy.  This bundling and packaging doesn’t change; it uses the same format over and over (so the customer gets used to it), and the bundles don’t change dynamically – they are the same for every customer.

Could we have a more sophisticated system?  Sure, but at what cost?  Given we already know what drives buying behavior, we understand pricing theory, we attract a specific audience, and we know what they want, why do we need a machine?  What would the incremental benefit be relative to the cost?

The store itself was built with a $70 copy of FrontPage.  Our monthly cost for hosting and the MIVA Merchant shopping cart (which is all but hidden from the customer except at checkout) is $40.  When the package volume got to 15 boxes a day, we bought a back-end inventory / pack / ship label processing package for $500.  That’s it; that is all the infrastructure there is.  No employees.

Does the store look “slick”?  No.  Doesn’t need to.  Instead, it oozes personality from every pore – the product copy, the newsletter e-mails (which have no offers in them; we never discount or have a sale), the customer service communications – they all speak with one voice.  People adore the site; they think it’s the easiest-to-use site in the entire category.  People anticipate the newsletter and actually complain when they perceive it to be “late”.

Had any complaints recently from customers about not getting the newsletter?  How about the opposite?

The average product description on our site runs over 500 words – even for the most mundane products.  We tell you absolutely everything there is to know about a product.  I noted that the big thing on e-commerce retailers’ “to do” lists for 2008 is improving product descriptions.  Did they need a multivariate test to tell them that?

We have a no-questions-asked returns policy without a restocking fee.  We can do this because we anticipate product problems by extensively reviewing every product.  If a product is difficult to assemble, we assemble it before we ship.  If the assembly instructions suck, we re-write them and include them with the product.  Sounds like a lot of effort, until you find out we have a return rate of 3% on units and 1% on dollars.  Yea, it’s that Productivity thing again…

What is missing in web analytics today, with all due respect to both sides, is people who understand both the Marketing and the Technology aspects of web Behavior and Analytics.  Optimization is in the middle, not at the extremes.

Following “Best Practices” leads to commodity positioning, as everybody plays Monkey-See Monkey-Do (MSMD).  The constant benchmarking that is part of the IT culture is simply wrong-headed for Marketing; why does it matter what the other guys do, especially if they do a crappy job?  Do you take pride in the fact you benchmark better than some of the crappiest folks on the planet?  That your site / performance sucks less than theirs, but still sucks?

Do you have a Marketing Strategy, and do you execute in line with it, down through every fiber of the company?  Substituting brute force robotics or worship of MSMD best practices will never replace a great Strategy.  If you are at the point where all you can do is test things to death, perhaps you need to rethink your Strategy instead.

Please understand, I am not saying you should run your commerce operation like we do.  I’m just saying there are other, highly successful ways to do it and blindly following Best Practices and robotic testing – for any web operation, commerce or not – should be reconsidered.


Your Ad Here (Everywhere)

Seems like every day I hear about a new way to stick ads in front of people online or through a mobile device.

Every new business model is advertising-based and is going to attract billions of dollars.  Companies are out there buying other companies that are basically worth nothing for billions of dollars based on the promise of ad revenue.  This despite the fact (for example) social media advertising has really sucked – and is getting worse.  Plus, there’s the fact nobody will pay for social media services.

Further, ask yourself this question: what if social media advertising does suck and will always suck because it is simply always out of context?  To be clear, by context I mean not the content surrounding the ad, but the context of the end user.  If people hate seeing your ads while they are trying to do personal stuff, won’t the advertising always be ineffective?  Could it be that, fundamentally, the advertising model for this kind of content is flawed and will not get better?

Can you say GeoCities?

This situation reminds me of the dot-com “ads on your car” thing, which got so ridiculous that companies were actually giving away FREE CARS to people as long as they drove them around with ads on them.  How do you ever pay that back?  And how are those ads effective?  I guess there are a ton of marketing rubes out there who will buy any ad just to get the “exposure” – regardless of how out of context the exposure is.  Do they still sell ads on matchbooks?

But let’s not stop there.  For some reason I can’t get the economics of supply and demand out of my head.  If every single display surface online becomes a display ad, doesn’t that mean there will be an unlimited supply of online display advertising and so the value of online display ads will drop close to zero?  Perhaps a lot closer to the economic value most of these ads provide?

You tell me.

At least with eyeblack advertising, there is a limited supply – teams on televised sporting events (TV is actually the medium, not the eyeblack).  That is, until somebody comes up with the idea of paying people to wear eyeblack ads – which can’t be too far away, can it?

Hey, I have an idea…want to make a billion dollars?  Know any marketing people just dying to buy this kind of “walking around” media?  Fortunately, I think the buy side has pushed back and is a lot smarter as a whole.

The above is not to say that specific exotic media will not work for certain very targeted applications.  The problem is in thinking any of this media is “mass” in nature, that it will be able to move the needle.  If the applications for this advertising are very narrow, then only certain narrow portions of the inventory have value, meaning the value of the companies is a lot lower than what is perceived.

Personally, I think the same thing will happen in mobile.  The killer advertising app for mobile is search, not display or audio, whether geo-intelligent or not.  Search fits the context of the user, just as search does online.  Free mobile services if you listen to an ad first?  C’mon folks, that model has been played and played online and it never works.  The combination of audience quality and the notion of being “forced” to pay attention do not equal great advertising results.

Your Ad Everywhere, as a whole, is an economically broken business model that delivers little value to either the advertiser or the audience.  Let’s just stop creating business models based solely on delivering display ads to people.

I suggest to you the test for the viability of a “network effect” display ad business model is very simple: ask the audience, would you pay for this service / application / access?  If the answer is no, the audience is not a viable advertising audience.  If the answer is yes, then you can look for ways to reduce billing by introducing the right kind of advertising.  This means, of course, that these networks will be much smaller, but they will have a high quality audience worth advertising to.

If you start with free, you have already poisoned the audience for any ad model relying on “impressions”.


Speaking Schedule, WAA Projects, etc.

It’s been a ruthless couple of weeks, with tons of Web Analytics Association work on top of the usual client / Lab Store stuff.  Why do the folks in the pet supply industry change packaging and labeling going into the holiday season?  That’s nuts, if you ask me, unless you think all your customers are offline stores – which I guess most of them are.  Still, there’s a large enough mail order pet business out there you would think the suppliers would catch a clue or two.  I have plenty to do during the holiday season without having to re-write copy and re-shoot photography…

Anyway, the weeks that were.  First was a WAA Webcast on Money, Jobs and Education: How to Advance Your Career and Find Business Opportunities (site registration required, but you don’t have to be a WAA member) to get ready for and execute.

And there was the ongoing wrestling match to establish a framework for higher educational institutions to create course offerings in Web Analytics, leveraging the course content the Web Analytics Association has developed.  Very tricky stuff dealing with these Higher Ed folks, but we think we have it figured out.  The WAA’s first partner in this area will be the University of California at Irvine – not a bad start, methinks.

Then of course, it’s Conference season.  I’m going to be on a “Measuring Engagement” panel at WebTrends Engage October 8-10.  The following week is of course the eMetrics Marketing Optimization Summit, where I will be doing a conference presentation in the Behavioral Targeting Track and then sitting on a no-holds-barred “Guru Panel” with Avinash Kaushik and Bryan Eisenberg immediately after.

Part of getting ready for the Summit this year was a review of the WAA BaseCamp teaching materials, a pretty substantial piece of work all by itself.  We’ve done some tweaking based on comments from students in previous classes.

Unfortunately, I have to split the Summit right after the Guru panel for the Direct Marketing Association Conference in Chicago, so if you’re going to eMetrics and you are looking to chat with me, make sure you hit me up before my presentation Tues at 1:30 PM (I will be there Sunday 10/14 @ 4 PM for the WAA meeting). 

At the DMA, I’ll be doing a presentation with fellow web analytics blogger Alan Rimm-Kaufman in the Retention & Loyalty Marketing Track called Smart Marketing: Advanced Multichannel Acquisition and Retention Economics.  Control groups, predictive models, oh boy.

The next day, I’ll still be in Chicago doing a real “stretch event” at the invitation of Professor Philippe Ravanas of Columbia College Chicago for The Chicago Community Trust.  Nine (9!) non-profit arts groups are battling for grant money to help execute their marketing plans, and yours truly is going to vet those plans and teach donor / membership marketing in a live format – with all nine institutions exposing their guts to me and each other – in real time!  Budgets, response rates, web sites, direct mail, newspaper, radio, database marketing, it’s all on the table.

Should be a real kick – if I survive the format, that is.  As a musician, I have always had a great interest in arts / donor marketing and this will be a great opportunity to interact directly with the folks in the trenches.

So, I apologize for the lack of posts the past couple of weeks as we now join our regularly scheduled life (in progress).
