Monthly Archives: February 2007

Profiling Library Customers
Drilling Down Newsletter 2/2007

Update:

Robert just checked in with actual data, click here.

The following is from the February 2007 Newsletter.  Got a question about Customer Measurement, Management, Valuation, Retention, Loyalty, Defection?  Just ask your question.  Want to see the answers to previous questions?  The pre-blog archives are here.

Profiling Library Customers

Q: I work for a local council in England and was recently asked to provide some local demographic profiles to help our local libraries market themselves more effectively and hopefully increase book loans.

A:  Hmmm…this is definitely the first time for this question!

Q: I’m no marketer, I usually muck about with crime, economic and census data, but this seemed instinctively wrong to me after reading how Tesco had used its Clubcard data to understand its market.  When looking for some help, I’ve obviously found your website, bought the book, got the software.

A: I would have to agree with you on the Tesco point.  If you want people to “do something”, you look at behavior.  If you want people to take out more books, you look at book loan behavior.

Q: Firstly, I just wanted to thank you for making your ideas so accessible, actionable and easy to understand.  I’ve picked up some other marketing textbooks for help and they seem to mainly consist of dry schematic diagrams, and bland statements.  Great for a degree I’m sure, not so great for the rest of us who have three weeks to write a report on the subject!

A: Well, thanks for the kind words.  That was the intent of the book – to give people the “how to do” as opposed to the “what to do”.

Q: Secondly, have you any advice or experience of this model working in the non-profit sector, or specifically in libraries?  For example, predicting life cycle / trigger points seems a little more complicated than the examples you use.  People don’t seem to stop using the service gradually, but stop abruptly and then start up again without any warning.  I’m also dealing with thousands of records.  My data seems a lot less clear cut than the examples you talk about.

A: I’d agree it’s not a “clear cut” situation, but not because of the models or the channel.  This kind of behavioral profiling has been used offline for decades, and it works in all kinds of situations.  For example, you mention crime data, so you have probably seen that the more Recently someone has committed a crime, the more likely they are to commit another.  Not that you can really take any action on that information – you can’t lock people up for being “likely to commit a crime” – but interesting just the same.
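To make the Recency idea concrete, here is a minimal sketch in Python – the card numbers and dates are invented for illustration – that ranks customers by how recently they last took a book out on loan, the top of the list being the most likely to borrow again:

```python
from datetime import date

# Hypothetical example: last loan date per library card number
# (card numbers and dates are invented for illustration)
last_loan = {
    "card-001": date(2007, 1, 20),
    "card-002": date(2006, 6, 5),
    "card-003": date(2006, 12, 1),
}

today = date(2007, 2, 15)

# Recency in days; smaller = more recent = more likely to borrow again
recency = {card: (today - loaned).days for card, loaned in last_loan.items()}

# Rank customers from most to least likely to borrow again
ranked = sorted(recency, key=recency.get)
print(ranked)  # card-001 (most recent) first, card-002 (least recent) last
```

The same one-variable ranking works for any repeated behavior – loans, terminal logins, event attendance.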

And I think you face a similar challenge.  You can run the scoring models and generally predict who is likely to slow or defect from their book loan behavior, but the question is, what do you “do” about that?  I have worked in other educational situations (likely to graduate, likely to contribute to the school after graduation) where the incentive is not straightforward but nonetheless you can create incentives to encourage people to continue their behavior.  I’m struggling a bit with how to create one in this situation, since the product itself is free.

But before we tackle that issue, I want to run through a bit of a “model” for this “business”.  It seems to me you have a market or segment shift going on.  If the primary reason people go to the library is to research a topic, clearly access to the Internet has suppressed the need for people to take books out on loan from the library.  For example, many of the trade journals that used to be hard to access or expensive to subscribe to are now available on the web.  So you have this “research” segment you have to deal with.

There no doubt is another segment, “core readers”, who simply for the love of reading visit the library to discover books and read them.  This segment is probably what I would call “good customers” because the library provides a service to them they cannot get elsewhere; the “value proposition” of the loan program matches their needs precisely.  This is in contrast to the “research reader”, who now can do a lot of research from work or home on the web.

For research readers, a possible alternative would be providing access to internet terminals in the library.  But now we’re starting to encounter a different definition of “customer” and “loan a book”, right?  Let’s say, for a library, the “profit” in the venture is the “contribution to the community”, and this contribution was always measured in the past by “books out on loan”.  This metric has been the library’s KPI (Key Performance Indicator), if you will.

Let’s also say that many libraries have installed internet access terminals, as they have in the US.  Because of these terminals, you would expect that “books out on loan” would fall for the “research readers” segment, correct?  So to get a proper valuation of the contribution the library was making to the community, you would have to look at “books on loan + number of web terminal uses” to approximate the old metric “books out on loan”.

Follow?  So let’s say the real issue at hand here is the local government is trying to be “accountable” for what they spend on libraries, and they measure the “profit” of this spending by looking at books out on loan.  The library administrators are feeling some kind of pressure to “serve the community better” (increase “profit”) because books out on loan have fallen.  The problem is that “books out on loan” is no longer a viable metric – the “base” has changed, if you will.  The research segment is being served through a new method – web terminals – and this has been overlooked in terms of measuring the “contribution” the library is making.

In terms of tracking, if there is no login required to use a web terminal, someone in the library simply needs to count the number of people in a week that use the terminals, multiply by 52 weeks, and use that as an approximation.  Better would be a system where in order to access a terminal, you have to enter your “library card number” or some other unique identifier tied to the person or household.

In this way, you find out more about your segments:

Researchers = only log into the computer
Core Readers = never log into the computer
Multi = both log into the computer and take books out on loan

These “multi’s” would typically be the very best customers, since they are engaging in more than one library offering.
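The segment definitions above boil down to a simple classification rule.  Here is a hypothetical sketch – the card numbers and flags are invented, and a real system would derive the two flags from the login and loan records:

```python
def segment(logs_in: bool, takes_loans: bool) -> str:
    """Classify a library customer by which offerings they use."""
    if logs_in and takes_loans:
        return "Multi"
    if logs_in:
        return "Researcher"
    if takes_loans:
        return "Core Reader"
    return "Inactive"

# Invented customers: (card number, used a terminal?, took books out on loan?)
customers = [
    ("card-001", True, True),   # Multi - typically the best customers
    ("card-002", True, False),  # Researcher
    ("card-003", False, True),  # Core Reader
]

for card, logs_in, loans in customers:
    print(card, segment(logs_in, loans))
```

As the library adds trackable offerings (events, special services), the same idea extends to counting how many distinct offerings each customer engages in.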

Above this, the library probably offers other types of services and special events.  The more of these services and events a customer engages in, the more valuable the customer is.  These are the customers the library should strive to keep active.  And like the example of books on loan, if the “value” of the library is only being evaluated on books on loan, attendance at these other services and events really should be included in the evaluation in some way.

The landscape has changed, and it’s quite possible that the metrics have not kept pace.  Perhaps it is not your place to suggest this, but as the “evaluator”, I would certainly be curious about this metric “books on loan” and make sure it accurately reflects what people think it does and is serving the administrators in the way they think it is.

Now that we have a feeling for what the background might be, you still have the issue of what do you “do” about customers who appear to be defecting?  As I said, the behavioral models will give you this information, but then what can be done with this information, especially given what are probably fairly strict budget constraints?  It’s not like you can send them a discount on their next book loan!

In situations like this, I think the best alternative might be survey work.  That is, when the behavioral models identify customers who are likely to defect or have defected, the library simply asks them about it.  For example, the library conducts a telephone survey using a sample of these people – “We noticed you have not taken a book out on loan / used a terminal in the past 3 months, is there anything we have done to offend you?  Is there a particular kind of content you are now interested in that we do not provide?  What other events or services would you like to see us provide?”  etc.
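Pulling the survey sample itself is straightforward once the tracking data exists.  Here is a rough sketch, using invented activity records, that selects everyone with no loan and no terminal use in the past 3 months:

```python
from datetime import date, timedelta

# Invented activity records: last loan and last terminal login per card
activity = {
    "card-001": {"last_loan": date(2007, 2, 1), "last_login": None},
    "card-002": {"last_loan": date(2006, 9, 1), "last_login": date(2006, 10, 1)},
    "card-003": {"last_loan": None, "last_login": date(2007, 1, 15)},
}

today = date(2007, 2, 15)
cutoff = today - timedelta(days=90)  # "in the past 3 months"

def last_activity(record):
    """Most recent of the customer's loan / login dates, or None."""
    dates = [d for d in record.values() if d is not None]
    return max(dates) if dates else None

# Survey sample: no loan AND no terminal use since the cutoff
sample = [card for card, record in activity.items()
          if last_activity(record) is None or last_activity(record) < cutoff]
print(sample)  # only card-002 has been inactive for over 90 days
```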

And where possible, the library should respond and provide what these customers want.  It may not be able to keep these particular customers from defecting (too late), but over time the “mix” of content and services should improve in a way that attracts and retains high value library customers.

I humbly suggest that the above approach will be far more effective and less costly than a “CRM system”.  If the library doesn’t have one, what is really needed is a “tracking system” that simply keeps track of what resources of the library each customer is using.  This will make your models much more reflective of the true economic benefits provided by the library, and give you the customer samples you need to construct effective, segment-targeted retention programs.

This kind of work also provides very good tracking for “how we’re doing” as a library and can provide excellent justifications for budgets and new requests, because it is directly tied to what customers want – no guessing games.  Over time, budget should flow to the areas more desired by customers and away from areas that are perhaps less desirable or are “pet projects” of interested parties.

Q: Once again thanks, I can at least begin to show management that we should start doing this work in Access / Excel before spending on a CRM system, but any further advice would be appreciated.

A: Well, I don’t know much about how libraries really work, but I’ve given you my best guess as to what might be going on.  Hope that helps!  And do keep in touch on this as you get into the guts of it, should be a very interesting project!

Jim

Update:
Robert just checked in with new info on progress on this idea, see here.


“Scrap Learning”

New phrase I’d never heard before: “Scrap Learning” is training delivered to students that goes unused.  For example, employees sign up for training that has nothing to do with their job just to get out of working, or are forced into this situation by some misdirected “mandate”.  Or they are forced to take a course so far ahead of when they will need the knowledge that they forget it by the time an on-the-job use opportunity rolls around.

Apparently this is a pretty big problem in large companies and wastes millions of dollars in Training budgets every year.  Reducing / Eliminating Scrap Learning is one way to optimize Training budgets for maximum ROI; if you can get rid of Scrap Learning, you spend a lot less money and get virtually the same impact.  Kind of like segmentation in Database Marketing, right? 

Some estimates of Scrap Learning run as high as 50% of all Training delivered.  One of the easiest ways to reduce Scrap Learning is to simply trigger an e-mail to the employee’s supervisor and get confirmation the employee who signed themselves up really needs to take the course.  Hmmm…

Here’s the problem.  The success of many Training departments is measured by “volume” metrics like the total number of student hours consumed.  If that is your success metric, do you have any incentive to reduce Scrap Learning?

Just got back from the Training 2007 conference.  All sorts of stuff like the above is whizzing around in my head.  The “experts” say one way to drive Creativity is to Learn material outside your own knowledge domain, then try to connect this knowledge back to your own domain.  I have that going on in a big way…

Fear of Analytics is a huge problem in HR / Training; I’m beginning to think this is a pervasive problem across the entire enterprise.  We know this is a culture problem, but the question is, what is the right model for evaluating, addressing, and fixing this problem?  The issue has been addressed here and there for specific applications, but as far as I can tell, not in any kind of comprehensive way.  What can we do about Scrap Learning?  Scrap Marketing?  Scrap Sales?  Scrap Service?

And then think about what this looks like from a cross-silo view: the inter-departmental Scrap, the Scrap created because metrics are not calibrated across departments, so the success metrics of one department conflict with the success metrics of another.

Brother, that’s a lot of Scrap.  Are your metrics aligned with your mission?  Or are you incented to produce a lot of Scrap?


Déjà vu (All over Again)

The top issue in Training today is:

Accountability

Execs want to know what the “ROI of Training” is.  To find out what the ROI of Training is, one should create:

KPI’s – that’s Key Performance Indicators, in case you didn’t know.

To facilitate the use of KPI’s – to provide something to measure – one should design Training so rather than being Content-based, it is Performance-based.  In other words, the Training should be designed to have a measurable outcome.

Another way to say this is the Training should have a clearly defined Goal which directly addresses the “Gap” between actual performance and desired performance.

Geesh…and I thought Marketing was up the creek…these folks are just getting started.


I’m at Training 2007 (the Conference)

Yeah, I know, kind of weird.  What the heck is a Marketing / Web Analytics guy doing at this event?

The Training Conference and Expo is the largest conference of training professionals in the US.  It’s the first conference I have been to in 10 years that I’m not speaking at.  Probably the first conference I have been to in 20 years where just about everybody knows more about the topic than I do. 

And I have to tell you, that’s incredibly refreshing. 

I’m thinking I have to do this more often!  After all, what exactly is the point of going to conferences on material you already have deep knowledge of?  Unless it is to present, of course…

I’m here on behalf of the Web Analytics Association scouting out vendors to administer the Certification test we are developing for web analysts, and to learn everything I can about best practices in Certification.  One of the challenges is we are looking to certify folks not on “software” related issues like implementation / set-up (the vendors do a fine job here) but on the business side, where the issues are often not as quantifiable as they are in software land.  So we need a vendor that can work with us on a more flexible testing methodology than many are used to.  If you have any suggestions / advice on certification test vendors, let me know.  There are 9 vendors here.

Here are some interesting things I have learned so far:

1.  Virtually none of these Training / HR folks have ever heard of web analytics before.  They have no idea what the heck I am talking about, or that web analytics people even exist from an HR perspective.  The typical response is “we could have used somebody like that when we were setting up our Intranet … what is their typical job title and who do they report to?”

2.  The primary model used in training course development is called ADDIE.  It stands for:

Analyze
Design
Develop
Implement
Evaluate

which is a formal sequence of tasks where “Evaluate” has an arrow looping back up to the top pointing to Analyze, meaning you repeat the sequence and there is a continuous improvement process.  Hmm, that sounds kind of familiar, where have I seen this before?  Perhaps filed under Best Practices for web site development?

3.  Lots of the communication and behavioral models used in Marketing are used in Training – Training is in many ways a specialized kind of Marketing.  I initially thought I was dead wrong about this but when I put forth the idea, nobody threw me out of the room or called me a Newbie.  So I think there is something worth exploring about this parallel, especially since e-Learning delivered through web interfaces is a big deal to these folks.

More to come as the event unfolds…


More Trouble for Unique Visitors

I’m minding my own business and McAfee wants to “Update”.  I think this is a simple update of the virus database and even though I am very busy doing something else, I go for the update.  So of course, without warning, I am treated to a monster update of the entire McAfee program, complete with all kinds of FUD links that lead to very poorly executed landing pages.  Terrible customer experience, and that’s what I was going to post about.  But since everybody probably sees the same thing all the time, I think the following sequence will be much more interesting.

Being a web analytics freak, about 30 minutes after the install, I checked out the way the new McAfee program handles cookies.  You guessed it:


“Scan and remove tracking cookies” is automatically activated on install.  Then I go into the “Quarantine” section and here is what I find:

All my “tracking cookies” have been Quarantined.  Certainly looks like all the major ad-serving networks are represented, and what looks to be a bunch of Overture conversion cookies.

This cookie crunching doesn’t mean much to me because I don’t build any very important (KPI level) metrics using “Unique Visitors” as a base.  For one thing, I was doing web analytics before cookies were pervasive and I’m comfortable using “Visits” or “Sessions” as a base (Sales per Visit, for example, as opposed to Sales per Unique Visitor).  The other reason is that you simply cannot get an accurate Unique Visitor count, meaning there’s a lot of “noise” in the number.  I don’t like basing key performance metrics on a noisy base number, it’s asking for trouble.  And it appears this cookie situation will be getting worse over time – worse than it already is with all the anti-spyware scrubbing of cookies, the firewall problems, and so forth.

Yet I know a lot of people base everything they do on Unique Visitors because it “makes more sense to management” and it’s “more logical” and so forth.  Fine.  Here’s what is going to happen.  The cookie block / erase / quarantine problem is going to artificially increase the number of Unique Visitors you are getting to the site.  You’re not getting more, it will just look like you are due to loss of tracking at the Unique level.  This means Sales per Unique Visitor, for example, will start falling over time even though in reality, based on actual Unique Visitors (which you can’t measure) it may be staying the same or rising.

My advice to you is to start shadow tracking now using Visits or Sessions as the base in your most important metrics, the ones you are on the hook for.  You don’t have to show them to anybody, just keep track of them in Excel or something and note the trends.  Then when you start seeing your Unique Visitor based metrics collapsing on you, you can whip out the Visit / Session based metrics and say, “See!  See!  It’s really not happening!  We’re doing much better than you think!”
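Here is a toy illustration of the effect, with invented numbers: cookie deletion inflates the Unique Visitor count over time, so Sales per Unique Visitor appears to fall even though Sales per Visit – the shadow metric – holds steady:

```python
# Invented monthly figures: cookie deletion inflates "unique visitors"
# over time while visits and sales stay flat
months = [
    {"month": "Jan", "sales": 50000.0, "visits": 20000, "uniques": 10000},
    {"month": "Feb", "sales": 50000.0, "visits": 20000, "uniques": 12500},
    {"month": "Mar", "sales": 50000.0, "visits": 20000, "uniques": 16000},
]

for m in months:
    sales_per_visit = m["sales"] / m["visits"]    # stable base
    sales_per_unique = m["sales"] / m["uniques"]  # falls as uniques inflate
    print(m["month"], sales_per_visit, sales_per_unique)
```

Nothing about actual customer behavior changed in this example; only the measurement base did.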

Makes more sense to management, indeed.  Until you try to explain why Management should now believe your Visit-based metrics instead of the Unique Visitor based metrics.  Good luck on that one.

(Note to Ron: This subject makes me very Cranky, could you tell?)


***** The Medium is the Metric for Online Ads

Article with the title above, published here, is apparently creating quite a stir in the advertising media community, particularly among the “brand” folks.  One of the core points is that all media will become measurable and thus “accountable” in terms of the effect ads placed in the media have.  While I’m not sure that’s going to happen in my lifetime, the initial thrust is that online media better get their crap together in the measurement area and define some standards, because the new “Agency of Record” is going to be an analytics shop that measures, in a centralized way, the effectiveness of all advertising a client is running.  The unspoken implication here is this agency would essentially have the power to fill or kill any campaign based on performance.  Neat idea.  Two comments:

1.  I was selling cable television ads in the mid 80’s when Nielsen, despite intense pressure from the broadcast networks, started metering cable homes “in the box” (wired into the set top controller).  When the first hard numbers came out, they absolutely blew away all the estimates of the cable viewing audience.  Turns out a lot of cable viewing was not captured in the paper diaries, and the meters picked it up.  Go figure.  You mean what people report to you in a survey doesn’t reflect their actual behavior?  C’mon, that can’t be true (being sarcastic for those who don’t know me).  Anyway, network cable advertising absolutely exploded after this, and all this money fueled better programming.  That’s when “Big 3” share really started to tank.

In other words, we have seen this movie before.  Money follows accuracy.  Instead of resisting this idea, these brand folks ought to embrace it and hang on – it’s going to be a wild ride.

2.  The idea of a central agency being the “Master Record Keeper” is an absolute must, since if each agency runs its own success metrics, each agency will claim success that really belongs to another agency.  You need to have a source of the “one truth”.  I have argued this same point many times with companies that have analysts spread out into each of the silos.  While it is possible this could work, you would need an iron fist to enforce consistency and remove the tendency of an analyst to paint a better picture of the silo his boss runs.

Just trying to make sure every silo is being honest would take a huge amount of work – why not just centralize it in the first place, and do the work once?  If all the analysts report to a CAO who basically is a 3rd party with no axe to grind, then the CEO is going to get the straight picture – including all the cross-silo effects, which is usually where all the ROI is hiding.  For example, you are not going to get an analysis that includes the “true cost” of a marketing campaign that causes all kinds of problems in customer service from an analyst in the marketing department, it’s just not going to happen.  You need a 3rd party view to get to the Root Cause and start fixing broken processes that affect the customer experience and waste a ton of money.  There are very positive benefits to having a group of analysts who each are experts on a single piece of the company under one roof, interacting and discussing business issues.  That’s how you get breakthrough thinking, how you fix the broken cross-silo processes that drive customers crazy.

It’s great that we are all becoming more accountable, but let’s get down to the meat of the matter and kick analytics up to the C level and out of the silos.  How long will it take before the CEO finds out the silo analysts are “torturing the numbers”?  Do you want to be there when it happens?  It’s not pretty, let me tell ya.  I’ve seen it.

In case this doesn’t make any sense to you, here is an example.  How many customers does your company have?  Ask 5 people, you will get at least 3 different answers (if you get any answers at all) and they are all probably wrong.  The proper answer is “that depends on how you define a customer”.  Now, picture each silo with their own set of KPI’s, many based on the “number of customers” in some way and you start to understand what I am talking about.

Like I said, not pretty.  A full-on, CEO’s beating forehead vein kind of thing, a “We’ve been telling Wall Street we’re running this company based on analytics and now you tell me that we can’t even agree on how many customers we have?” kind of meltdown.

Maybe you should start thinking about centralizing your analysts now?  Or at least talking about it?


**** New Look at the Org Chart

Article here, from MultiChannel Merchant. 

Interesting theory on why the internal organization of multi-channel merchants differs (based on their roots) and some good discussion of reporting chains.  Personally, when it is all said and done, I think you end up with customer service reporting to Marketing – how else could a company possibly become customer centric?  I mean really, do you want success measured by talk time (Finance / Ops) or increased value of the customer base (Marketing)?

This depends, of course, on finding Marketing folks willing to step up to the table.  I know where you can find such Marketing folks – in the catalog industry.


*** Marketing and Finance: From Adversaries to Allies

A good top-level article on how Marketers can build a strong relationship with Finance; check it out here.  Pretty simple really, but once again, you have to stretch a bit, and learn to speak the language (as you should be doing with IT).

If you’re struggling to find a reason to begin this relationship with Finance, I can provide you with a conversation starter.  I think you will find it opens the door to a wide ranging discussion with Financial people about how marketing should be measured and what the contribution of Marketing is to the Company.  The basic idea is to take the periodic accounting approach that Finance uses and twist it around to a customer accounting view.  Check it out!


Customer Accounting: How to Speak Finance

Let’s say you have decided to build a relationship with the CFO or a peer in Finance.  How do you get started?  Here are two report concepts and charts that will give you much more to talk about than you can squeeze into one lunch.  By taking Finance’s own numbers (Periodic Accounting) and recasting them into the numbers that matter for Marketing (Customer Accounting) you create a very solid bridge and basis for building out a plan.  Note to yourself: And the plan is?  Make sure you think about that first…how can you help Finance / the Company achieve their Cash Flow and other Financial goals?

Report 1: Sales by Customer Volume

Core Concept: The idea here is to decompose a CFO’s financial quarter (or any financial period) into the good, better, best customer volume components that make up the financial period.  It’s a “contribution by customer value segment” idea.  Benefit: Graphically demonstrates to the CFO the “risk” component of customer value in the customer portfolio and supports the idea Marketing could mitigate financial risk by “not treating all customers in the same way”.

Take any periodic statement time frame – a month, a quarter, a year.  Gather all the customer revenue transactions for this period, and recast them into the total sales by customer for the period.  Decide on some total sales ranges appropriate to your business, and produce a chart on the percentage of customers with sales in each range, including non-buying customers, for the chosen periodic accounting time frame.  For example:

[Chart: Sales by Customer Volume]

Run this report each period, and compare with prior periods.  In general, you want to see the percentage of customers contributing high sales per period to grow over time, and the percentage of lower revenue customers to shrink.  This means you are increasing the value of customers overall.  If the numbers are moving the other way, this is the type of customer value problem you would expect CRM or a smart retention program to correct, and if you are successful, you should see the shift in customer value through this report.
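A minimal sketch of Report 1, using invented transactions and revenue bands (choose ranges appropriate to your own business):

```python
from collections import Counter

# Invented transactions for one period: (card number, sale amount)
transactions = [
    ("c1", 40.0), ("c1", 25.0), ("c2", 10.0), ("c3", 200.0), ("c3", 150.0),
]
all_customers = ["c1", "c2", "c3", "c4"]  # c4 bought nothing this period

# Total sales per customer for the period
totals = Counter()
for card, amount in transactions:
    totals[card] += amount

def band(total):
    """Assign a customer's period total to a revenue range."""
    if total == 0:
        return "no purchase"
    if total < 50:
        return "under $50"
    if total < 100:
        return "$50-$99"
    return "$100+"

# Percentage of customers (including non-buyers) in each band
bands = Counter(band(totals.get(c, 0)) for c in all_customers)
for name, count in bands.items():
    print(name, f"{100 * count / len(all_customers):.0f}%")
```

Run against each period's transactions and compare the band percentages over time, as described above.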

Report 2:  Sales by Customer Longevity

Core Concept: This report is a “Flashcard”, if you will, that demonstrates the Customer LifeCycle.  If you have trouble communicating complex LifeCycle / LifeTime Value concepts to Financial people, this Flashcard takes their own numbers and decomposes them into a vivid picture of why the LifeCycle matters.  Benefit: Opens the door for your budgets to be determined by different metrics than are currently used; what good is a “quarterly budget” when the underlying customer issue can be much more dynamic?  Wouldn’t the CFO like you to “do what it takes” in the Current Period to preserve profits in Future Periods?

Take any periodic statement time frame – a month, a quarter, a year.  Gather all the customer revenue transactions for this period, and recast them relative to the start date of the customer.  In other words, when looking at the revenue generated for the period, how much of it was generated by customers who were also newly started customers in the same time period?  How much was generated by customers who became new customers in the prior period?  How about two, three, and four periods ago?  More than 4 periods ago?  Depending on the length of the period you use, you may end up with a chart looking something like this:

[Chart: Sales by Customer Longevity]

You can run this analysis at the end of each period and track the movement of customer value in your customer base.  Generally, you want to see increasing contribution to revenue from customers in older periods, meaning you are retaining customers for longer periods of time and growing their value.
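A minimal sketch of Report 2, again with invented figures – each customer's revenue for the period is bucketed by how many periods ago that customer started:

```python
from collections import defaultdict

# Invented figures: each customer's start period (0 = current period,
# 1 = one period ago, ...) and their revenue in the current period
start_periods_ago = {"c1": 0, "c2": 1, "c3": 3, "c4": 6}
revenue_this_period = {"c1": 100.0, "c2": 250.0, "c3": 400.0, "c4": 150.0}

# Recast the period's revenue by customer longevity
buckets = defaultdict(float)
for card, revenue in revenue_this_period.items():
    ago = start_periods_ago[card]
    label = f"{ago} period(s) ago" if ago <= 4 else "more than 4 periods ago"
    buckets[label] += revenue

total = sum(revenue_this_period.values())
for label, revenue in buckets.items():
    print(label, f"{100 * revenue / total:.0f}%")
```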

If this kind of idea interests you, the full background on explaining the LifeCycle / LTV to Finance is here.


Reporting versus Analysis: The “Actionable” Debate

Gary Angel and Eric Peterson have been having a great exchange surrounding the definition of KPI’s, and more specifically, the requirement that they be actionable.  Gary started out with the position that the “criteria of actionability is unsound in almost every way” but I think both he and Eric have resolved in the middle somewhere – it’s really about context.  Gary is right: to take any metric “naked” at face value without surrounding context is simply not good analytical practice.  But I would argue (and I think Eric agrees) that to build a KPI in the first place, you must already have the required context, or you don’t have a KPI.  So that leaves us with “how you define a KPI” as (I think) the final resting point, and there really isn’t anywhere to go after that.  Your comments on my analysis welcome.

However, I think the ideas Gary has exposed run deeper than just the KPI discussion.  The situation Gary is addressing – making sure people really understand that every metric requires business context to be functional – requires attention because web analytics is a very fast growing field with a lot of brand new people in it who may not have been exposed to proper analytical training, or have not been challenged to do any real analysis by weak managers.

These new people frequently don’t understand the difference between Reporting and Analysis.  A “Reporting” mentality (provide data) leads to the improper use of analytical ideas like KPI.  Analysis (provide insight) would automatically take into account a lot of other factors, as Gary has suggested.  Knowing all those factors (because you are doing real analysis), you can certainly take movements in a KPI as actionable.  As Eric says, that “action” is often a more focused analysis of some kind.  KPI’s are really just “tripwires” that alert you to a problem or opportunity that requires further analysis.

My concern (and in the end, I think Gary’s) is that often the Reporting mentality is Robotic and that the reaction taken to change in a KPI might be equally Robotic if you don’t have the proper context.  What often happens in Pay-per-Click testing is a great example of this, and a lot of the multivariate stuff people are now addicted to is an extreme example. 

You can look at conversion rates, make changes to landing pages, and try to optimize the “Scenario”.  This is Reporting, not Analysis.  Can you provide insight into why the changes you made worked?  For example, can you explain the improvement in terms of Psychology or Consumer Behavior?  Usability?  If so, that would be Analysis, and the answers would be applicable to a wide range of other challenges on the site.  Without knowing why the changes worked, you are left with simple Reporting that applies to only a single specific Scenario.  Nothing was really learned here.

Take this same idea to the extreme, and you get what often happens in multivariate testing.  You can certainly run a multivariate test on 5 variables at the same time, and find a “winning combination”, but this is Reporting, not Analysis – in fact, it’s black-box reporting in the extreme.  For example, how do you know that you chose the 5 most important variables to optimize?  How do you know the options you chose for each variable are the most powerful?  Isn’t it just as likely that the final optimization you achieved is suboptimal, a local maximum, as it is that the solution is truly optimal?

In other words, isn’t it possible that what you have created with the robot is better than you had, but is not even close to being the best it can be?

Dear Reader, you’re asking, why should I care about this Reporting versus Analysis issue?  Because here is what will happen without real Analysis: you are going to “hit the wall”.  One day, there will simply be nothing left you can do to improve on what you have done.  Reporting is only going to take you so far.  Frustrated, you will probably Analyze the situation and realize you have “optimized” yourself into a corner by taking something that was fundamentally broken in the first place and making it better than it was.  You can’t make it any better unless you wipe it out and start again.  That’s a huge waste of resources, right?

See CRM if you need an example of what can happen when you automate worst practices.  And they’re going to fix it 8 years later by bolting on Business Intelligence?  Um, shouldn’t the Analysis have come first?
