(graph above from Optimize Magazine article)
Now, I generally think giving people access to data is a good thing, but I’ve seen it go horribly wrong in many cases. You can’t just open up data access and let people construct analyses without any ground rules. You’ll have complete chaos, with people torturing the data in whichever direction best makes their case.
Rather, follow a few simple rules to prevent analytical chaos:
1. Please set up a “best practices” team or “center of excellence” so people who want to do honest analysis on their own can get help with the simple stuff. If you have experienced endless cycles of re-analyzing the same problem over and over with the business side, you know what I am talking about here. People need help tightly defining the questions they are trying to answer, and guidance on the best (most accurate?) way to generate answers.
Do these new analyzers know how seasonality affects the business? Do they understand how time-frames can distort an analysis? How cycle billing or lag times in fulfillment can affect sales recognition? Are they aware of any data quality issues?
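To make the time-frame point concrete, here is a minimal sketch with invented quarterly numbers for a seasonal, Q4-heavy business. Two equally “reasonable” window choices produce opposite stories from the same data:

```python
# Hypothetical quarterly sales (invented numbers) for a seasonal business
# whose Q4 is always the peak: year -> [Q1, Q2, Q3, Q4]
sales = {
    2006: [100, 110, 105, 200],
    2007: [110, 120, 115, 220],
}

# Sequential comparison: Q1 2007 vs. Q4 2006 looks like a collapse...
seq_change = (sales[2007][0] - sales[2006][3]) / sales[2006][3]

# ...while year-over-year, Q1 2007 vs. Q1 2006, shows healthy growth.
yoy_change = (sales[2007][0] - sales[2006][0]) / sales[2006][0]

print(f"sequential: {seq_change:.0%}")     # sequential: -45%
print(f"year-over-year: {yoy_change:.0%}") # year-over-year: 10%
```

An analyst who doesn’t know the business is seasonal can honestly pick the first comparison and walk into a meeting announcing a 45% decline.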
2. If you can’t set up a best practices team, at least try to create a best practices manual / library of some kind.
3. Please, please define all the data people will be using. Some of this definition problem you can control with the drop-downs you provide, but make sure everybody knows specifically what the drop-down items mean. For example, if someone is doing a “customer count” of some kind, what is a customer, who is included in that count? If someone ordered and then cancelled – no net positive financial transaction ever took place – will that person be included in a “customer count”? If not, what is that person called, what is their “status”? Is that status available as a drop-down selection?
If they select “Sales” as a variable, are we talking about Gross Demand? Net of cancels / returns? Net of discounts? Net of bundling, packaging, volume pricing?
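The definitional questions above aren’t hypothetical hair-splitting; each answer yields a different number from the same records. A minimal sketch, with invented order records and field names, of how one “Sales” drop-down and one “customer count” can each be honestly computed several ways:

```python
# Hypothetical order records (invented for illustration).
orders = [
    {"customer": "A", "gross": 100.0, "discount": 10.0, "cancelled": False},
    {"customer": "B", "gross": 50.0,  "discount": 0.0,  "cancelled": True},
    {"customer": "A", "gross": 75.0,  "discount": 5.0,  "cancelled": False},
]

# Three defensible answers to "what were Sales?"
gross_demand = sum(o["gross"] for o in orders)
net_of_cancels = sum(o["gross"] for o in orders if not o["cancelled"])
net_of_discounts = sum(o["gross"] - o["discount"]
                       for o in orders if not o["cancelled"])

# Two defensible answers to "what is our customer count?"
customers_ever = len({o["customer"] for o in orders})
customers_net = len({o["customer"] for o in orders if not o["cancelled"]})

print(gross_demand, net_of_cancels, net_of_discounts)  # 225.0 175.0 160.0
print(customers_ever, customers_net)                   # 2 1
```

Three analysts pulling “Sales” from the same source can report 225, 175, or 160, and customer B either is or isn’t a customer, all depending on definitions nobody wrote down.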
4. If an analysis will be used for strategic decision making, it must be “vetted” first by the best practices group. If people want to make “local” or silo decisions based on their own analysis, well, they are making their own bed. But when it comes down to major shifts in business practices, you simply cannot rely on a local analysis; there is too much risk involved.
You really have to think some of this stuff through first so you don’t end up in a meeting with the CEO where 3 different people present numbers that should be consistent but turn out divergent. That’s the fastest way I know of to induce a full-on throbbing CEO forehead vein.
I am all for people being “exploratory” and going through their own “what if?” kinds of exercises as they try to discover more about their products, services, or customers. This is a good thing. But confusion and chaos due to a lack of standards, and of a central authority on the validity of an analysis, is a great way to doom this kind of effort.
Anybody else been through one of these “everybody in the data pool” rollouts? Do you have any other “rules” you would like to add to help people get through one of these implementations successfully?