To continue with this previous post… here are some other things to look for when evaluating research:
Discontinuous Sample – I don’t know if there is a scientific word for this (experts, go ahead and comment if so), but what I am referring to here is the practice of setting out the parameters of a sample and then sneaking in a subset of that sample where the original parameters no longer hold true. This is extremely common in press coverage of research.
Example: A statement is made at the beginning of the press release regarding the population surveyed. Then, without blinking an eye, they start to talk about the participants, leading you to believe the composition of the participants reflects the original population. In most cases, this is nuts, especially when you are talking about e-mailing 8,000 customers and having 100 answer the survey.
Sometimes it works the other way: they will slip in something like, “50% of the participants said the main focus of their business was an e-commerce site,” which does not in any way imply that 50% of the population (4,000 of the 8,000) are in the e-commerce business. Conversely, if you knew what percentage of the 8,000 were in the e-commerce business, you could get some feeling for whether the participant group of 100 was biased toward e-commerce or not.
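To make that concrete, here is a minimal sketch in Python using the numbers from the example above (8,000 customers e-mailed, 100 respondents, 50% of respondents in e-commerce) plus one purely hypothetical assumption: that company records show 20% of the full customer base is e-commerce. None of these figures come from a real study; the point is the comparison, not the numbers.

```python
import math

# Figures from the example above; the 20% population share is a
# hypothetical number standing in for what your own records would show.
population_size = 8_000             # customers who received the e-mail
respondents = 100                   # customers who answered the survey
respondent_ecommerce_share = 0.50   # "50% of participants" from the press release
population_ecommerce_share = 0.20   # hypothetical share in the full customer base

response_rate = respondents / population_size
print(f"Response rate: {response_rate:.1%}")   # about 1.2% -- a tiny, self-selected slice

# Simple one-sample z-test for a proportion: could a random draw of 100
# people from a 20% e-commerce base plausibly come back 50% e-commerce?
p0 = population_ecommerce_share
p_hat = respondent_ecommerce_share
standard_error = math.sqrt(p0 * (1 - p0) / respondents)
z = (p_hat - p0) / standard_error
print(f"Respondents: {p_hat:.0%} e-commerce vs. {p0:.0%} of the customer base (z = {z:.1f})")

# z comes out at 7.5 -- far beyond anything chance would produce, so the
# people who chose to answer are clearly not a miniature of the 8,000.
```

The specific test matters much less than the habit: whenever a participant percentage is quoted, ask what the same percentage looks like in the population the study claims to represent.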
Especially in press releases, watch out for these carefully worded and often intentional sleights of hand describing the actual segments of participants. They are often written using language that can be defended as a “misunderstanding,” and often you can find the true composition of the participants in the source documentation to prove your point.
When you dig in and question the company putting out the research, the response will likely be something like, “the press misunderstood the study,” but at least you will know what the real definitions of the segments are.
Get the Questions – If a piece of research really seems important to your company and you are considering purchasing it, make sure the full report contains all of the survey questions, in the order they were asked.
I can’t tell you how many times I have matched the survey data against the sequencing and language of the questions and found bias built right into the survey. Writing survey questions, sequencing them, and administering them is a scientific endeavor all by itself. There are known pitfalls and ways to do it correctly, and people who do research for a living understand all of this. It’s very easy to get this part of the exercise wrong, and it can fundamentally affect the survey results.
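One concrete illustration of those pitfalls is question-order bias: an early question can prime the answers to a later one, which is why researchers often randomize question order per respondent. The sketch below is purely illustrative; the question text and the per-respondent seeding scheme are made up, not taken from any particular survey tool.

```python
import random

# Hypothetical question text, for illustration only.
QUESTIONS = [
    "How satisfied are you with our e-commerce tools?",
    "What is the main focus of your business?",
    "How likely are you to renew next year?",
]

def questions_for_respondent(respondent_id: int) -> list[str]:
    """Return the questions in a randomized, per-respondent order.

    Seeding with the respondent id keeps each person's ordering
    reproducible while still varying the order across respondents.
    """
    rng = random.Random(respondent_id)
    ordered = list(QUESTIONS)
    rng.shuffle(ordered)
    return ordered

# Each respondent gets their own ordering, so no single question
# consistently primes the ones that follow it.
for respondent_id in (1, 2, 3):
    print(respondent_id, questions_for_respondent(respondent_id))
```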
So, in summary, go ahead and “do research” by e-mailing customers or popping up questionnaires, or read about research in the press, but realize there is a whole lot more going on in statistically significant, actionable research than meets the eye, and most of the stuff you read in the press is nothing more than a Focus Group.
Not that there is anything inherently wrong with a Focus Group, as long as you realize that is what you have.
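For a feel of the numbers behind that last point, here is a rough back-of-envelope check using the 100-out-of-8,000 example from earlier. It generously assumes the respondents were a simple random sample, which a self-selected e-mail survey never is, so treat it as a best case.

```python
import math

N = 8_000   # population: customers who were e-mailed
n = 100     # sample: customers who responded
p = 0.50    # a reported finding, e.g. "50% focus on e-commerce"
z = 1.96    # roughly a 95% confidence level

# Margin of error for a proportion, with a finite-population correction.
fpc = math.sqrt((N - n) / (N - 1))
margin_of_error = z * math.sqrt(p * (1 - p) / n) * fpc
print(f"Margin of error: +/- {margin_of_error:.1%}")   # about +/- 9.7 points
```

So even under ideal sampling, a reported “50%” really means “somewhere between roughly 40% and 60%,” and because the respondents chose themselves, the real uncertainty is larger and unquantifiable. That is why a hundred self-selected replies behave more like a Focus Group than like projectable research.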