by Hubertus Hofkirchner -- Vienna, 15 Feb 2015
Those who have heard about Prediki will know that we are a rather sceptical bunch regarding conventional surveys at the best of times. There are many issues surrounding respondent biases and their often mindless checkbox-ticking. However, even we prediction market specialists must sometimes use old-school survey elements, e.g. when screening or profiling market participants.
Just the other day a survey provider recommended a survey question change which gave me pause. Something started to bother me in the back of my mind. The funny thing is that I had heard this recommendation a gazillion times before; it was so completely familiar that I normally would not think twice. Only this time it sank in a little deeper, and then it really hit me: this was another symptom, if not the worst of all of them, of how unreliable the survey method is, despite still being the gold standard for many market research agencies and clients.
This time I am not talking about mere mindlessness. Not thinking may be somewhat forgivable, even though the CEO of any company should be wary of putting his faith in at least partially thoughtless data when making multimillion dollar decisions. Remember: garbage in, garbage out.
I am also not talking about merely biased answers, whether stemming from respondents or provoked by the very researcher designing the survey. Again, any executive receiving consumer or employee survey data should be wary that he may just have spent a significant amount of money on research which probably leans towards what the researcher already believed, for free, before doing the study.
An Even Bigger Issue
This time I am talking about something much worse. Pointing to a particular question, the survey specialist told me, with the best of intentions: “Here, don’t just ask them if they consume that product. Ask them whether they use this, that or another product.” This sounds familiar and harmless enough, doesn’t it? Even I nodded at first, although then something started to bother me. Why must surveys ask a simple fact in this roundabout way? Every market researcher who has been around the block knows why: respondents are experienced. If surveys do not ask the question like that, quite a number of them will unashamedly lie for a chance to continue the survey. Think again: They. Will. Lie.
Even with the artful question, who really believes that surveys can catch out the lying respondents so easily, somehow trick them into telling the truth? Surveys can count themselves lucky to eliminate a portion of the liars this way. All their careful socioeconomic sampling still leaves them with a significant number of clever swindlers with unknown motivations and unclear knowledge. CEOs who are given traditional survey results are supposed to make their decisions based on a carefully analysed and PowerPointed mix of truth and lies. Does anyone still wonder why so many “extensively researched” product and marketing decisions go wrong?
Prediction markets bring an improvement on all of these fronts. The mindlessness of some participants is counterbalanced by the more mindful or knowledgeable traders, who quickly seize opportunities for arbitrage and thereby move forecasts back to where they should be. Bias is reduced because proper prediction questions objectively ask what the facts will be, or what a target group will do, rather than what a respondent or the researcher personally thinks or intends. Best of all, there is no incentive for lying: participants win incentives only when their answers and trades are proven right by future facts.
In summary, a traditional survey will contain not only a significant quantum of mindlessness and bias but also a measure of outright lies. It is high time for a change in the ways of market research.