by Hubertus Hofkirchner -- Vienna, 8 Apr 2015
If you produce or use market research you are probably already aware of the many possible “survey question mistakes” and of some techniques to avoid them. This piece argues that these issues run deeper than meets the eye: it is not just about avoiding certain survey question mistakes, it is a mistake not to avoid surveys for certain questions.
There are whole lists of such mistakes, along with experimental evidence for the biases they cause. Ironically, even survey tool providers send out a steady stream of content-marketing articles cataloguing these mistakes, complete with suggested improvements. Without naming any supplier, this blog was sparked by one of those pieces.
It starts off by saying that survey design is all about details and continues with Mistake #1: “Failing to Avoid Leading Words”. According to the survey tool maker: “Subtle wording differences can produce great differences in results.” They demonstrate this with a survey question asking respondents if they agree with the following statement:
“The government should force you to pay higher taxes.”
They say this question is no good because “force” connotes control and can bias results negatively. Their advice is that the researcher simply asks:
“The government should increase taxes.”
This has been shown to produce a 20% uplift in agreement. Whoa! Did someone just trick us into paying more taxes?
To rephrase or not to rephrase?
Such rephrasing is a serious matter. Who is to say which of the two results is better? The higher agreement may be better if the researcher thinks that the government will put the tax money to good use or if the researcher works for the government. On the other hand, others may think that a significant portion of tax money is spent inefficiently, so a lower agreement may be better from their perspective. And as a matter of fact, most governments can force citizens to pay taxes, so why must a researcher hide this?
It is self-evident that we have a big problem when our tax rate or public policy is decided using results from a method where a nuance in question formulation – unintentionally or on purpose – is known to change results greatly.
How realistic is it that our political leaders will look into the details of survey language to assess bias before taking decisions? Even if they did and found a bias from their subjective point of view, should they be allowed to change the result to what they believe it should be? This is a road towards quite unsavoury flavours of leadership. It is easy to see that these inconspicuous mistakes touch on the very foundations of our democracy.
A better alternative
On a prediction market, respondents must always predict an objective future fact. For example, a prediction question for a tax increase would ask something like: “If the government increases taxes by x%, will [government policy goal] be achieved and to what degree?” Or even: “If the government increases taxes by x%, will [ruling party] be re-elected?”
Whatever the observable future result, respondents in a prediction market can only win their bets if they predict correctly. The prediction market question of “What will be?” gives far less room for mistakes or even manipulative tactics than asking “What shall be?” in a traditional survey. Clever wordsmithing is unnecessary as the very nature of the prediction question design requires the clearest possible definition of the future result to be predicted.
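This incentive property can be made concrete with a small sketch. The following Python example (my illustration, not from the article) scores a binary prediction with the Brier rule, one of the standard “proper” scoring rules used in forecasting: a respondent’s expected payoff is maximised by reporting their true belief, so no choice of question wording can nudge them toward a different answer. The probabilities and the payoff scale are hypothetical.

```python
def brier_payoff(reported_prob, outcome):
    """Payoff = 1 - squared error; outcome is 1 (event happened) or 0.
    Higher is better; a perfect forecast scores 1.0."""
    return 1.0 - (reported_prob - outcome) ** 2

def expected_payoff(reported_prob, true_belief):
    """Expected payoff for a respondent who believes the event occurs
    with probability `true_belief` but reports `reported_prob`."""
    return (true_belief * brier_payoff(reported_prob, 1)
            + (1 - true_belief) * brier_payoff(reported_prob, 0))

# A respondent who privately believes there is a 70% chance the
# policy goal will be met:
honest = expected_payoff(0.7, 0.7)   # report the true belief
shaded = expected_payoff(0.5, 0.7)   # hedge toward "undecided"

assert honest > shaded  # honesty maximises the expected payoff
```

Under a proper scoring rule any deviation from the true belief, in either direction, lowers the expected payoff, which is exactly why prediction questions do not need the “clever wordsmithing” that survey questions do.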
Power to the people
Returning to the original example, note that there is also a reversal of “force” in favour of the tax-paying citizens. Proper question design for a prediction market forces the government to explain precisely the intended amount of the tax hike and its specific purpose. While this may appear onerous at first, it promises a better functioning political leadership as well as a more satisfied and involved electorate.
It is high time that governments start using today’s predictive technology instead of repeating old (survey) mistakes.