
Pollsters Cheap Excuse for Bad Science



by Hubertus Hofkirchner -- Vienna, 22 Feb 2018

In a recent Research World article ("Giving society a voice", December 2017), a "traditional" frequentist researcher seeks to defend pollsters' often criticised record of inaccurate election forecasts with a most curious statement. I must have read (and tolerated) this statement, in different words, countless times. Today I say that enough is enough; something has to be said in the name of better science.

Pollsters' cheap excuse

So what did the gentleman, obviously a representative of the old school of market research, tell the esteemed Research World audience? I quote: "In any case, it is important to note that electoral polls do not attempt to predict or to anticipate election results: they only provide a description of declared voters' alignments at a specific point in time."

The importance of falsifiability

This apparently undisputed statement by a researcher is deeply troubling. Sixty years ago, the Austrian philosopher Karl Popper published his famous "Logic of Scientific Discovery", in which he argued that science must embrace falsifiability: any worthy theory or statement must entail the possibility of observing a fact that could negate it, or, put simply, a test. Statements which fail to fulfil this criterion Popper firmly assigns to the realm of metaphysics, alongside ghost stories and fairy tales.

Now clearly, if pollsters "do not attempt to predict" election results and thus explicitly prevent falsifiability, if they expressly insulate their statements from critical rationalism, how much worth do pollsters have as a guide to public policy? Karl Popper would have said: zero. The root of the problem runs even deeper. Missing falsifiability is also characteristic of the traditional "direct question" which old-school researchers still use without hesitation: "How would you vote if there were elections next Sunday?" From an Austrian point of view, such a traditional questionnaire answer offers no falsification mechanism: any respondent can give any answer for whatever reason (social desirability, signalling to politicians, destructiveness, etc.) and leave us none the wiser.

What should be done?

Any research worth its name should fully embrace falsifiability. Obviously, not all methods can be as rigorous as prediction markets. Election predictions have built-in falsifiability: "How many Brits will vote for Leave or Remain?" can be validated perfectly against election-day results. Better still, it is not just the aggregate prediction of the market's collective intelligence that is falsifiable; every single prediction by every single participant adheres to Popper's principle. In other words, prediction markets make respondents responsible. And maybe it is this underappreciated concept of "falsifiability" which contributes to the significant gain in accuracy when prediction markets are compared to traditional methods.
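The claim that every individual prediction is testable can be made concrete. As a minimal sketch (the participant names, probabilities, and the use of the Brier score are my own illustration, not a method described in the article), each participant's probabilistic forecast can be scored against the realized outcome, so every single prediction is falsified or confirmed on election day:

```python
# Minimal sketch: scoring individual probabilistic forecasts against the
# realized outcome with the Brier score. All names and numbers here are
# invented for illustration.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.

    0.0 is a perfect forecast; 1.0 is the worst possible.
    """
    return (forecast - outcome) ** 2

# Hypothetical participant forecasts for the event "Leave wins"
forecasts = {"alice": 0.70, "bob": 0.40, "carol": 0.55}

outcome = 1  # suppose Leave won: each forecast is now individually testable

scores = {name: brier_score(p, outcome) for name, p in forecasts.items()}

# Lower score = better; every single participant is held to account.
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.4f}")
```

The point of the sketch is only that a probability statement, unlike a vague "declared alignment", carries its own test: once the outcome is known, each respondent's score is determined and no excuse is possible.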


  • Back to the Management by Predictions Blog
  • Follow this blog for more stories, new case studies and how-to's.
