The events of the past couple of weeks have brought home, almost as never before, the importance of opinion polling in Australian politics. Hence, the importance of having commentators who know how to read and understand the polls, an area where — as some of us at Crikey keep pointing out — we have a few shortcomings.
But it’s even more vital that the polls themselves should be accurate, or at least honest — that their numbers should come from scientifically conducted polling, rather than just being made up out of whole cloth. That’s the question at issue in a current American controversy, where DailyKos, a prominent left-wing website, is suing its former pollster, Research 2000, for fraud.
The problems became public a month ago when Nate Silver at fivethirtyeight.com published a comprehensive set of pollster rankings, based on analysis of thousands of polls back to 1998. They showed Research 2000 near the bottom of the pile, at which point DailyKos terminated its relationship.
Three statisticians — Mark Grebner, Michael Weissman, and Jonathan Weissman — then produced a report for DailyKos showing that there was apparently more to Research 2000’s results than just bad luck or incompetence. From a study of three aspects of the published numbers, they concluded that the figures “could not accurately describe random polls”, and that one pattern of results in particular was consistent with what you would expect from fabricated data.
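To give a sense of what that kind of analysis involves, here is a minimal sketch of one test of the same general type: checking whether the week-to-week movement in a published poll series is roughly as noisy as genuine random sampling would make it. This is not the statisticians’ actual method, and every name and figure in it is illustrative rather than drawn from Research 2000’s data.

```python
# Hypothetical sketch of one "test of statistical credibility": does the
# week-to-week movement in a poll series show roughly the sampling variance
# you would expect from genuinely random samples? All names and numbers are
# illustrative; this is not the Grebner/Weissman method itself.

import math
import random

def expected_sd_of_change(p, n):
    """Standard deviation of the change between two independent polls of
    size n, each estimating a true proportion p (binomial approximation)."""
    return math.sqrt(2 * p * (1 - p) / n)

def simulate_weekly_changes(p, n, weeks, rng):
    """Week-to-week changes an honest pollster would see by chance alone."""
    estimates = [sum(rng.random() < p for _ in range(n)) / n for _ in range(weeks)]
    return [b - a for a, b in zip(estimates, estimates[1:])]

if __name__ == "__main__":
    rng = random.Random(0)
    p, n, weeks = 0.50, 1000, 52            # illustrative values only
    changes = simulate_weekly_changes(p, n, weeks, rng)
    observed_sd = math.sqrt(sum(c * c for c in changes) / len(changes))
    print(f"expected sd of weekly change:  {expected_sd_of_change(p, n):.4f}")
    print(f"simulated sd of weekly change: {observed_sd:.4f}")
    # A published series whose changes are much smaller than the expected sd
    # (suspiciously smooth) or much larger (suspiciously jumpy) fails the test.
```

The point is that pure chance imposes a predictable amount of jitter on honest polls; a series that is far smoother or far jumpier than that benchmark cannot be the product of straightforward random sampling, whatever else it may be.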
DailyKos then sued, and Silver published some of his own additional concerns about Research 2000. Del Ali, the head of Research 2000, sent Silver a “cease and desist” letter, and in a subsequent long and rambling email defended himself against DailyKos’ claims in highly unpersuasive fashion. In a further post on Friday, Silver suggested that, rather than fabricating data in the strict sense, Ali could have produced his results by aggressively massaging the numbers to fit his own intuitions.
Australian polling wars are very tame by comparison, for which there are two main reasons. First, we just don’t have anywhere near the volume of data: in the US there are dozens of serious pollsters polling hundreds of individual races, and a correspondingly large journalistic and blogging community scrutinising the results. We do our best here, but we’re not in the same league.
Second, Australian analysts are limited by our absurd and anachronistic defamation laws. In the US it is extremely difficult to make out a case for libel against a public figure or on issues of public interest, but in Australia remarks such as those from Silver and Kos, not to mention Ali’s response, would probably have been met with a writ.
Although no Australian pollster is suspected of outright fabrication of results, there are occasional suspicions that dubious, less-than-scientific practices are used to produce results. But no commentator will air such suspicions in anything other than the most general manner (as I am doing here) without first collecting very solid proof — which, in the nature of the case, is hard to come by.
Despite the differences between the two countries, the lessons are fundamentally the same. Don’t treat polls as holy writ; approach them all with a degree of scepticism, and be especially on the alert for results that fail basic tests of statistical credibility.
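The most basic of those tests is simple arithmetic. As a rough illustration — the sample size and percentage below are made up, not taken from any poll mentioned above — here is the sampling margin of error that even a well-conducted poll carries:

```python
# Hypothetical illustration of the sampling noise on a single poll number.
# The sample size and percentage below are invented for the example.

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

if __name__ == "__main__":
    p, n = 0.52, 1100        # e.g. 52% two-party preferred, 1100 respondents
    moe = margin_of_error(p, n)
    print(f"95% margin of error: +/- {moe * 100:.1f} points")
    # Roughly +/- 3 points here, so a one- or two-point "shift" between two
    # such polls is well within sampling noise, not evidence of a real change.
```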
“…. approach them all with a degree of scepticism, and be especially on the alert for results that fail basic tests of statistical credibility.”
The same can be said of the numerous Government funded reports used as propaganda to justify policies.
Jenny Macklin’s announcements (“based on research”) to justify the NTER (the Intervention) and Income Management in particular come to mind.
Five minutes after Gillard became PM I read a poll, allegedly conducted by Morgan and Channel 7, which clearly showed that the Coalition would win the next election and that the new PM was on the nose. I’d have been interested to know just how they conducted that poll. One didn’t have to be a genius to know that it was complete and utter rot.
As someone who’s been polled, frequently, over the decades (even after changing addresses) I suspect that some pollsters keep records of who says what and contact only those respondents who they suspect will follow a particular line and answer in a particular way. Unless there’s an imminent election and the pollsters’ reputations are really on the line, don’t believe a word they say.
So does that mean that we can expect the Federal Police to lock up Essential Research?
They are without doubt the most shamelessly biased outfit in the country.
They invented Push Polling.
And of course they work for Crikey… say no more.
@Jungarrayi: good point. Indeed with that sort of thing the situation is worse, because there is usually no external check on their accuracy of the sort that election results provide (imperfectly) for the opinion polls.
@Michael: As a new outfit, Essential Research don’t have enough of a record for us to say very much about their accuracy. I initially had doubts about their methodology (although they certainly didn’t invent push polling – it’s been around for a long time), but over the last year or so their results have been broadly in line with the trend of other pollsters. I’d be interested to hear why you think they’re biased. In any case, I can assure you Crikey’s relationship with them doesn’t impact on me at all.