Polling in the past week shows the Voice to Parliament referendum Yes vote falling to even lower levels. If the polls are on average correct, I estimate the Yes vote has dropped to 42.3% on a two-answer basis and is losing about another 1% a week.
The major new polls have been: Newspoll (38% Yes, 53% No, 9% undecided); Essential (42, 48, 10); Redbridge (39% Yes, 61% No using a forced-choice method); Freshwater (41, 59); Resolve (43, 57).
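For readers unfamiliar with the term, a two-answer figure simply sets the undecided aside and rescales Yes and No to sum to 100%. Below is a minimal Python sketch of that arithmetic using the Newspoll and Essential headline figures above; it illustrates only the conversion, not the weighting and trend modelling behind my 42.3% aggregate estimate.

```python
# Convert "Yes / No / undecided" poll figures to a two-answer basis:
# set the undecided aside and rescale Yes and No to sum to 100%.
polls = {
    "Newspoll": (38, 53),   # Yes %, No % (9% undecided)
    "Essential": (42, 48),  # Yes %, No % (10% undecided)
}

for name, (yes, no) in polls.items():
    yes_two_answer = 100 * yes / (yes + no)
    print(f"{name}: Yes {yes_two_answer:.1f}% on a two-answer basis")
# Newspoll: Yes 41.8% on a two-answer basis
# Essential: Yes 46.7% on a two-answer basis
```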
In response, the website arguably still known as Twitter has been even more awash with polling denialism than at other stages in the Yes side’s slide down the slippery slope. The number of people recycling and reciting the same unchecked viral false claims has become so large that it’s almost impossible to manage a response.
Inspired by the AEC’s electoral disinformation register, I have started a register of commonly encountered false claims spread on social media about polling, mostly from people claiming to be on the left. A few right-wing claims are included too, but I don’t see those so often at the moment. (I do see a lot of right-wing electoral disinformation, especially the “three out of 10 voted Labor” preferential-voting denial.)
While many of the claims are just whoppers or have some grounding in something that used to be true but no longer is, some result from confusion between what polls said and what media commentary misinterpreted them as saying. Polling is blamed for the way that media sources misinterpret polls (often to play up false narratives that an election is close, but also sometimes to pander to an audience, or at times through more innocent errors).
This confusion between polling and interpretation means that polls that may have been accurate are falsely blamed and unjustly distrusted, while media figures who have wilfully or cluelessly misinterpreted the polls get away scot-free. Similarly, deniers blame good polls for the results of bad polls, again meaning that the good polls are wrongly suspected while the bad ones go unpunished.
In this way, poll denialism enables more bad poll reporting. Instead of falsely claiming that polls that showed their side winning easily were wrong, party supporters annoyed about media reports of cliffhangers should be complaining about the media that chose to misinterpret those polls.
Here are a few common examples of polling denial doing the rounds:
Claim: “Sure Australian pollsters can poll elections, but referendums are totally different and they’ve never done one before. I don’t think pollsters can poll referendums.”
Facts: While there has not been a referendum as such in federal polling since 1999, there was the 2017 marriage law postal survey, which was harder in theory to poll for because pollsters had to figure out who would “vote” in a voluntary ballot and who wouldn’t. And while active polling staff generally won’t have polled in previous referendums, there is still material about polling methods and results available — especially for 1999 but also for 1988, and even snippets back to 1951. There is also plenty of overseas evidence that ballot measures are pollable, even if it might not be as reliable as for elections. If anything, such polling is somewhat prone to overestimate support for changes to the status quo, no matter what side of politics proposes it.
Of course it’s still possible that something might cause polls on average to be significantly wrong in a specific referendum in either direction. But that’s also true of elections. And it often doesn’t matter as much in referendums, because the outcomes are usually not close.
Claim: “Newspoll only calls landlines.”
Facts: Newspoll ceased landline-only polling in 2015. It shifted to a hybrid mode of online polling and landline robo-polling for 2015-19. This method failed alongside all others at the 2019 election. In late 2019 it switched to online-only panel polling for national and state polls, and no longer calls phones. (It has used targeted phone polling for a small number of seat polls only.)
Claim: “Newspoll/polls generally only reach older voters because younger voters do not have landlines or answer mobile phone calls from unknown numbers.”
Facts: No major Australian regular voting intention pollster exclusively uses phone polling. For instance, Newspoll and Essential are exclusively online. Resolve is exclusively online except for an online/phone hybrid for final pre-election polls, and Morgan uses online/phone hybrid polling and SMS polls. Some less prominent polls still use phone polling — e.g. uComms robo-polls — but phone polling of random numbers is not very accurate because younger voters are so hard to reach and those who are reached tend to be unrepresentative. Phone polling of targeted numbers has been trialled by Newspoll/YouGov as a seat polling method with some success.
Even when young voters were first becoming hard to reach by phone, random phone polling remained a viable method until about 2016, because pollsters could simply compensate by upweighting the young voters they did contact, or by setting quotas and calling until they had enough young respondents. Poll deniers generally assume that if a polling method has an age skew then that skew must carry through to the published results, but polls do not have to weight all respondents equally.
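As a rough illustration of that weighting point (a minimal sketch with invented shares, not any pollster’s actual procedure), upweighting simply gives each under-sampled group a weight equal to its population share divided by its sample share:

```python
# Post-stratification weighting sketch with invented shares.
# Each respondent is weighted by (population share / sample share)
# for their age group, so under-sampled groups count for more.
population_share = {"18-34": 0.30, "35-54": 0.34, "55+": 0.36}  # assumed population targets
sample_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}      # a hypothetical skewed raw sample

weights = {group: population_share[group] / sample_share[group] for group in population_share}
print(weights)  # 18-34 respondents count roughly double (2.0); 55+ respondents count 0.72

# A weighted Yes estimate then multiplies each respondent's answer by the
# weight for their age group before averaging, removing the age skew.
```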
Claim: “Only old people have time to take online polling surveys; young people are not represented.”
Facts: Good online panel polls have access to large panels of potential respondents for each survey (often tens of thousands, if not hundreds of thousands). Even if a panel skews towards older voters, the pollster can deliberately invite a higher share of its younger members and thereby obtain a more representative sample. A pollster can also use weighting to correct for any remaining skew.
Many panel polls publish breakdowns for age groups such as 18-35, and Essential has at times even published the number of respondents polled in each age bracket. Despite this, and despite explicit statements from the more transparent pollsters about how quotas or weights are set by age, poll deniers continue to claim that the younger respondents pollsters report polling cannot possibly exist.
(Panel-polling participants receive small incentives for participation, so the idea that only older voters will bother is spurious — people of a range of ages and backgrounds may decide to do surveys in search of a little extra income.)
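To make the targeting point concrete, here is a toy sketch of drawing a fixed quota per age group from a large panel that skews older. The panel sizes, quotas and selection rule are invented for illustration and are not any pollster’s real process.

```python
import random

# Toy quota-sampling sketch: a large online panel that skews older,
# from which a pollster invites respondents until each age quota is filled.
random.seed(1)
panel = ["18-34"] * 4000 + ["35-54"] * 10000 + ["55+"] * 16000  # assumed skewed panel
quotas = {"18-34": 300, "35-54": 340, "55+": 360}               # assumed population-proportional quotas

random.shuffle(panel)
sample = {group: 0 for group in quotas}
for person in panel:
    if sample[person] < quotas[person]:
        sample[person] += 1
    if sample == quotas:
        break

print(sample)  # the completed sample matches the quotas despite the panel's age skew
```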
Claim: “Newspoll and its questions are owned and controlled by Murdoch/News Corp.”
Facts: Newspoll is an opinion polling brand owned by The Australian, a subsidiary of News Corp. An in-house company formerly administered Newspoll but was disbanded in 2015, and since then the poll has been contracted out, making Newspoll a brand rather than a pollster. It was first contracted to Galaxy, which later became YouGov Galaxy after YouGov purchased Galaxy, an arrangement that lasted until 2023. It is now contracted to Pyxis, an independent private Australian polling company.
The wording of Newspoll voting intention and leadership questions has been unchanged since 1985. Some other questions (e.g. on the budget) also have a long history of stability. The wording of referendum polls is based on the referendum question once it is known. While the client will from time to time commission one-off issue polls, the wording of which I will sometimes disagree with, the wording of the major polling (voting intention, leadership questions and referendum polling) is neutral and beyond reasonable reproach.
Claim: “Polls are black boxes and we don’t know how they work! The details are not published!”
Facts: This one is true of some polls, but as a generalisation it’s untrue, and insulting to those who have worked hard within the industry to improve its transparency after the 2019 polling failure. Poll deniers employ this one against all polls, but what they really mean is that they don’t know the information because they have made no effort to find it. It’s also quite common for poll deniers to claim that, for instance, the sample size and notional margin of error for Newspoll have not been published, when they are routinely published in the print edition (which can be viewed free via NewsBank through participating libraries).
Australian Polling Council members (Pyxis which does Newspoll, Essential, SEC Newgate, the Australia Institute, Ipsos, KJC Research, Redbridge, Lonergan, uComms, 89 Degrees East and YouGov) are compelled to release fairly detailed information about their polling methods on their websites shortly after a poll is referenced in the public domain, with the very rare exception of genuine leaks. Some non-APC members (e.g. Resolve) release some level of methods information while falling short of the APC’s modest standards. A few others (e.g. Morgan, Freshwater) release very little.
As Newspoll attracts the most comment concerning this, here is a link to the Newspoll disclosure statements page. Also, Newspoll details such as sample size, effective theoretical margin of error, question wordings, dates and percentage of uncommitted voters are routinely published in The Australian print editions when Newspoll is released.
Claim: “Newspoll only polls readers of The Australian or audiences of other NewsCorp media.”
Facts: The sample base for Newspoll’s polling has never had the slightest thing to do with whether people read The Australian, or with what other media they consume. In the old days it involved random phone numbers. As online polling has come in, it has involved market research panels that people have signed up for; these people may not even be aware that Newspoll and The Australian exist (Newspoll does not announce itself as Newspoll to people taking it). The previous YouGov panel was one people could openly sign up for, although signing up did not guarantee being polled.
People concerned about “polls” that only poll audiences of a specific medium should instead be concerned about the opt-in reader/viewer “polls” that are commonly used by commercial newspapers, radio and TV outlets and have even been used by some ABC programs (shudder). A difference between the results of professional polling and those of unscientific opt-ins of that sort should not be hard to spot.
Claim: “Polls said No would win the same-sex marriage postal survey but Yes won easily.”
Facts: No public poll found No even close to winning the 2017 marriage law postal survey. In fact, the final polls all had Yes winning more clearly than it actually did, with the exception of one insufficiently reported ReachTEL poll. Newspoll was also very close to the correct answer. The only thing that might be mistaken for a poll with No winning was a social media analytics study by Bela Stantic, who, despite thereby making a predictive failure on an embarrassing scale, continues to be regarded as a Nostradamus of political prediction by some very gullible journalists.
Claim: “Newspoll has been consistently wrong at recent elections.”
Facts: Since revamping its methods after 2019, Newspoll has correctly predicted the winner of five state elections and one federal election, and has predicted the vote shares of four straight elections in 2022-23 to within 1% two-party preferred. (This is an outstanding feat, as the global average error converts to about 2.5% in two-party-preferred terms.)
Claim: “Polls skew to the left because people who support right-wing movements are afraid to tell pollsters what they really think. Morrison, Brexit, Trump etc.”
Facts: This is the modern version of what used to be known as the “shy Tory” theory, the idea that British voters feared being considered nasty if they told a pollster they were going to vote for the Conservative Party. It has always been an overrated theory: most polling errors attributed to it are better explained by other factors, at times including simply not calling enough people who were going to vote Conservative in the first place, or calling too many who were going to vote for the other lot. (The 2019 Australian failure was probably an example of this, primarily caused by unrepresentative samples and exacerbated by a degree of herding, not necessarily conscious, towards a common expected outcome by some pollsters.)
The 2016 Donald Trump surprise was not even a failure of national polling (the national polls that year were good; the problem was in a small number of crucial states, and in some cases happened because a state was so written off that pollsters did not bother sampling it enough). US polling errors in 2020 were in fact worse than in 2016, but the polls still pointed to the correct winner. The US 2020 case saw a new possible cause of shy Tory-like polling error, though not exactly the same one: instead of lying to pollsters, some pro-Trump voters were simply refusing to take polls. However, this issue seems at this stage to be an American thing.
One major issue with shy Tory adherents is that they cherrypick: they notice every time the polls overestimate the left, but fail to notice cases such as the UK in 2017, Victoria in 2018 and New Zealand in 2020, where the left greatly outperformed the polling. They also fail to notice cases where the polls are accurate.
Another is that there’s just no reason for any Tory to be shy when taking an online poll or punching numbers into a robo-poll: they’re not interacting with an actual human. It was quite different with methods involving human interaction, such as live phone polling or (even more so) face-to-face polling. Shy Tory theory has no place in Australian poll analysis. See also Armarium Interreta’s comprehensive debunking of the theory as it applies to Australia.
Claim: “Because Newspoll is released by Murdoch it is skewed in favour of the Coalition.”
Facts: It is bizarre that people still make this claim after the 2022 election, when Newspoll predicted Labor to win and Labor did, and the 2019 election, when Newspoll predicted Labor to win and Labor lost, especially as Newspoll overestimated Labor’s primary vote in both cases.
Newspoll has polled 13 Australian federal elections. Its average released final poll 2PP in that time has been 50.2% to Labor, and Labor’s average result has been 49.6%, meaning that the Newspoll brand has actually slightly overestimated Labor, not the Coalition. The Newspoll brand has also displayed very little average error either way at state elections, at least since 2010.
It may surprise people that The Australian would support a poll with a fine history of neutrality, but doing so is a selling point for a newspaper that claims to be a leading broadsheet. If the poll had a history of skew that selling point would be lost. Collectively, News Corp outlets make up for it by giving credence to a wide range of lower-quality “polls”.
Claim: “Polls are failing more and more! They’re not as accurate as they used to be!”
Facts: The perception of a sudden surge in poll failure has been common after high-profile examples such as Brexit and Trump 2016 (though the latter was only a failure in certain states). But this is another case where confirmation bias is the culprit: when polls for an election are excellent, those who believe polls are failing more and more ignore it, cherrypick specific polls that were wrong, or complain that the polls were not exactly right. The claim that polling errors were getting worse over time was debunked by a worldwide study in 2018. In Australia, polling has on average been significantly more accurate since the start of Newspoll in 1985 than it was before, which is partly why 2019 was such a shock.
Claim: “This poll polled only 1500 people nationwide, 1500 out of 15 million voters. That can’t be accurate.”
Facts: People making this claim are ignorant of basic random sampling theory, a fundamental of statistics. I invite them to perform this experiment. Take 10 coins and toss them together, then write down how many came up heads and how many tails (or do it electronically via Excel “random” numbers if you like). Do it again, adding the new heads and tails to the first set of numbers, and check what percentage have been heads so far. Keep going until you have done this 150 times and you will notice something amazing. While the early percentages may not be anywhere near 50%, by the time you get to 150 throws (1,500 coins in total) you might have 52% heads or 49% heads, but you will not have 60% or 30%; it will pretty much without fail be close to 50-50. Random sampling gets rather close to a population average efficiently, but getting very close requires luck or a huge amount of sampling.
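For anyone who would rather not toss 1,500 actual coins, here is a minimal Python version of the same experiment (standard library only, with the batch sizes matching the description above):

```python
import random

# Simulate the coin experiment: 150 batches of 10 fair coin tosses,
# tracking the running percentage of heads as the sample grows.
random.seed(42)
heads = 0
tosses = 0
for batch in range(1, 151):
    heads += sum(random.randint(0, 1) for _ in range(10))
    tosses += 10
    if batch in (1, 10, 50, 150):
        print(f"after {tosses} tosses: {100 * heads / tosses:.1f}% heads")

# Early readings can stray well away from 50%, but by 1,500 tosses the
# running percentage almost always sits within a few points of 50.
```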
We can also tell that this works by looking at how final polls perform at elections. If poll numbers based on a few thousand voters bore no relation to reality, then polls would on average be all over the place, but even in bad years they are still mostly within a few percent.
In reality, polls are not as simple as pure random sampling: all kinds of factors make them not truly random (see also my “Margin of Error” polling myths), but some of these (such as respondent targeting for those polls that do it, or the use of last-election preference flows to model two-party-preferred) can in effect reduce the random error.
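As a simple illustration of the last-election-preferences point, here is a sketch in which two-party-preferred is modelled by splitting each minor party’s primary vote according to assumed preference flows. The primary votes and flow rates below are invented for the example and are not taken from any actual poll or election.

```python
# Toy last-election-preference-flow estimate of two-party-preferred (2PP).
# Primary votes and flow rates are illustrative assumptions only.
primaries = {"Labor": 33.0, "Coalition": 36.0, "Greens": 12.0, "One Nation": 6.0, "Others": 13.0}

# Assumed share of each minor party's preferences flowing to Labor at the previous election.
flow_to_labor = {"Greens": 0.85, "One Nation": 0.35, "Others": 0.50}

labor_2pp = primaries["Labor"] + sum(primaries[party] * flow_to_labor[party] for party in flow_to_labor)
coalition_2pp = 100 - labor_2pp
print(f"Labor 2PP ~ {labor_2pp:.1f}%, Coalition 2PP ~ {coalition_2pp:.1f}%")
# Labor 2PP ~ 51.8%, Coalition 2PP ~ 48.2%
```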
This is an edited extract of Kevin Bonham’s Australian Polling Denial and Disinformation Register. You can read the full version here.
Do you think the Voice polls are not telling the whole story? Let us know by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.
Thanks for the explainer; I suffered from a misconception or two in there. Very helpful.
Yes, thanks from me too. A very comprehensive and informative article.
This is great stuff. It’s interesting how similar the lies are that get told about both virtual democracy (sampling voters on issues and parties) and real democracy (getting voters to stuff paper into large cardboard boxes). Obviously neither method is worth as much as having meetings with ministers (Joyce’s or Rinehart’s Voice to Parliament). But there you are.
The analysis still does not clarify or explain, since the time of Howard, Crosby and imported GOP techniques, related issues of social science research 101 for credibility: i.e. how panels are selected*, the methodology and/or framing of questions, and how the plethora of daily polling gives media and reporters something to talk about, with ‘horse race calling’ and ‘push polling’ used to ‘get people talking’ under the guise of research by data-illiterate media.
*Related are specific polling companies, e.g. internationally YouGov (founded by a now Tory MP) is self-selecting and rewards those who do many polls.
The referendum is still a month away; constant media obsession with polling could also be viewed as feeding the ‘bandwagon’ effect, i.e. voters following the purported majority, and as Lynton ‘dead cat’ Crosby has said, ‘ignore the polls’?
Yes, they perceive a loser and the mob turns away; pushing myopic and lazy perception stories. Oh no, Speerzie, even though ya seem benevolent and chill, ya go for the gotcha, sub-standard questions rather than drill down into the substantive! Tsk, tsk, tsk.
The regrettably real problem is that the public is now bored and disengaged. It’s only politicians and commentators who give a stuff. Other unsettling issues have taken precedence. Albo didn’t handle this at all well and the timing is beyond “unfortunate”.
Labor are their own worst enemy – as in, get real people to communicate, not inward navel-gazing yes-men hired by algorithmic vacuum peeps from corporate think tanks.
It must be frustrating for Dr Bonham to go to great lengths to explain the intricacies of what a poll can and can’t tell us, only to have a subeditor’s ham-fisted “Do you think the Voice polls are not telling the whole story? Let us know…” directly underneath the article.
Nice work, Crikey!
The subeditors ask these stupid questions all the time. I don’t know why they bother.
Forget the polls and whether anybody trusts them. The fact is that the referendum is dead in the water. It is there because for some obscure reason the Government handed the wreckers the means to make up whatever concerns they wanted in the absence of the legislation to implement the Voice.
The Government can reverse this anytime they want. They would do it if they really wanted the referendum to succeed. I find their actions or lack of them inexplicable.