While most journalists recognised the trend in recent federal voting intention polls was probably significant, there is one ongoing certainty about media coverage of polling: for many journalists it will be an opportunity to demonstrate a quality they share with many other Australians — innumeracy.
Whatever the outcome of the polls, you can be certain they will be endlessly interpreted (well, at least until the next one). The third-last Newspoll showed a surprising line-ball result, similar to a Morgan poll around the same time, but the second-last “showed” a shift back to the Coalition, largely explained by journalists as a result of Tony Abbott’s wife defending him with the aid of widespread page one advertorials provided by the News Limited press. The latest is back to line-ball.
The simple statistical reality is that many of the poll shifts are within the margin of error or, for other reasons, may or may not be significant. Despite this the media generally report them as highly significant. In fact they may be significant for two reasons only: because the media think they are, and because they raise questions about media numeracy.
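To put a number on it: the margin of error the coverage skips over is straightforward to estimate. A minimal sketch, assuming a simple random sample (which real polls only approximate) and a typical federal poll of about 1,400 respondents; the figures here are illustrative, not from any particular poll:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Two-party-preferred near 50-50, sample of ~1,400: the margin is
# about +/- 2.6 points, so a one- or two-point "shift" proves nothing.
moe = margin_of_error(0.50, 1400)
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")
```

In other words, a poll-to-poll movement smaller than the margin is exactly the kind of noise routinely reported as news.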
This is good and bad news for PR people — you can always get a story written based on a quick question in an omnibus poll which demonstrates something positive about your case or client, but your opponents and competitors can easily do the same thing.
The same applies to scientific illiteracy, which generally ranks alongside statistical innumeracy in media coverage. It’s good news for lobby groups and activists, who can always get a good run for their latest scare about cancer, the air, the water, food and all the other things which are allegedly killing us — despite the fact that our life expectancy keeps increasing — but bad news for the PR people trying to counter them.
Last month Tony Jacques, a research fellow at RMIT and publisher of the Issue Outcomes website, drew my attention to an article by a crisis manager blogger on the US Food and Drug Administration’s report on a 20-year study of rice and arsenic which illustrates the problems. The FDA tested some 1200 different rice products over 20 years for arsenic levels and found that in cooked rice they were about 6.7 micrograms — that’s millionths of a gram if you can’t work it out in your head quickly. The FDA pointed out that arsenic occurred naturally but that human activities also add arsenic to the environment.
They concluded there was not more arsenic in rice, nor was it dangerous in the quantities found, but that they were getting better at measuring it. They ended the report with the obligatory comment about the need for a balanced diet.
None of this discouraged ABC News in America, which said the FDA had issued a “troubling warning to limit how much rice we eat” because of the arsenic levels. The ABC story went on to claim the FDA report “confirmed” an earlier Consumer Reports article which said rice products had “worrisome levels” of arsenic. It’s a fairly safe bet the ABC story came not from the FDA but from Consumer Reports’ PR efforts to promote their own report.
Some time ago, working for a manufacturing industry client, we had a similar problem with a newspaper which claimed the client was discharging mercury, arsenic and other things to the environment. This combined both innumeracy and scientific illiteracy. The basis for the claim was the company’s EPA discharge licence — such licences mandate the disclosure of all trace elements in discharges regardless of the quantity. In fact the discharges were so small as to be almost certainly a result of natural background factors and about the concentrations you would get in a bottle of beer, lemonade or anything else with water and other elements in it. The article was the page one lead and we found it almost impossible to rebut, even with the help of the EPA itself.
The MMR controversy in the UK, which spilled over to Australia and other countries, was a similar situation: dodgy experiments were presented as demonstrating that the MMR vaccines contributed to autism. As a result many parents were too afraid to have their children vaccinated, which led to falling vaccination rates and a rising risk of much greater health problems through disease outbreaks. In Australia then health minister Mike Wooldridge, aided by a Department of Health campaign run by Royce Communications, got the rates back up. In the UK it took years and massive efforts by doctors, governments and researchers to expose the flawed research. Even then the media reported the story as “new” research disproving the former research, rather than dodgy research causing fears magnified by tabloid journalism.
Alan Jones is getting some compulsory remedial journalistic training. The decision has been met with some amusement amongst many journalists, horror among shock jocks, and derision by Jones’s critics. But how much better are most real journalists? Do they understand statistics? Do they understand the significance of replicability in scientific research? Do they presuppose that lobby groups are good guys and industry spokespeople are simply trying to hide the truth? Do they actually know when (and why) industry people really are hiding the truth around issues such as climate change denialism?
It would be easy to conclude that the reporters are just swamped by too much information and work being processed by too few people on news desks. Equally they may just be suckers for PR people who play to their prejudices or weaknesses. But the evidence — replicated daily in the data disseminated through the news cycle — seems to point very strongly to innumeracy and scientific illiteracy.
Of course, whether PR people share that innumeracy and scientific illiteracy, or just exploit it, is another question.
I’m a journalism student and I’m shocked by what qualifies as ‘journalism’ these days. I stumbled across a story recently about a guy who was found with 12 kg of marijuana and told the court he wanted to ‘smoke himself to death’ because he was ill. This comment was repeated 5 times in a 300-word article: http://www.qt.com.au/news/former-soldier-caught-with-12kg-of-cannabis/1601670/ Another story by this ‘bright’ young journalist is a doozy: “Man filmed himself having sex with horse to send to ex”. If we keep feeding people this crap, they will want more. I’ve discussed in my studies whether we are dumbing down the news or people are getting dumber, and I think if we keep providing dumb stories like this, the dumb will get dumber. How does this help or educate anyone? It will get read, mark my words, but it really is pathetic.
Geez Noel journos already do words – now you want counting too!!! You folks are just insatiable.
I’ve been flailing about for several months here on Cr*key trying to explain the “issues” embedded in the methodology of the Essential market research results which are dutifully regurgitated here each fortnight without criticism or comment.
The results to date – none from Cr*key whatsoever, but at least Essential now admit that the self-selecting respondents are paid for their opinions, and have included a more detailed explanation of their methods in each report. But these limitations and flaws are never discussed or mentioned by Cr*key’s long-suffering advertorialists. Never.
The Essential polls are simply presented – along with all the rest – as if they were a reflection of “what we all think”.
So, if you have the time, take a look at any report on the Essential website and have a scroot of their methodology (usually on page 11).
It’s not that journalists can’t count – it’s that they don’t care. It’s that they are told to puff the thing, to take it seriously and tell us we should too.
I wonder why. I’d be thinking a contra-trade deal for some free market research work myself. Luckily that’s only corrupt or unethical when Alan Jones or Murdoch do it.
Admittedly, Peter, I haven’t read Essential’s methodology but I really doubt that this is the knockdown argument you are presuming.
Traditionally, scientific polling has relied on ubiquitous landlines and pretty high response rates to achieve randomness and decent sample size. The sample is then weighted so it is representative of the population.
The problem with the old approach is low response rates and the changing ubiquity of landlines.
One option to overcome this is the brute force approach: using robo-calling to reach enough people that you overcome the low response rates, and calling mobiles. But these have drawbacks in terms of increased costs and dubious randomness.
Pollsters like YouGov/Polimetrix have gone a different way to confront the problem. They invite a large pool of internet users to register in the first instance, and recruit a sub-sample and weight it in line with demographic information.
Certainly there is some self-selection in being an internet panel member, but it’s not obvious this is any more problematic than the self-selection that already exists in having a landline in the modern age and picking up the phone during business hours.
Now, I’m assuming Essential is doing something similar to YouGov, and, if so, it’s important to note that this approach is NOT unscientific like a user poll on a website.
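For what it’s worth, the weighting step itself is mechanically simple post-stratification: scale each respondent by their group’s population share divided by its sample share. A minimal sketch with made-up figures (the age groups, shares and counts are illustrative, not Essential’s or YouGov’s actual scheme):

```python
# Illustrative post-stratification weighting. All figures are invented.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts    = {"18-34": 150,  "35-54": 350,  "55+": 500}  # n = 1000

n = sum(sample_counts.values())
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}

# A respondent in an under-represented group counts for more than one:
# here 18-34 respondents get weight 2.0, 55+ respondents get 0.7.
print(weights)
```

The catch, of course, is that weighting can correct for measured demographics but not for whatever unmeasured ways panel volunteers differ from everyone else.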
Under normal circumstances I’d be agreeing with you Will. A randomish sample from phone polling isn’t too bad a strategy and allows a degree of balancing and geographical selectivity etc. Rough but cost effective and manageable.
But the Essential lot don’t use phones at all.
Bit hard to explain it in full but this is the gist:
What they do use is a market survey panel of some 100,000+ email addresses belonging to a market research firm. Essential draft the questions and they are inserted into a generalised consumer survey – sandwiched in between the toothpaste packaging and cereal crunchiness.
They send out some 7-8,000 of these by email and get about 1,000 replies back, more or less. The respondents earn credits and rewards for replying. Is this polling?
It is one thing to cold call 1,000 folks right on dinner time – it is entirely another to send out a parcel of questions to the same folks week after week and see who gets back to you. Nothing random about this at all. Nor are folks who join up to earn rewards for their opinions reflective of the wider community. Even the toothpaste fellas know that.
I have no idea how they balance this all out for gender, geography, electorates and the like. They reckon they do. But they don’t say how. I’m not sure they can, given the sample size and the locations being unknown.
Anyway it’s a far cry from standard polling by phone – landline or not. In no way comparable to them in fact. Right out on its own this process.
And worthy of some serious critical analysis rather than Cr*key’s slavish reporting of what the alleged data purportedly shows about “how we all think”.
Thanks for the article. It’s a thing that really bugs me about the way these matters are presented in the media. I’m especially annoyed by the over-used phrase, when boosting some public health message or wonder drug, “up to” – as in “researchers report up to three times more cases of… (insert alarming illness or hazard here)…” It’s especially galling when it is then revealed that the risk of the hazard was initially something like 6000 to one and is now, in the worst possible case, 2000 to one. Unfortunately sensationalism is the norm these days and appears unlikely to change.
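To make that point concrete, here is the relative- vs absolute-risk arithmetic using those same figures (1-in-6000 worsening to 1-in-2000):

```python
# The "up to three times" trap: relative risk triples while absolute
# risk barely moves. Figures from the comment above.
baseline = 1 / 6000
worst_case = 1 / 2000

relative_increase = worst_case / baseline   # 3.0, the headline number
absolute_increase = worst_case - baseline   # the real-world change

print(f"Relative risk: {relative_increase:.0f}x")
print(f"Absolute increase: {absolute_increase:.3%}")
```

The headline-ready “three times the risk” translates to an absolute increase of about one case per 3,000 people.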