There was a fascinating story in Saturday’s Australian
about scientists in northern Italy who’ve developed “a new approach to
measuring the extent of drug use.” Their method consists of testing for the
presence of cocaine and its by-products in the rivers, reasoning that that’s
where they’d end up after passing through the users and entering the
sewerage system. As the report says: “The method has previously been used to
measure the by-products of widely-used prescription drugs, and has produced
results consistent with known prescribing patterns.”

Testing
on the River Po, they estimated levels of cocaine use that were almost three
times the official figures, which are based on survey data. “Assuming there
are 1.4 million young adults in the Po river basin, the official statistics
suggest there are 15,000 cases of cocaine use a month. But the evidence from
the water suggests the actual usage is about 40,000 doses a day.” Tests from
smaller cities yielded comparable results.

Why should a
psephologist be interested in this? Well, if these results are even roughly
correct, they provide a useful reminder that people don’t always tell
pollsters the truth. Sometimes this is for obvious reasons, such as when it
comes to illegal drug use. In other cases it might be due to ignorance,
vanity, carelessness, or just a willingness to say whatever they think will
most quickly get the pollster off the phone or away from their front
door.

In polls that gauge voting intention, we have an objective
check in the form of actual election results. They are our equivalent of the
cocaine traces in the river: we can tell which polls were right and which
were wrong. As a result, such polls have developed a high level of
accuracy, although they are still not infallible. But where there is no such
check – as with polls for approval ratings, “attitudinal surveys,” and
most market research – a healthy scepticism is the best approach.