Social media giants are increasingly looking to refine the systems they use to scour their sites for conspiracy theories, given the speed at which QAnon conspiracies, coronavirus misinformation, misleading information about vaccinations, and white supremacist propaganda spread online.
Research reports published by The Soufan Center and Rand Corporation demonstrate the near-impossible task Facebook and Twitter face in getting conspiracy-related material off their sites without employing a form of automation to detect and then drill down into content.
Rand was commissioned by Google’s Jigsaw unit to work out how it could get better at identifying material associated with conspiracy theories, which Rand says forms part of “truth decay”, given the volume of material across multitudes of sites.
“Truth decay” is the drift away from facts and analysis in public commentary, driven by increasing disagreement over facts and analysis, a “blurring of opinion and fact”, a rising volume of opinion and personal experience relative to fact, and declining trust in media and institutions.
Rand’s research team took a deep dive into conspiracy theories related to aliens coming to Earth, the dangers of vaccination, the origin of the coronavirus, and white genocide, then tested a more holistic approach to detecting misinformation online.
One problem in designing a model to separate the conspiratorial from the sensible was ensuring the model captured not just the right subject matter but also the nuance of the language in which conspiracy theorists cloak their kookiness.
One technique Rand used is “stance analysis”, an approach that mines text to determine how language is used to paint a picture of the world. Rand’s team used stance analysis in a previous study to nut out how the Russians interfered in elections just by analysing the style of rhetoric in posts.
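Neither Rand nor Jigsaw has published the underlying code, but the broad shape of a stance-style classifier can be sketched. The Python below is a minimal illustration, not Rand’s actual pipeline: the posts, labels and feature choices are all invented, and word n-grams stand in for the richer stance features a production system would engineer.

```python
# Minimal sketch of a stance-style text classifier. Posts, labels and the
# feature choice are invented for illustration; this is not Rand's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = promotes a conspiracy theory, 0 = merely on-topic.
posts = [
    "They don't want you to know what's really in the vaccine",
    "The health department published updated vaccine safety data this week",
    "Wake up, the truth about the virus is being hidden from us",
    "Researchers traced the outbreak to a seafood market",
]
labels = [1, 0, 1, 0]

# Word n-grams stand in for richer stance features (rhetoric, hedging,
# us-versus-them framing) that a real system would model explicitly.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["What are they hiding about the vaccines?"]))
```

The point of the stance angle is visible even in this toy: the two “conspiratorial” examples share rhetorical markers (“they don’t want you to know”, “hidden from us”) rather than a topic, which is what lets a classifier tell promotion apart from mere discussion.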
“It is one thing to identify an anti-vaccination topic; it is a very different thing for a machine to interpret the conspiratorial dimension of anti-vaccination talk, and the latter angle is critical if we want to distinguish between talk that promoted conspiracy theories and talk that opposes or simply addresses them,” the Rand report said.
Researchers found that the machine learning model they tested worked: it helped them detect conspiracy theory topics and isolate conspiratorial content within posts. They also found the model gave different weights to different aspects of language in the posts, to the point where they were able to better understand how conspiratorial rhetoric works in the online jungle.
“We found that matters of public virtue, such as health and safety, were the most important features that our model used for predicting anti-vaccination talk,” the report said.
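How a team might see which language features a linear model leans on can be sketched by continuing the toy pipeline above. Again, this is an assumption-laden illustration rather than Rand’s method, which is not public:

```python
# Sketch: rank the word features the toy model above weights most heavily.
# Illustrative only; Rand's feature set and model are not public.
# Assumes `model` from the earlier sketch is in scope.
import numpy as np

vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]

feature_names = vectorizer.get_feature_names_out()
weights = classifier.coef_[0]

# The largest positive weights push a post toward the "conspiratorial"
# label; inspecting them shows which language the model relies on.
for i in np.argsort(weights)[::-1][:5]:
    print(f"{feature_names[i]:25s} {weights[i]:+.3f}")
```

In a real system the features would be richer than raw n-grams (topics, rhetorical markers, stance cues), which is how a finding like “health and safety mattered most” could surface.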
Revelations from the research — as well as a review of relevant literature — allowed Rand to look at developing interventions that could help minimise the chance of harm arising from conspiracies spread online.
Content science firm Limbik, which specialises in analysing online content, partnered with The Soufan Center to look more closely at the QAnon phenomenon in the United States over the past year. A recent report found that almost a fifth of the posts Limbik analysed that amplified the various QAnon conspiracy theories originated in Russia, China, Saudi Arabia or Iran.
Data analyst Leela McClintock tells Crikey that the initial results from the automated searches, which involve compiling hashtags and keywords relevant to QAnon, were then reviewed manually to confirm the origin, context and relevance of posts.
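Limbik’s tooling is proprietary, but the automated first pass McClintock describes, compiling hashtags and keywords and flagging matches for human review, can be sketched. Everything below (the keyword list, the posts, the field names) is invented for illustration:

```python
# Sketch of an automated first pass: flag posts matching a QAnon-related
# keyword/hashtag list, then hand the queue to human reviewers. The list,
# posts and field names are invented; Limbik's tooling is proprietary.
KEYWORDS = {"#wwg1wga", "#thegreatawakening", "save the children"}

posts = [
    {"id": 1, "text": "Lovely weather in Brisbane today"},
    {"id": 2, "text": "The storm is coming #WWG1WGA"},
    {"id": 3, "text": "Save the Children from the cabal"},
]

def flag_for_review(post: dict) -> bool:
    """Cheap keyword match; origin, context and relevance are then
    confirmed manually, as described above."""
    text = post["text"].lower()
    return any(keyword in text for keyword in KEYWORDS)

review_queue = [p for p in posts if flag_for_review(p)]
print([p["id"] for p in review_queue])  # -> [2, 3]
```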
The Limbik analysts also found other patterns in the data.
“Another pattern that we notice more generally for inauthentic behaviour online that’s quite specific to China and Russia is a clustering of accounts that will often post the same types of information at exactly the same time,” McClintock said.
“This is a very peculiar happening in which certain accounts that are special interest-focused, for example, like a group of accounts with a Chinese admin that have lifestyle content will suddenly one day post something about human trafficking at the US-Mexico border, and then there will be a series of comments saying ‘Save the children’ or ‘The Democrats are trafficking children’ despite these accounts seemingly having this lifestyle-oriented nature.”
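The “same content at exactly the same time” pattern McClintock describes lends itself to a simple heuristic: group posts by normalised text and timestamp, and flag clusters spanning many distinct accounts. The sketch below uses invented data and an arbitrary threshold; real coordinated-behaviour detection is considerably more involved:

```python
# Sketch: flag clusters of accounts posting identical text at the same time.
# Data and threshold are invented; real coordination detection is richer.
from collections import defaultdict

posts = [
    ("acct_a", "2021-03-01T09:00", "The Democrats are trafficking children"),
    ("acct_b", "2021-03-01T09:00", "The Democrats are trafficking children"),
    ("acct_c", "2021-03-01T09:00", "The Democrats are trafficking children"),
    ("acct_d", "2021-03-01T11:42", "Great dumpling recipe, thread below"),
]

clusters = defaultdict(set)
for account, timestamp, text in posts:
    # Normalise lightly so trivial copy-paste variants still group together.
    clusters[(timestamp, text.strip().lower())].add(account)

MIN_ACCOUNTS = 3  # arbitrary threshold for this sketch
for (timestamp, text), accounts in clusters.items():
    if len(accounts) >= MIN_ACCOUNTS:
        print(f"possible coordination at {timestamp}: {sorted(accounts)}")
```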
Is the truth REALLY out there? Let us know your thoughts by writing to letters@crikey.com.au. Please include your full name to be considered for publication in Crikey’s Your Say section.
Good start, although I would prefer to see classes given to both children & adults, as they have been doing in Finland, to help people sort genuine news from the disinformation put out by you know who.
Maybe there should be a course designed to identify disinformation, and I think Ruv Draba could be a good person to design such a course. Are you interested, Ruv?
Aware of Finland too. Both Vic HSC (now VCE) and, I think, NSW HSC had a quarter of the General English Expression curriculum and syllabus until the early ’80s focused on clear or critical thinking, especially around media and advertising, and it segued into other subjects like environmental science too, e.g. ozone, urban design etc.
Then related content was disappeared… no wonder people struggle with science narratives and analysis around global warming, carbon emissions etc.
See, Zuckerberg, detecting “fake news” etc is possible… but you already knew that, didn’t you?
It’s education, but you will not find many LNP, let alone Labor, MPs promoting the need for everyone in society (not just school age) to learn digital and critical literacy skills; but then again, who wants empowered citizens?
PLEASE add Murdoch media to the list of misinformation spreaders. Imagine the world without Murdoch. He does more damage than even Facebook because so many people still think Murdoch rags are legitimate media.
That rubbish masthead of his, The Australian (oh the irony), is an example of what I am referring to.