
Researchers, governments, businesses and independent think tanks are increasingly using artificial intelligence (AI) to try to stop bad actors doing bad things in the digital world.

Research houses across the globe have found more efficient ways of identifying the spread of harmful ideologies, using computing power and algorithms to weed out the multitudes of posts pushing conspiracy theories. These range from the ideological dregs of QAnon, vaccination myths and white genocide to theories aimed at individuals, businesses and national government infrastructure.

The Rand Corporation published a study earlier this year, commissioned by Google’s Jigsaw, that developed a way of searching for conspiracy theories capable of identifying both the theories themselves and the nuances in the language used by those peddling them.
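To give a sense of how that kind of search might work, the sketch below is a minimal, illustrative text classifier, not Rand’s or Jigsaw’s actual method. It trains on posts a human has already labelled and flags new posts whose wording resembles known conspiracy content for review; every post, label and threshold here is invented for the example.

```python
# Illustrative sketch only: a generic text classifier for flagging posts that
# resemble known conspiracy-theory content. This is NOT Rand's or Jigsaw's
# actual method; the training posts, labels and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts already labelled by human reviewers
# (1 = pushes a known conspiracy narrative, 0 = benign).
posts = [
    "the vaccine is a secret plot to control the population",
    "great weather for the farmers market this weekend",
    "elites are hiding the truth about the stolen election",
    "our charity raised funds for the local food bank",
]
labels = [1, 0, 1, 0]

# Word-level n-grams give the model some sensitivity to the coded phrasing
# conspiracy communities often use, rather than single keywords alone.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Score new posts; anything above the (arbitrary) threshold is queued
# for human review rather than acted on automatically.
new_posts = ["they are hiding the real vaccine numbers from us"]
scores = model.predict_proba(new_posts)[:, 1]
flagged = [p for p, s in zip(new_posts, scores) if s > 0.5]
print(flagged)
```

Real systems of the kind the study describes would rely on far larger labelled datasets and richer language models, but the human-labels-plus-classifier loop is the same basic shape.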

Rand, the Soufan Center and the Oxford Internet Institute have each undertaken studies on how best to grapple with what Rand calls “truth decay”, which is driven by influences including state actors such as Russia and China wanting to mess with the minds of already polarised groups within a country.

Work by the Oxford Internet Institute on industrialised disinformation also points to the United Kingdom and the United States using similar tactics to spread their own propaganda about, and to, other countries.

Bad actors move swiftly and in greater numbers in a digital environment, and governments and businesses need machines to help them detect and quash different forms of digital darkness, whether it is disinformation spread by foreign powers, cyber attacks on government or private institutions, or the all-too-common efforts by organised crime to steal identities and funds.

Machine learning has also been used to develop solutions for companies and not-for-profit agencies that have wasted time dealing with queries related to conspiracies spread online.

Polaris, an advocacy body for victims of human trafficking, called in the Soufan Group and content science firm Limbik to help it develop tools for dealing with queries related to child abuse conspiracies circulated by QAnon adherents. Polaris can now reliably predict which online conspiracy trends are likely to translate into phone calls to its counsellors, and briefings are provided so they know what might come through the system.

Anjana Rajan, Polaris’ chief technology officer, recently told Crikey that a human trafficking conspiracy theory involving the retailer Wayfair generated a surge in phone calls to its hotline in July 2020.

“When we look at our data we know that a typical trafficking case results in 2.5 calls to the hotline,” Rajan said. “The Wayfair case alone was 536 calls — each of which contained no actionable information for us to use.

“What that translates to is that the time we spent responding to disinformation about Wayfair could have instead been spent replying to an additional 42 trafficking cases.”

Putting technological solutions in place is critical for organisations such as Polaris, given that some counsellors are mandatory reporters under law and would have to alert authorities to allegations of child trafficking.
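To illustrate the kind of early-warning capability Polaris describes, the sketch below relates hypothetical online trend signals to the hotline calls that followed using a simple regression, then forecasts the call load from a newly detected spike so counsellors can be briefed. It is not Polaris’ or Limbik’s actual model; every figure and feature name is invented.

```python
# Hypothetical sketch of an early-warning model for conspiracy-driven call
# spikes. All numbers and features are invented for illustration; this is
# not the tooling Polaris or Limbik actually built.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [posts mentioning the conspiracy that day, shares of the top post]
trend_signals = np.array([
    [120, 300],
    [450, 1200],
    [900, 5000],
    [60, 150],
])
# Calls the hotline received the following day (invented numbers).
calls_next_day = np.array([8, 25, 70, 4])

model = LinearRegression().fit(trend_signals, calls_next_day)

# Forecast for a newly detected spike so staff can be briefed in advance.
new_spike = np.array([[1500, 9000]])
print(f"expected extra calls: {model.predict(new_spike)[0]:.0f}")
```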

AI is not being used just to dig through a large jumble of online data to sift fact from fiction or pick friends from foes. Seven universities are looking at how AI can be used, with human intervention, to deal with cybersecurity threats.

University of Wisconsin-Madison, Carnegie Mellon University, University of California San Diego, and Penn State, along with three Australian universities — University of Melbourne, Macquarie University, and University of Newcastle — have formed a multidisciplinary team to look at developing the technology behind what they call human-bot cybersecurity teams (HBCT).

This project has scored $3 million from the federal government as part of the US-Australia International Multidisciplinary University Research Initiative to look into how AI or machine learning can help lessen the load for those involved in managing cybersecurity threats.

Professor Benjamin Rubinstein, the principal investigator, told Crikey that 16 experts from those universities will be involved in finding answers to conundrums such as how AI systems can better interact with humans, and how to make it harder for attackers trying to ram open the digital doors of institutions.

“Despite the immense benefits of automation, attackers can quickly adapt to changing conditions and find flaws in automated systems,” Rubinstein said. “Effective coordination of human-bot teams is therefore a grand challenge for cybersecurity and the focus of this initiative.”

Rubinstein believes AI should be used to supplement human decision-making rather than be something developed to run on autopilot.

“No high-stakes decisions should be made solely by an AI, however AI can support human decision-making,” he said. “We must remain cognisant of the well-documented issues involving bias, fairness, explainability, privacy and transparency in AI decision-making.”
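As a rough illustration of the human-in-the-loop approach Rubinstein describes, the sketch below has a model score incoming alerts but escalate anything above a risk threshold to a human analyst rather than acting on it automatically. The alert fields, scores and threshold are assumptions made for the example, not part of the research initiative.

```python
# Minimal human-in-the-loop triage sketch: the AI handles routine alerts and
# only recommends on high-stakes ones, which a human analyst then decides.
# Alert structure, risk scores and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    description: str
    model_risk_score: float  # produced upstream by a trained detector

HIGH_STAKES_THRESHOLD = 0.7  # arbitrary cut-off for escalation

def triage(alert: Alert) -> str:
    """Route low-risk alerts automatically, escalate the rest to a human."""
    if alert.model_risk_score < HIGH_STAKES_THRESHOLD:
        return f"auto-logged: {alert.description}"
    # High-stakes call: the AI only recommends; a person makes the decision.
    return f"escalated to analyst for review: {alert.description}"

alerts = [
    Alert("10.0.0.5", "failed login burst from known scanner", 0.35),
    Alert("203.0.113.9", "possible data exfiltration to new domain", 0.92),
]
for a in alerts:
    print(triage(a))
```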

The project is in its initial stages but at least one thing is certain: anybody expecting to be able to delegate significant decisions on national security to lines of computer code will be rather disappointed.