Donald Trump is back in the picture, and that should scare everyone — at least everyone who cares about living in a free democracy. The possible resurgence of the former US president who faces 91 criminal charges is a symptom of a civic landscape that is fractured and an information ecosystem that is broken.
This year, several major democracies will hold elections, including the US, the UK and India. And while Australia is not likely to have a federal election until 2025, the results of the global elections will have a wide impact.
It is harder than ever for populations to come together: social media continues to facilitate disinformation, the laws that guard our privacy and rights online are outdated, and artificial intelligence (AI) is set to turbocharge digital harms and amplify misleading and fake content.
Through AI, the public will have to contend with a tsunami of cheap propaganda that could overwhelm credible information. There are no hard rules against deepfakes — synthetic images or videos of fake events, created using a form of AI called deep learning — and digital platforms have deprioritised ethics and election integrity teams.
OpenAI recently announced it will not allow political candidates and their campaigns to use ChatGPT in the US elections, but that won’t stop propaganda from being rife while the general public has access. Other digital platforms have struggled with moderation in the past, and OpenAI will be no different.
A World Economic Forum survey named AI-generated misinformation and disinformation as one of the top global risks for 2024 — ahead of cost-of-living pressures, the economy and civil unrest. Despite this, the Australian government’s initial response on “safe and responsible AI” is worryingly vague.
At the heart of the response is a “risk-based approach”. This is a sensible way of trying to account for the wide spectrum of impacts AI will have. However, the details of these risks are still to be developed by an advisory group later this year. Misinformation and disinformation are mentioned in the context of the work being developed under Communications Minister Michelle Rowland. There is also mention of risks that AI could contribute to the “undermining of social cohesion”.
One of the more concrete examples provided is a watermark system that labels AI-generated content. This is intended to help the public distinguish between human- and AI-generated materials. There are inherent complications with this. The widespread adoption of AI, which the response encourages, will mean that more people will use AI as part of their regular content generation and consumption, including for professional purposes — so what then will the label communicate? Will it imply that the AI content is less credible? If AI use is sanctioned in newsrooms, for example, will this not render the label obsolete if everyone starts to use AI for news content?
This approach once again puts the work of navigating misinformation and disinformation into the hands of individuals who are burdened with sifting through what is credible and what isn’t, while digital platforms remain largely unscrutinised over the quality and veracity of their labelling efforts.
This is just one example of the complexity of the work ahead, and why the government’s initial response feels lacking in urgency and effort. AI’s ability to produce a storm of false information, facilitated through unscrupulous digital platforms that have still not been able to resolve these issues from years past, will result in an AI-powered election disinformation war.
An AI-turbocharged election disinformation war should worry all areas of government, whether they’ve been given an AI remit or not. Certainly the ministers responsible for AI regulation need to take it seriously, given the issue could directly affect their reelection chances.
Is AI doing more harm than good? Let us know by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.
There are now many open-source trained models that you can run on your own computer. One of the models I have has been trained to mimic Trump. It is a bit of a laugh to ask Trump questions about the world.
These models won’t be covered by the government’s new AI laws.
If you are interested then search for ollama and ollama-webui.
There are many other programs out there that allow people to run AI models on their own computers, bypassing the big tech companies.
Right, so mandatory watermarking is an unenforceable non-starter.
We (mostly) got used to spam texts, emails and phone calls; we’ll (eventually) get used to fake videos too.
Sadly I believe that a good deal of the conspiratorial nonsense was originally generated by human trolls. I did it myself once on FB and had friends going, oh yeah, that’s true. I couldn’t believe it. What I wrote was, I thought, so obviously a joke, but they took it seriously. AI will take this to the next level, as any fool can now make a coherent-sounding argument for anything. I said coherent-sounding. I don’t mean logical or reality-based, it just has to sound OK and the ignoramuses will lap it up.
I have a friend who has always been into UFOs, etc. His latest is flat earth. Occasionally I watch some of the videos he posts. The latest was a very slick production. It “sounded” very intelligent and wasn’t too bad actually. It was definitely an AI-generated job. It didn’t discuss flat earth directly; what it did was debate the meaning of the word cult.
The title was “Is Flat Earth a Cult”. And of course flat earth is not actually a cult. Very clever. Avoid the actual subject while attacking its detractors.
It is also going to be a brave new world when GenAI gets unleashed on phishing content: pity the less computer-literate trying to discern real mail from a phishing attempt without the incorrect addressing, spelling errors and so on that give them away today.
The hysteria about the threat of AI seems to be oblivious to the existing influence of religion & politics which have been propagating misinformation and disinformation since the dawn of humanity, and more recently marketing / advertising and social media.
We are already saturated in misinformation and disinformation. AI simply adds a bit more.
Agreed. There must be a threshold tipping point where there is a general consensus that the origin and credentials of an article are critical if it is to be believed.
With unhinged cartoon characters standing in for politicians in recent years, the no voice algorithm etc, newscorpse after dark, a cognitive development in understanding must be due.
Conservative media is likely going to get caught out lying, omitting and layering misinformation more often; it cannot help itself.