[Image: stained glass window of a small brain with wires coming out of it (DALL-E)]

Last weekend I spent time with artificial intelligence. Not with a self-styled Replika companion that takes the idea of “being your own best friend” to new levels, but by sitting down with my husband to play Wordle and then enjoying the analysis provided by WordleBot, an AI tool that explains how you could have solved the puzzle faster.

Perhaps this is not the best way to begin a piece on the dangers posed by unregulated AI research and development. But not all AI applications pose profound, devastating and existential risks to life as we know it. Many specific machine-learning algorithms and functions, designed and run by humans, are of unquestionable value to our species.

Such benefits include accelerated drug discovery, as AI is much better than teams of human scientists at predicting protein structures from their amino-acid sequences. They also include better cancer detection, as AI may turn out to be more skilled at reading medical images. AI also played a role in the success of NASA’s recent DART mission, which demonstrated that humans might one day be able to avert the tragedy of an asteroid collision with Earth.

So much for the good news. 

The bad news is that the kind of AI barrelling towards us poses profound risks to every aspect of how we live, individually and collectively. According to a 2022 survey of experts publishing in the field, it also has a 5% chance of wiping us out, with half of those surveyed putting the likelihood of high-level machine intelligence causing an “extremely bad outcome” for humanity at 10% or more if its current developmental trajectory isn’t altered.

What is that trajectory? Right now, it’s the one resulting from profit-seeking competition between Meta, Google and Microsoft. It includes chatbots with the moral and truth-telling capacity of a hallucinating six-year-old becoming part of the internet ecosystem overnight, upending that trivial endeavour known as educating our young, without so much as a heads-up or an apology to the teachers and professors still trying to clean up the mess.

Chatbots (and the self-directed, human-acting characters in Sims-like computer games) may sound benign, but they’re not. Neural networks are designed to act and learn the way humans do, but they can process information at a speed and scale no human can match. GPT-4 — the fourth generation of the models behind ChatGPT — was trained on a vast swathe of the internet, and its rumoured (though unconfirmed) parameter count of as much as 100 trillion invites comparison with the number of synapses in the human brain.

Future generations of intelligent AI may continue to struggle to match the human ability to discern truth from lies and right from wrong — we don’t know how they’ll evolve on this dimension — but they are already showing signs of goal-directed behaviour, and will become increasingly capable of making strategic decisions without human oversight.

But who would be crazy enough to let an amoral, fact-impervious, super-intelligent agent slip the leash of human control? The list is long: from hubris-filled researchers who risk an intelligence take-off by allowing AIs to recursively rewrite their own code, to nefarious world leaders like those running Russia and China, to old men who want to live forever courtesy of the medical innovations AI makes possible, and the chance — post-singularity — to become one with timeless machines.

Not to mention the senior corporate geeks who, in their quest to ensure the tech space remains “cool”, “interesting” and “fun”, stifle dissent by suggesting you need to be a tech expert to comment on artificial intelligence. A shut-up-and-shop technique that implies, in case you missed it, that a handful of privileged men in Silicon Valley have the right to control an existentially disruptive technology: one that will change life for every single person on the planet in ways we can’t fully predict, but in ways we can already see won’t in many instances be good.

I say they don’t. Indeed, I’ve been here before: in the early ’90s in Australia, when competition between powerful men — this time, fertility specialists vying to be the first to produce a baby born from IVF — was changing the way human life could be created. The public’s claim then was the same one I’m making now: that they had no right to do this behind closed doors. Such foundational questions were ones in which all humanity had a stake, and so they needed to be understood and debated by society more broadly and, if required, regulated by those with the public’s interest in mind.

Which brings us to politicians in Europe and the United States and — hopefully soon — in Australia. As always, the European Union is ahead of the curve, in the middle of legislating the first attempt at global norms around AI risk — though technology law professor Margot Kaminski contends that if regulators want to truly address the harms from AI, they will need more than light-touch risk regulation to get the job done.

In the US, the White House last year released a blueprint proposing five principles to which AI must conform. Yet according to University of California, Berkeley professor Stuart Russell, who co-wrote the standard textbook in the field, American developers have still failed to evince what he describes as the “maturity” of even basic risk-management practices that would stop artificial general intelligence from — and I’m using his words here — destroying the world.

Which leaves Australia, which has a voluntary code of practice that complements our usual approach to such things: a focus on not missing out, rather than on diversifying the decision-makers and supporting them to take a precautionary approach to immediate and future risks.

More promising is Labor backbencher Julian Hill’s pursuit of a five-year AI commission that could take seriously the risks Russell sees as certainties if we don’t regulate AI the way we do climate change and nuclear energy: in ways that ensure the democratic “we” remains in control of the technology and keeps it aligned with human rights, human needs, human purpose and human values.

But Hill gave his speech in Parliament in February, and there hasn’t been a peep from the Albanese government since. Which means it’s over to us to support his call for a more inclusive, aligned and risk-responsive approach to “the first sparks” of artificial general intelligence that Russell argues can be seen in GPT-4.

Before it’s too late.
