Last weekend I spent time with artificial intelligence. Not with self-styled Replikas that take the idea of “being your own best friend” to new levels, but sitting down with my husband to play Wordle and then enjoying analysis provided by WordleBot, an AI tool that explains how you could have solved the puzzle faster.
Perhaps this is not the best way to begin a piece on the dangers posed by unregulated AI research and development. But not all AI applications pose profound, devastating and existential risks to life as we know it. Many specific machine-learning algorithms and functions, designed and run by humans, are of unquestionable value to our species.
Such benefits include accelerated drug discovery, as AI is much better than teams of human scientists at predicting protein structures from their amino-acid sequences. They also include better cancer detection, because AI may turn out to be more skilled at reading medical images. AI also played a role in the success of NASA’s recent DART mission, which proved that in the future humans might be able to avoid the tragedy of an asteroid colliding with Earth.
So much for the good news.
The bad news is that the kind of AI barrelling towards us poses profound risks to every aspect of how we live individually and collectively. It also has a 5% chance of wiping us out, the median estimate in a 2022 survey of experts publishing in the field, with roughly half of those surveyed giving at least a 10% likelihood of high-level machine intelligence causing an “extremely bad outcome” for humanity if its current developmental trajectory isn’t altered.
What is that trajectory? Right now, it’s the one resulting from profit-seeking competition between Meta, Google and Microsoft. It includes chatbots with the moral and truth-telling capacity of a hallucinating six-year-old becoming part of the internet ecosystem overnight, upending that trivial endeavour known as educating our young, without even a heads-up or apology to the teachers and professors still trying to clean up the mess.
Chatbots (and self-directed, human-acting characters in Sims-like computer games) may sound benign, but they’re not. Neural networks are designed to act and learn the way humans do, but they can process information at a scale and speed no human can match. GPT-4, the fourth generation of the models behind the chatbot ChatGPT, was trained on vast swathes of the internet, and is rumoured, though not confirmed by its maker, to manage as many as 100 trillion parameters, roughly the number of synapses in a human brain.
Future generations of intelligent AI may continue to lack the human ability to discern truth from lies and right from wrong (we don’t know how they’ll evolve on this dimension), but they are already showing signs of goal-directed behaviour, and will become increasingly capable of making strategic decisions without human oversight.
But who would be crazy enough to let an amoral, fact-impervious, super-intelligent agent slip the leash of human control? The list is long. From hubris-filled researchers who risk intelligence take-off by allowing AIs to recursively write their own code, to nefarious world leaders like those running Russia and China, to old men who want to live forever courtesy of the medical innovations AI makes possible, and the chance — post-singularity — to become one with timeless machines.
Not to mention the senior corporate geeks who, in their quest to ensure the tech space remains “cool”, “interesting” and “fun”, stifle dissent by suggesting you need to be a tech expert to comment on artificial intelligence. It’s a shut-up-and-shop technique that implies, in case you missed it, that an existentially disruptive technology, one that will change life for every single person on the planet in ways we can’t fully predict but can already see won’t in many instances be good, is something a handful of privileged men in Silicon Valley have a right to control.
I say they don’t. Indeed, I’ve been here before: in the early ’90s in Australia, when competition between powerful men, that time fertility specialists vying to be the first to have a baby born from IVF, was changing the way human life could be created. The public’s claim at the time was the same one I’m making now: that they had no right to do this behind closed doors. Such foundational questions were ones in which all humanity had a stake, and so they needed to be understood and debated by society more broadly and, if required, regulated by those with the public’s interest in mind.
Which brings us to politicians in Europe and the United States and, hopefully soon, in Australia. As always, the European Union is ahead of the curve, in the middle of legislating the first attempt at global norms around risk, though law professor Margot Kaminski contends that if regulators want to truly address the harms of AI, they will need more than light-touch risk regulation to get the job done.
In the US, the White House released a paper last year proposing five principles to which AI must conform, though according to University of California, Berkeley professor Stuart Russell, who wrote the standard textbook in the field, this has not stopped American developers from failing to evince what he describes as the “maturity”, or even the basic risk-management practices, that would stop artificial general intelligence from (and I’m using his words here) destroying the world.
Which leaves Australia, where a voluntary code of practice complements our usual approach to such things: an approach focused on not missing out, rather than on diversifying the decision-makers and supporting them to take a precautionary approach to immediate and future risks.
More promising is Labor backbencher Julian Hill’s pursuit of a five-year AI commission that could take seriously the risks Russell sees as certainties if we don’t regulate AI the way we do climate change and nuclear energy: in ways that keep the democratic “we” in control of the technology and ensure its alignment with human rights, human needs, human purpose and human values.
But Hill gave his speech in Parliament in February, and there hasn’t been a peep from the Albanese government since. Which means it’s over to us to support his call for a more inclusive, aligned and risk-responsive approach to “the first sparks” of artificial general intelligence that Russell argues can be seen in GPT-4.
Before it’s too late.
Is artificial intelligence getting too intelligent too quickly? Let us know your thoughts by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.
I recall the head of HR lecturing us on the existential threat of Y2K… the existential threat turned out to be the re-employment of a legion of business analysts, project managers and other “non-technical” types across the resultant hellscape of corporate technology.
Currently the AI seen in ChatGPT is just a very sophisticated database query result. There is no internal motivation, creativity or drive. It’s a sophisticated tool.
The danger is not from AI itself but from its (mis)application by business types to tasks currently performed by humans. The trivial goal is destroying jobs as fast as possible.
See the current problems experienced by outsourcers – many (most?) are now running big “IP Repatriation” projects (because they doubled, not halved, their costs).
AI will be the core of the next round of silver bullets.
Yes, there will be amazing advances and unforeseen benefits. But as with all gold rushes, only a few will get rich.
As for “AI” – these “Large Language Models” are not Intelligent – SkyNet will not be trying to eliminate humans anytime soon – capitalism is doing a fine job of that already.
These large language models aren’t trying to be intelligent. All they/AI is trying to do is…think like humans. I see no evidence whatsoever yet of any conceptual or computational barrier to it doing so with functionally indistinguishable efficacy, in all but vanishingly differentiated emotional and aesthetic mind-states that even we can’t articulate, express or manifest biomechanically.
And exponentially accumulating evidence that in the case of at least a fair majority of humans currently living or now-dead, it already can. This evolutionary twilight isn’t about feeling out the limits of machinery’s intelligence, it’s about reaching the limits of humanity’s. And stepping beyond it.
Worth noting in passing btw the reasonable evolutionary argument that the advent of linguistics, literacy and lately computerisation has actually regressed human ‘intelligence’ and how it ‘thinks’ over the last few thousand years, maybe even deliberately…to a point where binary computing is able to mimic (or parallel/graft onto) the neo-organic computing processes, which is what’s occurring with AI now. ie It makes the old us dumber to prepare to make the new us smarter, say using quantum computing and unlimited solar energy, which will propel human intelligence far beyond what would otherwise have been an evolutionary dead end. Kind of like backing up a few erroneously taken railway spurs/junctions, to select the evolutionary mainline again.
We assume that the linear, accumulative, utilitarian rationality of modern man is the high tide mark to date of our ‘intelligence’ only at our arrogant peril. What will prolly endure of us in the end is crazy stupid love. The rest, just organic garbage in, organic garbage out, above and beyond future evolutionary requirements.
The people who get rich during a gold rush are those selling shovels…and ‘services’.
Thank you. This is such an important article, on an incredibly important issue. Like climate change, however, it’s too big for many of our brains, so it’s easy to put our collective heads in the sand. Therefore we are reliant on leadership from our experts through government policy.
I hope we can get ahead with regulation. I am sick of the tech industry running things for its own gain whilst our society reels from the results. For example, I’m still frustrated by our censorship laws, which were introduced to protect our children but were never adequately adapted to manage the internet, with Pornhub and gambling sites easily accessed by our kids. Self-regulate? I have many skills, but my kids are better at IT.
Psychologists still struggle to unequivocally define what is meant by ‘human intelligence’. It is a bit of a stretch, therefore, to apply the sobriquet ‘artificial intelligence’ to the grab-bag of technologies currently included within that umbrella term. The people who sell those technologies are keen to apply it, of course, because it helps them sell more of them. That’s no reason for the rest of us to follow their lead.
These products are NOT intelligent – not in the sense that most laypeople would understand it, at least. That is the source of most of the problems associated with their use.
One way to resist, therefore, is to not use the term. Don’t buy into the tech marketing department BS.
‘These products are NOT intelligent – not in the sense that most laypeople would understand it, at least.’
I think you’ve got that a bit the wrong way around, Graeski. I think that what AI is actually starting to show us is that…’human intelligence’ is not as intelligent as we’ve always thunk it is…at least in the sense that most laypeople would understand it.
The real problem with regulating it, btw, is of course deciding who gets to play the omnipotent, omniscient umpire/arbiter role. Not quite enough to say ‘Anyone But Musk!’, is it. God is laughing quietly at us again, with a fond and still-faintly proprietorial curiosity. It’s going to be a thrilling next decade or so. Strap in, Grae.
It seems that our only hope is a decent-sized solar flare to obliterate the entire Net, so that we bipeds are no longer just trapped flies for wanton boys to torment.
I, for one, am looking forward to such an event.
Well… Given that our whole society has become entirely dependent on the internet within a couple of decades, that’s not a pleasant prospect. Such solar activity would also take out a great deal of our power infrastructure, which couldn’t be fixed for YEARS…
We really should be building up a supply of transformers for the power grid.
Or dispense with national or large-scale grids – the higher the voltage and the wider the area, the greater the vulnerability.
BTW, it tends to be transformers that are banjaxed by coronal mass ejections (CMEs), clouds of electrically charged particles unleashed from the sun’s outer atmosphere. That was true even in predigital days, when it was plain old copper wire & PCBs (delightful forever chemicals).
Today’s infrastructure is far more vulnerable due to overdependence on exceptionally brittle technology.
A Faraday cage (little more than a chookwire enclosure) sufficed to protect earlier technologies from anything other than a life-threatening solar event (always keep an old SW/MW radio, forget FM, with extra batteries in a biscuit tin for the Big Day). Today a decent lightning storm can wreak havoc over vast areas, in ways that would make the Quebec or New York blackouts seem like a low-participation Earth Day.
I agree AI and new technology along the lines of biometrics etc should be regulated the same way new drugs are before they are released to the public. They have so much potential for harm and have already caused harm to people. At the very least, AI steals a lot of creative works made by humans without the humans’ permission. There’s great potential in AI, but it does need to be regulated and have unbiased oversight, not left up to the companies and tech gurus who create it and use it for profit.
Who’s going to program the ‘unbiased oversight’? God?
More likely one of the wannabe godlets, Gates or Musk.
Just hope that it’s not whichever scam visa guest worker set up this site’s madBot.
I agree there’s great potential in AI, but… where do I start? Just a few things to think about: the political operatives in the USA planning to use AI to reap greater-than-ever political donations; those same US operatives planning to fake photos/vids of the opposing team’s leader – ditto political operatives in Aus for the next federal election.
I’d like to legislate that AI is freely available in Aus – but the legislation MUST contain a start date of 1/7/2063 (ie 40 years hence). <big smile>
If the full flowering of AI coincides with the arrival of some of our more serious climatic tipping points, which seems likely to me, we will lose our chance to regulate AI. In the climate dramas we won’t be interested in AI. We will only be concerned with our own immediate survival.
We will be ‘interested in AI’ in the way drowning humans are always interested in a limited number of lifeboats: desperately, brutally, religiously…to the Darwinian death. Regulating it is the last thing we’ll be wanting to do.
What are the AI tech-squillionaires if not Pharaohs hoping to construct electronic mausoleums?