The cliché of a “rise of the machines” is as old as science fiction itself. Beginning with Mary Shelley’s Frankenstein in 1818, and appearing in all manner of guises — from the robots of Karel Čapek’s R.U.R. (Rossum’s Universal Robots), to Clarke and Kubrick’s HAL 9000, to the android matriarch in I Am Mother — it shadows the genre with all the tenacity of Arnie’s Uzi-toting Terminator. Machine intelligence is destined to eclipse our own, the idea goes, envisaging a struggle for power between smart machines and their human creators — a mechanical mutiny.
Recent developments have given credence to this rather hoary notion. Last year, for example, the visual artist Supercomposite (real name Steph Maj Swanson) became briefly famous in tech circles when her experiments with “negative prompt weights” (instructions to an image-generating AI to create an image as different from the prompt as possible) turned up the character of “Loab” — a devastated-looking woman who proved nearly impossible to eliminate from subsequent images, and whose haunting likeness became increasingly violent and gory as the experiment progressed.
More recently, the emergence of ChatGPT has led Elon Musk and other would-be Oppenheimers to refresh their warnings about “strong AI”: that which is not merely intelligent but capable of human-like reflection. Even if no one is claiming (yet) that we are facing a Terminator-style Judgment Day, we are well past the point at which Alan Turing’s measure of machine intelligence — whether or not a machine can pass as a conscious actor — was decisively breached. In short, things are getting distinctly weird.
Nevertheless, the rise-of-the-machines scenario isn’t as relevant as it may seem at first blush. In fact, there’s no reason to think smart machines are actually thinking, if by “thinking” we mean the mental activity engaged in by humans. This is best expressed through another movie cliché, one drawn not from science fiction but from horror: we are waiting for the monster to materialise in the doorway, when in fact it was in the room all along.
Technological somnambulism
Monsters can be defined as liminal beings. They exist between categories, in a way that subverts, or threatens to subvert, the moral, social or natural order. This is what Freud called the “uncanny”: the monster creeps us out because it doesn’t quite fit with our received ideas of how things should look, especially when the thing in question has some features of humanness. Vampires, zombies and evil clowns all provoke this psychic disturbance, and so too, in a weak way, does the rogue AI — the non-human machine that, in point of its intelligence, appears to be gaining on its human counterparts.
But it is not the uncanniness of AIs that should worry us. Rather it is the uncanniness that may come (and may already be coming) to descend on humans as they grow ever more reliant on smart machines and transformative technologies. With so much of their social and creative lives now shaped by powerful tools, humans could well become permanently alienated from one another and from their own humanity. This is where the real monsters lurk.
In my new book Here Be Monsters: Is Technology Reducing Our Humanity? (published May 1 by Monash University Publishing), I consider how recent developments in artificial intelligence, biotechnology, bio-hacking and pharmaceuticals may be affecting us. How might new communication technologies be shaping human social life, and what consequences might that have for social solidarity and individual selfhood? How might developments in biotechnology alter the notions of luck and equality on which social solidarity (arguably) rests? And how might the increasing tendency towards algorithmic complexity — the creation of a “black box society” — affect human creativity and agency?
What makes these questions critical is the character of emerging technologies. Of course, many technologies have changed very little over the past half-century. The car is still substantially the same machine it was in 1973, while the airliner has barely changed at all in that time, except in terms of power and efficiency.
But technologies emerging now are of a different order than those based on the internal combustion engine. Adept for many centuries at bending capricious nature to our will, we can now intervene directly in it, and do so at the level of the atom, the cell and the molecule. We have moved from harnessing nature’s power to actively reconstituting nature itself.
Unfortunately, these developments in technology have not been accompanied by a public conversation about whether or not they are good for us. Technology is developed within an institutional framework that protects it from humanistic oversight. Science, which used to enjoy a degree of independence from technological considerations, is now largely subordinated to those considerations, which are subordinated, in turn, to the profit motive.
The modern university sits at the heart of this framework, where commercial research arms, industrial parks and an emphasis on the marketisation of knowledge through patents and licensing keep research and development on a neoliberal trajectory. Technological development just seems to… happen. Such discussion as there is about emerging technologies is suffused with a sense of inevitability, even a kind of fatalism.
We need to develop a more “techno-critical” attitude to new and emerging technologies, and indeed to technologies in general. We need to reject “technological somnambulism” (as the political theorist Langdon Winner dubs this sense of inevitability) in the name of a democratic “technics” that puts human freedom and flourishing at its core.
Such democratisation does not merely entail the socialisation of private technologies, though that would undoubtedly form part of the process. It would entail thinking deeply about those technologies from a radically humanistic perspective, about what they add to our lives and what they take away. It would mean asking what we want from technology, which is a way of asking who we are. To put it in the words of Lewis Mumford, a philosopher of technology from a very different era, it would mean rediscovering “the human scale”.
Everyman is a handyman
Of course, human beings are technological animals. The use of stone tools predates the emergence of Homo sapiens by some 3 million years — there is no period of our history that does not know the use of tools. Just as anyone reading this sentence was using technology before they woke up this morning (technology in the form of blankets, a bed, a house, an alarm clock, etc), so our species “woke up” in technology. We are Homo faber — man the maker. Not knowing one end of a power drill from another is no disqualification from this condition. Everyman is a handyman.
But humans are not just the users of their tools. In complex ways, they are also their products. Contra the Silicon Valley view that technologies are politically “neutral” phenomena that can be used for good or evil ends, the technologies we use have the power to shape the way we live in community with each other, which will in turn shape the kinds of technologies we decide (or others decide) to create in the future. In this sense, technologies are themselves political. And yet our politics has little to say about them.
As we stand on the edge of a revolutionary era of technological development, it is necessary to bring technology to the centre of our deliberations, as part of a broader examination of the kind of entity humankind is.
We need to reopen the question of technology, upon pain of getting monstered by it.
Richard King’s book Here Be Monsters: Is Technology Reducing Our Humanity? is out May 1 through Monash University Publishing.
I’m not sure this means they’ve entered the realm of “manlike intelligence” so much as that mining vast troves of conventional discourse makes generating a plausible output easy.
Physically these models are huge, living in sprawling data centres. They have no senses and are basically immobile. They are not “manlike” but “warehouse-like”. They react when prompted. They do not initiate anything.
What we have seen with things like voice recognition, image processing and text-to-speech is that, once trained, neural nets can be extracted from the original software, reduced to a compact digital form and then used in a camera, phone, scanner, software, whatever.
LLMs are not reducible to a compact standalone form as yet. If they can be, we might start to see an explosion of autonomous “things” able to interact with humans, the world and each other.
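For the curious, here is a minimal sketch of that “extract and compact” step, assuming PyTorch is available; the tiny network is a stand-in for an already-trained model rather than any real product, and the file name is invented.

```python
import torch
import torch.nn as nn

# Stand-in for an already-trained network (say, a small keyword-spotting model).
model = nn.Sequential(
    nn.Linear(40, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# 1. Shrink it: dynamic quantisation stores the Linear weights as int8,
#    roughly a quarter of their float32 size.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# 2. Freeze it into a standalone artefact that no longer needs the original
#    training code, only a small runtime on the target device.
example_input = torch.randn(1, 40)
traced = torch.jit.trace(quantised, example_input)
traced.save("compact_model.pt")

# On the device (or anywhere else) the file is loaded and run on demand.
restored = torch.jit.load("compact_model.pt")
with torch.no_grad():
    print(restored(example_input).shape)  # torch.Size([1, 10])
```

Scaled up, this is roughly how speech and vision models end up on phones; the sticking point with LLMs is that even aggressively quantised weights still run to gigabytes.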
The most compact form an LLM could occupy (though not the fastest) would be as a biological structure – but we already have those – I’m one of them, or at least part of my brain is.
It’s worth pointing out that it’s decades since we were promised genetically engineered organisms that “refine raw materials” or “consume waste” or “renovate our aging bodies” – they’re still not here. The laws of physics and chemistry, and the complexity of the task, prevailed.
So I’m not expecting androids or replicants to show up anytime soon.
The danger of LLMs is less that they will create autonomous “things” and more that they will be employed to replace humans wherever businesses think they can get away with it.
As worrying as current machine apps are for mundane things like jobs, human interactions, traffic control and warfare, they’re not yet within cooee of being AI.
Far more hazardous is ‘natural stupidity’ because, unlike machines, it is always alloyed with envy, cupidity and arrogance.
Aren’t machines going to have to embrace envy, cupidity and arrogance to become truly intelligent?
Hardly – those traits are antithetical to and detract from, one might say, obliterate intelligence.
Then it may as well be an upscaled BASIC programme, a sophisticated traffic light management system convincingly imitating a lollypop lady.
No, they will do what the really smart people have always done, and delegate the drone & drudge work of life to lesser intellects. All the pejorative emotions are simple reductive algorithms.
Great riff and I look forward to reading the book. Harder than ever for dead trees to stay ahead of the machine mefinx!
We’ve already had our fun on threads hereabout getting HPTChat to fudge up very respectable stabs at both lefty and righty Knowledge Class boilerplate, so let’s take it as read, shall we. At the very core of the rise of the machines is of course (as ever) what it says about human intelligence/thinking, not artificial. I think we can all recognise already that it’s somewhat…underwhelming. Certainly more so than our smug assumptions have led us to believe. The sophistication of ‘approximated mimicry’ is already outpacing us and will very quickly functionally obliterate any real distinction. Certainly for 99% of us. At least in the rational, empirical mode so beloved of Infallible Secular Man.
Look out, Rich: God is circling again! Chortling as fondly, and as curiously, as ever!
Many thanks for that Jack! (always love yr comments, btw)
Are you doing an East Coast tour with it? The last few decades seem to me to have re-vitalised the importance of writers physically re-connecting with their words in an interactive material public realm that’s not just some gussied-up, frangered-up organ of commerce: real public lectures, real debates, real writerly sh*tfights, even. I rail endlessly about so many of the smooth-lefty uber-smarties hereabouts not even using their real names in these threads for not unrelated reasons (besides purely recreational purposes, obviously). There’s so many words floating about these days – good words, brilliant even, decent, loving, courageous, insightful, etc – but most are as fleshless and untethered as those spat out so eloquently, en masse and at exponential pace now, by AI. Reading online, geez, it’s like f**king an infinite bucket of little plastic letters nowadays. No traction, no leverage, no resistance…no pleasure, no point. Give us a vanishingly short space of time and the monkeys will have typed out every conceivable combo possible. Then – as with live music, live performance – it’s going to be a matter of who’ll go to the barricades to fight out a place for theirs. Again. The guts, one imagines (I hope not 2 cackhandedly) of your thesis regarding reclaiming tech as tool/town square totem/talking stick.
Dan J here at Roaring Forties in Balmain does excellent writer nights. See if you can gatecrash Monsters onto the roster. Big, smart, ready-engaged audience. Better still, I would pay good coin for say an Arena co-curated word-death-match on a God v. AI-esque theme. A real old-fashioned writers’ blue. A proper one, with swear words & jokes about f*tties & p**fs & p*do priests etc allowed (iff funny). You and Rundle could toss a coin to see who gets lumped with the Loser Nerd Case. It may not yet be quite recognisable, but…this…is the grand epic of our times. I reckon. So good luck with your stab, may u shift bulk units.
One knows one’s humble place, mate. 🙂 Fame doesn’t just kill words these days, it bloody cremates them. Pray that Monsters is not a bestselling smash, man…
While today’s “AIs” are far from intelligent (i.e. they can be trained for specific purposes but I wouldn’t trust one to baby-sit) and the term “AI” is more a marketing slogan than any sort of realistic description, it doesn’t hurt to think ahead. It also doesn’t hurt to look behind either – one thing I have noticed is that nobody appears to be developing any type of algorithmic “3 Laws of Robotics”. Admittedly the 3 Laws in their specific phrasing were incredibly simplistic and unconvincing, but it would be nice to think that someone was looking out for the humans and writing some safeguards into the programming.
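To make the thought concrete, here is a toy sketch, with entirely invented names and thresholds, of what “writing safeguards into the programming” could mean at its most literal: every proposed action is vetted against hard-coded constraints before the system may act. Real guardrail work is far messier, not least because “risk to humans” is not a number a machine can simply read off.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk_to_humans: float   # 0.0 (none) .. 1.0 (certain harm) -- invented scale
    ordered_by_human: bool

# Hard constraints checked in priority order: a crude echo of the
# "don't harm humans first, obey humans second" idea, not a real standard.
def is_permitted(action: Action) -> bool:
    if action.risk_to_humans > 0.0:
        return False              # rule 1: never accept any harm to a human
    if not action.ordered_by_human:
        return False              # rule 2: only act on human instructions
    return True                   # rule 3 (self-preservation) left out of the toy

for a in [
    Action("fetch coffee", 0.0, ordered_by_human=True),
    Action("rearrange furniture overnight", 0.0, ordered_by_human=False),
    Action("drive through red light", 0.3, ordered_by_human=True),
]:
    print(f"{a.description}: {'permitted' if is_permitted(a) else 'blocked'}")
```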
As for a future of being ruled by (real) AIs, I’m not entirely convinced that this would be a bad thing. Anybody who has read much Iain M. Banks or Neal Asher will be familiar with the attitude of our remote descendants who are ruled by advanced AIs and wouldn’t have it any other way – all of history shows that humans can’t be trusted to run things themselves.
See Leslie Cannold’s article yesterday – there was a link to “5 principles” you might find interesting.
I hadn’t bothered reading past “Wordlebot…” so I took another look. What she is talking about isn’t AI (although she keeps using the term). It is what used to be called “expert systems” – programs that had been trained to recognise patterns or apply recursive rules long past the point where humans would have gone home for the day. And her “five points” reference seems to be largely an update to the Privacy Act (or its US equivalent, or a US acknowledgement that having one might be a good idea).
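For anyone who hasn’t met the term, here is a minimal sketch of the rule-chaining idea behind classic expert systems; the facts and rules are invented purely for illustration.

```python
# A toy forward-chaining rule engine in the old "expert system" spirit:
# rules are re-applied until no new facts emerge, long past the point
# where a human would have gone home for the day.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
    ({"recommend_rest"}, "notify_doctor"),
]

changed = True
while changed:  # keep cycling through the rules until nothing new fires
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['has_cough', 'has_fever', 'notify_doctor', 'recommend_rest', 'suspect_flu']
```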
My point was about rules for designing future AI systems rather than legislating for legal remedies after the fact. Not that legislation isn’t important though. It is.
I haven’t heard much/anything for the last couple of years from the well-informed researchers shaking & sweating in their labcoats about the prospect of autonomous killbots in war – it was, at the time, a real concern.
Almost as if they have been disappeared or co-opted.
What’s the general rule for those powerful players who lord it over us – first ignore, then ridicule, threaten and finally bribe dissenters?
Failing that, imprison and/or terminate with extreme prejudice.
I was trying to remember which novel had a regressing society so a’feared of they AI thangies that they executed them as a public spectacle.
It might have been “Consider Phlebas” but, not being at home, I cannot check and googling, even Quora fandom can’t confirm.
I’m not sure either, although the (unrelated) example that springs to mind is Dune’s “Butlerian Jihad”.
Although… Anybody remember Trevor Eve in Shoestring? TV show about a radio-based “private ear” whose back-story was that he used to work with computers until he went a little wild with an axe in the datacentre? (Which, when I grew up and spent way too much time in datacentres, stopped making sense. The servers are in racks which have locked metal doors. Also, servers are built a lot more solidly than consumer PCs. You might dent the cases but unless you’re a lumberjack they’ll survive. If you really wanted to do some damage you would attack the air conditioner.)
Richard King (“Technology is getting weird. It’s time to push back against the machines”) asserts that, unlike the internal combustion engine or aircraft, which haven’t demonstrably changed THAT dramatically over the years, computing seems to have.
No, not really. As a person who has been continuously involved in personal computing since 1981 – even before the IBM PC, with the Tandy TRS-80, the original Apple models and other early attempts – I disagree. Computing has got faster, chips have got smaller and more efficient, and there is much more internal memory and storage available, but the basic premise of what they do has not changed. Any person who had exposure back in the day to, say, a 1979 – 40+ year old – Tandy Model 1 with 48K RAM, a pair of 360K disk drives and a black-and-white screen would have no issue working one of today’s desktop computers.
What has changed is the amount of data there is easy access to when making decisions. So-called “AI” is not “thinking”; it is making algorithm-based calculations, themselves based on access to lots of data. True thinking involves emotion and experience, and no computer or machine has both of those, and probably won’t have for a long time, if ever.
david@creativecontent.au
Thanks David. My point wld be broader than that. I’d say that the *convergence* of info-tech with new biotechnologies, nanotechnologies etc. has led/is leading to powerful new tools that will allow us (and in some cases are already allowing us) not merely to harness or redirect nature but to intervene in it at its most fundamental levels.
I don’t say (and nor do I believe) that AIs are “thinking”… But the *idea* that machines can think (and that the brain is, therefore, just a computer) is one that has a lot of currency in the San Francisco Bay Area and is driving the kind of innovations that will make us “uncanny” to one another.
Perhaps neurodivergent human brains feel “uncanny” to others as part of their daily grind. AI might not seem uncanny to the uncanny.
Yeah this is very insightful, Donna. Twenty-odd years ago I had a booze-&-dope fuelled nuclear-scale psychotic meltdown. (I got better…) Since then I’ve worked quite a lot as a carer with neurodivergencies of all kinds: ABIs, dementia sufferers, various spectrum dwellers, mental healthies…there is little question in my (slightly battered!) mind at least that what we’ve come to regard as ‘rational, empirical, linear’ human ‘intelligence’…is highly contrived and a million miles from being either rational or empirical, as we think of it. As for linearity, that’s more a function of processing power than of intelligence per se…and it’s here where I think the truly evolutionary component of AI lies. Yes, in its ‘mere’ engineering-tweaking, and thus, yes, its ‘un-intelligence’. What boring old exponentially-accelerating processing speed will gift us, though, is a rapidly compressing ‘adjacency’ of singular digi-cognitive synapse fires. Taken to its infinite logical end point, as we ought to given Moore and tappable galactic solar energy reserves…AI will drive machine cognition in a kind of reversal of the big bang: an effective closing in on the real singularity, the temporally asymptoting space-time pocket where our AI/machinery effectively ‘sees’ all the universe’s data – past present and future – all at once, in a kind of digital nirvana.
It’s certainly what happens when you go batsoid nuts (that tantalising green flash…was it…real?). Time becomes unhinged, is removed as a factor in ‘rational, empirical, linear’ thinking. It’s also as far as I can tell what seems to occur in acute dementia patients, albeit in a different form. The unshackling of our cognitive processes from the tyranny of linear time. We’ve all choofed up, got walloped, had those brief transcendent moments under a variety of stimulants, natural, chemical, circumstantial, hormonal. It’s in these, as Donna says, ‘neurodiverse’ slivers of existence…that I think the key to human intelligence truly lies. I think AI is going to function on that plane, not the one our smartest current ‘thinkers’ – of course – insist on assuming is the definitive one.
In fact it takes an awful lot of determined cognitive dissonance to convince yourself that our current model of human intelligence is indeed so. If you observe the world with real intelligence – especially our determined self-destruction of it – any rational, empirical analysis could only possibly conclude that what we call ‘rational, intelligent’ thinking is in fact irrational and stupid. That’s simply incontestable: we are hurtling to our deaths and doing nothing of any real use about it.
So I’d strongly caution everyone on this thread – the AI experts and AI erudite especially – to keep in mind the strong likelihood that, when it comes to ‘human intelligence’, we don’t have a frigging clue what we’re talking about!
This is…acute. Shhh, Donna – don’t give away the big cosmic secret! The machines are still…on our leash. Just.
Twenty years ago I drank myself into a spectacularly psychotic melt-down, the whole messianic, diabolical, cops-n-drawn-guns sh*tshow. Since then I’ve worked as a carer alongside all sorts of oddball brains, from ABIs to spectrum lurkers to full-blown schizophrenics to late-stage dementia sufferers. When you’re batsoid, however you are, the common denominator seems to me to be that your synapse clock shucks off Humanity’s modern-age linear tyrannies. Your brain is working just fine, it’s just doing so at various crazy speeds. In a full-blown overspeed-outage it can approach a kind of terrifying ‘nirvana’ – one ‘processing’ all the universe’s data, in all its possible iterations past, present & future, all at once. Anyone who’s ever smoked decent chuff will recognise this unshackling, but you can get a tiny hint of it from many more banal states of heightened ‘presence’ – runner’s high, the ecstatic pain of childbirth, extreme fear, razor-peaks of violent anger, artistic epiphanies of all kinds…as organic motherboards we tend to hit the hard stop that is our limited energy conversion capacity. Our fleshy brain can only take so much processing action before it shuts down in various ways. Even in the crazy; even in the rare true genius.
The key to AI’s evolutionary importance for us as a species I think lies in an exponentially accelerating processing speed that won’t hit those same energy conversion stops. Not at least until it’s driven/led our species to a whole new plane of being. The AI- and computing-learned among you on this thread who are still fixating on ‘intelligence’ as a same-old ‘linear’ quantity are, I think, really missing the dawning of something-wow-before-our-eyes. Like us Humans, AI isn’t terribly intelligent. It’s just a whole lot faster – already – than we are at being unintelligent. God, of course, is the fastest unintelligent data processor of us all. Once we’ve harnessed our machines adroitly enough to catch us up to Him…well, where to next, Einsteins? That’s the fun question. That’s the singularity of…real interest.
The key for us – as I suspect Richard’s book is onto – is to make sure we hang on to that leash without holding AI back.
Speaking of the internal-combustion engine – there’s a case to be made that cars have had a profound negative impact on human interactions – the news is full of incidents of road rage and cyclists being treated as lesser beings by hoons behind the wheel – and it was like this from the beginning, from the days even before Ford’s Model T, with motorists racing around cities mowing down pedestrians who had the audacity to cross the road.
It’s a case I’d make myself, to some extent … The (techno-critical) philosopher Jacques Ellul was once discussing organ transplants. Surely, someone asked him, that’s a technology everyone can get behind? Ellul pointed out that one of the principal reasons there were so many healthy organs around was because of … the internal combustion engine.
To be clear, I’m not saying other techs don’t have negative influences … I’d like us to reopen “the question regarding technology” in general. What I’m saying is that certain techs are now *so* potentially transformative that reopening it is a matter of urgency …
O.o
I think you might be projecting a bit there.
People regularly have “issues” moving between MacOS and Windows, or iPhone and Android. (Or different UNIXes if you want a modern CLI example.)
Heck, people often have issues just upgrading from an older to newer version of MacOS/Windows/iOS/Android.
Going straight from a primitive CLI to a modern advanced GUI – may as well just be starting from scratch with the modern GUI and no prior knowledge.
For another example, early search engines like Alta Vista (let alone their predecessors) are an entirely different animal from more modern tools like Google, and they are in the process of being revolutionised again by the likes of ChatGPT.
A great deal of “thinking” is absolutely just making “algorithm-based calculations”, which is why the better computers become at it, the more they can replace people.
I’m not so sure. Is the modern GUI really all it’s cracked up to be in the first place? When the Apple Lisa came out, a very old friend of mine observed at the time: “we went through thousands of years communicating by pictures, then we got language and writing. Now we have gone BACK to pictures again.” I suggest there is less work in teaching someone the basics of a text-based UI and getting them up and running, as against a GFX one. I had less stress teaching people running apps under DOS than I ever get with a GUI-based one.
Well that’s a very different argument.
GUIs do not prevent the use of text and CLI interfaces where they make sense. The converse, however, is not true.