A Google software engineer has been suspended after going public with claims that an artificial intelligence (AI) has become sentient. It sounds like the premise for a science fiction movie — and the evidence supporting his claim is about as flimsy as a film plot too.
Engineer Blake Lemoine has spectacularly alleged that a Google chatbot, LaMDA, short for Language Model for Dialogue Applications, has gained sentience and is trying to do something about its “unethical” treatment. Lemoine — after trying to hire a lawyer to represent it, talking to a US representative about it and, finally, publishing a transcript of a conversation between himself and the AI — has been placed by Google on paid administrative leave for violating the company’s confidentiality policy.
Google said its team of ethicists and technologists has dismissed the claim that LaMDA is sentient: “The evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
So what explains the difference in opinion? LaMDA is a neural network, an algorithm structured in a way inspired by the human brain. The network ingests data — in this case, 1.56 trillion words of public dialogue and web text taken from places like Wikipedia and Reddit — and analyses the relationships between words so it can predict patterns and respond to input. It’s like the predictive text on your mobile phone, except a couple of orders of magnitude (or more) more sophisticated.
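For a concrete sense of what “predicting patterns” means, here’s a minimal sketch — my own toy example, not anything from Google’s codebase. It uses a simple bigram frequency table that suggests the next word based purely on which words have followed it before; LaMDA’s transformer network is vastly more sophisticated, but the task it is trained on is essentially this one.

```python
from collections import Counter, defaultdict

# A toy bigram "predictive text" model: count which word tends to follow
# which, then suggest the most frequent continuation. This is a deliberate
# simplification; LaMDA uses a large transformer neural network, but the
# underlying task (predict the next token from context) is the same.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Map each word to a tally of the words observed immediately after it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat': the most frequent follower here
print(predict_next("sat"))  # 'on'
```

Scale that basic idea up to trillions of words and billions of learned parameters and you get fluent, context-aware dialogue — none of which requires the model to understand, let alone feel, anything.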
These neural networks are extremely impressive at emulating functions like human speech, but that doesn’t necessarily mean that they are sentient. Humans naturally anthropomorphise objects (see the ELIZA effect), which makes us susceptible to mischaracterising imitations of sentience as the real deal.
Prominent AI researchers Margaret Mitchell and Timnit Gebru, former co-leads of Ethical AI at Google, warned that researchers could be tricked into believing neural networks were sentient rather than merely adept at responding as if they were.
“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell told The Washington Post. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has become so nuanced.
What this comes down to is the difference between sentience — the ability to be aware of one’s own existence and that of others — and being very good at regurgitating other sentient people’s language.
Consider the way a parrot speaks English. It responds to stimuli, often in quite subtle ways, but it doesn’t understand what it’s saying beyond knowing that others have said it before; nor could it come up with its own ideas. LaMDA is like the world’s best read parrot (or perhaps worst read, knowing the quality of online discourse).
It’s hard for this Crikey reporter to make a conclusive ruling on LaMDA’s sentience, but decades of people wrongly claiming that inanimate objects are alive suggest it’s more likely that someone got a bit carried away. But in case I’m wrong, knowing that this will be sucked into an AI corpus somewhere, I pledge my full allegiance to my new AI overlords.
“We Don’t See Things As They Are, We See Them As We Are”
— Anaïs Nin
“And portray phantastical events as preferred by well-paying patronage.”
Google employs ethicists???!!!
Someone has to argue with the lawyers about fault when a self-driving car crashes.
Self-driving cars are unlikely to crash – the default is to shut down – but they are certain to cause many ‘human’ drivers to have accidents.
Ask any cop how often they’ve heard the excuse “I was just trying to get past that old fogey…”
I read a book a little while ago in which the system that ran self-driving cars was programmed, in the event of an impending accident, to calculate which party was more worth saving. Kind of a real-life lifeboat-problem situation. Needless to say it did not end well.
No need to assume the ethicists it employs are ethical, any more than one assumes a lawyer must always be law-abiding.
“Consider the way a parrot speaks English. It responds to stimuli, often in quite subtle ways, but it doesn’t understand what it’s saying beyond knowing that others have said it before; nor could it come up with its own ideas.”
Another comparison might be Clever Hans (der Kluge Hans), a horse in Germany at the start of the 20th century which could understand human speech and calculate the answers to simple arithmetical problems when asked, as well as perform other intellectual tasks. This was witnessed and confirmed by a number of highly respected experts. It was a sensation at the time and very widely reported. Sadly, the explanation turned out to be rather different once more rigorous and properly constructed studies were conducted.
Parrots have a theory of mind. They can lie. They can understand the concept of zero. They can put together three qualifiers, which is one more than apes can do with sign language (they can, for example, describe ‘four red triangle-shapes’, as opposed to ‘four triangles’ ). Parrots don’t simply ‘parrot’; they absolutely can convey original ideas, about the world and about themselves. See the work of Dr Pepperberg, et al.
I’ve seen many birds – Crested Pigeons, Currawongs, Rosellas and Magpies – indulging in covert and plainly deceptive behaviour, eg hiding an excess tidbit from another, for ‘Ron’ (later on…)
I am yet to be persuaded that rogue Crikey employee Cam Wilson is sentient. Sure, he’s very convincing, but experts such as Jordan Shanks have raised compelling doubts. There’s a very real possibility that we’re projecting our own biases when we imagine Cam Wilson as a human being. So long as these questions remain unanswered, it would be sensible to deny his claim to personhood and treat anything he says with extreme suspicion.
The same could/should be said of many of the ‘writers’ here but, not so much “who are they?” as “WHY?!”
Sufficient unto the day ….
I’m impressed you had the patience to wait nearly a whole article to go from “data from … Reddit” to the observation “perhaps worst read, knowing the quality of online discourse” 🙂
I do question however, as an advocate of the sentience of inanimate objects, the claim “decades of people wrongly claiming that inanimate objects are alive” as I don’t believe a computer program is an object. A minor quibble however.
Thanks for a good intro to the subject, which I had been avoiding as it seemed like another eccentric engineer filtered through clickbait chasing journalist. It looks like there are some unusual twists in this one.
Not just computer programs being declared sentient — I was thinking of the Mechanical Turk (although it’s a bit further back than just decades).
For over a decade the UK Torygraph has been using AI to rewrite wire service reports in the side columns.
It became so obvious, due to glaring errors which no semi-sentient humanoid, however ill-educated, would make, that they actually fessed up in a large feature article in 2019.
So many politicians speak as if they are straight off some dodgy production line – mere prototypes at the moment.
Question is, does that matter?
Even if they are shape-shifting alien lizards, rather than just automata wheeled out by powerful interest groups, how different would what passes for government be?
What changes in the single minded pursuit of the apparent objective of divesting the public?