An Australian mayor is threatening to sue OpenAI for defamation over false claims by its artificial intelligence chatbot ChatGPT that he was jailed for criminal participation in a foreign bribery scandal, when in fact he was the whistleblower who exposed it.
Gordon Legal, acting on behalf of Brian Hood, mayor of the regional Victorian council Hepburn Shire, sent a concerns notice to OpenAI on March 21, claiming that he had been defamed by the company’s chatbot.
Hood’s lawyers allege that ChatGPT told users that Hood was found guilty of charges relating to bribes paid by Note Printing Australia and Securency Australia to officials in Indonesia, Malaysia, Vietnam and Nepal between 1999 and 2004.
His actual role was as a whistleblower in the scandal that plagued the two subsidiaries of the Reserve Bank of Australia. Multiple executives, not including Hood, were charged over their roles in the scandal, and the companies were fined $21 million.
Hood said he only looked at what ChatGPT said about him after he was alerted to it by others. He’s worried about the answers hurting his reputation as a local councillor and businessman.
“It’s remarkable. It’s devastating. It’s incredibly inaccurate. The media gave me the tag of whistleblower, I was the prosecution’s witness, I did reveal a number of things. There was never a suggestion that I did anything wrong,” he told Crikey.
The notice, first reported by Forbes Australia, claims that OpenAI has caused damage to Hood’s reputation by giving “inaccurate and unreliable answers disguised as fact”.
“As artificial intelligence becomes increasingly integrated into our society, the accuracy of the information provided by these services will come under close legal scrutiny. The claim brought will aim to remedy the harm caused to Mr Hood and ensure the accuracy of this software in his case,” Gordon Legal partner James Naughton said.
Victoria’s defamation laws require prospective claimants to serve a concerns notice that outlines the defamation claims and gives publishers an opportunity to respond. Hood said he hasn’t heard anything back from OpenAI since his lawyers sent the notice last month, but he is considering filing proceedings once the mandatory 28-day waiting period ends.
“We haven’t heard anything. We’ll see what happens,” he said.
Crikey was unable to replicate the claims when interacting with ChatGPT. When asked about Hood’s involvement with the Note Printing Australia scandal, the bot said: “To the best of my knowledge, there is no information to suggest that Brian Hood, the former mayor of Hepburn Shire Council, was involved in any way with Note Printing Australia or the scandal that surrounded the company in 2012”.
Whether ChatGPT can be sued for defamation has not been tested in Australian law. Online platforms have been deemed publishers, and therefore liable, for hosting defamatory material; Google, for example, has repeatedly been successfully sued over content it listed in search results.
OpenAI did not immediately respond to a request for comment.
I asked ole mate C-GPT about the accuracy of its info and it replied, “.. since I am a machine learning model and not a sentient being, I may occasionally provide incorrect or incomplete information. Therefore, it is always important to critically evaluate the information I provide and cross-check it with other reliable sources to ensure its accuracy.”
Hmmm, my greatest fear is that bad poetry will soon take over the world.
It’s not usually a defence to libel, is it, say in a newspaper, if there is a disclaimer to the effect that information may be incorrect?
Would “the reasonable man” believe that something stated as fact on ChatGPT was either (a) true or (b) more likely to be true than false?
I’m more likely to believe Scott Morrison than ChatGPT frankly, but I may not be “reasonable” as defined. If it helps, m’lud, I lived in Brixton SW2 for a few years and would have regularly caught the Clapham omnibus.
The Number 35 presumably?
…infamous for travelling in convoys of 10 (for mutual protection?)
Considering the level of information and digital literacy in the country, I think the reasonable person could be expected to believe what ChatGPT creates.
Elon Musk successfully defended a recent investor class action on the basis that (I paraphrase) no rational person would believe Elon Musk, or any other divine being, when they tweeted “funding secured”, despite the fact that TSLA routinely announced corporate actions on Twitter.
True, and it demonstrates just how out of touch with reality that decision was.
This poses an interesting question, as well as exposing ignorance of what is meant by “Artificial Intelligence”.
The responses generated by ChatGPT are not coded by its programmers, who have absolutely no control over what it may come up with, nor any way of knowing it in advance.
So how can they be accused of defamation?
There is no direct line of responsibility.
You might as well sue Alfred Nobel for the damage to Juukan Gorge.
The problem with anything online: if the program owner is notified of illegality, it only takes a minute for that material to disappear as the program is changed. The complainant, one hopes, has a screenshot or, better still, a printed copy of the offending screen.