On Friday, the robodebt royal commission will hand down its findings. What is expected is a scathing report chronicling the failure of the executive arm of the Australian government to fulfil its most basic responsibilities to its citizens. Robodebt ranks among the worst mistakes of the Commonwealth public service since Federation in 1901. Its lessons are profound and numerous, extending to failures of bureaucratic culture, transparency and reporting, and basic accountability in leadership.
Centrally, the malign act of robodebt was one of automated persecution, authorised at scale — a judgment handed down upon the vulnerable by a literal engine of the state, without direct recourse or right of reply. A radical violation of every principle of procedural justice, it demonstrated to the Australian people a profound disrespect and devaluing of human life and a disregard for the primary obligation of a duty of care.
Yet while these violations are without doubt all fundamental indictments, they are only the surface of a much deeper failure. At robodebt’s heart is a total misalignment between state administration and the basic values of the democratic project, made possible by human misjudgment of the risk that technology advancements pose to our systems of governance.
The lessons of robodebt are not local but global, and far-reaching into the future. The central concern is the danger of the unfettered use of technologies similar to artificial intelligence in consequential decision-making, driven by a human hunger for cost-efficiency and a corresponding hubris to presume that gain can come without cost. It is a lesson in the catastrophic harm of non-human administration of human life and the totalising and uncompromising consequences of authorising an agent machine to manage people completely free of moral oversight.
Robodebt was a machine-learning program trained to use large-scale datasets from the Australian Taxation Office to detect and pursue discrepancies in income to ensure greater compliance in payments received by welfare recipients. It was meant to be cost-effective and efficient, deployed with the goal of optimising for the detection of non-compliance.
The field of AI has accelerated rapidly in the past decade, with advances not only in narrow-use programs (such as the AI systems employed by Centrelink) but more recently in the release of Large Language Models (LLMs) such as ChatGPT. The fundamental issue with AI is that while we have made exponential leaps in the power of such systems to optimise for a wide array of tasks, we have made far fewer gains in the fields of safety and alignment.
We have built machines that can solve and execute complex utility functions far faster than our own brains. We have not, however, worked out how to optimise for those tasks while still programming for human values. At the most rudimentary level, alignment is the attempt to ensure that AI systems work for humans and support human goals, no matter how powerful the technology becomes.
Robodebt is an example of what can go wrong when ambitions for AI are misaligned with human values. Absent human supervision, the algorithm made a series of decisions and took actions that had disastrous consequences for countless human lives, as seen in the case of Jarrad Madgwick, 22, whose tragic suicide in 2019 came as a direct response to receiving a debt notice from Centrelink.
Such systems will continue to be used in government, including in Home Affairs and in the detection of visa fraud. Around the world, governments are increasingly utilising the technology, for anything from calculating bounce-back rates in hospitals to waste management. Globally, governments are leaning into this space, finding new applications and touting the benefits of efficiency in service delivery and reductions to budget bottom lines.
Australia has just taught the world the cost of careless application. The areas in which governments should be investing are AI alignment, regulation, safety and risk-based approaches. The European Union has started this work and has favoured a risk-based approach, differentiating the use of AI according to whether the risk is unacceptable (such as the manipulation of human behaviour) or low risk. The EU framework is nuanced but cautious, and most critically, its focus is on preventing human harm and preserving democratic values.
Australia’s regulatory approach has been sluggish. The Department of Industry, Science and Resources released the discussion paper “Safe and Responsible AI in Australia” in June this year. While identifying some major themes and proposing a one-page, watered-down, risk-based approach at the end, the paper offers little to move the dial on AI safety. The proposed Australian risk-based approach reads more like a voluntary code of conduct than a regulatory framework. It underscores a difference in approach: while the EU has prioritised the protection of its citizens, Australia has been reluctant to “stifle innovation” and has left us exposed in the process.
To unleash a powerful technology such as robodebt with no regard for the implications for human life is beyond irresponsible. It is a callous and calamitous failure of public administration, and the responsibility ultimately lies at the top. The action by the current government to initiate a royal commission was not only appropriate but essential.
In the end, robodebt was a decision made to deploy a powerful deterrent against the most vulnerable citizens in our society. It was a decision of the previous government and must be fully owned by those who knew and did not act. Complex technologies are a part of our future, but using them without human oversight, without proper regulation and in the absence of due thought about safety and alignment is not only irresponsible, it is criminal.
For anyone seeking help, Lifeline is on 13 11 14 and Beyond Blue is on 1300 22 4636. In an emergency, call 000.
I wouldn’t call the programming used in the Robodebt debacle AI. More likely AS, that is: Artificial Stupidity. I don’t think any really sophisticated artificial intelligence techniques were used to create these debt notices, just some data matching and bad maths linked to an email generator.
I agree, Stuart. Describing all computer software as ‘AI’ is dangerous. The kind of algorithm deployed in Robodebt could have been built into computer programs fifty years ago, long before the idea of artificial intelligence entered the public lexicon. If Robodebt was a human being, its IQ would register in the negative: as you say, stupidity was built into the code itself.
The threat in categorising all software as AI is that genuine AI poses risks over and above traditional algorithms, all of which need to be addressed but may be overlooked if all software is lumped in together. Other than that, though, I very much agree with the author’s analysis. We haven’t solved all of the problems associated with traditional systems yet. We are a long, long way away from being ready to deal with the social implications of real AI.
It’s the kind of simple-minded software you’d expect from a mediocre computer science undergraduate.
Yes indeed. Not much sign of intelligence by any usual definition of the word.
Nailed it. Artificial stupidity it was, to a tee, when anyone with half a clue about how declaring casual income to Centrelink works could tell you it was deliberately obtuse, with the only plausible reason being the LNP's unwavering hatred of the poor (but lust for poverty creation).
And women, perpetrated by supposed sisters in plum roles, but the bulk of the blame lies with the neocons in all political parties, because hey, they've still got the same idiots in top jobs moving public ownership and wealth into corporate profiteering. Stuff individuals' rights and creative agency; like their masters, they monetise human need via aged care and women's rights. They lie and say we need skilled workers! Disabled people, children and the working poor, more like. One bird lies and says she has created meaningful employment and training for women and "older" workers (not that you see that anymore; in old movies, you know, older people ran things after a lifetime, were seen as the top of the expertise and were respected). It's a Ponzi scheme run by corporate hypocrites and elites.
I think it’s not so much AI or AS but Stupidity by “Intelligent Design”!
A machine is an extension of the hand. A hammer can sweetly strike a nail or painfully strike a thumb – it’s the operator, not the machine. The Robodebt Machine did precisely what it was intended to do, which was to ruthlessly apply a modern iteration of the poor law upon the working class. Don’t go blaming the machine for the negligent and the intentional harm caused by the operators.
Sorry, Griselda, I didn’t see your post before doing mine above.
And yes, I wish people would stop calling it a ‘mistake’. It worked very efficiently and exactly as intended.
Albo has now quietly stated, under his breath, that it was illegal only because the dodgy brothers hadn't made it a bad law before they implemented it. Like Rishworth, currently advocating, simply because one is unemployed and despite the economic cost to the public purse, that disabled people be forced into indentured work which only serves to make billions for job trainer/provider companies (the worst non-training and desperately awful so-called "courses"; a smarter Australia, are they kidding? Billion-dollar contracts, and no improvement in outcomes according to their own data). So they increase the rubbish and the abuse by the parasites at our trough: so-called "public servants" making lazy, self-serving use of lobbyist reps to service the portfolio that hands out third-party "job provider" contracts, with a fair few of the self-serving seen in the working groups panelled on the NDIA's board and among the department of employment services' affiliated "partner" members on the gravy train.
Precisely Ms Lamington – better made with SR flour btw!
Robodebt was terrible, illegal, did great harm and should never have happened, but it did not involve AI.
Computers were used to automate the comparison of annual income pro rata per fortnight with welfare payments, while ignoring the welfare system’s use of fortnightly income for calculating entitlements. Therefore the comparisons produced by the system were irrelevant and provided no evidence of real debts, but this too was ignored while the computers automatically spewed out form letter debt notices, and so welfare claimants were mercilessly pursued to extort money from them with menaces. None of this involved any AI. It was plain old-fashioned computerised automation which speeded up a process that could equally have been done manually.
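In rough code terms, the comparison being described looks something like the sketch below; the names, structure and figures are mine, purely to illustrate the shape of the error, not the actual system.

```python
# A minimal, illustrative sketch of the averaging check described above.
# Everything here is invented -- it is not the real Centrelink/ATO code,
# just the shape of the flawed comparison.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(ato_annual_income: float) -> float:
    """Smear a whole year's ATO-reported income evenly across every fortnight."""
    return ato_annual_income / FORTNIGHTS_PER_YEAR

def flag_alleged_discrepancies(ato_annual_income: float,
                               declared_fortnightly: list[float]) -> list[float]:
    """Flag every fortnight where the smeared average exceeds what was declared.

    Entitlements were lawfully assessed on income actually earned in each
    fortnight, so these 'discrepancies' are artefacts of the averaging,
    not evidence of real debts.
    """
    average = averaged_fortnightly_income(ato_annual_income)
    return [average - declared for declared in declared_fortnightly
            if average > declared]
```

The point of the sketch is that nothing in it learns anything: it is a division, a loop and a comparison.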
I completely agree. Robodebt preceded even the current mislabelling of everything computer-related and complex as ‘AI’, and it is annoying to see numerous commentators using the term with no clue what they’re talking about.
Not that robodebt could be classed as complex. Its problem was that it wasn't complex enough. Well, one of its problems. The biggest issue was ministers and their minions whose answer, when staff (the actual experts) pointed out that it was coming up with garbage results, was to tell those staff not to do the contradictory calculations.
Scratch that. Robodebt had one cause and exposed one huge problem: a Public Service that now has smoothly operating systems in place to evade blame.
It's still going on with the job trainer/job provider billions: wasted, not saving, not training, unless forcing people back into Dickensian workhouses is the desired model! Well, we know Morriscum, Stewie, Trudge, Rorter and of course compadre Mutton love the Dickensian Beadle poor-workhouse mantle: blame the sinful poor, they must be bad or dumb not to be gainfully employed and working like US.
Yep. Stop blaming the machines. The blame lies wholly with the human perpetrators.
The author says, “Robodebt was a machine-learning program trained to use large-scale datasets from the Australian Taxation Office to detect and pursue discrepancies in income to ensure greater compliance in payments received by welfare recipients.”
Robodebt was nothing of the sort.
Robodebt was underpinned by simple math: Is your ATO-declared income divided by 26 fortnights greater than the fortnightly income declared to Centrelink?
There is no AI there. No complexity at all. A 9th grader could have told anyone in the APS that averages don’t work the way the Robodebt folks wanted them to work.
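To put invented numbers on it (a toy example, not anyone's real case):

```python
# Toy example with invented figures: six months of full-time work,
# then six months on benefits with $0 income truthfully declared.
ato_annual_income = 26_000.0        # earned entirely in the first 13 fortnights
averaged = ato_annual_income / 26   # = $1,000 for every fortnight of the year

declared_while_on_benefits = 0.0    # what was (correctly) reported to Centrelink
phantom_gap = averaged - declared_while_on_benefits

print(f"Averaged fortnightly income: ${averaged:,.2f}")            # $1,000.00
print(f"'Discrepancy' in each benefit fortnight: ${phantom_gap:,.2f}")  # income that never existed
```

The averaging invents income in fortnights when none was earned, which is exactly why averages don't work the way the Robodebt folks wanted them to work.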
Crikey, you’re being infuriating here: Please raise your standards. The fact that half the world is falling over themselves to hype AI doesn’t mean you need to recklessly cite it in your published articles.
Robodebt isn’t an AI story. Robodebt is a story about corruption in the APS: Gross dereliction of duty pursued by people judged by APS standards to be good enough at public administration to rise to senior ranks; Abusive sociopaths in positions of public responsibility failing to follow the laws set for their conduct by the Parliament.
Math? Ninth grader? Please use Australian English.
Arthritis and typos, sorry.
Hear, hear. And Labor allowed it too.
Robodebt may have been a mistake but it was not by mistake. It was a deliberately cruel, vicious and dishonest attack on some of the most vulnerable members of society. The politicians and public servants responsible should never be allowed to hold a position of responsibility again. Sadly, it seems most of the politicians have already walked away scot-free.
It has already been made clear that there were attempts to raise concerns about its application and legality with those in control, and that those concerns were ignored for the sake of punishing the poor, so it was definitely not implemented by mistake.
Those in control knew exactly what it was doing. It was what Scomo and friends wanted it to do! There was no mistake or ignorance involved, pure bastardry and malign intent. Criminal charges MUST be brought!