A major Australian media company depicting a female politician in skimpy clothing seems like a regressive act reflecting sexist attitudes of the past. Yet this happened in 2024 — and is a taste of the future to come.
On Monday morning, Animal Justice Party MP Georgie Purcell posted to X, formerly Twitter, an edited image of herself shared by 9News Melbourne.
“Having my body and outfit photoshopped by a media outlet was not on my bingo card,” she posted. “Note the enlarged boobs and outfit to be made more revealing.”
Purcell, who has spoken out about gendered abuse she regularly receives as an MP, said she couldn’t imagine it happening to one of her male counterparts.
After Purcell’s post quickly gained a lot of attention online — a mix of shock and condemnation — Nine responded. In a statement provided to Crikey, 9News Melbourne director Hugh Nailon apologised and chalked the “graphic error” up to automation: “During that process, the automation by Photoshop created an image that was not consistent with the original,” he said.
While not using its name, Nailon seems to be saying the edited image was the result of using Adobe Photoshop’s new generative AI features, which allow users to fill or expand existing images using AI. (An example Adobe uses is inserting a tiger into a picture of a pond.) Reading between the lines, it appears as though someone used this feature to “expand” an existing photograph of Purcell, which generated her with an exposed midriff rather than the full dress she was actually wearing.
Stupidity, not malice, could explain how such an egregious edit originated. Someone who works in a graphics department at a major Australian news network told me that their colleagues are already using Photoshop’s AI features many times a day. They said they expected something like the Purcell edit to happen eventually, given their use of AI, limited oversight and tight timeframes for the work.
“I see a lot of people surprised that the AI image made it all the way to air but honestly there is not a lot of people checking our work,” he said.
As someone who’s worked in multiple large media companies, I can attest to how often decisions about content seen by hundreds of thousands or even millions of people are made with little oversight, often by overworked and junior employees.
But even if you buy Nine’s explanation (and I’ve seen people casting doubt on whether the edits could have happened with AI without someone specifically editing the image to show more midriff), it doesn’t excuse what happened or negate its impact. Ultimately, one of the biggest media companies in Australia published an image of a public figure that had been manipulated to make it more revealing. Purcell’s post made it clear that she considers this harmful. Regardless of the intent behind it, depicting a female politician with more exposed skin and other changes to her body has the same kind of effect, if not the same severity, as the deepfaked explicit images of Taylor Swift that circulated last week.
The Purcell image also points to another trend in Australian media: newsrooms are already using generative AI tools, even if their bosses don’t think they are. We tend to think about how the technology will change the industry from the top down, such as News Corp producing weekly AI-generated articles or the ABC building its own AI model. Yet the UTS Centre for Media Transition’s “Gen AI and Journalism” report states that newsroom leaders at major Australian media outlets say they’re still considering how to use generative AI and don’t profess to be meaningfully using it in production yet.
But, as in other industries, we know Australian journalists and media workers are using it. We might not have full-blown AI reporters yet, but generative AI is already shaping our news, whether through image edits or the million other ways it could be (and probably already is) being used to help workers, such as by summarising research or rephrasing copy.
This matters because generative AI makes decisions for us. By now, everyone knows products like OpenAI’s ChatGPT sometimes just “hallucinate” facts. But what of the other ways it shapes our reality? We know that AI reflects our own biases and repeats them back to us. Take the researcher who found that when you asked MidJourney to generate “Black African doctors providing care for white suffering children”, the generative AI product would always depict the children as Black, and would occasionally even show the doctors as white. Or the group of scientists who found that ChatGPT was more likely to call men an “expert” and women “a beauty” when asked to generate a recommendation letter.
Plugging generative AI into the news process puts us in danger of repeating and reinforcing our lies and biases. While it’s impossible to know for sure (as AI products are generally black boxes that don’t explain their decisions), the edits made to Purcell’s picture were based on assumptions about who Purcell was and what she was wearing — assumptions that were wrong.
And while AI may make things easier, it also makes the humans responsible for it more error-prone. In 1983, researcher Lisanne Bainbridge wrote about how automating most of a task creates more problems rather than fewer. The less attention a task demands (say, generating part of an image rather than sourcing another one), the greater the chance that something goes wrong, precisely because you weren’t paying attention.
There’s been a lot of ink spilled about how generative AI threatens to challenge reality by creating entirely new fictions. This story, if we are to believe Nine, shows that it also threatens to eat away at the corners of our shared reality. But no matter how powerful it gets, AI can’t yet use itself. Ultimately, the responsibility falls at the feet of the humans who publish its output.
I use Photoshop’s generative fill and generative AI every day. It does *not* work like that. This was not an “automated” process – it *had* to have been done with human oversight, and for Nine to put out such a manifest, easily refutable lie as an excuse makes it worse than just admitting the newsroom has the mentality and judgement of a 17-year-old boy to begin with.
Yep, Nine’s explanation is the typical infantilising of the public. Good comment.
As a Photoshop veteran of over 30 years, I concur.
Hey Michael, thanks for the comment. I don’t mean to downplay the role that Nine had in approving this. The buck stops with them. But playing around with Photoshop’s generative fill today myself, I found that it frequently generated Purcell in more revealing clothing without me prompting it to do so (only asking it to “fill” the rest of the canvas below a cropped image of her).
My point is that Adobe has put a feature into its software that will suggest depicting women in more revealing outfits than they actually wore. It’s not just newsrooms who use this either (noting that Nine Melbourne /should/ have had more oversight and failed in their duties by letting this go to air), but also freelance journalists, graphic designers and others who don’t have that same oversight.
I’m concerned about the effect that putting this feature in the world’s most popular graphic design software will have, especially on journalism.
Hey Cam, maybe the reason Photoshop’s generative fill is already sexualising images of Georgie Purcell is that it’s reading all the online stories about how Luke McIlveen’s new workplace is already emulating the Daily Mail and Newscorpse, i.e. a self-fulfilling prophecy.
I’m sorry, but the only way we can regulate AI is to ensure the buck stops with the human using it.
It’s like saying clothes are bad because they can be removed.
I really wish the media would stop with this constant knee jerk reaction calling for everything to be banned!
You do realise where this ends?
With image rights that will further entrench the power of celebrity.
“You do realise where this ends?”
Err, no, that’s a load of nonsense. It ends with media reverting to telling the truth, even though that puts an end to the business model of Newscorpse and the Daily Mail: the organisations where Luke McIlveen learned his trade.
Does truth-telling matter? As and when you lie dying of climate change-driven heat stress, you might reflect on that.
They’re already required to tell the truth!
We have defamation laws!
We also have copyright.
The issue with copyright is that the owner of the copyright in a photograph is the photographer, not the subject.
So the only way to give the subject a right is to give them some kind of image right.
And then image rights will be used by subjects of photographs – particularly subjects with money and power – to monetise their image rights and to prevent other people from using their image (including for news stories).
It’s just like defamation laws and indeed consumer laws (which are mostly used by corporations against rivals).
It all sounds great on paper but then the only people who can afford to enforce them are the powerful.
Yes, I suspect you’re right, Cam.
If you do an image search on the original, you will see the internet (and hence Adobe’s training corpus) is full of younger, tattooed, blonde women, many with bare midriffs. It’s such a plausible look that it no doubt passed the superficial scrutiny of the broken editorial process. It would be unlikely to make such a change for a much older woman, or a man. Many women who look like Georgie are happy to sport a bare midriff, but Georgie isn’t one of them (at least in a professional context). Hence the justifiable outrage.
Where does this fall on the “malice vs stuff-up” spectrum? Many see malice because they can’t conceive how such an edit could be a mistake. Much like when spellcheckers became commonplace and people’s names were “corrected” to derogatory words, I think people will be more likely to see a stuff-up as they gain fluency with the tools.
None of this is to let Nine off the hook – they have failed to properly manage this new technology.
Hi Cam, you’re correct that Photoshop can and will generate such outfits, and yes, using no prompts and allowing the system to come up with variants will produce a range of responses that can include revealing imagery. But (a) it generates three options to choose from, and an *operator*, not Photoshop, makes the choice of which to use, or scraps all three and tries again if nothing suitable emerges; and (b) generative fill only creates new imagery on the blank part of the canvas and the fringe area connecting the existing image with the newly generated material. It does NOT alter non-adjacent areas of the existing image to increase a bust size, for example.
Sorry, I didn’t add a bit to address your point re Adobe adding this feature. Photoshop isn’t “adding skimpy outfits”; it’s just choosing what sort of image is likely to best fit, using some AI to measure and trim art styles from its training corpus. Adobe makes a selling point of this: its training corpus is the absolutely ginormous Adobe Stock photography library, and they use this to address the copyright issues text systems like ChatGPT are up against. If your work is in the Adobe Stock library then you’ve agreed to allow it to be used to train Adobe’s Firefly AI. And since it’s a stock photography library, used for advertising, marketing and content creation, it skews *away* from a documentary slice of life and towards what appeals in the advertising and marketing sectors. We knew this before we ever fired up Firefly and its generative art, and it’s a limit that (for the most part) we’re happy to live with. It is not a perfect system. If you have access to Photoshop, or one of your colleagues does, ask it to generate “echidna” and see what eldritch horrors it comes up with.
Exactly.
But let’s take a step back.
This story has turned into AI fearmongering but there is a bigger issue at play here.
For some reason Nine made the EDITORIAL DECISION to play the person, not the ball, i.e. what the media and everyone else constantly does because everything is so f’ing superficial these days!
They either did this to chase ratings (sex sells) and/or to undermine the policies that this person stood for.
And therein lies the issue at so many levels.
– The power and influence of the media;
– Society’s obsession with titillation;
– Our inability to deal with issues of substance: what are our responses to her actual policy positions?
AI is a form of technology. Like all technology, all it does is enable human beings to do more, and the more we do, the more good and bad we achieve.
I didn’t think 9 Media would be joining the **** of 7 and NewsCorpse so readily.
I was wrong.
Check out the background of their new bloke – McIlveen.
Doesn’t engender confidence.
Will News Corp be requesting a discount on the sum they want to sue all the AI platforms for, based on the volume of AI-generated content they’re apparently publishing?
Seems about as hypocritical as claiming the government needs to do rent seeking on your behalf because Google is somehow stealing all your paywalled content.
“I’ve seen people casting doubt on whether the edits could have happened with AI without being specifically edited to show more midriff”
Maybe you saw it in my comment yesterday. I didn’t make it up; I was relying on a quote in the Guardian from the company involved:
A spokesperson for Adobe said use of its generative AI features would have required “human intervention”.
“Any changes to this image would have required human intervention and approval,” the spokesperson said.
Automation? No way.
This is human.
Working with Photoshop on an almost daily basis, I can confirm that the application does do automation, but the user first has to set the parameters.
In addition, the image would have been approved at various levels prior to publication.
This was no mistake by any definition of the word.