As usual, Taylor Swift was the subject of some of the biggest stories in the world last week. Unfortunately, it wasn’t just for her appearance on the sidelines of her boyfriend’s NFL win that clinched a Super Bowl berth.
AI-generated sexually explicit images of Swift went viral on X, formerly known as Twitter, last week. Photorealistic images of the world’s most famous singer in compromising positions at an NFL game remained up for the better part of a day, garnering tens of millions of views.
With Elon Musk’s company sitting on its hands despite widespread media attention, Swift’s fan army took matters into its own hands and tried to bury the images by posting their own content using the keywords and hashtags that had promoted the AI-generated images. Eventually, X took down the images and blocked the search term “Taylor Swift” altogether.
It marked the first time that an AI-generated non-consensual image had broken through to the mainstream like this, but it’s certainly been a long time coming. As noted by US-based misinformation researcher Renee DiResta on Threads, this exact scenario was predicted years in advance.
What made last week’s fake Swift images possible was a confluence of factors. Tools for creating realistic images of people have been getting more powerful over the past few years: easier, quicker and more convincing. Photoshop has existed for decades, deepfakes were first reported in 2017, and now generative AI has supercharged the trend even further. Anyone with a computer can generate completely original images of real people doing anything. Earlier this month, we reported on a thriving online community facilitating the creation of non-consensual sexual imagery of real people, from celebrities to members of the public.
The second factor is X’s dereliction of its responsibility to moderate the content it distributes on a mainstream, popular platform. Whether it’s firing 80% of the staff working on site moderation or rolling back policies aimed at protecting users, Musk has actively worked to turn the platform into a more toxic place that’s become home to neo-Nazis. Like frogs in slowly boiling water, many users — including much of the news and political class — remain on the platform, where they’ve been exposed to content such as the Swift images. While not to the same degree, other tech companies have also begun to loosen their moderation.
If you squint hard, a silver lining could be a renewed push in the United States for legislation to stop fakes like this. The reality is that laws against image-based abuse already exist in many US states and in Australia. Similarly, it’s heartening to see Swift’s fans take matters into their own hands, but ultimately demoralising to know that even someone of her stature can’t get timely action from a platform, not to mention that the rest of us don’t have stans to protect us.
The key problem is enforcement. In a world where hurting people with technology is as easy as pressing a few buttons, offering victims justice through the slow and arduous criminal justice system isn’t enough. In Australia, the eSafety commissioner has powers to respond to image-based abuse, but these depend on individual reports handled case by case. Relying on that to defeat the problem is like playing Whac-A-Mole.
Trying to regulate the technology itself is also unrealistic. Although the Swift images were produced using a generative AI product from Microsoft, the world’s most valuable company — an under-covered aspect of the story — similar tools are freely available online thanks to open-source software.
A significant focus needs to be on the platforms that allow the distribution of this content. While these images first emerged on Telegram, a less popular platform with near-zero moderation, it was X’s size combined with its poor moderation that made this possible. The eSafety commissioner has put out draft rules for platforms such as X that require them to act quickly on problems like this; it remains to be seen whether they will have a material effect.
One final note to end on: X’s blocking of the “Taylor Swift” search term was a crude solution, presumably aimed at limiting the already enormous damage done to Swift because the platform wasn’t capable of anything more targeted. While being blocked on X is unlikely to affect Swift’s standing, it’s a reminder that victims are often the ones who bear the brunt of efforts to protect them.
Another misinformation researcher, Nina Jankowicz, mentioned she can’t promote events, post photos or share her real-time location because of the abuse she’s faced from trying to combat online bullshit. It’s a reminder of how easy it is to abuse others, and how it’s the targets who ultimately pay the price.
Interesting story on ABC News today: “Channel 9 apologises for digitally altered photo of Victorian politician.”
Totally spinning the facts: Ch 9 blamed AI for doing it without them knowing!!
What a load of bullshirt. I can agree that AI could enhance the photographic quality independently, but enlarging the breasts and removing part of the dress uncommanded by human hand, probably male, defies belief.
I guess this B/S is going to replace “the dog ate my homework” as the standard excuse henceforth.
Yes, I loved their defence, too. Must be hard to work with a pervy AI program you have absolutely no control over. You would think they’d just stop using it, if it was at all true, wouldn’t you?
I thought that was a disgraceful response. Needed to be page one headline.
I reckon she should be talking to a legal eagle.
Dang! That pesky AI has slipped its collar again….
Apparently Adobe has denied that Photoshop could have automatically altered the image.
I hate to advocate for litigation, but America’s ‘freedom of the press’ obsession has become a complete perversion of justice when someone of Swift’s profile and money can’t sue Musk and X for the equivalent of dereliction of duty.
I’m surprised that she can’t sue Musk and X. Why is that, please?
https://en.m.wikipedia.org/wiki/Section_230
Musk’s cesspit. Wait until the US election starts revving up. “You ain’t seen nothing yet.”
While Tay Tay is more my students’ thing than mine, she is talented, clever and a fairly good writer who can actually perform her stuff without all the extra crap. She doesn’t deserve this garbage, and the whole social media mob needs flaying, though Musk needs it most.
Totally agree.
And do you remember that revolting stuff with Julia Gillard? There certainly are some real sickos out there.
This story angers me for several reasons. One is that deep fakes have been reported since 2017 (as acknowledged in this article), with victims whose names would be known mostly to their family & friends rather than to millions. Despite their pleas for the fakery to be curtailed – by Meta, Twitter/X – or criminalised, nothing has been done. But now that billionaire Taylor Swift is feeling the heat, it’s become a serious issue worth front-page headlines & our attention. Celebrity has distorted our society & values.
Sociopaths in influential positions, such as Zuckerberg & Musk, have distorted our society and values.
But they’re making money from their monstrous creations, so how can that be wrong???