It’s early days in what’s set to be a particularly challenging era of artificial intelligence-driven disruption of, well, just about everything. As ever, it seems media and entertainment are at the greatest risk of being first to get it in the neck.
Surprise! Looks like plenty of old media paladins reckon the dawn of AI could be a show-me-the-money jackpot: cheaper, robotic (literally) copy, plus a lucrative new market selling decades of paywalled stories as data sets for AI training.
As it was with the fight for the news media bargaining code with Meta and Google, it’s News Corp that’s leading the chase for dollars. Back in May, News Corp CEO Robert Thomson was already talking up the demand for a bundle of quids for the quo of training AI engines on News Corp’s accumulated stories.
He also noted the potential for AI to improve optimisation and “efficiency” — almost certainly job losses — in both the newsroom and the broader business. After close to 20 years of training its star reporters to write for the algorithms of YouTube and Facebook, it’s just one small step to have a new algorithm do all the work.
For journalists — like all creative workers — there’s a lot in AI to get us excited. It’s making investigative deep dives more practical, for example, and helping with the grunt work of trawling for patterns through data sets and document dumps. It’s also giving us the scale to identify and challenge misinformation in real time (although, more dangerously, it equally empowers bad actors to overwhelm “truth” with AI-generated fake news in the first place).
Publishers have been experimenting with AI to generate simple news reports that effectively turn data (sports results, elections, financial documents) into words, such as The Washington Post’s Heliograf. But yesterday’s AI can quickly become just another indispensable tool: the ubiquitous spellcheck, for example, was once cutting edge, back when it was developed in Stanford’s artificial intelligence lab 50 or so years ago.
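The data-to-words approach behind tools like Heliograf is, at its simplest, template-driven text generation: structured data in, varied copy out. The sketch below is a hypothetical illustration of that technique, not Heliograf’s actual implementation; all names and phrasings are invented for the example.

```python
def report_match(result: dict) -> str:
    """Turn a structured sports result into a one-sentence report.

    A minimal sketch of template-based data-to-text generation,
    the technique used by automated news tools such as Heliograf.
    The field names and templates here are hypothetical.
    """
    margin = abs(result["home_score"] - result["away_score"])
    if result["home_score"] > result["away_score"]:
        winner, loser = result["home"], result["away"]
    else:
        winner, loser = result["away"], result["home"]
    # Vary the phrasing based on the data so the copy isn't identical
    # for every match.
    verb = "narrowly beat" if margin <= 3 else "defeated"
    high = max(result["home_score"], result["away_score"])
    low = min(result["home_score"], result["away_score"])
    return f"{winner} {verb} {loser} {high}-{low}."


print(report_match(
    {"home": "Sydney", "away": "Carlton",
     "home_score": 95, "away_score": 72}
))
# → Sydney defeated Carlton 95-72.
```

Real systems layer many more templates, conditions and style rules on top, but the core idea is the same: the journalism is in designing the templates, not in typing each story.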
Just maybe, AI could offer new business opportunities, as shown by the frisson of hope rustling through the publishing elite at the news on Friday that US wire service Associated Press had embraced the “if you can’t beat ’em, join ’em” ethos, striking a content and tech-sharing deal with OpenAI, the maker of ChatGPT.
It’s an iteration of AP’s traditional business model, with the not-for-profit selling (or swapping) its content for others (traditionally its member newspapers) to use as they please. It’s a good deal for ChatGPT, too: AP’s rigorously fact-checked news foundation helps train the engine to answer the questions it’s found hardest to grapple with: what’s verifiably true? What’s not?
Le Monde CEO Louis Dreyfus told a gathering of European publishers the AI situation is “an emergency”, fearing “the end of our business model”. He called for a shared demand from publishers on “what AI companies are ready to put on the table for use of all content”. Representatives of The Guardian and the Financial Times agreed.
In June, the Financial Times reported that big tech and old media have been getting together to talk about “copyright issues around their AI products”, while Vanity Fair wrote that The Information’s Jessica Lessin brought together “an off-the-record who’s who of gen X and elder-millennial media luminaries” to work out what to do.
The fight with Meta and Google was about the distribution of existing works — and how, by cleaving the individual work from the masthead, they made it difficult to monetise. Now it’s more about the one-off value of the body of work — the decades of reporting — as a tool for training the emerging AI-powered voices.
Given how the imperatives of social media and search have reshaped commercial journalism, is its output the best — or even the only — tool for laying down a fact-based bedrock for AI-supported journalism of the future? How eager should we be for an AI engine trained on the sort of Fox News coverage of the 2020 elections exposed by the Dominion case?
AI trained on commercial media also risks replicating all the biases of old media — male, pale, stale (and very American) — in whatever new media emerges. There’s evidence that AI “takes gender and racial stereotypes to extremes” in, for example, text-to-image generation.
As with every wave of disruption, it’s not the technology. It’s how it will be used. Sure, old media wants the sugar hit of an AI pay-off. But for journalism, it’s too early to conclude that the old way of doing business is the best way of shaping a future we can only dimly see.
If I were running Sky After Dark, I’d love this. Combine all the rage, rants, illogic, and snide, specious commentary into one AI figure, a sort of Max Headroom that is part Jones, Bolt, Murray, and that white-haired ferret-like character — what is his name? Max would have to have a female counterpart, a Barbie to his Ken, a Maxine Hateroom. You could then sack the overpaid, underwhelming shock jocks and save a fortune, while convincing the elderly white gullible to continue the righteous crusade against the woke forces of overeducated trans environmentalists who want to force our precious white Christian children to use unisex toilets.
Well summarised, Frank. No doubt they’re working on it.
The Crikey BTL flock will be able to tell real journos from AI journos easily and instantly. The former will exhibit spelling and punctuation errors in their writing.
It’d be trivial to analyse Dutton’s output, only two bytes needed in every instance. It could be done in BASIC, save a fortune on AI.
Murdoch’s slavish mouthpieces may be the first to go. The columns go on but the toadies will be sitting at home staring out of the window. ’Tis an ill wind that blows no good.
Every data sample has bias, as does every algorithm. It’s the challenge of recognising what it is, and what to do about it, that is the uncertainty.
Previously, the approach was to just ignore the situation.