Scott Morrison walked away from Osaka’s G20 summit with, optically at least, a few wins. Apart from his sit-down with the US president, Morrison secured the support of world leaders to put increased pressure on Facebook and other social media giants to act faster on taking down “violent terror content”.
The non-binding statement, endorsed by all G20 members, calls on companies to act immediately when contacted by authorities to remove content such as the live video of an attack or terrorist recruitment material.
But with the already mammoth task that moderating social media content entails, what exactly would a stricter framework look like? And is change even possible?
The sheer scale
Before the horror of the Christchurch attack gave the argument a new urgency, Facebook was already wrestling with content moderation, facing criticism for the flowering of fake news, white nationalism and radicalisation on its platform. Last year, Vice reported on a series of dinners between Mark Zuckerberg and leading social media academics to discuss the issue. Noting that the debate “has largely shifted the role of free speech arbitration from governments to a private platform”, the piece sums up Facebook’s challenge:
How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week.
The human cost
Content moderation is undertaken largely by around 7,500 workers at sites scattered across the world. In recent years, the effects of this kind of work have become clearer. In 2017, two Microsoft moderators sued the company over the PTSD they suffered from having to regularly view “inhumane and disgusting content”. In March 2018, a Florida moderator named Keith Utley died of a heart attack, brought on, according to a damning piece in The Verge, by the terrible conditions endured by workers at his site:
The 800 or so workers there face relentless pressure from their bosses to better enforce the social network’s community standards, which receive near-daily updates that leave its contractor workforce in a perpetual state of uncertainty …
‘The stress they put on him — it’s unworldly,’ one of Utley’s managers told me.
The conditions were such that three of Utley’s coworkers broke the 14-page non-disclosure agreements required of Facebook contractors. These NDAs add another layer of trauma: not only must moderators absorb the most grotesque content on the internet, they are not allowed to tell anyone what they have seen. Filmmakers Hans Block and Moritz Riesewieck, who made The Cleaners, a documentary account of a content moderation site in Manila, told the ABC this had lasting effects:
You’re not allowed to verbalise the horrible experience you had. While we were filming the documentary … we both had time to talk about what we are filming, time to have a break and to stop watching and to take our time to recover from what we saw. The workers in Manila don’t have the time. They don’t have the ability to talk to someone.
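Taken together, the figures quoted above imply a punishing individual workload. A rough back-of-envelope calculation, assuming purely for illustration that the roughly 10 million flagged posts a week are spread evenly across the roughly 7,500 moderators:

```python
# Back-of-envelope arithmetic from the figures quoted above; the even
# split across moderators and the seven-day week are assumptions.
flagged_per_week = 10_000_000
moderators = 7_500

per_moderator_per_week = flagged_per_week / moderators  # ~1,333 posts
per_moderator_per_day = per_moderator_per_week / 7      # ~190 posts

print(f"{per_moderator_per_week:,.0f} posts per moderator per week")
print(f"{per_moderator_per_day:,.0f} posts per moderator per day")
```

That is on the order of 190 potentially rule-breaking posts per moderator per day, every day, with the worst of the internet concentrated in the queue.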
An ‘open’ internet?
A further question is who decides what is acceptable, and what the lasting effects on public discourse might be.
In a bracing piece for The New York Times’ “Op-eds from the future” series, Cory Doctorow posits one possible outcome. He speculates that regulation of social media will see “the legal immunity of the platforms … eroded”, spurred by “an unholy and unlikely coalition of media companies crying copyright; national security experts wringing their hands about terrorism; and people who were dismayed that our digital public squares had become infested by fascists, harassers and cybercriminals”.
He envisages that news giants, “thanks to their armies of lawyers, editors and insurance underwriters”, will be able to navigate the tightening walls of acceptable speech.
If this seems at all melodramatic, it’s worth noting that a civil rights audit of Facebook, released today, shows that banning certain terms fails to root out the problem and necessitates “scope creep”, where more and more content is banned:
While Facebook has made changes in some of these areas — Facebook banned white supremacy in March — auditors say Facebook’s policy is still “too narrow”. That’s because it solely prohibits explicit praise, support or representation of the terms “white nationalism” or “white separatism”, but does not technically prohibit references to those terms and ideologies. The audit team recommends Facebook expand its policy to prohibit content that “expressly praises, supports, or represents white nationalist ideology” even if the content does not explicitly use the terms “white nationalism” or “white separatism”.
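To see why the auditors call the policy “too narrow”, consider a deliberately naive sketch of term-based filtering. This is a hypothetical illustration only, not Facebook’s actual system; the banned-terms list and sample posts are invented:

```python
# Hypothetical sketch of a literal term-based filter of the kind the
# audit criticises; terms and posts are invented for illustration.
BANNED_TERMS = {"white nationalism", "white separatism"}

def violates_policy(post: str) -> bool:
    """Flag a post only if it literally contains a banned term."""
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

posts = [
    "I am proud to support white nationalism.",      # caught: term present
    "We demand a separate homeland for our people.", # missed: no banned term
]
for post in posts:
    print(violates_policy(post), "|", post)
```

A literal term match catches the first post but never fires on the second, which expresses the same ideology without the banned phrase; that is the gap the audit wants closed, and closing it by adding ever more terms is exactly the “scope creep” described above.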
“A one-on-one with a side of praise” from Trump – the Red Panda in Chief of the US, who gets all the news he needs from Murdoch’s undies?
What else did Morrison achieve of what he promised at this G20? Is this the consolation prize: a “non-binding deal” that no one was likely to oppose?
If there’s one thing Morrison’s big on, it’s little things.
I believe the erstwhile warriors of free speech and unfiltered expression who created Facebook and other social media are now awakening to the fact they have spawned an uncontrollable monster. I further believe they have realised they have lost control, but are unwilling to admit that to the world. The horror of how the Christchurch slaughter was streamed and reposted by voyeurs and demagogues of the extreme right was the final, irrefutable proof of these beliefs. In the words of every IT helpdesk person ever: “Try turning it off and waiting ten seconds (let’s make that years) before turning it on again and see if that fixes it.”
Every country could simply enforce on Facebook and its ilk the same rules that apply to legacy media. They could then only operate as FB Australia, FB US, FB UK and so on, subject to the media laws of each nation, and be sued accordingly. Level playing field.
The Information Superhighway envisaged in the last decades of the 20thC has turned out to have a lot of toll gates, not to mention sinkholes, detours and cul-de-sacs.
Censors, from antiquity through Good Queen Bess to the 20thC Lord Chamberlain, have feared that if common people are allowed (too much – discuss) freedom to communicate, they tend to be irksome and disrupt the divinely ordained flow of their sweat’n’blood into the money pipe to the elites.
As many commenters here have noted, Crikey’s ModBot is a fractious, inherently irrational, blundering behemoth, apparently running on a Sinclair ZX80.
The grauniad is no better, and it claims to have soft machines doing the modding: “Your post was taken down due to minor legal risk. Unfortunately in these cases we cannot go into fine detail.”
As with Crikey, no discussion will be countenanced. And, another absolute no-no, as with Voldemort or when ASIO/ASIS grab someone off the street, no-one may mention it: “You also had another removed for referencing a prior moderation”.
Who could have guessed that the late 20thC was the very zenith of freedom & civilisation?
At least the total collapse will be swift – whether physical or psychic is moot.