The claim
The federal government put social media companies on notice after the Christchurch massacre for failing to prevent the shooter’s actions being broadcast and promoted online.
On March 20, Prime Minister Scott Morrison said in an interview on the Seven Network’s Sunrise program:
If they can geo-target an ad to you based on something you’ve looked at on Facebook within half a second — it is almost like they are reading your mind — then I’m sure they have the technological capability to write algorithms that screen out this type of violent and hideous material at a moment’s notice.
Do social media companies have the capability to write algorithms that can remove hate content within seconds?
RMIT ABC Fact Check investigates.
The verdict
Morrison’s claim is wishful thinking.
YouTube and Facebook already use algorithms to detect hate content, which is then reviewed by people.
But ensuring such material is removed “at a moment’s notice” requires a fully automated approach — something experts told Fact Check is not currently possible.
Experts also dismissed Morrison’s comparison of content-detection systems and targeted advertising, saying the two technologies were completely different.
Still, the data social media companies use to target their advertising can be used to identify, if not the content itself, then the people who share it.
Companies already do this by banning certain groups — although, at least in Facebook’s case, white nationalists have only been targeted since Christchurch.
Experts suggested companies could use methods other than algorithms to prevent harmful content being shared, such as banning, regulating or delaying live streaming.
The role of social media
Social media played a unique role in the Christchurch massacre, with the shooter using Twitter and 8chan to share links to his manifesto, and Facebook to broadcast the shooting in real time for 17 minutes.
Footage of the white-supremacist-inspired attack was then copied and shared across social media.
Speaking with ABC News Breakfast on March 26, Attorney-General Christian Porter said it appeared to have been “well over an hour until Facebook took anything that resembled reasonable steps to prevent replaying of that video”.
Facebook said it first learnt of the broadcast 12 minutes after it ended — or 29 minutes after it began — when a user reported it.
The company removed 1.5 million videos of the attack in the first 24 hours, catching 1.2 million before users saw them.
In the same period, YouTube also deleted tens of thousands of videos and suspended hundreds of accounts, a spokeswoman told Fact Check.
“The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second,” she said.
Who’s in trouble?
After Christchurch, the government demanded answers from the big three social media companies: Google (which owns YouTube); Facebook (which also owns Instagram); and Twitter.
Both Facebook and YouTube offer live-streaming services.
Experts told Fact Check it made sense for the government to focus on these companies, as they offered the largest audiences.
The need for speed
Morrison claimed social media companies should be able to screen content “at a moment’s notice” because they can target users with advertisements “within half a second”.
A day earlier, he justified his position on the premise that companies had the technology “to get targeted ads on your mobile within seconds”.
Given the prime minister’s clear wish to catch content within seconds, Fact Check takes him to be referring to fully automated content screening.
The government has since passed legislation that could see social media executives jailed and companies fined for failing to take down “abhorrent violent material expeditiously”.
What kind of content?
Fact Check also takes Morrison to be referring to more than just video.
While he referred to “this type of violent and hideous material” in the Sunrise interview, in a letter to Japanese Prime Minister Shinzo Abe, tweeted the day before, he referred broadly to material by actors who “encourage, normalise, recruit, facilitate or commit terrorist and violent activities”.
What were the companies already doing?
Screening content is generally a two-step process.
Material is identified, or flagged, by machines or users, and in some cases company employees.
Human reviewers then decide whether it breaks the platform’s rules.
YouTube employs 10,000 reviewers for this, while Facebook employs 15,000.
A spokeswoman for Twitter told Fact Check that humans played a critical role in moderating tweets.
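To make that two-step flow concrete, here is a minimal sketch of how a flag-then-review pipeline could be structured. It is purely illustrative: the classifier threshold, the report trigger and the queue are assumptions for the example, not any platform’s actual system.

```python
from dataclasses import dataclass
from queue import Queue

# Illustrative threshold only; real platforms tune their signals per policy area.
MACHINE_FLAG_THRESHOLD = 0.8

@dataclass
class Post:
    post_id: str
    text: str
    machine_score: float = 0.0  # hypothetical classifier output, 0 to 1
    user_reports: int = 0       # user reports received so far

review_queue: "Queue[Post]" = Queue()

def is_flagged(post: Post) -> bool:
    """Step 1: machines or users flag material for review."""
    return post.machine_score >= MACHINE_FLAG_THRESHOLD or post.user_reports > 0

def triage(post: Post) -> None:
    """Queue flagged posts for a human reviewer."""
    if is_flagged(post):
        review_queue.put(post)

def human_decision(post: Post, violates_rules: bool) -> str:
    """Step 2: a human reviewer decides whether the post breaks the rules."""
    return "removed" if violates_rules else "kept"
```

The human step is the slow part of this pipeline, which is why removal “at a moment’s notice” would mean cutting it out entirely.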
Before the attack, these companies already used algorithms to flag a variety of material that might be called hate content.
Professor Jean Burgess, director of Queensland University of Technology’s Digital Media Research Centre, said platforms had commercial incentives to do so, and pointed to how in 2017 Google lost millions when companies discovered their products were being promoted alongside extremist content on YouTube.
YouTube prohibits hate speech and violent or graphic content, among other things, and in the three months to December 2018, the platform removed nearly 16,600 videos promoting violent extremism.
Of the nearly 9 million videos it removed in total over the quarter, 71% were flagged by algorithms.
YouTube said that, thanks to machine learning, “well over 90 per cent of the videos uploaded in September 2018 and removed for violent extremism had fewer than 10 views”.
Facebook also bans hate speech, terrorism and violence.
In the three months to September 2018, it dealt with 15 million items of violent or graphic content, of which 97% was computer-flagged.
Algorithms also flagged 99.5% of content deemed to be promoting terrorism, though just 52% of reported hate speech.
Twitter told Fact Check it also uses algorithms to flag video content based on hashtags, keywords, links and other metadata.
In the six months to June 2018, it suspended 205,000 accounts for promoting terrorism, of which 91% were flagged by Twitter’s “internal, proprietary tools” …
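Twitter did not describe how those internal tools work, but a bare-bones version of metadata-based flagging can be sketched as follows. The watch-lists and field names here are invented for illustration only.

```python
import re

# Invented watch-lists for illustration; real systems use far richer signals.
BANNED_HASHTAGS = {"#example-banned-tag"}
BANNED_KEYWORDS = {"example violent phrase"}
BANNED_DOMAINS = {"extremist-example.invalid"}

URL_DOMAIN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def flag_tweet(text: str, hashtags: list[str]) -> bool:
    """Flag a tweet if its hashtags, keywords or linked domains hit a watch-list."""
    lowered = text.lower()
    if any(tag.lower() in BANNED_HASHTAGS for tag in hashtags):
        return True
    if any(keyword in lowered for keyword in BANNED_KEYWORDS):
        return True
    linked_domains = {m.group(1).lower() for m in URL_DOMAIN.finditer(text)}
    return bool(linked_domains & BANNED_DOMAINS)
```

Simple matching like this is cheap and fast, but it misses re-encoded video and novel wording, which is why flagged items still end up in front of human reviewers.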
Read the rest of this Fact Check over at the ABC
Principal researcher, David Campbell
factcheck@rmit.edu.au
Sources
- Scott Morrison, Sunrise interview, March 20, 2019
- Scott Morrison, Facebook post, March 19, 2019
- Scott Morrison, Media conference, March 19, 2019
- Scott Morrison, Tweet of Letter to Japanese Prime Minister Shinzo Abe, March 19, 2019
- ABC, Interview with NZ privacy commissioner, March 27, 2019
- Facebook, Blog post: A Further Update on New Zealand Terrorist Attack, March 20, 2019
- Andreas Kaplan, The challenges and opportunities of Social Media, January 2010
- Christian Porter, Media Release, April 4, 2019
- YouTube, Transparency report, December 2018
- Facebook, Community standards enforcement report, September 2018
- Twitter, Transparency report, 13th edition
- Facebook, Standing against hate, March 27, 2019
- Facebook, Hard questions: how we counter terrorism, June 15, 2017
- Sheryl Sandberg, Op-ed in the New Zealand Herald, March 30, 2019
- Google’s senior vice president, Op-ed in the Financial Times, June 19, 2017
- Tarleton Gillespie, Custodians of the internet, June 2018
- Microsoft, Using PhotoDNA to fight child exploitation, September 12, 2018
- Facebook, Media release on terrorism, December 5, 2016
- YouTube, How Content ID works, accessed March 30, 2019
- YouTube, Expanding our work against abuse of our platform, December 4, 2017
- Mark Zuckerberg, The Internet needs new rules, March 30, 2019
- Parliament, Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019
- Washington Post, Transcript of Mark Zuckerberg’s US Senate testimony, April 10, 2018
- Facebook, Responses to questions from US House committee, June 29, 2018
- Sidney Fussell, Why the New Zealand Shooting Video Keeps Circulating, March 21, 2019
Facebook et al are being used as convenient scapegoats by Morrison and his mates, who are all too keen to find something to distract the public from the Libs’ own contribution to the social environment that feeds such tragedies. Live streaming is an end-point or symptom, not the root cause of the problem.
Human behaviour nearly always fits a bell curve. Once upon a time, the actions of the Christchurch killer would have been so far outside the limits of this bell curve that the killer was clearly an outlier with extremist views. Over the last few decades, however, the Liberals have been only too happy to move the centre of the bell curve further and further to the right, purely for their own short-term political gain, so that now the views of the killer have mainstream political representation – even in parliament, courtesy of politicians like Anning. This would have been unthinkable thirty or forty years ago when the generation that fought Hitler was still well enough represented to have some political clout.
What I’m saying is that representatives of a sitting government stoking community fears with references to ‘African gangs’ just to get a few more votes does far more to encourage mass murderers than Facebook or YouTube do.
Facebook: unwitting host of this material; fought hard to keep taking it down despite a legion of attempts to keep putting it back up.
Commercial networks in Australia: ran the footage on purpose.
Right-wing radio hosts: encouraged hatred against Muslim immigrants.
Greens: encouraged acceptance of Muslim immigrants.
Morrison: Facebook and the Greens are evil, blame them!
TV, radio, Sky etc: Go ScoMo!
Maybe livestreaming would be acceptable with a slight delay, moderated by a human, similar to what radio stations do.
We don’t want to miss things like the SpaceX rocket launch, or on a more sobering note, the tsunami rolling across the Pacific towards the coast of Japan.
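For what it’s worth, the broadcast-delay idea in the comment above is straightforward to sketch. The delay length and the moderator “dump” hook below are assumptions for illustration, not anything the platforms have announced.

```python
from collections import deque
import time

ASSUMED_DELAY_SECONDS = 30  # made-up figure, analogous to a radio "dump" delay

class DelayedStream:
    """Hold frames for a fixed delay so a moderator can dump the stream
    before anything reaches viewers."""

    def __init__(self, delay: float = ASSUMED_DELAY_SECONDS) -> None:
        self.delay = delay
        self.buffer = deque()   # (arrival_time, frame) pairs
        self.killed = False

    def ingest(self, frame: bytes) -> None:
        """Broadcaster side: frames go into the buffer, not straight to air."""
        if not self.killed:
            self.buffer.append((time.monotonic(), frame))

    def dump(self) -> None:
        """Moderator side: discard everything still inside the delay window."""
        self.killed = True
        self.buffer.clear()

    def publishable(self) -> list:
        """Viewer side: release only frames older than the delay."""
        now = time.monotonic()
        ready = []
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            ready.append(self.buffer.popleft()[1])
        return ready
```

A half-minute delay would still show a rocket launch or a tsunami warning, just slightly late; the real cost is the human attention needed to watch every stream.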
I’d add one other issue: if the people programming the algorithms are limited by their own bias, conscious or unconscious, the algorithms won’t pick up the bias at all.
I’m more concerned at where this demand for censorship will lead – one person’s objectionable content is another’s genuinely held belief, religion being an obvious example.
History shows what happens when the State tries to ban an idea – the Weimar government prohibited the Nazi party and that worked just hunky-dory.