Jack Dorsey (Image: Wikicommons)

In an extraordinary interview with the Huffington Post’s Ashley Feinberg, Twitter CEO Jack Dorsey appeared unable to answer questions about whether he would ban Donald Trump, even if the president incited someone to murder a journalist. “We’d certainly talk about it” was the best the Twitter boss could do, before promising to make the “report” button a little more visible.

This revelation was just one of many uncomfortable moments in an interview where Dorsey struggled to coherently articulate how Twitter could better respond to harassment, or to explain his apparent cosiness with far-right figures like Ali Akbar.

But Dorsey’s under-dressed word salads and half-baked promises to improve the algorithm are nothing new. As social media companies repeatedly come under fire for incubating reactionary politics, spreading fake news and turning a blind eye to harassment and doxxing, bosses like Dorsey have continued to issue empty promises to do better, while downplaying and distancing themselves from the incredible social harm caused by their platforms. Crikey takes a look at how different tech CEOs have responded to public criticism of their awfulness, and how remarkably similar their responses are.

Facebook

In 2018, the rap sheet of Facebook’s sins seemed to stretch for eternity: the Cambridge Analytica scandal, widespread data breaches, allowing discriminatory ads, providing the breeding ground for genocide in Myanmar. In response, Mark Zuckerberg spent a lot of time telling people he was sorry, and that Facebook would do better. “We have a responsibility to protect your information, and if we can’t, we don’t deserve it,” Zuckerberg wrote in a post, a message he took out ads in some of the biggest American and British newspapers to spread. Facebook also made a cute video. Zuckerberg then appeared in front of the US Senate, House of Representatives and European Parliament to say he was sorry about the fake news and the misuse of data, and to promise it wouldn’t happen again. Despite Zuckerberg’s calm assurances, the scandals have just kept coming.

Google

Zuckerberg wasn’t the only tech supremo to be grilled on Capitol Hill last year. In September, Facebook COO Sheryl Sandberg appeared at a hearing alongside Dorsey. Google was meant to attend, but instead sent an empty chair and a statement that it was “committed to working with Congress on these issues”.

When Google’s CEO Sundar Pichai did eventually show up in December, Republicans wasted a lot of time finger-wagging about the tech giant’s supposed anti-conservative bias. Pichai has largely avoided serious questions about the very real problems with his platform. When asked by the New York Times how Google would deal with misinformation, he referenced different cultural attitudes to freedom of speech in the US and Europe, and admitted that “it’s just a genuinely hard problem”. In another interview with the Washington Post, Pichai was equally vague, saying that Google needed to do more work in “areas where the world doesn’t quite agree”. When asked about plans to open a censored version of Google in China, Pichai again responded vaguely, saying “whatever form it takes, I don’t actually know the answer”.

Reddit

Reddit may be one of the only tech companies where the incredible awfulness of its content eventually led to the downfall of its boss. In 2015, CEO Ellen Pao tried to crack down on harassment by closing various subreddits (the site’s forums), sparking outrage among users that culminated in her resignation.

Reddit would continue to gain notoriety for its perceived role in Donald Trump’s election and the Charlottesville rally, and for providing a breeding ground for the misogynistic incel movement. While the platform has taken steps to ban some of its worst subreddits, co-founder and CEO Steve Huffman appears to take a laissez-faire approach to policing the site’s content. In 2015, he said of policing hate speech that he didn’t think the company “should silence people just because their viewpoints are something we disagree with”. Last year, Huffman said “open racism including slurs” was allowed on the platform because it didn’t violate any rules.

“On reddit there will be people with beliefs different from your own, sometimes extremely so,” Huffman said.

“Our approach to governance is that communities can set appropriate standards around language for themselves”.

YouTube

It’s increasingly apparent that YouTube, both by the design of its algorithm and the content it allows, is one of the biggest radicalising forces for the far right and a hotbed of nasty conspiracy theories. At a recent summit, Wired’s Peter Rubin confronted CEO Susan Wojcicki with evidence of how the algorithm pushes such content. In response, Wojcicki made a Zuckerberg-esque admission that YouTube “doesn’t always get the balance right”, and that the company was “working on it”.

Wojcicki has been working on it for a while. In 2017, in the face of backlash over the monetisation of inappropriate content, she promised “a new approach to advertising on YouTube”. But as on so many other platforms, the bad content keeps spreading in spite of promises to tinker with the algorithm, and last year a new report highlighted just how bad YouTube’s problem with the far right had become.