For one week in January 2012, Facebook deliberately manipulated the emotional content of the news feeds seen by nearly 700,000 users, just to see how it would affect their moods. This secret mood manipulation experiment, as The Atlantic called it, is yet more proof that Silicon Valley and its poisonous culture need to be told that they don’t get to decide the future of human society.
The question, though, is whether investors, legislators and regulators, bedazzled by multibillion-dollar company valuations but baffled by the technology behind it all, have either the clue or the spine to do something about it.
While Facebook’s controversial experiment was conducted more than two years ago, the results were only published earlier this month in the prestigious Proceedings of the National Academy of Sciences in a paper titled “Experimental evidence of massive-scale emotional contagion through social networks”. The news crossed over into mainstream media over the weekend as people began to understand what Facebook had actually done.
The questions Facebook was investigating are simple enough. Are people’s moods influenced by the moods expressed by their social media contacts? If so, to what extent? A typical Facebook user’s friends, family and other contacts generate far more posts than can be shown in that user’s news feed — reportedly around 1500 items at any given time. Facebook already has processes for selecting the most “relevant” — taking into account factors such as popularity, the closeness of personal connections and, presumably, commercial reasons.
So what happens when that selection process has an emotional bias? If users see a preponderance of happy messages, do they feel left out and get depressed? Or are good and bad moods contagious?
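To make the mechanics concrete, here is a rough sketch, in Python, of what an emotionally biased feed-selection pass could look like. The post fields, sentiment scores and weighting below are entirely hypothetical; Facebook has never published its ranking code, so this is illustration, not description.

```python
# Illustrative sketch only: a toy version of an emotionally biased feed-selection
# pass. The Post structure, sentiment scores and weighting are hypothetical;
# Facebook's actual ranking system is not public.
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    text: str
    relevance: float   # baseline "relevance" score (popularity, closeness, etc.)
    sentiment: float   # -1.0 (very negative) .. +1.0 (very positive)


def select_feed(posts: List[Post], limit: int, positivity_bias: float = 0.0) -> List[Post]:
    """Rank candidate posts, optionally nudging the ranking toward or away from
    positive content. positivity_bias > 0 favours happy posts; < 0 suppresses them."""
    def score(p: Post) -> float:
        return p.relevance + positivity_bias * p.sentiment
    return sorted(posts, key=score, reverse=True)[:limit]


# An experiment like the one described amounts to splitting users into groups
# and handing each group a different bias setting:
candidates = [
    Post("Best holiday ever!", relevance=0.8, sentiment=0.9),
    Post("Lost my job today.", relevance=0.7, sentiment=-0.8),
    Post("New phone arrived.", relevance=0.5, sentiment=0.3),
]
control_feed = select_feed(candidates, limit=2, positivity_bias=0.0)
reduced_positive_feed = select_feed(candidates, limit=2, positivity_bias=-0.5)
```

In the control group the feed is ranked on relevance alone; in the treatment group the same candidate posts are re-ranked so that happier items sink. That, in outline, is the kind of tweak being described.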
It turns out the answer is yes. Or, as Facebook’s research blandly puts it:
“When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks …
“We also observed a withdrawal effect: People who were exposed to fewer emotional posts (of either valence) in their News Feed were less expressive overall on the following days, addressing the question about how emotional expression affects social engagement online.”
Facebook thinks it has users’ permission for this sort of emotional tinkering because its data use policy, buried within its 9045-word terms of service, says “we may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement”.
To many of the geeks who reacted to the story across the weekend, this isn’t news. Facebook manipulates you all the time.
In one sense they’re right. Advertising companies — and that’s what Facebook is — have emotional manipulation as their primary mission. They hope we’ll feel positive about their message, buy the product, or vote for the candidate. Naturally they’ll conduct research to see what techniques sell most effectively.
But isn’t there a difference between conducting research with a clear and specific commercial aim and poking at people’s emotions to see what happens?
“Let’s call the Facebook experiment what it is: a symptom of a much wider failure to think about ethics, power and consent on platforms,” tweeted Kate Crawford, an Australian who researches the politics and ethics of data for Microsoft Research, the MIT Center for Civic Media, the Information Law Institute at NYU, and the University of New South Wales.
“Perhaps what bothers me most about the Facebook experiment: it’s just one glimpse into an industry-wide game. We are A/B testing the world,” she tweeted.
Crawford is right. This isn’t just a Facebook thing. The entire Silicon Valley realm, what I sometimes call Startupland, is run by engineers who see us less as humans with our own needs, desires and fears, and more as data to be manipulated.
The core problem here is that for all its smarts, Startupland is populated by a very narrow segment of society: highly intelligent, well-educated software engineers and their associates. Most are from privileged backgrounds — Stanford University is the main gateway on the United States west coast, Harvard on the east.
And most, it must be said, are white males. My experience of tech conferences in San Francisco and San Jose is that white men do the presentations, perhaps along with a smattering of middle-class Asian people. You might see Hispanics serving food and drink, while blacks might provide security muscle. It’s a clearer, sharper racial stratification than you see in Australia.
By coincidence, last week Quartz published an essay by top-shelf software engineer Carlos Bueno saying that the next thing Silicon Valley needs to disrupt is its own culture:
“Silicon Valley has … created a make-believe cult of objective meritocracy, a pseudo-scientific mythos to obscure and reinforce the belief that only people who look and talk like us are worth noticing. After making such a show of burning down the bad old rules of business, the new ones we’ve created seem pretty similar.
“It’s even been stated [by Max Levchin, a founder of PayPal]: ‘The notion that diversity in an early team is important or good is completely wrong. You should try to make the early team as non-diverse as possible.'”
Bueno is right. The geeks should not inherit the earth. Not this narrow little enclave of geeks, anyway.
You are right about the “white man” thing. Both Google and Facebook have published their diversity stats…in the leadership ranks…it’s over 70% male and white. Hispanics and African Americans make up less than 3%.
As for Facebook manipulation…they already manipulate your news feed based on a set of criteria. It’s just changing that algorithm briefly to also filter on emotional words. Big deal. They are a company, you already know you are giving something away when you sign up/comment/post, but you accept that for the benefits you get for staying in touch with your friends.
If you don’t like it, get off the site.
As for hauling off all the Jews to the death camps, the census already collects this data; it’s just changing the algorithm, big deal.
Hi Stil, great story, couldn’t have put it better. (Hilarious that they have recently dumped “move fast and break things” as their motto, just as the implications are becoming clear.)
Scott, the issue is that most Facebook users are NOT aware of the full extent of what Facebook thinks they have, by clicking that button, agreed it can do with them and what’s inside their heads — so ‘agreeing’ becomes like a blank cheque. It falls short of properly informed consent, so its legal effect is also potentially doubtful.
At some point the price gets too high (for anyone), but with a culture of contempt for the “dumb f**ks”, as founder Zuckerberg was once quoted as having described his users, there seems to be effort applied to keeping our blissful, clueless ignorance undisturbed by full understanding and knowledge of the facts and their implications. It is unlikely we get told when the price or risk goes up past our own line in the sand.
If so, it is a bit rich to then claim that we have actually accepted every cost and risk they might wish to impose on us, or the wider world (influence elections anyone?), deliberately or accidentally, forever. What seems to have happened is that we have been suckered into signing a blank cheque, a fundamentally dangerous and unwise transaction — not something a “friend” asks you to do, something a sucker gets sucked into.
And it’s hardly just a matter of “getting off the site”: their tracking bugs and spyware are littered all over the net, on third party sites, even leaking out into the offline world, and it’s not at all clear what they do with all this invisible net-wide surveillance Big Data if you “get off the site”.
Women in Silicon Valley? White or otherwise?
This would have passed the ethics panel with flying colours. Has it really hurt anyone? I’m fascinated by the results and fully support Facebook experimenting in this way to a limited extent. Get over it and concentrate on the truly egregious breaches of humanity out in the real world.