By Martin Kihn | June 30, 2014
During a single week in January 2012, Facebook conducted an experiment involving almost 700,000 users. By scaling back the number of posts containing positive and negative words, Facebook ended up validating its hypothesis that emotional states “can be transferred to others via emotional contagion” – or, as my mother used to say, smiles last for miles and a frown can get you down.
Quietly releasing the results of its two-year-old study in the Proceedings of the National Academy of Sciences last week, Facebook could hardly have anticipated the outrage.
The tempest reminded me of a similarly short-lived furor at the end of last year, when outgoing Senator Jay Rockefeller held hearings into the practices of “data brokers” who maintain databases of consumer information. Those hearings inspired steamy op-eds and a 60 Minutes segment that hinted our personal medical records are available for sale online – a practice that is absolutely illegal.
Back to Facebook. The study itself strikes me as being routine, legal, ethical and unsurprising. It’s actually more interesting for what it gives away between the lines than in its widely reported findings. We’ll get to the outrage and academic defenses in a moment, as well as what the study really tells us. For now, here are the between-the-seams “tells” I detected.
First, Facebook says it analyzed 3 million posts containing 122 million words, of which 4 million were positive and 1.8 million were negative. A bit of basic math here tells us the useful tidbits that only about 3.3% of the words were positive and 1.5% were negative – and that the average post ran a mere 41 words.
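For the skeptical, the arithmetic behind those tidbits can be checked in a few lines (the input figures are the ones the study reports; everything else is just division):

```python
# Back-of-the-envelope check of the figures reported in the Facebook study:
# 3 million posts, 122 million words, 4 million positive, 1.8 million negative.

total_posts = 3_000_000
total_words = 122_000_000
positive_words = 4_000_000
negative_words = 1_800_000

pct_positive = 100 * positive_words / total_words   # share of all words that were positive
pct_negative = 100 * negative_words / total_words   # share of all words that were negative
words_per_post = total_words / total_posts          # average post length in words

print(f"positive words: {pct_positive:.1f}%")       # ~3.3%
print(f"negative words: {pct_negative:.1f}%")       # ~1.5%
print(f"avg words per post: {words_per_post:.0f}")  # ~41
```

In other words, the overwhelming majority of words in the sample carried no emotional signal either way.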
Second, why did Facebook do the study at all? I suspect the answer is embedded in the report itself. Consider the context: early 2012. Instagram and Pinterest are sizzling new social networks that are almost entirely visual. Facebook content is a mix of visual and verbal. Facebook is wondering: Is the future photo-only? In academic terms, the study addresses this question as an attempt to determine if “nonverbal cues” (i.e., images, tone of voice) are necessary to elicit an emotional response.
In other words: Do words matter? The study is purely text-based and concludes, yes, they do. Words alone can make us feel emotions. (This is a relief to those of us who are authors.)
I also think Facebook was reacting to a popular book. In 2011, Sherry Turkle published Alone Together: Why We Expect More from Technology and Less from Each Other, which made an oft-repeated claim that seeing our friends’ good times stream past us all day on our social feeds actually makes us depressed. We compare our insides to other people’s outsides and it brings us down, man.
Facebook data scientists probably realized they could take this calumny head-on. And in fact, the final point made by the current study’s authors reads as a direct rebuttal to Sherry Turkle:
“The fact that people were more emotionally positive in response to positive emotion updates from their friends, stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.”
So there. As I said, the reaction to these rather common-sensical findings ranged from a legalistic squib in Slate (conclusion: “It’s made all of us more mistrustful”) to a more measured but still ominous dissection in The Atlantic titled “Everything We Know About Facebook’s Secret Mood Manipulation Experiment.” After which came a series of defenses from academics, who made the point that everything we see on all our social networks – and on most websites, period – is designed to manipulate us somehow into engaging, sharing, buying, shopping, staying, liking, and so on. And it’s only going to get worse, believe me.
One of the academic defenses, from Tal Yarkoni, strummed a refreshingly cynical chord:
“Everybody you interact with — including every one of your friends, family, and colleagues — is constantly trying to manipulate your behavior in various ways.”
In other words: Man up, people.
A couple of final points that I think weren’t stressed enough, before this controversy too fades to black and we go back to being happily manipulated by our social feeds.
If we start demanding an academic standard of “informed consent” for routine A/B and multivariate tests run online, we’re skirting the boundaries of absurdity. What do you think?