During a single week in January 2012, Facebook conducted an experiment involving almost 700,000 users. By scaling back the number of posts containing positive or negative words in users’ News Feeds, Facebook ended up validating its hypothesis that emotional states “can be transferred to others via emotional contagion” – or, as my mother used to say, smiles last for miles and a frown can get you down.
Quietly releasing the results of its two-year-old study in the Proceedings of the National Academy of Sciences last week, Facebook could hardly have anticipated the outrage.
The tempest reminded me of a similarly short-lived furor at the end of last year, when outgoing Senator Jay Rockefeller held hearings into the practices of “data brokers” who maintain databases of consumer information. Those hearings inspired heated op-eds and a 60 Minutes segment that hinted our personal medical records are available for sale online – a practice that is absolutely illegal.
Back to Facebook. The study itself strikes me as being routine, legal, ethical and unsurprising. It’s actually more interesting for what it gives away between the lines than in its widely reported findings. We’ll get to the outrage and academic defenses in a moment, as well as what the study really tells us. For now, here are the between-the-seams “tells” I detected.
First, Facebook says it analyzed 3 million posts containing 122 million words, of which 4 million were positive and 1.8 million were negative. A bit of basic math (a quick check of the arithmetic is sketched after this list) tells us the useful tidbits that:
- The average post is about 40 words long
- Positive words are more than twice as common as negative words
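For those who want to check my arithmetic, here is the back-of-the-envelope version as a few lines of Python. The inputs are just the round figures quoted above from the paper; the derived ratios and the rounding are mine.

```python
# Round figures quoted above from the PNAS paper.
total_posts = 3_000_000
total_words = 122_000_000
positive_words = 4_000_000
negative_words = 1_800_000

print(total_words / total_posts)           # ~40.7 -> roughly 40 words per post
print(positive_words / negative_words)     # ~2.2  -> positive words a bit more than 2x as common
print(100 * positive_words / total_words)  # ~3.3% of all words scored "positive"
print(100 * negative_words / total_words)  # ~1.5% of all words scored "negative"
```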
Second, why did Facebook do the study at all? I suspect the answer is embedded in the report itself. Consider the context: early 2012. Instagram and Pinterest are sizzling new social networks that are almost entirely visual. Facebook content is a mix of visual and verbal. Facebook is wondering: Is the future photo-only? In academic terms, the study addresses this question as an attempt to determine whether “nonverbal cues” (e.g., images, tone of voice) are necessary to elicit an emotional response.
In other words: Do words matter? The study is purely text-based and concludes, yes, they do. Words alone can make us feel emotions. (This is a relief to those of us who are authors.)
I also think Facebook was reacting to a popular book. In 2011, Sherry Turkle published Alone Together: Why We Expect More from Technology and Less from Each Other, which made the oft-repeated claim that seeing our friends’ good times stream past us all day on our social feeds actually makes us depressed. We compare our insides to other people’s outsides and it brings us down, man.
Facebook data scientists probably realized they could take this calumny head on. And in fact, the final point made by the current study’s authors reads as a direct rebuttal to Sherry Turkle:
“The fact that people were more emotionally positive in response to positive emotion updates from their friends, stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.”
So there. As I said, the reaction to these rather common-sensical findings ranged from a legalistic squib in Slate (conclusion: “It’s made all of us more mistrustful”) to a more measured but still ominous dissection in The Atlantic titled “Everything We Know About Facebook’s Secret Mood Manipulation Experiment.” After which came a series of defenses from academics, who made the point that everything we see on all our social networks – and on most websites, period – is designed to manipulate us somehow into engaging, sharing, buying, shopping, staying, liking, and so on. And it’s only going to get worse, believe me.
One of the academic defenses, from Tal Yarkoni, strummed a refreshingly cynical chord:
“Everybody you interact with — including every one of your friends, family, and colleagues — is constantly trying to manipulate your behavior in various ways.”
In other words: Man up, people.
A few final points that I think weren’t stressed enough, before this controversy too fades to black and we go back to being happily manipulated by our social feeds.
- How did Facebook determine whether a post was “positive” or “negative”? It used the Linguistic Inquiry and Word Count software (LIWC2007), which simply counts words (e.g., “hate” is bad, “love” is good). However, this method is notoriously unreliable – especially on social networks, where posts are too short and usage too quirky and ironic for such methods to work (a toy sketch of the word-counting approach follows this list). I’d be surprised if this automated assessment were even 50% correct. (This issue wasn’t raised in the study, and others have taken notice.)
- The impact is extremely small – surprisingly small, in fact. Filtering out emotionally laden content (positive or negative) from the feed tended to shift responses by about 1/50th of a standard deviation, which is almost trivial. So emotional words may affect us, but not by much. The study also had no way of measuring the impact on people’s minds, unless they happened to express that impact in a post later. (Silent sulking went unnoticed.)
- What is “informed consent”? – In case you don’t already know it, let me be clear: If you are online, someone is trying to manipulate you. You are being served experiments continually and aggressively – different versions of ads, web content, Click Here! buttons, images, background colors, offers, products... anything. Your reactions are watched, and that information is used to improve the manipulation. Marketers call this “targeting,” and it is the whole reason so much of your content is “free” in the first place. You get nothing for nothing. Advertisers pay and they want something in return.
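To make the first bullet concrete, here is a toy version of that word-counting approach. The word lists are invented for illustration – they are not the actual LIWC2007 dictionaries, which are proprietary and far larger – and the example at the end shows how sarcasm defeats a pure counter.

```python
# Toy word-count sentiment scorer, in the spirit of (but far cruder than)
# the LIWC2007 approach described above. Word lists are made up for illustration.
POSITIVE = {"love", "great", "happy", "awesome", "wonderful"}
NEGATIVE = {"hate", "awful", "sad", "terrible", "angry"}

def classify(post: str) -> str:
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# A short, sarcastic post gets misread: the counter sees "great" and "love"
# and has no idea the writer is miserable.
print(classify("Oh great, another Monday. Love that for me."))  # -> "positive"
```

Real sentiment tools are more sophisticated than this, but on 40-word posts full of slang and sarcasm even the sophisticated ones stumble – which is the point of the bullet above.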
If we start demanding an academic standard of “informed consent” for routine A/B and multivariate tests run online, we’re skirting the boundaries of absurdity. (A bare-bones sketch of what such a routine test looks like appears below.) What do you think?
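For contrast with the Facebook study, here is roughly what one of those routine tests looks like under the hood – a bare-bones sketch built on my own assumptions, not any particular company’s system. A user is silently assigned to a variant, the variant changes what they see, and their clicks are tallied to pick a winner.

```python
# Bare-bones A/B test sketch: deterministic assignment by user id, two variants
# of a button label, and click/impression counters used to pick a winner.
# Entirely illustrative -- not any specific vendor's implementation.
import hashlib
from collections import Counter

VARIANTS = {"A": "Click Here!", "B": "Learn More"}
impressions = Counter()
clicks = Counter()

def assign_variant(user_id: str) -> str:
    # Hash the user id so the same user always sees the same variant.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def serve_button(user_id: str) -> str:
    variant = assign_variant(user_id)
    impressions[variant] += 1
    return VARIANTS[variant]  # the label this user actually sees

def record_click(user_id: str) -> None:
    clicks[assign_variant(user_id)] += 1

# The "informed consent" here is zero: each user simply sees one label or the other.
```

No debriefing, no opt-in form – just a lift in one counter over the other.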
2 Comments
But this wasn’t “routine A/B” testing. It involved academics doing academic research that was published in an academic journal. It could have been run with proper informed consent after a proper review by an IRB.
We don’t allow academics to set ethical rules for their own experiments because, as in this case, they will justify circumventing protections for the sake of expediency.
It is simply bad research practice, and that is something entirely distinct from Facebook’s self-interested relationship with consumers.
Thanks, David – your point is well taken. Some of my colleagues at Gartner have likewise chided me for implying that academic standards don’t apply in this case because Facebook does similar tests as part of its usual business practices. Of course, even Facebook must adhere to standard practices in an academic context, and there are certainly questions about the process followed here.