I’m in the middle of a doctorate program at Northeastern University, and my research focus is the impact of disruptive technologies on the public policy making process. Before I could engage in any behavioral research, Northeastern required that I earn certification in protecting human research participants. That certification was based in large part on the principles outlined in The Belmont Report, which was commissioned by Congress following the public outrage over the syphilis study conducted at the Tuskegee Institute. The Belmont Report is the basis for the ethical guidelines governing any academic research involving human subjects, and any such research must be approved by an Institutional Review Board (IRB) at the university or other research institution conducting the study.
“Respect for persons” is a first principle of Belmont’s ethical guidelines. This principle requires informed consent of the research participants. If informing participants of the exact nature of the research would compromise the validity of the study, the participants don’t have to be told the exact nature of the research, but they do have to be told that there will be experiments. The general notice to users of Facebook that their data could be used for research purposes may be fine for experiments involving pre-existing data, but it does not explicitly address live experimental research (moreover, when the study was conducted in 2012, Facebook’s data-use policy did not yet include a research clause). In the case of the experiment on over 600,000 Facebook users that was published recently and reported in The Atlantic and elsewhere, subjects’ behaviors were directly manipulated: they were shown varying degrees of positive and negative content on Facebook and then evaluated as to whether they subsequently behaved in a negative or positive way.
“Respect for persons” also requires that vulnerable persons be protected. That could apply to minor children, to individuals with a limited mental capacity to understand the nature of the experiment, and to persons in a precarious emotional or mental state. Notably, the researchers in their study observed a “withdrawal effect,” stating: “People who were exposed to fewer emotional posts (of either valence) in their News Feed were less expressive overall on the following days, addressing the question about how emotional expression affects social engagement online.” Such a lingering effect on the individuals involved in behavioral research makes it important that protections for vulnerable individuals be established. These persons would need to be identified and additional measures taken to prevent any negative outcomes for them. It is not adequate to do a statistical calculation and conclude from it that the risks to the group of potentially vulnerable persons are minimal. For a waiver of the disclosure requirement, the specific individuals should be identified and action taken to protect them as individuals. Furthermore, when the research is concluded, the Belmont principles require that research subjects be debriefed. The study does not state whether a debriefing occurred.
Analysis of social media usage data raises serious ethical considerations, and my colleague Frank Buytendijk at Gartner is working to provide guidance to our clients about big data ethics. Taking that data and using it to manipulate individuals into action, whether to buy a certain product or service, engage in a certain activity, or even vote in a certain way, is the core of the business model of social media firms. Against that business model, this experiment may have seemed mild within the world of social media businesses, and the researchers certainly appear to have been caught off guard by the public backlash. If nothing else positive comes of this experiment, it offers a sober lesson on the fine line between the business of marketing and the intentional manipulation of emotions.
Facebook states that it has significantly upgraded its research safeguards since 2012 when this experiment was conducted. Perhaps Facebook, and other social media firms, could consider openly sharing the details of those safeguards.