Gartner Blog Network


Google and the Jigsaw Puzzle of Online Discourse

by Darin Stewart  |  February 24, 2017

Comment sections: the toxic waste dumps of the internet. Just like in the comics, even the most mild-mannered citizen can turn into a noxious super-villain when doused with the radioactive sludge found at the end of online articles. Most of us know better than to wander into those dark lands and attempt to engage the natives, but sometimes we just can't stop ourselves. Before we even realize what is happening, we ourselves have turned into trolls.

"We have more information and more articles than at any other time in history, and yet the toxicity of the conversations that follow those articles is driving people away from the conversation," says Jared Cohen, an Adjunct Senior Fellow at the Council on Foreign Relations and president of Jigsaw, formerly known as Google Ideas.

The default reaction is simply to turn off the comments section to keep the rabble-rousers at bay, but that also precludes any hope of resurrecting those long-forgotten and near-mythic beings: civil discourse and productive dialog.

Fortunately, Cohen and his Jigsaw cohorts are bringing a new superpower to bear in the fight against flame wars: machine learning. Today, Jigsaw announced Perspective, a new service intended to foster and protect the civil exchange of ideas online. From the Jigsaw project page:

Perspective is an API that uses machine learning to spot abuse and harassment online. Perspective scores comments based on the perceived impact a comment might have on a conversation, which publishers can use to give real-time feedback to commenters, help moderators sort comments more effectively, or allow readers to more easily find relevant information. We’ll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as “toxic” to a discussion.

In what can only be considered a selfless act of heroism, Jigsaw engineers, working with staff from the New York Times and Wikipedia, reviewed hundreds of thousands of online comments to identify patterns and characteristics that reliably indicate, on a scale from 0 to 100, how “toxic” a given post is. This is an interesting use of tried-and-true auto-classification techniques, though it may also represent the foulest training set ever devised. At least it was created in support of a noble cause. Publishers can apply for access and use the service at no cost.
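For publishers who do get access, scoring a comment amounts to a single REST call. The sketch below (Python, standard library only) shows roughly what that looks like against the commentanalyzer endpoint Jigsaw documents for Perspective; the API key placeholder and the helper function name are illustrative, and the summary score comes back as a probability that can be rescaled to the 0-to-100 reading mentioned above.

```python
# Minimal sketch of scoring a comment with the Perspective API.
# Assumes you have been granted access and hold an API key.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; issued once your access request is approved
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)


def toxicity_score(comment_text: str) -> float:
    """Ask Perspective to score a comment; return toxicity on a 0-100 scale."""
    payload = {
        "comment": {"text": comment_text},
        # TOXICITY is the first model Jigsaw released; more are promised later.
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # The API returns a probability between 0 and 1; rescale to 0-100.
    probability = result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return probability * 100


if __name__ == "__main__":
    print(toxicity_score("You clearly know nothing about this topic."))
```

A moderation queue could sort incoming comments by this score, or a commenting widget could show the number back to the author before they hit submit, which is exactly the real-time feedback loop Jigsaw describes.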

I’ve noted in past posts that the internet is losing its way. It is no longer the source of information and connection it once was. It has devolved into a morass of misinformation and vitriol, and it’s starting to look like we will soon have to pay a premium to bypass the bog. Efforts such as Jigsaw’s Perspective, Google’s ranking tweak to demote rubbish and Twitter’s “Safe Search” are important steps toward keeping the trolls in their caves.

Category: social-computing  taxonomy  trends-predictions  

Tags: google  jigsaw  machine-learning  trolls  

Darin Stewart
Research Vice President
6 years with Gartner
21 years IT industry

Darin Stewart is a research vice president for Gartner in the Collaboration and Content Strategies service. He covers search, knowledge management, semantic technologies and enterprise content management.




