A few years ago, media incubator NewAssignment.net partnered with Wired magazine in a decidedly meta project called Assignment Zero. The idea was to use crowdsourcing to produce an in-depth study of crowdsourcing. NewAssignment would “have a crowd of volunteers write the definitive report on how crowds of volunteers are upending established businesses, from software to encyclopedias and beyond.” Like many enterprises venturing into Web 2.0 territory, the sponsors did not put much thought into how this would happen. They would simply provide the tools and topics, and the magic of the crowd would take care of the rest. “It’s like throwing a party,” Assignment Zero editor Lauren Sandler told the New York Times in 2007. “You program the iPod, mix the punch and dim the lights and then at 8 o’clock people show up. And then who knows what is going to happen?”
The goal was to produce at least 80 high-quality, feature-length articles. At the completion of its 12-week run, the project had produced only seven. The problem was not a lack of interest or enthusiasm. According to a post-mortem published in Wired magazine, the project attracted over 500 potential contributors in its first week of operation. Rather, Assignment Zero’s difficulties arose from the assumption that the flood of volunteers would self-organize and produce the desired content. This is the problem with most ‘let’s just throw it out there’ implementations of collaborative technologies. If you build it they will come, but they are more likely to stand around looking at each other than to start a Field of Dreams-esque pickup game. Eventually, most will lose interest and wander off. This is precisely what happened to Assignment Zero.
Halfway through the project, the sponsors realized they were in trouble, but they were learning. The first wave of volunteers had largely disappeared and most of the topic pages had been abandoned. The problem was that Assignment Zero put the organizational burden on the volunteers themselves. “What we learned,” said Jay Rosen, one of the project sponsors, “is that you have to be waaaay clearer in what you ask contributors to do. Just because they show up once doesn’t mean they’ll show up over and over. You have to engage them right away.” The most critical form of engagement is to provide guidance, direction and structure.
Every authentic example of collective intelligence that I am aware of also shows a collective that was guided or inspired by well-meaning individuals. These people focused the collective and, in some cases, corrected for common hive-mind failure modes.
In open-source software development, this role is known as the benevolent dictator. In Assignment Zero’s case, it took the form of 30 volunteer professional editors who were trained on the tools, educated on the goals of the project and assigned to manage various topics. In essence, each research area to be explored became a fiefdom (but a pleasant fiefdom) with its own benevolent dictator working under the direction of a board of governors.
This may sound like it flies in the face of the open enterprise and Web 2.0 in general, but what I’m describing is not a traditional command-and-control structure. Rather, it is a guide-and-nurture ethic more on the order of Robert Greenleaf’s servant leadership model. Successful crowdsourcing efforts depend on the passion of their participants. After all, you can make an assignment, but if the contributor’s heart isn’t in it, you are likely to get less than stellar results, if you get any at all. And while we may be creating fiefdoms, the borders are always open. People can come and go as they please.
This is the second lesson Assignment Zero learned. Once you have set the vision and boundaries, the community shapes the scope and direction of the project. In the Wired post-mortem, David Cohn, another Assignment Zero editor, recalled, “we had to jettison most of the topics we’d started off with. Instead, we concentrated on the topics that people were most clearly interested in.” Instead of the 80 feature articles Assignment Zero was hoping for, it ended up with only seven, but this may not be the failure it seems. In addition to the formal articles, the project conducted and posted 80 in-depth interviews, at least 60 of which were of a professional caliber.
While Assignment Zero may not have produced the definitive study of crowdsourcing it intended, at least not in the form originally conceived, it did demonstrate and document some important lessons about collaborative technologies. First, you must provide guidance and direction. Second, that guidance and direction cannot be rigid or overbearing. Give the crowd, be it a marketing team, a group of developers or an online community, an overall vision; set some boundaries; educate them in the tools and resources; and then intervene only when necessary to keep things on track.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.