Balancing the Risks of Open Government

By Andrea Di Maio | December 06, 2009 | 2 Comments

After attending the Gartner Symposium in Sydney, I have read a number of press articles that make reference to some of the topics I covered there.

An article on Computerworld captured my point about the need for a conscious loss of control to make government 2.0 initiatives successful.

Another article on GovernmentNews picked up on the risks of open government. Quoting what I said at a press conference, the journalist wrote:

Governments should be more cautious about the data they place in the public domain […] governments had hurried to open up data sets with little regard to the outcomes and value that ‘mashed up’ products offered the public sector. […]

Who’s looking at whether these mashups are bringing value or not? Someone mashes them. It’s good from a public perspective but who will take action? […]

Think about the consequences of the data they’re putting out there. The more data you publish the more risk you create […].

One could argue that these articles somewhat contradict each other, but in fact both are accurate reports (although with some added emphasis). As I wrote in an earlier post, government 2.0 is something that needs to be tackled, even though it is fraught with risks. Government organizations do not really have a choice: the train has already left the station and, although different government domains in different jurisdictions will be affected at different times, this is something that pretty much any government organization will have to come to terms with sooner or later.

Once more, the real issue is striking the right balance between different aspects of government 2.0 (something I have also mentioned in a previous post). In the picture below I have drawn in black the “traditional” information flows associated with government 2.0: data from government to people, and engagement from people to government. I have drawn in red the reverse flows, i.e., data that is created by communities and used by government, and government employees engaging on externally sourced communities.

[Figure: information flows between government and communities — black arrows for the traditional flows (government data out, public engagement in), red arrows for the reverse flows (community data in, employees engaging externally)]

The black lines are those where government can exercise some degree of control. These include the various data.gov sites, all the web 2.0 thrills and frills on government web sites, as well as government pages on Facebook or accounts on Twitter.

The red lines are those where governments have to let go and empower their employees to act as connection agents with external communities. By its nature, this looks like the riskiest part, although, I would argue, it is also the most important.

However, and this is the point made in the second article, one should not think that publishing data is risk-free. As I also said in my previous post, “if something goes wrong because data is not accurate or up-to-date or just because it gets mashed up the wrong way or even maliciously, government will be held accountable”.

I need to be clear on this. Open government data will do more good than harm, and the incidents that will occur on the way to transparency will be far outweighed by the value that this data creates. That being said, it will take effort to make sure that the data is actually used (see my point about the doubtful relevance of mashup contests) and that the occasional drawbacks of transparency are understood and properly managed.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.

2 Comments

  • Hi Andrea,

I respect (and read) most of your publicly available work; however, I think your emphasis on the risks of open data needs more evidence behind it.

Certainly there is some individual and organisational risk in showing the real numbers, but what is the corresponding risk to society of working off faulty figures?

Also, if government departments decide what data to release based on what they think is useful and will be reused, there is a major risk of stifling innovation.

    Data becomes useful at different times for different purposes, so data released for years with apparently no value could suddenly become immensely valuable based on certain environmental changes.

I also don’t think governments around the world have a great track record of anticipating what their communities want. Like all organisations and individuals, they make decisions based on how they perceive the world (often how their senior management perceives the world) and not based on the community’s views.

    Keep up the thought-provoking posts – it’s good to have different perspectives and there’s a grain of truth in many views.

    Cheers,

    Craig

  • Don McIntosh says:

I think there does need to be some assessment made of the usefulness of data. Releasing almost any non-trivial data to the public carries some cost as well as risk, so it makes sense to have a way of determining which data gives the best bang for the buck.

    I do agree with Craig that government departments may not be best placed to make all the decisions. Perhaps they should be responsible for the risk assessment and invite community involvement to help estimate the usefulness of the data.

There’s also a general question about government departments’ motivation to do much at all. Andrea, as you pointed out to us last week, the ROI for making data public is often external, so aside from a bit of excitement around mashup competitions, there are probably very large volumes of data that will not enter the public domain — not because the risk cannot be managed (I think it can be; stats agencies have been doing it for decades), but because there’s not enough upside for the individual departments to get the data out there.