Over the last few days there has been considerable debate about Against Transparency, an article by Lawrence Lessig. In it Lessig, who ironically sits on the advisory board of the Sunlight Foundation, looks at the dark side of transparency, something I have touched upon in a couple of previous posts (see here and here).
While he looks at this from a more political perspective, exploring the impact of what he calls naked transparency on public officials – be they doctors receiving research grants from pharma companies, or members of Congress receiving campaign contributions from corporations – he points to a very interesting problem.
He says that
The naked transparency movement marries the power of network technology to the radical decline in the cost of collecting, storing, and distributing data. Its aim is to liberate that data, especially government data, so as to enable the public to process it and understand it better, or at least differently.
While he gives several examples of good, targeted transparency that can effectively change the way markets behave, he also warns that
To know whether a particular transparency rule works we need to trace just how the information will enter these “complex chains of comprehension.” We need to see what comparisons the data will enable, and whether those comparisons reveal something real. And it is this that the naked transparency movement has not done. For there are overwhelming reasons why the data about influence that this movement would produce will not enable comparisons that are meaningful. This is not to say the data will not have an effect. It will. But the effect, I fear, is not one that anybody in the “naked transparency movement,” or any other thoughtful citizen, would want.
One of the main problems, in Lessig’s view, is the attention span.
To understand something–an essay, an argument, a proof of innocence– requires a certain amount of attention. But on many issues, the average, or even rational, amount of attention given to understand many of these correlations, and their defamatory implications, is almost always less than the amount of time required. The result is a systemic misunderstanding–at least if the story is reported in a context, or in a manner, that does not neutralize such misunderstanding. The listing and correlating of data hardly qualifies as such a context. Understanding how and why some stories will be understood, or not understood, provides the key to grasping what is wrong with the tyranny of transparency.
Toward the end of his article he concludes that:
Proposals for public funding can be understood as a response to an unavoidable pathology of the technology–its pathological transparency–that increasingly rules our lives and our institutions. Without this response–with the ideal of naked transparency alone–our democracy, like the music industry and print journalism generally, is doomed. The Web will show us every possible influence. The most cynical will be the most salient. Limited attention span will assure that the most salient is the most stable. Unwarranted conclusions will be drawn, careers will be destroyed, alienation will grow.
Lessig’s point is that “the collateral consequence of transparency need not itself be good”. He focuses on examples of transparency in which individuals are linked to money flows that may suggest a possible bias, even in the absence of any other evidence.
This is not new. How many lives and reputations have been damaged by press articles? At least journalists are bound by a professional code of ethics, while the man in the street, each of us, is not. A series of data presented on a website and showing one side, or – even better – one little piece of a story, can easily be taken for the whole story, as we have no time to do the analysis and seek the context. The burden of proof will fall on people to show their innocence, not on us to prove they are guilty. And our suspicion may be triggered by something as tiny as a 140-character tweet or an intriguing mashup on Google Maps.
As I wrote in a previous post about the illusion of privacy, the amount of information available about each and every one of us may make us vulnerable to the same risk. When the Italian Revenue Agency published income and tax data for every taxpayer, many people checked their neighbors, friends, and colleagues, and when they found a discrepancy between too little an income and too expensive a lifestyle, they became suspicious. They did not bother to dig further and find out whether that wealth came from other family members or from perfectly legitimate (and taxed) company benefits. Similarly, when a picture of you half drunk at Oktoberfest gets published on Flickr, it is most likely stripped of its context (it was a holiday, you were not driving, etc.), and it can damage your reputation for a long time.
But the kind of transparency that technology makes possible today creates risks that go well beyond our individual reputations. The attention span problem can lead us to make decisions about our lives based on incorrect or incomplete information, while we believe we have plenty of it. Very few of us will be able to dig into the raw data published by governments, in whatever standard and machine-readable format, and make sense of it ourselves. We will have to trust intermediaries, communities, and mashers who will create the context for us. Will that context reveal the “complex chains of comprehension” necessary to make those decisions? And, if so, will it do so in a fair way?
I do like Lessig’s notion of the pathological transparency of technology. We must understand – as he says – that “while sunlight is the best of disinfectants, as anyone who has ever waded through a swamp knows, it has other effects as well.”