Gartner Blog Network

Good discussion, flawed analysis

by Robin Wilton  |  August 11, 2010  |  2 Comments

I have to commend the Wall Street Journal for the “What They Know About You” series of articles it is running at the moment. As someone who has been keenly interested in digital identity and online privacy for several years now, I recognise that user awareness is a key factor in mitigating privacy risk, and poor user awareness leads to ill-informed risk assessments, bad decisions and harmful behaviour.

I also appreciate that ‘online commerce’ and ‘online privacy’ are simple phrases which mask large and complex ecosystems of inter-related elements, so I also welcome the fact that the WSJ’s coverage reflects many different viewpoints – not just the privacy advocates or the data miners yelling at each other from their respective hilltops (if a miner on a hilltop makes any sense as a metaphor…).

Here’s a snapshot of three current articles in the series, with some thoughts as to what I think they illustrate:

1 – Jessica Vascellaro’s piece charting some of the internal thought processes, discussions and disputes as Google gradually progressed towards cookie-based tracking for targeted advertising.

To me, this reflects the many tensions at work in organisations like Google (organisations with a power to capture, store and analyse data about human behaviour on a scale unprecedented in social history) as they redefine the dynamics of online interaction and try to balance the commercial, technical, legal and ethical possibilities which confront them.

I’m glad to see a maturing approach to keeping users informed and giving them tools with which to exercise greater personal responsibility. I don’t think that job is done, by any means, and I don’t think good practice is widespread enough, but conversely, if the likes of Google were completely ignoring the problem, the outlook would be a lot worse.

I also recommend Jennifer Valentino-Devries’ article on the opt-out/preference services some online service providers are starting to offer.

2 – From the privacy advocacy perspective, this letter from Jules Polonetsky and Christopher Wolf (Future of Privacy Forum) argues concisely in favour of greater transparency.

I agree. The WSJ’s series has probably done more than anything before it to expose, to a wide audience, just how many third parties have access to what the vast majority of web users assume are simple, two-party interactions. As I will go on to describe in my comments on the third article I’ve singled out, it is meaningless to tell users to take responsibility for “their” data if they have no idea who else routinely has access to it.

3 – Jim Harper’s article on “Why Online Tracking Isn’t Bad”, in which he takes the provocative position that ‘web users get as much as they give’. (Really? How can you tell?)

I have several issues with Jim’s analysis.

First, there’s the whole notion of having this discussion based on an idea of “my” data. Joe Andrieu has written as concise a post as you could wish for on this complex topic (here), and Bob Blakley has described “The Absurdity of ‘Owning One’s Identity'” with customary clarity here. I’m not going to try and re-hash either piece; all I will say is, if you lay down “data ownership” as the conceptual cornerstone of online privacy, the building project is doomed to be short and unsuccessful.

Second, and on a related point: Jim asserts that you can’t even claim something as “your” data if you don’t do anything to control it. Hmm.

I am unconvinced. Let’s replace the idea of “my” data with the idea of “data about me, concerning the use of which I have certain legitimate rights” (OK, it’s 63 characters instead of two, but who ever said self-determination comes at no cost… ;^); on that basis, what Jim is saying is that you don’t continue to enjoy any rights which you don’t exercise – which is nonsense. Just because I don’t wear body armour and continuously assert my right not to be murdered doesn’t mean I forfeit that right.

OK, I’m caricaturing Jim’s position a little there. Let’s take a closer analogy. If I leave my car unlocked with the keys in, I don’t forfeit the right not to have it stolen (that’s still wrong), but I do weaken the strength of my complaint if it is stolen.

On that basis it makes sense to say that, if I knowingly expose my personal data without taking appropriate steps to protect it, I have fewer grounds for complaint if it is abused by someone else. But the word ‘knowingly’ is an important one there. If I reasonably assume I am having a conversation with one other person, and I judge the privacy risk on that basis, I will come to one set of conclusions about how discreet to be in my remarks.

If, instead, I know that many people are listening, I may well temper my remarks accordingly, especially if I am discussing something sensitive.

The problem is that, in the world of online tracking and targeted advertising, I am given the impression that I am having a one-to-one conversation, while in fact not only are other parties listening, but

  • it is generally impossible for me to tell how many, or who they are;
  • their commercial interests are likely to be directly at odds with my interest in my own privacy and self-determination.
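The scale of that opacity is easy to demonstrate. As a purely illustrative sketch (the page markup and tracker hostnames below are hypothetical; a real investigation would inspect live network traffic, as the WSJ did), a few lines of Python can enumerate the external hosts a single page quietly pulls resources from:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyCounter(HTMLParser):
    """Collects the hosts of external resources (scripts, images,
    iframes) referenced by a page - a rough proxy for the third
    parties 'listening in' on what looks like a two-party visit."""

    def __init__(self, first_party_host):
        super().__init__()
        self.first_party_host = first_party_host
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        # Any resource fetched from another host is a third-party request.
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).netloc
                    if host and host != self.first_party_host:
                        self.third_party_hosts.add(host)

# Hypothetical page content for illustration only.
page = """
<html><body>
  <img src="https://news.example.com/logo.png">
  <script src="https://ads.tracker-one.example/collect.js"></script>
  <img src="https://pixel.tracker-two.example/1x1.gif">
  <iframe src="https://widget.tracker-three.example/like"></iframe>
</body></html>
"""

counter = ThirdPartyCounter("news.example.com")
counter.feed(page)
print(sorted(counter.third_party_hosts))
```

Even this toy page contacts three hosts the user never asked for – and static markup understates the problem, since real trackers add further requests from scripts at runtime.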

Under those circumstances it is hard, if not impossible, for me to form a realistic risk assessment and therefore to behave appropriately to minimise risk (even assuming I had the right tools with which to try).

Jim’s article makes the point that users get a lot “free” because of the use that is made of their personal data. I think that analysis is overly simplistic. First, it is not the case that all the “free” services and information on the web are paid for by the mining of personal data. Many are, for sure, but there is also increasing sophistication in business models which offer a certain level of service without charge, offset by the subscription fees for higher-function offerings.

Second, the word “free” is misleading. There may be no financial charge for a service, but that simply means the internet operates on many forms of value exchange other than financial ones. What is really happening is that some of the non-financial costs (such as the compromise of the end-user’s privacy) are not being given the same weight as financial charges – at least, not in Jim’s model.

Several years ago, I conjectured that “Privacy is the new Green” – in the sense that it was regarded with some suspicion, as the preoccupation of a slightly odd group of individuals with eccentric priorities. Over time, we’ve seen environmental issues become mainstream; various kinds of cost model have emerged to allow ‘green’ factors to be taken into account alongside commercial and economic ones. The environment is (literally) a complex physical and biological ecosystem, and those models often struggle to relate it coherently to the broader web of commercial, economic and political imperatives. However, they reflect an understanding which continues to evolve, grow and mature.

We need to learn from that process as we confront the complex technical and social ecosystem of online privacy, and the broader web of commercial, economic and political imperatives which make up the internet. The WSJ’s work to raise awareness and public debate is good news, but there’s still a long way to go…



Robin Wilton
Research Director
26 years IT industry

Robin Wilton is a research director with a particular interest in digital identity and privacy (and their relationship to public policy), access control and single sign-on, and the productive use of public key infrastructures. Read Full Bio

Thoughts on Good discussion, flawed analysis

  1. Bob Blakley’s “On The Absurdity of ‘Owning One’s Identity’” was an interesting piece, touching on the tricky topics of privacy and identity. I like Bob’s critique of the overwrought Laws of Identity. But I find his particular privacy angle less convincing. It’s not that his points about personal privacy are wrong, but rather they are very much tangential to the main thrust of modern information privacy law. ‘Owning One’s Identity’ (luckily) is not what matters.

    The cause of practical privacy governance is much helped if we actually decouple privacy and identity, as ironic as that might sound. “Identity” is indeed a slippery, soft concept, and like “privacy”, resists definition. But this is why information privacy law tends not to rest on “identity” but rather on the much drier idea of personally identifiable information. Likewise, privacy law tends not to even use the terms “private” and “public” because they are so problematic. Instead, we can construct effective information privacy laws without any philosophical complexity, by basing them on principles that provide individuals some rights to be informed about and control the flow of information that pertains to them.

    Modern information privacy law is quite simply framed. It generally forbids the collection by governments and businesses of personally identifiable information (PII) without good cause, and the re-use of PII for purposes unrelated to the primary reason for collection. These are quite straightforward, enforceable measures. And they don’t depend on recognising ownership of information. So much of Bob’s energetic critique is actually moot.

    Bob said you can’t stop people from observing you or taking your picture or talking about you. True of course, but these sorts of act are not actually the concerns of information privacy law. To begin with, most privacy law governs the acts of organisations, not of individuals. And it’s specifically concerned with recorded information (not merely “talk”) about identifiable individuals. So under privacy law you can in fact stop corporations and governments from recording observations about you where you are named, linking in other material, putting such records to new purposes, making them available to others and so on.

    I love to play with the philosophical and sociological aspects of privacy and identity as much as the next guy, but technologists especially need to keep in mind that information privacy law is purposely framed to skirt the harder questions of identity and ownership.

  2. Robin Wilton says:

    Thanks Steve – great comment, as usual!

    I agree with you that “Owning One’s Identity” is not what matters, for many of the reasons you correctly set out… the problem is that that doesn’t seem to prevent an awful lot of people from trying to frame the discussion in those terms. How often do we still see articles, blogs and the like dive in with the phrase “who owns your online identity?”.

    (In fact, just yesterday I saw someone Tweet “Who owns you? 20% of your DNA is patented”…).

    As you say, the issue is not one of ‘ownership’, it is one of ‘assertable rights over data which pertains to you’.

    In some cases, the data pertains to you because, even if it doesn’t happen to be on a list of things defined as PII, it still relates to you as an individual. In other cases, the data pertains to you because even though it is not even ‘about’ you, it can still give rise to things which negatively impact your privacy, freedom or self-determination.

    My assessment is that current laws are:

    – arguably mediocre at successfully handling privacy detriment arising out of well-defined lists of PII;

    – quite poor at providing protection against abuse of data which is ‘about’ you but not personally identifiable (see the current mess over Google Streetview, wireless MAC addresses and geo-location);

    – clueless about how to address the privacy detriments arising out of third party aggregation and data mining.

    There have been some valiant attempts to use “demonstrable harm” as a criterion for determining privacy detriment, but that approach is not good at handling things like the HMRC breach: there’s no provable harm one can point to, but that surely doesn’t mean the HMRC breach was ‘privacy neutral’.

    In my view, an effective law would have to be able to recognise that an action gives rise to the /potential/ for privacy harm to the data subject. But laws like that are notoriously hard to draft well.

    Thanks again for your comment… keep ’em coming!
