I have to commend the Wall Street Journal for the “What They Know About You” series of articles it is running at the moment. As someone who has been keenly interested in digital identity and online privacy for several years now, I recognise that user awareness is a key factor in mitigating privacy risk, and poor user awareness leads to ill-informed risk assessments, bad decisions and harmful behaviour.
I also appreciate that ‘online commerce’ and ‘online privacy’ are simple phrases which mask large and complex ecosystems of inter-related elements, so I also welcome the fact that the WSJ’s coverage reflects many different viewpoints – not just the privacy advocates or the data miners yelling at each other from their respective hilltops (if a data miner on a hilltop makes any sense as a metaphor…).
Here’s a snapshot of three current articles in the series, with some thoughts as to what I think they illustrate:
1 – Jessica Vascellaro’s piece charting some of the internal thought processes, discussions and disputes as Google gradually progressed towards cookie-based tracking for targeted advertising.
To me, this reflects the many tensions at work in organisations like Google (organisations with a power to capture, store and analyse data about human behaviour on a scale unprecedented in social history) as they redefine the dynamics of online interaction and try to balance the commercial, technical, legal and ethical possibilities which confront them.
I’m glad to see a maturing approach to keeping users informed and giving them tools with which to exercise greater personal responsibility. I don’t think that job is done, by any means, and I don’t think good practice is widespread enough, but conversely, if the likes of Google were completely ignoring the problem, the outlook would be a lot worse.
I also recommend Jennifer Valentino-Devries’ article on the opt-out/preference services some online service providers are starting to offer.
2 – From the privacy advocacy perspective, this letter from Jules Polonetsky and Christopher Wolf (Future of Privacy Forum) argues concisely in favour of greater transparency.
I agree. The WSJ’s series has probably done more than anything before it to expose, to a wide audience, just how many third parties have access to what the vast majority of web users assume are simple, two-party interactions. As I will go on to describe in my comments on the third article I’ve singled out, it is meaningless to tell users to take responsibility for “their” data if they have no idea who else routinely has access to it.
3 – Jim Harper’s article on “Why Online Tracking Isn’t Bad”, in which he takes the provocative position that ‘web users get as much as they give’. (Really? How can you tell?)
I have several issues with Jim’s analysis.
First, there’s the whole notion of having this discussion based on an idea of “my” data. Joe Andrieu has written as concise a post as you could wish for on this complex topic (here), and Bob Blakley has described “The Absurdity of ‘Owning One’s Identity'” with customary clarity here. I’m not going to try and re-hash either piece; all I will say is, if you lay down “data ownership” as the conceptual cornerstone of online privacy, the building project is doomed to be short and unsuccessful.
Second, and on a related point: Jim asserts that you can’t even claim something as “your” data if you don’t do anything to control it. Hmm.
I am unconvinced. Let’s replace the idea of “my” data with the idea of “data about me, concerning the use of which I have certain legitimate rights” (OK, it’s 63 characters instead of two, but who ever said self-determination comes at no cost… ;^); on that basis, what Jim is saying is that you don’t continue to enjoy any rights which you don’t exercise – which is nonsense. Just because I don’t wear body armour and continuously assert my right not to be murdered doesn’t mean I forfeit that right.
OK, I’m caricaturing Jim’s position a little there. Let’s take a closer analogy. If I leave my car unlocked with the keys in, I don’t forfeit the right not to have it stolen (that’s still wrong), but I do weaken the strength of my complaint if it is stolen.
On that basis it makes sense to say that, if I knowingly expose my personal data without taking appropriate steps to protect it, I have fewer grounds for complaint if it is abused by someone else. But the word ‘knowingly’ is an important one there. If I reasonably assume I am having a conversation with one other person, and I judge the privacy risk on that basis, I will come to one set of conclusions about how discreet to be in my remarks.
If, instead, I know that many people are listening, I may well temper my remarks accordingly, especially if I am discussing something sensitive.
The problem is that, in the world of online tracking and targeted advertising, I am given the impression that I am having a one-to-one conversation, while in fact not only are other parties listening, but
- it is generally impossible for me to tell how many, or who they are;
- their commercial interests are likely to be directly at odds with my interest in my own privacy and self-determination.
Under those circumstances it is hard, if not impossible, for me to form a realistic risk assessment and therefore behave appropriately to minimise risk – even assuming I had the right tools with which to try to do that.
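To make that asymmetry concrete, here is a minimal, hypothetical sketch of how one might enumerate the third-party hosts a single page quietly pulls in. The page markup and the tracker domains are invented for illustration; a real page would require fetching live HTML, and this simple check would still miss trackers loaded indirectly by scripts.

```python
# Sketch: given the HTML of a page served from a first-party domain,
# list the distinct third-party domains its embedded resources point at.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceCollector(HTMLParser):
    """Collect the hostnames of scripts, images and iframes embedded in a page."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).hostname
                    if host:
                        self.domains.add(host)

def third_parties(page_html, first_party):
    """Return the embedded resource domains that are not the first party."""
    parser = ResourceCollector()
    parser.feed(page_html)
    return sorted(d for d in parser.domains if not d.endswith(first_party))

# Toy page: what looks like a two-party visit quietly loads three other hosts.
html = """
<html><body>
  <img src="https://news.example.com/logo.png">
  <script src="https://ads.tracker-one.test/beacon.js"></script>
  <script src="https://analytics.tracker-two.test/collect.js"></script>
  <iframe src="https://widgets.tracker-three.test/like.html"></iframe>
</body></html>
"""
print(third_parties(html, "news.example.com"))
```

Even this toy example shows the shape of the problem: the user sees one site, but the page itself invites several other parties into the conversation, and nothing on screen says so.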
Jim’s article makes the point that users get a lot “free” because of the use that is made of their personal data. I think that analysis is overly simplistic. First, it is not the case that all the “free” services and information on the web are paid for by the mining of personal data. Many are, for sure, but there is also increasing sophistication in business models which offer a certain level of service without charge, offset by the subscription fees for higher-function offerings.
Second, the use of the word “free” is misguided. There may be no financial charge for a service, but that does not make it costless: the internet operates on many forms of value exchange other than financial ones. What is really happening is that some of the non-financial costs (such as compromise of the end-user’s privacy) are not being given the same weight as financial charges – at least, not in Jim’s model.
Several years ago, I conjectured that “Privacy is the new Green” – in the sense that it was regarded with some suspicion, as the preoccupation of a slightly odd group of individuals with eccentric priorities. Over time, we’ve seen environmental issues become mainstream; various kinds of cost model have emerged to allow ‘green’ factors to be taken into account alongside commercial and economic ones. The environment is (literally) a complex physical and biological ecosystem, and those models often struggle to relate it coherently to the broader web of commercial, economic and political imperatives. However, they reflect an understanding which continues to evolve, grow and mature.
We need to learn from that process as we confront the complex technical and social ecosystem of online privacy, and the broader web of commercial, economic and political imperatives which make up the internet. The WSJ’s work to raise awareness and public debate is good news, but there’s still a long way to go…
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.