by Robin Wilton | April 25, 2012 | 1 Comment
Today, the UK Cabinet Office (responsible for cross-government administrative strategy and operations) announced the members of the newly-formed Digital Advisory Board. The Board’s role will be to keep an eye on the government’s attempts to deliver better services digitally, and to advise it on how to deliver the highest possible quality of user experience.
No-one wants sub-optimal user experience in their online interactions with government services.
I won't list or categorise the Board's members one by one… there are only a dozen of them, and a quick glance at the press release will give you a pretty good impression. The impression it gives me is this: there are plenty of service delivery experts there, whether commercial ones such as Lloyds Banking Group, Marks and Spencer or Condé Nast, or community-oriented ones like Will Perrin of "Talk About Local", and the London Olympics organisers. There are also nods to academia, telecomms and social networking. They look like an intelligent, qualified and motivated group of individuals. And yet (call me a worrier, but…) I'm uneasy.
The direction of travel, in technology and Internet service delivery, is clear: all else being equal, it is easier to deliver services online than offline; it is easier to accumulate data en masse than to discard it selectively; it is easier to aggregate data than to segregate it; it is easier to publish data than to keep it secret. Those are the facts of digital life.
However, sometimes our interests, as consumers and citizens, are actually best served if our data is kept offline, discarded when not needed, strictly segregated by purpose, and if necessary encrypted. This, of course, requires something that resists the otherwise inevitable direction of travel. The only things that can exert such counterpressure are:
1 – the normal inertia and friction of any transactional system (and the Board’s Chair, Martha Lane Fox, made her name in realising the internet’s potential to remove those);
2 – policy and governance.
On the strength of its skills profile, this Board looks set to continue Martha’s work, by identifying and removing inertia and friction in the e-delivery of government services. What I don’t see there are the advocates of citizen and consumer rights (including, but not limited to privacy), or the specialists in translating between privacy-related policy statements and technological reality. It is not clear to me who, on the Digital Advisory Board, will see “privacy by design” as a priority – and experience has taught us that when privacy is not seen as a design objective from the outset, bolting it on as an after-thought is both ineffective and inefficient.
I hope the DAB recognises, from the outset, that “quality user experience” does not mean “ensuring the user is not scared off by any mention of privacy or consent”. There are already too many service providers who take that approach. We don’t need e-government to adopt it too.
by Robin Wilton | April 20, 2012 | Comments Off
A couple of things I’ve heard recently make me wonder whether the word “trust” is being used in public discourse in ways that fundamentally undermine its meaning. Consider the following short phrases:
- “Trust… but verify”
- “Trusted traveller programme”
Neither of these usages is actually that new. The first was made famous by that renowned “communicator”, the late President Reagan, and the second – though more recently adopted by the TSA – is the kind of phrase that has been applied generically to various air travel and immigration “fast track” initiatives over the last decade or so.
To me, Reagan’s phrase encapsulates the paradox. After all, if you trust someone, why would you feel the need to check them out? And if you only ‘trust’ someone because you’ve rummaged through their sock drawer looking for evidence of hostile intent, surely what you have demonstrated is that you don’t really trust them – you only trust such evidence as you are able to gather, one way or another.
Don’t get me wrong: I’m not saying that, for instance, there is no place for intelligence-led security measures, or that there are no circumstances under which people’s sock drawers should be subjected to a good rummage. All I’m saying is that we should resist being fooled into thinking that those come under the heading of trust. So, if I am saying that I know what trust is not (I hear you say…), what do I think it is? Here’s a candidate definition that I have found pretty useful so far:
“Trust is the belief that someone will act in (or at least not contrary to) your interest, even if they are in a position (and perhaps are even motivated) to do otherwise”.
The word “belief” is important. Like any belief, trust in someone can be well- or ill-founded. But trust is founded on some degree of uncertainty. If you know perfectly well that someone cannot hit you (for instance, because they are locked in another room), then the relationship you have with them is not, in that regard, one of trust: it is one of knowledge.
Why does this matter?
Well, trust has never been more of a factor than in today’s online world. It is intimately woven into any discussion of authentication, digital identity, online anonymity/pseudonymity, privacy, copyright and intellectual property management, net neutrality, internet access, deep packet inspection, CCTV surveillance, Passenger Name Records, web tracking and so on and so forth. I’m not sure an exhaustive list is even possible. In many of these areas, the roots of “trust” lie – we are told – in law enforcement access, accountability mechanisms, auditability and the like. But I’d argue that that approach does not reflect trust at all; it reflects a desire to base decisions on knowledge, not belief. A wish to remove uncertainty from risk-related decisions. Ultimately, a wish to predict the individual’s future behaviour based on exhaustive knowledge of their past behaviour.
It is rational to mitigate risk by seeking to understand it.
Just stop trying to kid me that it’s “trust”.
by Robin Wilton | April 13, 2012 | 3 Comments
Last June I blogged about the transposition into UK law of the EU’s e-Privacy Directive, and noted that although the corresponding UK law came into force in May 2011, the UK’s privacy regulator gave firms a further year to “demonstrate progress towards compliance”. That year is almost up. The ICO has now indicated that there is little risk of companies facing any enforcement action if they fail to comply with the law. To be a little more precise, the ICO’s position appears to be this:
- The EU Directive distinguished between “technical” cookies, which were loosely defined as “those essential to the operation of a website”… such as cookies which reflect the status of your shopping basket and items in it, and “tracking” cookies, which do not contribute to the operation of the site, but serve to gather data about the site’s visitors.
- “Technical” cookies are acceptable, under the EU Directive; “tracking” cookies may only be used if the user’s consent has been indicated in some way (with all the potential practical problems that might entail).
- The UK Privacy and Electronic Communications Regulation (and I can only apologise for the unfortunate US English connotations of that acronym), “does not distinguish between cookies used for analytical activities and those used for other purposes”.
- As a result, the ICO “does not consider analytical cookies fall within the ‘strictly necessary’ exception criteria” – so user consent is required.
- BUT… the ICO is less likely to take formal action – even for analytical cookies – if they are first-party cookies [see Mike O'Neill's comment below, and my reply], if they demonstrate “a low level of intrusiveness”**, or there is a low level of risk of harm to individuals.
To cut it short: it’s not hard to characterise the ICO’s position as “we’re going to qualify any possible intervention so rigorously that, in the end, we won’t actually do anything about cookie use”.
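The distinction the ICO is drawing can be sketched in a few lines of code. This is purely illustrative, assuming nothing about any real consent-management framework: the cookie names and the function below are hypothetical, and real sites would need far more nuance (first-party vs third-party, intrusiveness, and so on).

```python
# Hypothetical sketch of the "strictly necessary" exception described above:
# "technical" cookies may be set freely, while non-essential (e.g. analytics)
# cookies require the user's prior consent.

ESSENTIAL_COOKIES = {"session_id", "basket"}        # "technical": exempt
NON_ESSENTIAL_COOKIES = {"analytics_id", "ad_tracker"}  # need consent

def cookies_to_set(requested, user_has_consented):
    """Return only the cookies the site may lawfully set."""
    allowed = set()
    for name in requested:
        if name in ESSENTIAL_COOKIES:
            allowed.add(name)       # essential to the service: always allowed
        elif user_has_consented:
            allowed.add(name)       # non-essential: consent was obtained
        # otherwise the cookie is silently dropped until the user opts in
    return allowed

print(cookies_to_set({"session_id", "analytics_id"}, False))
# → {'session_id'}  (the analytics cookie is dropped without consent)
```

The point of the sketch is how little of the hard part it captures: the law's difficulty is precisely in deciding which names belong in which set, and in obtaining consent that is meaningful rather than a reflexive click.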
What should one conclude?
First, let’s give the ICO the benefit of the doubt and ascribe their semi-recumbent posture to pragmatism rather than spinelessness. Arguably, the EU’s Directive on cookies was a well-intentioned piece of legislation, but hopelessly impractical because it depended entirely for its effectiveness on factors outside the EU’s control (the willingness of browser manufacturers to implement meaningful controls). On that basis, the ICO can maintain that it has more important things to do than find ways of shoring up someone else’s fundamentally flawed legislative initiatives.
Second, if you are a company with a possible compliance obligation under the UK’s PECR law, it looks like you can score the risk of UK regulatory action as “low”… though if you choose to do absolutely nothing about compliance, don’t blame me if you suffer reputational damage as a result. You should also keep an eye out for anything the ICO subsequently says about third-party cookies, because so far the “wiggle room” only extends to first-party ones.
Third, where does this leave the European Commission? Well, on one hand, they are still dependent on browser manufacturers’ progress towards a robust “Do Not Track” implementation – but it is perhaps now clear to the Commission that a unilateral attempt to impose a cookie law under those circumstances was unrealistic. On the other hand, the shaky status of the Directive should also remind the Commission how risky it is to try and legislate at the level of specific technical mechanisms, rather than defining a clear policy objective and leaving the technical details to the technicians. Viviane Reding did the Directive no favours when she explained that it was based on the distinction between “technical” cookies [nice] and “spy” cookies [nasty]. Framing the discussion in those terms makes life almost impossible for the regulators, does nothing for the privacy interests of the user, and gives malevolent online services a free run at any privacy-hostile tracking technique that is not cookie-based.
Cookie regulation may have seemed like a temptingly achievable target, but I think the Commission needs to acknowledge the following problems:
- It was a bad idea to frame privacy legislation with reference to a specific technical mechanism, rather than relevant privacy-related practices on the part of the service providers;
- It was a bad idea to leave an EU law so much at the mercy of critical success factors outside EU control, without seeking some form of consensus before drafting it;
- It was a bad idea to focus so closely on cookies that other privacy-hostile tracking techniques pass unnoticed.
Let’s be optimistic, though: if the Commission can learn from the shortcomings of this legislative initiative and maintain the political will to try again, it could do better.
**You might, of course, think that “a low level of intrusiveness” was exactly the problem that the Directive sought to address in the first place, by insisting that users be given the opportunity to express clear, informed, unambiguous, prior consent. I couldn’t possibly comment.
by Robin Wilton | March 5, 2012 | 2 Comments
The second reason their response is significant is that users need to decide whether Google’s user privacy posture is genuinely changing for the better, as they would have us believe, or whether their consolidation of user data across multiple services represents a privacy threat. And by “users”, let me be clear that I include the growing number of businesses that use services like Google Docs…
Contrast that with another privacy-related change Google has made recently. In November 2011 Google announced that domestic wi-fi networks could opt out of its Street View geolocation database (you remember… the one which was populated by Google’s war-driving camera cars, sniffing packets of network traffic all the while…).
Here’s a blog post about why that was a bad thing at the time, and here’s one from last November questioning whether the introduction of an opt-out mechanism was good enough.
My question is this: were you aware that this wi-fi opt-out existed? It doesn’t feel to me like Google made much noise about it… and though I could guess why, I don’t have any inside information about that. Objectively, the point is that it is not in Google’s interest for users to have an effective opt-out mechanism.
It’s interesting, too, to see how Google describe the opt-out:
“An SSID-based opt out substantially decreases the risk of abusive or fraudulent opt-out requests – for example, it helps protect against others opting out your access point without permission. This method of opt out can also be seen by other location service providers, and we hope the industry will respect the “_nomap” tag. This would benefit users by saving them the hassle of having to request several separate opt outs.”
I have a few simple observations to make about this:
1 – You know what else would save users the hassle of requesting several opt-outs? An “opted-out by default” policy.
2 – You know what would be simpler than hoping that ‘the industry’ will respect the “_nomap” kludge? An “opted-out by default” policy, and a law that says you’re not entitled to intercept traffic on domestic networks. Oh, wait, we already have the latter (in the UK, at least… it’s called RIPA).
3 – An SSID-based opt-out ‘prevents others from opting my network out without my permission’? Whuh… This is so about-face that I keep looking around to make sure I didn’t step through the looking-glass without noticing, and to check that giant chess pieces aren’t sneaking up on me from behind. Really, Google – you’re too kind. I hadn’t considered the possibility that fiendish hackers might maliciously opt my network out of having its SSID sniffed by a third party. Thank goodness you’re on the case. Again – you know what would be a simpler and more effective defence against malicious third-party opt-out attacks? An “opted-out by default” approach…
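To make the inversion concrete, here is a minimal sketch of the "_nomap" convention Google describes above: an access point opts out of the geolocation database by appending "_nomap" to its SSID. The `harvest()` function and the scan data are hypothetical, invented for illustration; only the suffix convention itself comes from Google's announcement.

```python
# Sketch of the "_nomap" SSID opt-out convention (names below are invented).

def is_opted_out(ssid: str) -> bool:
    """True if the network has signalled opt-out via the _nomap suffix."""
    return ssid.endswith("_nomap")

def harvest(scanned_ssids):
    """Keep only networks that have NOT opted out.

    Note the inversion the post complains about: inclusion in the
    database is the default, and the burden falls on each network
    owner to rename their SSID.
    """
    return [s for s in scanned_ssids if not is_opted_out(s)]

print(harvest(["HomeNet", "CoffeeShop_nomap", "Office"]))
# → ['HomeNet', 'Office']
```

An "opted-out by default" policy would simply invert the filter: networks would be excluded unless they had positively signalled consent.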
So do Google really ‘get’ privacy yet? Sorry, but I’m not convinced. On that basis, I think the aggregation of data across their multiple services represents a privacy risk which individual and corporate users should not ignore. But what do you think?
by Robin Wilton | January 27, 2012 | Comments Off
My colleague Avivah Litan has given her insightful and thought-provoking read on the recent US Supreme Court decision here.
Avivah correctly identifies the “opt-in”/”opt-out” dichotomy as a critical element of the discussion. Tracking for law enforcement purposes needs, of course, to be set aside from the debate over user consent… but outside law enforcement – whether in the commercial domain or for public sector service delivery – I strongly believe that there should always be an opt-out available. In fact, my personal opinion is that “opted out” should always be the default, with an opt-in choice if the user wishes.
Of course, if there’s an opt-out, some of the people who exercise it will be virtuous, and some will not. There are those who take the old “if you have nothing to fear, you have nothing to hide” view – but as anyone who has followed my blogging will know, that’s a view that I find misguided, harmful and pernicious. Avivah’s distinction between law enforcement and the commercial sector helps indicate one of the reasons why: it is clearly not the case that everything the law enforcement authorities know about me should, of right, be made public. Similarly, there are things which commercial service providers know about me which law enforcement authorities have no business knowing. The “nothing to hide, nothing to fear” brigade cannot cope with the idea that those who may seek to harm me can do so whether I have anything to hide or not.
In US v Jones, the Supreme Court was explicit about the citizen’s legitimate expectation of privacy. I tend to take a strong line on that. The ‘default setting’ is not that if I have nothing to hide, I have nothing to fear… it is that unless you have a provable, legitimate reason for doing so, you have no business meddling in my affairs.
by Robin Wilton | January 25, 2012 | Comments Off
Well, it may have been a quiet week in Lake Wobegon, but in the privacy and policy domain it has been quite the opposite. Wikipedia and a number of other sites went dark in protest against SOPA/PIPA; the Feds took down the MegaUpload file-sharing site, alleging violation of piracy laws; Anonymous retaliated by taking down a slew of SOPA supporters; and the European Commission has just announced its new, pan-European Data Protection Regulation (link to PDF version).
But let’s not talk about that… let’s talk about the 4th Amendment. For those on the right hand side of the Atlantic, the 4th Amendment is the part of the US Constitution which establishes the individual’s “right to be secure from all unreasonable searches, and seizures of his person, his houses, his papers, and all his possessions”. Like any constitutional law, it has been subject to a great deal of interpretation in the 221 years since it was ratified, not least as the law tries to keep pace with new ways of “searching” and “seizing”.
The 4th Amendment is often considered to be the closest thing US citizens have to a privacy right, and it generally establishes the need for any violation of that right to be backed up by a judicial warrant. Of the current Supreme Court, Justice Antonin Scalia is the one who most commonly dissents from this view, holding that the “reasonableness” test can be satisfied without a warrant. However, in a judgement this week Justice Scalia joined with his peers in finding unanimously in favour of the need for a warrant.
The case at issue was US vs. Jones, and the Supreme Court ruled that US law enforcement authorities had violated Mr Jones’ 4th Amendment rights by fixing a GPS tracker to his wife’s car, and using it to track his movements. Mr Jones was, at that time, suspected of being involved in drug dealing.
The judges ruled that, in attaching the device to Jones’ car, the police had physically intruded into “a constitutionally protected area”, and that this ran counter to a legitimate expectation of privacy in that respect. Justice Sotomayor and Justice Alito both drew attention to the issues of keeping 4th Amendment protections in step with rapid technological change – not least, the fact that so many of our personal actions are tracked by commercial websites and hand-held devices.
The court held back from ruling on what other means of surveillance might violate the 4th Amendment rights, though it is clearly something they thought about in their review of prior case law. As a result, the two aspects I mentioned above (physical intrusion, and expectation of privacy) are very likely to be the basis of future decisions, if it should come to questions of whether, say, traffic camera data can be used to track a suspected criminal. There would be a strong argument that the installation and operation of traffic cameras does not involve intrusion into a constitutionally protected area, and that it does not infringe on an expectation of privacy.
Whether that will extend into the online domain of web tracking remains to be seen.
So much for the 4th Amendment… I’ll see your 4th and raise you one: in a quite separate case, a judge in Denver ruled that an individual could not claim 5th Amendment protection from a law enforcement request to decrypt data on her laptop. (The 5th Amendment is the one establishing, among other things, an individual’s right to refuse to give information which might incriminate them).
In this instance, the suspect declined to decrypt the contents of her hard drive on the grounds that it might incriminate her. The judge held that, even if the police did not know the specific contents of a specific document, the fact of its existence was a foregone conclusion, and that therefore the 5th Amendment did not apply.
I have to admit, I don’t quite follow that chain of reasoning, but like I say, the law is having a job keeping pace with technological change. It has been an interesting week, then… and I don’t see the pace of change slowing down any time soon.
by Robin Wilton | January 13, 2012 | Comments Off
Neelie Kroes is Vice-President of the European Commission, and also the Digital Agenda portfolio-holder (in which role she is also responsible for the Commission’s policy direction on cloud computing). Ms Kroes took up these posts in the so-called “Barroso 2” Commission; prior to that re-shuffle, she was the Competition Commissioner. In that role she oversaw the competition case against Microsoft, which resulted in a €497m fine for the company and the enforced release of interoperability documentation relating to Windows.
I mention this background to establish that this is someone well versed in the disciplines of policy formation, strategy-setting, and the practicalities of regulating technology industries.
Ms Kroes has been blogging today about the forthcoming review of the EU Data Protection Directive… She also blogged last June about the review of the ePrivacy Directive, and rightly sees the two as being intimately connected. In terms of policy formulation and direction, I think that’s a great thing. In terms of execution, it concerns me, and here’s why; today’s blog post ends with the following up-beat assessment:
“And I am confident that the Commission will propose “technology savvy” protection for all of us – rules which protect our rights, while taking full account of both the risks and opportunities of the digital age.”
That’s a worthy goal, but the previous experience of the ePrivacy Directive and its measures on cookie regulation give us legitimate grounds to wonder whether the Commission has the skills to achieve it. Let’s not forget that the cookie directive sought to distinguish between “spy” cookies (which are bad, and should not be allowed without the user’s prior and informed consent) and “technical” cookies (which are OK). This, among other things, led one UK IT law specialist to describe the legislation as “breathtakingly stupid”. In the interests of impartiality I, of course, couldn’t possibly comment… but if you know of a browser that allows you to set separate preferences for “spy” and “technical” cookies, please do point me at it.
As well as establishing one exemption for “technical” cookies (whatever they might eventually turn out to be) the Directive also qualified the need to seek informed consent by saying that this should be done “Where it is technically possible and effective…” – a loophole through which a competent corporate lawyer could probably back a bus while sipping a skinny latte.
I should make clear that “spy” vs “technical” distinction came from one of the other Commissioners, not Ms Kroes. I’m just rather worried that, with the best of intentions, she may be writing “technical savvy-ness” cheques her colleagues can’t cash.
Specifically in terms of data protection and privacy, here are some of the challenges which face the Commission’s legislators. I think it’s safe to say that current laws are:
- mediocre at successfully handling privacy detriment arising out of well-defined lists of PII;
- poor at providing protection against abuse of data which is ‘about’ you but not personally identifiable (see the mess over Google Streetview, wireless MAC addresses and geo-location);
- clueless about how to address the privacy detriments arising out of third party aggregation and data mining;
- ineffective at providing redress in cross-border cases;
- equally clueless about how to factor “potential harm” into regulation that encourages better privacy behaviour.
If those sound vaguely familiar… well, it’s because I’ve just recycled some bullet points from an August 2010 blog post, and the legislation doesn’t really seem to have moved on. The proposed review of the Data Protection Directive has just been further postponed because of “negative feedback” about the leaked draft version which surfaced in December. It’s good that feedback has had a visible effect on the policy-making process, but if the concerns aren’t acted on and new, realistic proposals brought forth pretty soon, another 18 months will go by without effective legislation. That would be bad for commerce, bad for privacy, and bad for the credibility of the legislative process.
by Robin Wilton | January 4, 2012 | 3 Comments
Happy New Year!
On the assumption that you probably had enough to do in December, what with the usual year-end rush and the Christmas break, I didn’t blog about the two new reports of mine that came out in the middle of the month. You can get to them if you have an IT1 subscription…
The first is about Simplifying Cross-Border Privacy Compliance; in it, I build on last year’s Catalyst presentation to develop a couple of simple models for analysing privacy risk. The goal is to help organisations define a manageably simple framework for coping with the differing privacy regimes in which they may do business.
The second paper, “Electronic Signature: Is It Safe To Break The Rules Yet?”, reflects my research into how the market for digital signature has evolved since that technology first hit the market in the mid to late 90s. Although most digital signature laws remain unchanged since their introduction around the turn of the millennium, electronic signature, as a business tool, is now a lot more varied and nuanced than “digital signature using public key technology”. This paper looks at the implications of a more pragmatic, business-oriented approach.
As ever, if you have any comments or questions, I’d welcome feedback via the blog (or by email if you’re shy… ;^)
by Robin Wilton | December 2, 2011 | Comments Off
Taxi cabs in Oxford may still have to apply for the rather archaically-named Hackney Carriage Licence, but Oxford City Council reckons it is keeping up with the times: it has decided to require all taxi-drivers in the city to install CCTV in their cabs, as a condition of receiving their licence. The cameras, which must capture both audio and video, will run permanently and must store data for 28 days in case police decide they would like retrospective access to the recording.
Privacy advocates are, not surprisingly, questioning the impact of such a move: Big Brother Watch said it showed “total disregard for civil liberties”.
Oxford City Council has, presumably, done a risk assessment of this proposal (after all, a council spokesperson is quoted as saying “The risk of intrusion into private conversations has to be balanced against the interests of public safety, both of passengers and drivers.”). It would be fascinating to see the details of that risk assessment.
Interestingly, the UK Information Commissioner’s Office (ICO) Code of Practice on CCTV happens to make explicit mention of the use of audio recording in taxi cabs. It prefaces its comment with a clear warning on audio capture (my emphasis):
“CCTV must not be used to record conversations between members of the public as this is highly intrusive and unlikely to be justified. You should choose a system without this facility if possible. If your system comes equipped with a sound recording facility then you should turn this off or disable it in some other way.”
It goes on to say:
“There are limited circumstances in which audio recording may be justified, subject to sufficient safeguards. These could include:
- Where recording is triggered due to a specific threat, e.g. a ‘panic button’ in a taxi cab.”
“In the limited circumstances where audio recording is justified, signs must make it very clear that audio recording is being or may be carried out.”
That looks to me as though it rules out the option of a permanent audio recording.
The CCTV Code of Practice is itself subsidiary to the Data Protection Act, which includes the following over-arching principle:
“3. Personal data shall be adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed.”
It’s going to be very tricky for Oxford City Council to prove compliance with the Proportionality Principle, because another council (Monmouthshire) has come to precisely the opposite conclusion about whether taxi drivers should record what goes on in their cabs.
by Robin Wilton | November 21, 2011 | Comments Off
Well, it seems I was a little optimistic about relaying the Ministers’ Statement from the e-Gov conference: they did draft one at the conclusion of the Ministerial meeting, but it is going to be published along with a broader release from the EU Council of Ministers (Telecomms subset). I’ve been given the target date for that release, so stay tuned and I’ll let you know as soon as I hear more.
In the meantime, I’ll also try and find a moment to blog some of the eID and Identity Assurance news I picked up at the conference, where there was lots to see and hear. Very interesting stories from Austria, Sweden, Belgium, Slovenia and elsewhere.
There is plenty going on in this domain. No-one was shying away from the current economic and political turbulence; indeed most of the people I spoke to felt that a robust, viable and privacy-respecting approach to eID and Identity Assurance was going to be a vital part of e-Government strategies during the challenging period ahead of us. As I say, stay tuned…