by Robin Wilton | November 14, 2011 | 1 Comment
I’ll be attending the European e-Government conference in Poznan later this week (I know, I know… Poland in November… why didn’t I get the Gartner IAM Summit in San Diego, or the Symposium on the Australian Gold Coast? No matter: for you, I will make the sacrifice ;^).
The theme is “Borderless e-Government Services for Europeans”, and there is plenty on the agenda to do with e-identity, trust and privacy.
I’m looking forward to seeing the showcase of e-government projects, and finding out how those have evolved since the last conference in Malmö in 2009. If you’ve got a stand at the conference, please let me know and I’ll make sure I drop by and find out what you’ve been up to.
The conference includes a ministerial meeting, which produces a statement summarising the high-level policy goals for e-government projects over the coming years, so I’ll try to post the highlights of that here as soon as possible after they are released.
Category: Uncategorized Tags:
by Robin Wilton | November 8, 2011 | 3 Comments
Well, yesterday I mentioned two strands of the federation concept – so here’s the post about the second one.
This strand has to do with attributes (and yes, I acknowledge that “identity” is an attribute… but let’s set that to one side for the time being). In the ‘traditional’ world of assured identities, attributes are things you hang off a known identity. If you want to know how tall Joe Bloggs is, you first establish which one is Joe Bloggs and then look up the ‘height’ attribute in his records. A set of concepts and disciplines has grown up around that approach – in fact, more than one set of concepts and disciplines. You will hear attributes talked about in terms of “claims”, “assertions”, “tokens” and so on. Mostly, these are different technologists’ perspectives on a similar, if not identical conceptual core, though the technical embodiments of that core may differ. Again, the Higher Education federation community has been a pioneer in this domain, in the sense that they were the first to design a protocol (Shibboleth) which made it possible to grant someone access to resources based purely on an attribute (“is a member of this institution”) without the recipient needing to know the identity of the requester.
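To make the Shibboleth-style idea concrete, here is a minimal, purely hypothetical sketch in Python of access granted on an attribute alone. The issuer URL, attribute name and dictionary shape are all invented for illustration; a real deployment would of course use signed SAML assertions rather than plain dictionaries.

```python
# Hypothetical sketch: attribute-based access in the Shibboleth style.
# The relying party never learns who the requester is -- only that a
# trusted identity provider asserts the attribute "member" for them.

def authorize(assertion: dict, required_attribute: str, trusted_issuers: set) -> bool:
    """Grant access purely on an asserted attribute, not an identity."""
    if assertion.get("issuer") not in trusted_issuers:
        return False  # we only accept assertions from federation partners
    return required_attribute in assertion.get("attributes", ())

# An assertion carrying no identifier at all -- just an affiliation claim.
anon_assertion = {
    "issuer": "https://idp.example-university.ac.uk",
    "attributes": ["member"],
}

print(authorize(anon_assertion, "member",
                {"https://idp.example-university.ac.uk"}))  # True
```

Note that nothing in the assertion says *which* member is asking, which is exactly the point.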
The thing is, once you start exchanging clusters of attributes between federated partners, another vital discipline is thrust into the limelight: managing the metadata associated with those attributes. My colleague Ian Glazer is developing some ground-breaking ideas on this theme. So, while the management of attributes may be reasonably well understood, managing metadata is less so – whether we’re talking about a federated environment or even in-house, within a single organisation.
[Here's a brief detour to give an example of how bad most of us are at managing metadata: a basic rule of data protection is that you shouldn't collect data for one purpose and then use it for another; or that you shouldn't keep data for longer than necessary... but how many organisations actually tag information with metadata which states the purpose of collection, or the date of collection and intended retention period? Virtually none, in my experience to date.]
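For what it's worth, the kind of tagging the detour describes need not be elaborate. Here is a hypothetical Python sketch, with invented field names and dates, of data carrying its own purpose-of-collection and retention metadata:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch: tagging a stored value with the data-protection
# metadata almost nobody records -- purpose of collection, date of
# collection, and intended retention period.

@dataclass
class TaggedDatum:
    value: str
    purpose: str           # why the data was collected
    collected_on: date
    retention: timedelta   # how long we intended to keep it

    def expired(self, today: date) -> bool:
        return today > self.collected_on + self.retention

    def usable_for(self, purpose: str) -> bool:
        # Purpose limitation: don't reuse data collected for something else.
        return purpose == self.purpose

email = TaggedDatum("joe@example.com", "order-confirmation",
                    date(2011, 1, 10), timedelta(days=90))
print(email.usable_for("marketing"))    # False: different purpose
print(email.expired(date(2011, 6, 1)))  # True: past the retention period
```

The hard part, of course, is not the data structure but getting organisations to populate and honour it.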
So my second question is: what will drive organisations to manage metadata as effectively as they manage the data which, right now, they perceive as being core to their success? Because I think that if metadata isn’t already just as critical, it soon will be.
And last: where is that attribute-based strand headed? Well, in some senses it’s already there: there are organisations which don’t care who you are, but survive because of the skill with which they process your attributes (and the attributes of others like you). I can think of examples in advertising (especially behavioural/targeted), insurance, social interaction sites and law enforcement where knowing ‘who someone is’ is less critical than having an accurate picture of their attributes.
My third question, though, is about reputation. All of the things I’ve mentioned so far – assurance, attributes, metadata – are attempts to formalise things we, as social animals, understand and act on every day (however imperfectly). We want and/or need to formalise them because our lives are so technically-mediated that we need devices and applications to act as our proxies… and to act on the basis of the same concepts we ourselves rely on.
The third question is: what would make for a successful (online) reputation system?
Convincing answers to all three questions would get you an “A”.
The bonus question for a Distinction is: what could an (online) reputation system do to manage reputation post mortem?
by Robin Wilton | November 7, 2011 | 1 Comment
A couple of conversations recently have got me thinking about where we are with federation, particularly in relation to standards, and I wanted to put some thoughts out there for you to mull over/comment on/dispute. You’ll have to be a bit patient, I’m afraid: I do have questions, but the questions only have real purpose if we cover a little history first. Bear with me…
It seems to me that, in the broadest sense, there are two strands to the federation concept, and I’m going to leave the second one until my next post. In the meantime, the first strand is about identities. It started out with the very basic question “how can I [my organisation] authenticate your [organisation's] users?”. In its most basic form, identity federation is an attempt to solve the problem of passing a successful authentication status from one organisation to another. I’d suggest that that is now a comparatively mature concept, robustly implemented and with some successful large-scale deployments.
The business of passing authentication status from one organisation to another, though, raises questions of trust… it’s all very well for me to receive a message from you saying “it’s OK, I’ve authenticated Joe Bloggs successfully, you can let him in” – but my trust in that message rests on all kinds of factors which may or may not be reliable. Indeed, one of the early issues with OpenID was that, in its eagerness to simplify, it cast aside many of the trust factors underpinning that kind of assertion. The recipient had to take the “it’s OK, I’ll vouch for Joe Bloggs” assertion very much at face value.
Initiatives like the US NSTIC programme are an attempt to move federation beyond simple, inter-organisational assertions and develop a more structured way of assessing the trustworthiness of such assertions. In NSTIC, the higher education federation community and elsewhere, a big chunk of that is being codified in the form of four “Levels of Assurance”… metrics which can, in theory, be objectively assessed, quantified and passed as a parameter in an authentication message. Principally (but not exclusively), Levels of Assurance are aimed at increasing the trust one organisation can place in another organisation’s enrolment processes. There has been massive investment of effort and money in developing the concept of Levels of Assurance, and in some respects that concept, too, is a relatively mature one. It has not yet reached the stage of widespread implementation or deployment, though.
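The idea of passing an assurance level as a parameter can be sketched in a few lines. This is a hypothetical illustration only: the level names and the comparison policy below are made up, not taken from any real NSTIC or federation profile.

```python
# Hypothetical sketch: a Level of Assurance carried as a parameter in an
# authentication message, compared against the relying party's own bar.

LOA = {"loa1": 1, "loa2": 2, "loa3": 3, "loa4": 4}

def sufficient(asserted_loa: str, required_loa: str) -> bool:
    """Accept an assertion only if the asserting organisation's
    enrolment process meets or exceeds the level we require."""
    return LOA[asserted_loa] >= LOA[required_loa]

print(sufficient("loa3", "loa2"))  # True: stronger enrolment than required
print(sufficient("loa1", "loa2"))  # False: self-asserted identity won't do
```

The objectivity is in the mapping from enrolment practice to level; once that is agreed, the runtime check really is this simple.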
So where next? Well, one thing which the evolution of federation has revealed is that we’re a long way from the comfortable days when the authoritative source of “people I need to authenticate” was “the list of people in the payroll database”. Life used to be so simple. Even basic federation meant that the list of people I need to authenticate now includes people on your payroll, who may well not be on mine. In practice, it also meant ‘customers of yours who aren’t on your payroll either’…
The identity-based strand of federation activity, then, has gone from relying on other organisations’ employee/payroll databases to relying on other organisations’ customer databases, and now to a point where the person you need to let in is, quite possibly, not ‘registered’ with the organisation they came to you from. And yet, still, you have to come to a trust decision about levels of assurance and therefore levels of access.
The leading edge of identity-based federation is, I’d suggest, not a very comfortable place to perch. So the first of my questions is: where can we find reliable trust anchors in that environment?
As for the second strand… well, in the interests of keeping this post (I hope) readably short, I’m going to come to that tomorrow…
by Robin Wilton | October 27, 2011 | 1 Comment
Take a moment to think about the way you use the internet… Which characteristic of it is more important to you, as a consumer: the ability to access resources anywhere, from anywhere, or the ability to find out what services are physically in your immediate vicinity? I’d be willing to bet that it’s the former.
Put another way, if you had to choose between a ubiquitous internet which gives you global, virtual access and an internet which tells you everything about what is physically nearest to you, would you honestly prefer the latter?
In fact, when I have used geo-location myself, the useful aspect has not been that I can physically locate a restaurant when I’m near it, but that I can physically locate it from wherever I happen to be.
So, if geolocation isn’t really for my benefit, why do my devices and online services increasingly assume that it is, and enable it by default?
[This micro-rant is, of course, my own personal opinion, should in no way be construed as the view or policy of my employer, and does not reflect any research or factual findings whatsoever...] But seriously… why should we put up with being tracked everywhere and told it’s for our benefit…? I’m just askin’…
by Robin Wilton | October 26, 2011 | 1 Comment
This is a little off the usual topics, but I hope you will enjoy it anyway… and my apologies if you can’t see the content in question because of copyright or time constraints. The BBC’s Timewatch series broadcast a documentary yesterday (available here, for another 10 days or so from today) about Bletchley Park’s WW2 code-breakers… but before you say “yeah, yeah, Enigma, Turing, blah blah, hasn’t that story already been told?”, bear with me. Enigma and Turing are mentioned, yes, but this was actually about the cracking of the Lorenz encryption system, also known as “Tunny”. In fact, among other things, the programme was a useful reminder that the Enigma machine was actually a very cumbersome way to go about things. For all its mechanical brilliance, it was strictly an “off-line” encoding and decoding tool. It required a team of three per machine to operate: one to key in the cleartext, another to read and note down the output, and a third to send the ciphertext (and, of course, the corresponding team of three at the receiving end).
I’m not going to go into the details of Enigma or Lorenz here – if you want more than is in the BBC programme, I highly recommend this article by the late Tony Sale (of whom more in a moment). What I did want to do, though, was mention some of the fascinating inferences which lurked between the lines of the Tunny narrative.
For instance, at one point in the documentary it is noted that interception of Hitler’s top-echelon signal traffic helped the Allies do what we would now think of as ‘profiling’ – working out the quirks of his behaviour, and crucially, the ways in which he was likely to diverge from expected military strategies.
The story also hinted at the far broader implications of Churchill’s reported decision to destroy Bletchley’s ground-breaking electronic computing machines – the first of their kind in the world. For decades, the story was simply that Churchill had ordered the machines destroyed and their existence to be kept secret… and that it wasn’t until Tony Sale and others reconstructed one that Colossus ran again, after 60 years of oblivion. It emerges, though, that a couple were in fact kept in operation and used to underpin Cold War code-breaking efforts. In other words, Churchill’s decision wasn’t simply a matter of good secret-keeping for its own sake, but a strategic move to preserve a cryptanalysis advantage for the post-war period.
If that decision paid off over time, it was certainly not without cost. For the “unsung heroes” of the title – principally Tommy Flowers (the telecoms and computing engineer) and Bill Tutte (the mathematician and cryptanalyst) – it meant a lifetime of obscurity and zero recognition of their achievements. The documentary also describes how Flowers appears single-handedly to have conceived and designed Colossus with no encouragement or resources from his military employers… only to have them turn round and ask for four more (on the double!) once the device’s utility became apparent.
For the economy as a whole, Churchill’s choice may well have meant that the UK forfeited the opportunity to capitalise commercially, from the outset, on its pioneering work in electronic computing. Whether Churchill was conscious of balancing the security and intelligence dividend against the broader economic cost of secrecy, we may never find out.
In the late 90s I visited Bletchley Park and was lucky enough to be shown round in a group led by Tony Sale. He wore his expertise lightly and with self-effacing modesty. Indeed, it wasn’t until quite a while later that I learned who our softly-spoken guide had been, and how significant his role was in preserving the site and its unique history. It seems Bletchley Park has that effect on people.
Tommy Flowers – 1905-1998
Bill Tutte – 1917-2002
Tony Sale – 1931-2011
by Robin Wilton | August 11, 2011 | 5 Comments
Even among people I know in the privacy community, there are those who maintain a LinkedIn account even though they would not touch most of the other online networking services with a barge-pole. Somehow LinkedIn, with its business-oriented approach to building and publishing your professional biography, has been seen as less promiscuous with its members’ data than the broader ‘social’ sites.
Whether or not that trust was well placed, I think LinkedIn may just have forfeited a slab of it.
As blogger Steve Woodruff writes, here, LinkedIn have added a function to include what they call ‘social advertising’ in users’ notifications: that is, if you “recommend people and services, follow companies, or take other actions”, your name/photo may be displayed in advertising to others. The justification they give for this is that it “makes it easy for [...] members to learn about products and services that the LinkedIn network is interacting with”.
One of the principal issues, from a privacy perspective, is that this change has been introduced by default and without notifying users either that it is happening, or how they can opt out if they don’t fancy it. Neither is it clear to users who will see them in these advertisements, or whether their account privacy settings have any effect on the size of that audience. It’s not even clear, from the ‘catch-all’ description above, just which of a user’s activities might trigger the advertising function.
If that sounds familiar, it may just be because there has already been a highly visible case of something not wholly dissimilar. Do you remember the furore over Google’s “opted in” launch of the Buzz service? In that instance, the fallout included a class action settlement with a price tag of $8.5m, and a settlement agreement with the FTC under the terms of which Google had to sign up to 20 years of independent privacy audits…
Now, what LinkedIn want to publish about you via this new mechanism may differ from what Google disclosed via Buzz, but there’s a very similar set of questions at issue:
- To what extent is users’ personal data “fair game” when they sign up to an online networking service?
- What’s the right way to notify users if you’re changing the way you use such data?
- Should the default be “opt-in” or “opt-out”?
- How should you present any applicable privacy controls to the user?
Do you have a LinkedIn account (and if so, did you really think they were different from other online networking services)?
Will being opted in by default to this new feature make any difference to you?
What will you do as a consequence?
(Incidentally, Steve Woodruff’s blog post also describes how to revert to opted-out status, if that is your inclination)
by Robin Wilton | June 1, 2011 | Comments Off
As you may be aware, a 2009 revision to the EU’s e-Privacy Directive was transposed into UK law as the Privacy and Electronic Communications Regulations 2011, as of May 26th. All EU member states are required to transpose EU Directives into their own national law… though experience shows that member states vary both in their sense of urgency and sometimes in their interpretation of what a given directive should look like once transposed. The e-Privacy directive is no exception. As of today, my understanding is that only Denmark, Estonia and the UK have transposed it into their respective legal systems.
According to the UK’s Information Commissioner’s Office (ICO), the e-Privacy Directive means (at least in its UK form) that UK websites are required to obtain users’ informed consent before tracking their online behaviour through means such as cookies.
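As a concrete illustration of what “consent before tracking” amounts to at the HTTP level, here is a minimal, hypothetical Python sketch. The cookie names and values are invented, and a real site would implement this inside its web framework rather than as a bare function:

```python
# Hypothetical sketch of consent-first cookie handling: set a
# non-essential (analytics) cookie only once the visitor has agreed.

def response_headers(request_cookies: dict, consent_given: bool) -> list:
    headers = []
    # A previously recorded choice, if any, overrides the current prompt.
    if "analytics_consent" in request_cookies:
        consent_given = request_cookies["analytics_consent"] == "yes"
    if consent_given:
        # Record the consent itself, then the tracking cookie it permits.
        headers.append(("Set-Cookie", "analytics_consent=yes"))
        headers.append(("Set-Cookie", "visit_counter=1"))
    # With no consent, nothing non-essential is set at all.
    return headers

print(response_headers({}, consent_given=False))  # [] -- no tracking yet
```

Even this toy version shows the awkwardness: the site needs a cookie just to remember that you refused cookies.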
Well-meaning though this legislation may be, there are a number of practical issues with its implementation. As it has never been my intent to invade, subvert or otherwise compromise your privacy, this post is a brief indication of some of those issues, and the possible impact on you as a visitor to this blog.
First, jurisdiction: is this a UK site? Well, I’m located in the UK, and it’s my blog, so I’m going to behave as though it is and assume that PECR 2011 applies to it and to me. However, as this blog is hosted by Gartner, I don’t know where it is actually hosted, and if it is hosted in the States, it’s not entirely clear to me what impact the EU Directive is intended to have for a UK-based blogger on a US-hosted site. That said, the ICO seems pretty sure that, if I install cookies on your device as a result of your visit, I need to let you know about it and get your consent. Interestingly, when transposing the Directive into UK law, parliament deleted the word “prior” from in front of the word “consent”.
Of course, my Gartner blog is only one example. Anyone in the UK who has a blog hosted on a third-party service (Google’s Blogspot, for instance) will be in a similar position. Indeed, I suspect a lot of individuals, small/medium enterprises and organisations are in the same position: their websites may or may not be hosted in the UK, and that may give rise to some question as to whether or not PECR applies.
Second, enforcement. The UK ICO has, allegedly, been ‘pressured’ by the UK government not to enforce PECR, at least for a year while companies figure out what to do about the law. On the one hand, I have little sympathy with this: EU legislation – and its transposition – moves at a pretty normal pace for law-making, and PECR has been inching its way down the legislative alimentary canal for many months now. Its emergence should not have come as a surprise to anyone… but let’s not take that analogy any further. On the other hand, there’s no doubt that the mechanisms for doing a good privacy-respecting job of gathering user consent are sadly lacking. Of course, as the only viable candidate for deploying such mechanisms is the browser, and as the dominant browsers on the planet are all developed outside the EU, that shouldn’t come as a surprise either. One reason cited for instructing the ICO to give UK firms some breathing space in this area is that the time can be used to encourage browser manufacturers to improve the privacy controls accessible to users.
Third, practicality. I do use a counter to track visits to the blog: it’s based on Statpress. I can give you the following assurance: I never use the stats for anything other than an occasional look at how site traffic is trending over time. I sometimes look at the search terms to see what brings people to the blog, and if I get persistent nuisance comments I may look up the IP address of a specific visitor. However, I never use the tracking details for any other purpose, and never knowingly disclose them to any other entity. Nor is it my intent to do so.
By comparison, think for a moment about the commercial web hosting business: there may well be commercial hosting services who mine the stats for their subscribers’ sites so as to be able to target advertising at visitors to those sites. If you are an individual, organisation or small/medium business with a hosted site in such a position, it’s not clear to me how you can comply with PECR even if you want to – and as ‘cloud’ computing continues to grow, that situation will grow with it.
As you can doubtless see by now, there’s scope for a lot of confusion here:
- Which sites are covered by UK law, and how urgently do they have to do something… and what?
- How should users of third-party hosted services react to the legislation?
2 – if you don’t like the idea that my hosts may also be setting cookies, I can sympathise, but I doubt that they will ask for your consent via my blog. If you have a problem with that, please leave a comment, and then we can both stare at it and wonder what to do next…
So, what can we expect from the PECR 2011 amendment?
Will it immediately change the way in which UK websites track your online behaviour? No.
Will it change the way browsers handle cookies and consent? Possibly, over time.
Will it advance the debate over online privacy? I sincerely hope so, even if it’s only through increased discussion, rather than immediate improvement.
Will it resolve the tension between technologists who see the law as an inconvenient obstacle to commercial progress, and legislators who don’t understand the technology but want to be seen to be doing something? No. That, regrettably, is something we’re stuck with for the foreseeable future. Welcome to Aldous Huxley’s world.
by Robin Wilton | March 31, 2011 | 2 Comments
Amidst all the Google/FTC reaction, my Twitter board lit up today with some fascinating traffic from online privacy advocates in Pakistan about their National Database and Registration Authority. It seems NADRA launched a service in 2009 which, thanks to an agreement with three mobile network operators in the country, makes it possible for any citizen to text a Computerised National Identity Card (CNIC) number to the number 7000. In return they will get a text message containing the first name, last name and father’s name of whoever that card is registered to. Another of NADRA’s announcements about the service adds that they may disclose “any other content they deem appropriate” – and that law enforcement agencies have a separate “7001” number for their own requests.
Interestingly, beyond that split across two numbers, the scheme has no concept of a ‘valid requester’; anyone is entitled to check any CNIC number via the 7000 code. If you, as a subscriber to one of the three mobile networks, want to send random CNICs in and see who you get, nothing will stop you. It will cost you something – though I can’t be precise about how much. In principle, the charge is 15 rupees (about 12 UK pence or 18 US cents) per message. In practice, one local privacy advocate has so far received 42 text messages, all about his own CNIC number, in response to a single request. He is a little miffed…
Billing issues aside, you might wonder why it’s appropriate for every citizen to have access to such a mechanism. Although it was launched as an ID card validation service, it somehow manages to go beyond that (in the sense that you can enquire against the database even if a card is not present) while at the same time falling short (it doesn’t actually validate the card, if a card is present… it just tells you what name is associated with that CNIC).
If I have no legal or contractual right to ask you for your CNIC, why should I have the ability to check your CNIC and be told your name (let alone your father’s name)? I regret I am unable to answer that question on NADRA’s behalf – though if I find out, I’ll be glad to let you know.
There is a slightly more pernicious side to this, apparently. I’m told that some forms and registration processes intentionally just ask Pakistani citizens for their CNIC number (but not their name or other personal details), just as a general indication of citizenship. That seems like a nice, privacy-respecting step – unless it is trivial for the collector of that number to ping the central database and fill in the blanks.
It’s tempting to sit back and say “well, something like that could never happen in the UK, could it?”. Couldn’t it? To my mind, this example just goes to show how things that are technically straightforward can give rise to the need for a large and potentially complicated governance layer, including not just technical controls to establish properly authorised access, but also procedural and legal controls to ensure that those who do have authorised access cannot abuse that access, intentionally or otherwise.
by Robin Wilton | March 31, 2011 | Comments Off
The US Federal Trade Commission (FTC) has announced (March 30th) the details of a settlement order with Google over alleged privacy violations arising from the launch of the Buzz service. I wasn’t actually a Gartner analyst when Buzz was launched, but by a strange coincidence I was attending a European conference on Trust in the Information Society – at which a speaker that morning had been Alma Whitten, since appointed as Google’s Director of Privacy. Alma actually did a good job on stage that day, but she can have had little idea of the hornet’s nest which was starting to buzz [sorry...] as a result of her employer’s most recent (at the time) foray into networked interaction.
Back at the time (11th Feb 2010, to be exact) I blogged my initial reactions to Buzz here. I have included some subsequent blog posts for completeness, but the 11th Feb entry is the one in question. What most struck and irritated me at the time was that even though I said “no thanks, just take me to my inbox”, Google went ahead and turned Buzz on anyway. This opt-in – not only by default, but also in the face of an explicit indication that I did not want it – seemed to me to violate my immediate preferences and potentially my subsequent privacy.
The FTC agrees. This specific point is one of the elements of today’s enforcement notice against Google. They further find that, as a consequence of the implicit opt-in, Google was using personal information for purposes other than those stated at the time of collection, and therefore was failing to meet its obligations under the EU/US Safe Harbour agreement.
The FTC also says that Google did not inform users adequately about the extent to which they were enrolled in Buzz, the effect that would have in terms of exposing personal information to others, or the controls for limiting such exposure.
Although Google are the subject of the current FTC ruling, they were not the only example we considered in our paper. The three points set out above do not represent an unattainably high bar, but the research so far suggests that many organisations still have plenty of scope for improvement. Let’s hope that the FTC’s action encourages them to try.
by Robin Wilton | March 24, 2011 | 1 Comment
You have probably seen those TV advertisements in which the alleged problem of “household odours” can apparently be dealt with by means of a simple spray, or a plug-in dispenser of dubious perfume – rather than anything time-consuming and inconvenient like laundering or better personal hygiene… Nevertheless, there’s obviously a market for those products, even if I’m not part of it.
I wonder if there isn’t a somewhat similar attitude to PKI products, too. One blog comment I saw earlier today said there had been ‘a rush to the bottom’ in PKI service assurance, with critical (but inconvenient) factors being disregarded in favour of convenience.
These reflections are prompted by a topic which has had much of the security community a-Twitter over the last couple of days: the revelation that a ‘Reseller Authority’ for the Comodo CA service had been compromised. As a result, a third party had managed to get valid certificates which would allow it to identify itself (falsely) as Microsoft Live, Yahoo!, Google, Mozilla or Skype.
There is clearly a serious issue here – one of the basic mechanisms for users and servers to identify one another online was fundamentally compromised. I’m going to stick my neck out, though, and say that this incident doesn’t tell us anything new about the problem. Unfortunately, it doesn’t suggest any new solutions yet, either.
What, after all, have the last 15 years taught us about PKI technology?
1 – Issuing certificates is, technically, not that hard.
1(a) – Making sure you issue certificates to the correct entity is harder, especially if the entity is a human being and/or is remote from the point of issue. Much of the effort and cost of rolling out PKI-based authentication systems actually comes from the messy process of establishing who a given human being is, so that you can reflect that in the certificates you issue to them. The same issues crop up when trying to establish the identity of a corporation, not least because that usually entails first establishing the identity of a given human being (see above) and then establishing that that human being really is a duly appointed representative of the corporation in question.
2 – Revoking a certificate is, technically, not that hard.
2(a) – Making sure the revocation is publicised is harder. Certificate validation services need there to be a trustworthy chain of ‘trusted roots’, and access to well-maintained Certificate Revocation List (CRL) or Online Certificate Status Protocol (OCSP) services. All of those have their practical drawbacks and deployment difficulties.
2(b) – Making sure the publicised revocation is correctly acted on is really hard. What do you, as a user, do if your browser warns you that the site you want to visit has an invalid certificate? Bearing in mind the ‘trusted root’ requirement mentioned above, it’s a sad fact that many of those ‘invalid certificate’ warnings are actually the result of organisations self-signing the certificate for a site – because that’s less hassle than hooking your certificate into a chain with a third-party trusted root. Under those specific circumstances, backing away because of the browser warning might actually not be the rational thing to do.
3 – Establishing a PKI-enabled secure session between A and B is, technically, not that hard.
3(a) – Training users to expect a secure session is harder.
3(b) – Training users to deal with the failure to establish a secure session is really hard. (See 2b above) This is by no means entirely the users’ fault; after all, if users were exposed to the grisly innards of SSL sessions they would back away hastily. Arguably, SSL has only achieved widespread adoption to the extent that it has been progressively hidden from users. As a consequence, users see only a ‘metaphor’ for SSL (a comforting little padlock icon) if they see anything at all. The trouble with metaphors is that they don’t just insulate you from the inconvenience of operation: they also insulate you from the discomfort of failure – and in life, discomfort is often what educates us and changes our behaviour. Even that metaphor is often weakened in the interests of “usability”: see this site for an interesting experiment in how browsers react to revoked certificates.
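The gap between points 2 and 2(a)/2(b) can be sketched in a few lines. This is a hypothetical Python illustration only: the in-memory set below stands in for a real CRL or OCSP service, and the CA name and serial numbers are invented.

```python
# Hypothetical sketch: recording a revocation is trivial; it only helps
# if verifiers actually consult the list and honour the trust chain.

revoked_serials = set()

def revoke(serial: str) -> None:
    """The CA side: revoking a certificate is, technically, not that hard."""
    revoked_serials.add(serial)

def validate(cert: dict, trusted_roots: set, crl: set) -> bool:
    """The verifier side: chain of trust AND revocation status must both hold."""
    if cert["issuer"] not in trusted_roots:
        return False  # e.g. a self-signed certificate
    return cert["serial"] not in crl

cert = {"serial": "0x2f", "issuer": "Example Root CA"}
roots = {"Example Root CA"}

print(validate(cert, roots, revoked_serials))  # True: chained, not revoked
revoke("0x2f")
print(validate(cert, roots, revoked_serials))  # False: revocation acted on
```

Everything hard about PKI lives outside this sketch: distributing the roots, publishing the list in time, and persuading users not to click past the resulting warning.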
That timescale of 15 years may seem like an eon in this modern world, but as it happens, there’s a 1997 paper by Don Davis which pretty much describes the issues I’ve raised above. Don predicts a slightly different balance between user responsibility and infrastructure function, but in retrospect many of the causes of the current incident can be traced to shifting the problem from the user to the infrastructure without necessarily solving it. For instance, there are those who currently suggest that, if disciplines like entity validation and certificate status checking are to be moved away from the user, those disciplines should be strengthened at the network level – for example, by security enhancements to DNS (DNSSEC).
I wish I could say that there’s a quick and simple fix to the problems which this incident has exposed, but I can’t.
Yes, you can upgrade your browser to the latest version, so that it includes the blacklisted certificates – but that’s no use if you respond inappropriately to ‘invalid certificate’ warnings, and the browser isn’t going to give you a lot of advice about what ‘inappropriately’ means. (In fact, to compound the irony, when I upgraded my browser one of the first messages I got was to tell me that a certificate management extension I had installed is not yet supported in the new browser version… so even if you try to be virtuous the odds can be stacked against you).
And yes, the compromise of a particular registration process has created specific and fundamental issues – but those issues are symptoms of a wider set of problems which, frankly, don’t have a quick and simple fix. In particular, end user actions can only mitigate part of the problem; for the rest, users have to depend on third parties improving their practices and technology.
The question is, are we just going to reach for the “room freshener”, or is it time to insist that our landlords deep clean the furniture and fabrics, while we consider our own personal hygiene regimes? While you ponder that, if you’ll excuse me, I’m just off to vacuum the sitting room.