Andrew Walls

A member of the Gartner Blog Network

Andrew Walls
Research Director
5 years at Gartner
35 years IT industry

Andrew Walls is a research director in Gartner Research, where he specializes in information security practices, tools and markets in social media, enterprise governance, awareness/communications, directories, investigations and security program management.

Google, EU Privacy ruling and Privacy

by Andrew Walls  |  May 19, 2014  |  Comments Off

Last week, quite a number of headlines were generated by a court ruling concerning the responsibility of a search engine provider to filter search hits that conflict with an individual’s privacy requirements. Although the details of implementation and enforcement will take some time to sort out, on the face of it this judgement appears to create serious privacy risks rather than mitigate them.

Here’s my logic:
1- Jacques notifies Google and other search providers of a list of URLs that lead to content violating a court-supported privacy assertion.
2- Google (and others) must then filter those URLs out of any search results that are presented when someone searches for ‘Jacques.’
3- Google then asks ‘which Jacques are you?’
4- Jacques, in order to maintain his privacy, must then prove his identity to Google with sufficient specificity that Google can act with due care and diligence in filtering search results.
5- Jacques hands over a lot of personal details to the search engine providers to support their identity validation process.

Admittedly, this is a crude and premature analysis, but anonymity is part of the infrastructure of the internet. If we want to filter content based on identity, we will need a way of validating the identity of the data subject and verifying that the specified URLs contain content that is actually about that particular data subject and not some other Jacques. In other words, in order to exert strong control over the distribution of content that is already on the internet, we will need strong identity validation, which relies on, yep, you guessed it, more extensive sharing of PII.
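To make that dependency concrete, here is a minimal, hypothetical sketch in Python. Every class, field and method name below is invented for illustration; no real search provider exposes an interface like this. The point is simply that the provider can only suppress the right URLs for the right Jacques if it first collects, and then retains, enough PII to tell him apart from every other Jacques.

from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class VerifiedIdentity:
    # PII the data subject must hand over so the provider can tell
    # "this Jacques" apart from every other Jacques on the internet.
    full_name: str
    date_of_birth: str
    national_id: str
    home_city: str

@dataclass
class RemovalRequest:
    subject: VerifiedIdentity
    urls: set[str] = field(default_factory=set)

class SearchProvider:
    def __init__(self) -> None:
        self.removal_requests: list[RemovalRequest] = []

    def register_request(self, request: RemovalRequest) -> None:
        # To keep honouring the request, the provider now has to retain
        # the subject's PII indefinitely.
        self.removal_requests.append(request)

    def search(self, query: str, raw_hits: list[str]) -> list[str]:
        # Filter only when the query plausibly refers to the verified subject;
        # otherwise every other Jacques loses legitimate results too.
        suppressed: set[str] = set()
        for request in self.removal_requests:
            if request.subject.full_name.lower() in query.lower():
                suppressed |= request.urls
        return [url for url in raw_hits if url not in suppressed]

Even in this toy version, the filter only works because the provider holds full_name, date_of_birth and the rest; drop the PII and the provider cannot distinguish requests from different people named Jacques.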

In this case the cure may be more destructive than the disease…


Printable guns and the FUD factor

by Andrew Walls  |  May 10, 2013  |  Comments Off

The web is all abuzz about the emergence of workable designs for the manufacture of simple guns via a 3D printer. Defense Distributed (http://defdist.org/) is behind much of this furor, as they have released CAD files that enable anyone with sufficient resources (an appropriate 3D printer, raw materials and basic mechanical skills) to print and assemble a functioning pistol. Already, various state legislatures are working on laws that seek to prohibit these activities (printing your own gun), and the US State Department has ordered Defense Distributed to take the files out of public distribution.

All of this hysteria is a wonderful example of bad risk management.

The basic logic presented by most of those who oppose the availability of 3D designs for printing guns appears to be that availability will enable (or possibly encourage) bad people to manufacture weapons that they will use to do bad things. Unfortunately, economic realities drive behavior in the opposite direction. Although I am confident that we will one day have relatively cheap 3D printers capable of printing high density plastics or even some metals, at present the cost of the equipment and materials required to produce a functioning weapon is far in excess of the cost of a reliable, manufactured weapon from a reputable manufacturer. Even if you are willing to invest in 3D printing resources to produce a weapon, you will end up with a weapon that will probably fire fewer than 10 rounds before failing, and that assumes that you printed it correctly, the designs were good, and that you assembled and tested it adequately. Along the way, you might encounter a few prototypes that fail catastrophically (who wants to take the first shot?).

So, what’s a bad guy going to do? Purchase a stack of technical gear and powders, get it all working, assemble the output, test it and then use it (until it fails after a few shots)? Or is the bad person going to acquire a reliable weapon online, at a gun show or on the street for a few hundred dollars? The economics are simple. The market for weapons is fairly efficient, and cheap, reliable weapons are available with no upfront capital cost for manufacturing capability.

The actual risk presented by Defense Distributed’s designs is negligible when compared to all of the other ways that weapons can be produced and acquired. By focusing on 3D printing of weapons, legislators and regulators perpetuate the poor risk thinking that has resulted in the open derision of many TSA activities. A much greater risk is that poorly written regulations will inhibit the development and adoption of 3D printing techniques by manufacturers, both established firms and innovative start-ups.

And how could such a regulation be enforced? Require manufacturers of 3D printers to make their machines determine which designs are illegal and should not be printed? Would that mean that gun manufacturers could not use 3D printers in their manufacturing process? Block the distribution of designs? In addition to the free speech issues this might generate, one thing we have learned in the security business is that it is nearly impossible to stop illegal material from circulating on networks.

Humans are great at imagining risks and taking steps to mitigate fictional threats. If the objective is risk management, a real risk analysis is needed to drive cost-effective investment in mitigation. Flights of fancy are not part of risk management.
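For the sake of illustration, here is the shape of the analysis I have in mind, using the standard annualized loss expectancy formula (ALE = ARO x SLE) and comparing it to the cost of a proposed control. Every number below is an invented assumption chosen only to show the comparison, not an estimate of any real threat.

def ale(annual_rate_of_occurrence, single_loss_expectancy):
    # Standard formula: expected loss per year = ARO * SLE.
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical threat and control; all figures are illustrative assumptions.
threat_ale = ale(annual_rate_of_occurrence=0.001, single_loss_expectancy=500_000)
control_cost_per_year = 250_000

print(f"Expected annual loss from the threat: ${threat_ale:,.0f}")
print(f"Annual cost of the proposed control:  ${control_cost_per_year:,.0f}")
if control_cost_per_year > threat_ale:
    print("The control costs more per year than the risk it mitigates.")

Run against numbers like these, a control that costs far more per year than the loss it prevents is a hard sell, which is exactly the kind of disproportion the TSA comparison above points at.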


How do you get to secure behavior? Practice, man, practice.

by Andrew Walls  |  March 21, 2013  |  1 Comment

In conversation with a client today, the phrase ‘weak link in the security chain’ came up as a description of users. This is a common belief or attitude in IT security and, I would hazard, the IT industry at large. The user is typically seen as a problem to be managed while simultaneously being our most valued client. I have a serious problem with this.

Don’t get me wrong. I have dealt with my share of clueless users who could not make a sound security decision if their lives depended on it. However, I am very much aware that most people cannot make good decisions about IT security because they are not expected to do so and are never required to do so. In the IT industry we have been pushing every process element we can into an IT system. We try to remove from the user’s control anything that can be automated. Often, this makes good sense and provides for a better quality processing environment. When we do this with security, we create a situation where our users are never given an opportunity to make an informed security decision.

Simply put, our users don’t get a chance to practice. As long as we can keep them locked up in our enterprise security cocoon, this is okay. We don’t need them to make security decisions; we will do that for them. But now our people are using Dropbox and Facebook and carrying their own tablets and smartphones. All of a sudden they are out there in the world on their own!

Wouldn’t it be nice if they were better at making security decisions? Maybe if we gave them a chance to practice security decisions in a safe environment they would develop their abilities…

If we expect people to develop security acumen, they need a chance to practice. Sitting through a security awareness PowerPoint is not practice! They need to know what information they should use to guide their decisions and how to actually take action, and they need regular opportunities to do all of this and experience the consequences of their choices. If we take that away from our users (which is what most security teams do), we should not be surprised when users fail to make good security decisions.

The failure is ours, not theirs.


A Right to Privacy? Really?

by Andrew Walls  |  May 7, 2012  |  3 Comments

On a regular basis I find myself perplexed by someone asserting that they have a ‘right’ to something. This is particularly the case when someone tells me that they have a right to privacy. What on earth do they mean?

I am not the first to ask this question and many different (and often conflicting) answers have been offered. A right to maintenance of confidentiality over personal data, the right to be left alone, etc. These are all potentially useful definitions of this particular right, but what appears to be missing – IMHO – is an acknowledgement that rights are constructed by society. Rights do not have an existence outside of a social context. Different human cultures define rights in different ways, drawing on religion, culture and environmental drivers to construct baseline statements or principles regarding the privileges of individuals and/or groups.

I find the right to privacy (however it is defined) particularly odd. Humans are social animals. We congregate in groups and erect complex cultures and social structures. Social interaction is based on the exposure and sharing of personal data. All of that personal data that many of us consider private (such as your birth date, favorite color, etc.) is known by many people, only some of whom we know and can control through some sort of culture-based behavioral expectation. A lot of people tell me that the growth of IT has spurred the public outcry around privacy because IT makes personal information accessible to people and organizations that we do not personally know and with whom we do not share a cultural basis for management of behavior. The security analyst in me translates this to “we used to have security through obscurity (difficulty of getting access to personal data), but now the obscurity has been removed.”

This makes sense to me. As the use case changes, so must the controls we apply. If we are to enable innovation without engaging in undue privacy risks, we need to develop new security approaches to replace the obscurity that we once enjoyed. But let’s not kid ourselves. The right to privacy under discussion is also changing as people are alternately repulsed by and attracted to the power of new IT service delivery options.

Rights are fluid and dynamic. Their meanings change across time and cultures and (here’s the part that drives IT security investment) the policies and laws written to define and enforce these rights will experience continual change. The end result is that your privacy and data protection program can never rest.


Exploring the limits of social media transparency, privacy and free speech

by Andrew Walls  |  February 17, 2012  |  Comments Off

This morning I spent some time listening in on a hearing on DHS monitoring of social networking and media. The hearing was held by the US Congressional Sub-committee on Counterterrorism and Intelligence, chaired by Patrick Meehan. Transcripts of the testimony are available here (http://homeland.house.gov/hearing/subcommittee-hearing-dhs-monitoring-social-networking-and-media-enhancing-intelligence), as well as a video playback of the hearing.

The substance of the discussion did not offer any surprises, but the expressed motivations of the subcommittee members were fascinating. There were repeated mentions of the ‘chilling effect’ on private speech if people knew that their utterances on social media were being captured, stored and analyzed by government entities. Fundamentally, the argument against DHS monitoring of public speech on social media systems was based on protection of free speech rights. This is not a long bow to draw, and there are a host of vignettes available in Orwell’s “1984” that articulate the concept better than I can. What I find interesting about this argument is that it mirrors the rationale that has led various states within Germany to pass laws making it illegal for employers to monitor or capture the social media conversations of their employees.

In the employer/employee scenario, it is possible that an employee will self-censor their public speech if they anticipate a negative reaction by their employer. In the German context, employer monitoring of their employees’ social media activities is viewed as a privacy intrusion that limits free speech and public debate. (Set aside for the moment the issue of the enforceability of such a law.) Similar logic is being voiced in support of new EU regulations regarding privacy and data protection.

If we accept that government monitoring of social media conflicts with free speech rights, it is a short path to drawing the same conclusion concerning employer monitoring of social media. When an employee represents an employer in social media, we expect the employee to self-censor and we also expect the employer to obtain and deploy security controls that monitor for compliance and enforce policy where possible. Should that same self-censorship be expected in all use of social media by employees or should society place limits on the degree to which employers can inspect their employees’ personal activities and use the fruits of that inspection to drive management decisions?

This is a tricky area. In certain circumstances it is illegal under US law for an employer to seek out personal information regarding an employee or job candidate. For example, in a job interview, it is illegal to ask a candidate about their religious affiliation or whether they are pregnant or plan to get pregnant. It is trivial to discover this information if the individual is a regular participant on Facebook. In other circumstances, there are no legal restrictions on employers viewing the activities of employees in a public environment.

If we want to support free speech in social media without any fear of retribution by employers or potential employers, we have a lot of work ahead of us. To start with, it is impossible to block employers from seeing the social media conversations of employees or possible future employees. What if a manager uses their home PC or a cybercafe to view a user profile? How could we identify the manager’s relationship to the employer? Should social media providers keep logs of the source IPs and user profiles used to view a person’s profile? This might enable the user to identify who was looking at their profile, but the collection, storage and dissemination of this data would create a significant burden on social media providers. I am also dubious that the average user would actually analyze this data to detect who was having a look. The various social media platforms already offer a host of privacy features that users can configure to restrict access to their content, but the adoption of these capabilities is spotty at best. Placing the burden on the average user is no more realistic than expecting people to wear disguises when they attend a public event that their employer opposes.
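To give a sense of what such logging would involve, here is a small hypothetical sketch. The record layout, the employer IP range and the is_from_employer check are all invented for illustration; no social media platform is known to expose logs like this to its users, and the check fails exactly where the paragraph above says it would (a manager browsing from home or a cybercafe).

from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime
from ipaddress import ip_address, ip_network

@dataclass
class ProfileViewEvent:
    # One hypothetical audit record per profile view.
    viewed_profile: str
    viewer_profile: str | None   # None if the viewer was not logged in
    source_ip: str
    viewed_at: datetime

# Illustrative only: the user would somehow need to know their employer's
# public IP ranges for this record to mean anything to them.
KNOWN_EMPLOYER_RANGES = [ip_network("203.0.113.0/24")]

def is_from_employer(event: ProfileViewEvent) -> bool:
    # Fails for a manager viewing the profile from home or a cybercafe,
    # which is precisely the enforcement gap discussed above.
    addr = ip_address(event.source_ip)
    return any(addr in net for net in KNOWN_EMPLOYER_RANGES)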

At the end of the day, this is a social problem, not a technological problem. Despite this, we can expect to see a blizzard of attempts at regulation and a steady flow of technology and services that attempt to control employer actions and protect employees’ free speech. This is why Gartner positions social media as a disruptive technology. Social media is actively disrupting traditional patterns of social interaction, creating both benefits and threats for all of us.
