Neil MacDonald

A member of the Gartner Blog Network

VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…


Cloud Security Lessons from Google’s Internal Security Breach

by Neil MacDonald  |  September 16, 2010  |  6 Comments

Earlier this week, I saw this article describing a security breach by an internal Google employee: a site reliability engineer (since fired) had violated the privacy of multiple email accounts. From the article:

Barksdale’s intrusion into Gmail and Gtalk accounts may have escaped notice, since SREs are responsible for troubleshooting issues on a constant basis, which means they access Google’s servers remotely many times a day, often at odd hours. “I was looking at that stuff [information stored on Google's servers] every hour I was awake,” says the former Google employee.

There are a couple of immediate lessons here. First, insider threats are real, both for our own organizations and for cloud providers. There are multiple ways to protect ourselves from internal threats, but one of the foundational elements is to limit and monitor all privileged access, and to baseline and investigate abnormal behavior. However, the article goes on to state:

And the company does not closely monitor SREs to detect improper access to customers’ accounts because SREs are generally considered highly-experienced engineers who can be trusted, the former Google staffer said.
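The "baseline and investigate abnormal behavior" idea above can be sketched in a few lines. This is an illustrative Python fragment, not any vendor's product: it baselines each administrator against their own historical access counts and flags a current count that deviates by more than a chosen number of standard deviations. The function name and threshold are hypothetical.

```python
from statistics import mean, stdev

def flag_abnormal(history, current, threshold=3.0):
    """Flag a privileged user whose current access count deviates
    from their own historical baseline by more than `threshold`
    standard deviations. `history` is a list of past per-period counts."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # avoid divide-by-zero on a flat history
    z = (current - mu) / sigma
    return z > threshold, z

# An SRE who normally makes ~40 accesses a day suddenly makes 120:
flagged, z = flag_abnormal([40, 45, 38, 42, 44], current=120)
```

The point is not the statistics; it is that the baseline exists at all, so that "many times a day, often at odd hours" becomes something you can measure rather than a reason to stop looking.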

Another lesson is that when assets are concentrated, the damage from an individual incident can be greater. In other words, the same type of incident can cause more damage. We face this in our own data centers with the shift to virtualization platforms, where multiple workloads now depend on the integrity and separation provided by the virtualization platform underneath. Public cloud-based services providers face the same problem on an even greater scale. That means the level of due care we require from the provider must be higher.

One approach is monitoring (as discussed above). Another is to limit the scope of administrative access for any given employee. Yet another is to put a tight process around how and why administrators are granted administrative access. Nothing new here; it's just that the impact of a lapse is magnified. And the issue isn't limited to Google; it applies to any cloud-based services provider.
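Two of those approaches, scoped access and a tight grant process, can be combined in one sketch: a time-bound grant with a recorded justification and a per-use audit trail. This is a minimal illustration; the class and field names are assumptions, not any real IAM product's API.

```python
import time

class AdminGrant:
    """A scoped, time-bound administrative grant. Every authorization
    check is appended to an audit log, whether it succeeds or not."""

    def __init__(self, admin, scope, ttl_seconds, reason):
        self.admin = admin
        self.scope = set(scope)                 # e.g. {"mail-frontend"}
        self.expires = time.time() + ttl_seconds
        self.reason = reason                    # recorded justification
        self.audit_log = []                     # (timestamp, admin, resource, allowed)

    def authorize(self, resource):
        allowed = resource in self.scope and time.time() < self.expires
        self.audit_log.append((time.time(), self.admin, resource, allowed))
        return allowed

# A one-hour grant tied to a ticket; anything outside its scope is denied and logged:
grant = AdminGrant("sre1", ["mail-frontend"], ttl_seconds=3600, reason="ticket-123")
```

The design choice worth noting is that denials are logged too; the failed attempts are exactly the signal the monitoring discussed above should be investigating.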

So, how do we gain confidence (trust) that the security basics are sufficiently addressed by a cloud provider? We must become excellent in our ability to incorporate specific security requirements in our request for information (RFI) and request for proposal (RFP) processes. The Cloud Security Alliance's Cloud Controls Matrix is an excellent resource to get started in understanding the types of controls that should be required.

6 Comments »

Category: Cloud, Cloud Security, Next-generation Data Center, Virtualization Security


  • 1 Secure Cloud Review   September 16, 2010 at 11:03 am

Neil, it just goes to show that humans are still the weak link in the chain, cloud or not. I've written more on this at our blog: http://securecloudreview.com/2010/09/even-in-the-cloud-humans-are-the-weak-link/

  • 2 Saqib Ali   September 16, 2010 at 11:35 am

    the question is:

    Can we trust the employees of a cloud provider more than we trust our own internal IT staff?

  • 3 Neil MacDonald   September 16, 2010 at 11:50 am

    @Saqib,

    I think the conclusion is not whether we trust them more or less, it is that we don’t trust anyone completely.

    In our enterprise data centers, we must assume that an insider will try and steal our stuff and take precautions against this.

    We must assume the same of employees within cloud providers. While we can ask for proof of background checks and so on, there will always be cases like the one above where someone abuses the level of trust we have placed in them.

    Our focus must shift to understanding the layers of compensating controls and security precautions to detect and prevent such a scenario. Thus, the recommendation for excellence in our RFI/RFP processes.

    PROTECTION = PREVENTION + DETECTION
    see this post:
    http://blogs.gartner.com/neil_macdonald/2010/07/15/security-thought-for-thursday-protection-prevention-detection/

  • 4 CorpInsider   September 16, 2010 at 3:35 pm

This sounds like basic IAM. How does a site reliability engineer have access to so many applications and the data associated with them? Stolen credentials, admin privileges, collaborators within Google… on how many fronts did Google drop the ball here?

I guess Google's SOC was not monitoring these transactions either?
Sounds like Google missed the SEC-101 courses.

  • 5 Saqib Ali   September 16, 2010 at 3:54 pm

    @Neil,

I think no amount of excellence in the RFI/RFP process will prevent an employee of a data center from turning disgruntled – whether they are internal IT staff or employees of a cloud provider.

Segregation of Duties (SoD) is very difficult and sometimes costly to achieve, and even if implemented properly, it cannot prevent the problem of collusion.

    Saqib

  • 6 Neil MacDonald   September 17, 2010 at 7:36 am

@Saqib,

Agree – I believe we are both saying the same thing. Insider threats exist whether it is internal IT staff or the staff of a cloud provider.

Also agree that SoD alone doesn't stop collusion. However, on collusion – we deal with this in our enterprise systems with activity monitoring; specifically for ERP systems, with transaction monitoring tools from vendors such as Oversight, ACL, Approva and others.

That's why I emphasized activity monitoring in the original blog and my other response. Despite preventative mechanisms, bad stuff will happen and we must be able to detect it. Google was complacent. We can't be. Let's learn from their mistakes.
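The transaction-monitoring idea raised in the comments above can be illustrated with one crude heuristic: flag any account touched by more than one administrator within a short window, a possible sign of coordinated (colluding) access. This is purely illustrative; the event format and window are assumptions, and real monitoring products are far more sophisticated.

```python
from collections import defaultdict

def suspicious_accounts(events, window=3600):
    """Flag accounts accessed by more than one admin within `window`
    seconds of each other. `events` is an iterable of
    (timestamp, admin, account) tuples."""
    by_account = defaultdict(list)
    for ts, admin, account in sorted(events):
        by_account[account].append((ts, admin))
    flagged = set()
    for account, hits in by_account.items():
        # compare each access with the next one on the same account
        for (t1, a1), (t2, a2) in zip(hits, hits[1:]):
            if a1 != a2 and t2 - t1 <= window:
                flagged.add(account)
    return flagged

# Two different admins hit acct-x 100 seconds apart -> flagged;
# acct-y is touched by two admins, but hours apart -> not flagged.
events = [(0, "alice", "acct-x"), (100, "bob", "acct-x"),
          (0, "alice", "acct-y"), (50000, "bob", "acct-y")]
```

SoD can't stop two insiders from agreeing to collude, but their activity still leaves a joint footprint in the logs, which is exactly what this kind of detection looks for.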