Neil MacDonald

A member of the Gartner Blog Network

VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…


Don’t Trust Your Servers

by Neil MacDonald  |  June 17, 2011  |  3 Comments

One of the toughest problems in information security is addressing advanced intrusions that have bypassed traditional security controls and now reside undetected on enterprise systems. With financially motivated attacks and state-sponsored “advanced persistent threats” both on the rise, intrusions can remain undetectable for extended periods of time.

We have reached a point where our systems must be considered to have been compromised, even if we don’t have a signature to prove it. All workloads are suspect, even if they appear to be healthy.

How do we protect ourselves in such an environment? There are multiple ways (defense in depth) to counter the threat of APTs; however, one important and radically new approach is to systematically reprovision server OS and application workloads from high-assurance repositories and templates. We call this SWR – short for “systematic workload reprovisioning”.

Rather than having to trust every production server, we can reduce the scope of trust to the high-assurance libraries, models, templates and files used to periodically reprovision those servers. This reduces an attacker's ability to maintain an undetected foothold in our systems.
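
To make the idea concrete, here is a minimal sketch of what the core of an SWR cycle could look like. It is illustrative only: the manifest location, the template store and the rebuild_workload() stub are hypothetical placeholders for whatever repository and provisioning tooling an organization actually uses.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical locations: in practice these live in a hardened, access-controlled
# repository, not on the servers being reprovisioned.
MANIFEST = Path("/srv/swr/manifest.json")    # maps template name -> expected SHA-256
TEMPLATE_STORE = Path("/srv/swr/templates")

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large templates don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_template(name: str) -> Path:
    """Return the template path only if it matches the known-good manifest."""
    expected = json.loads(MANIFEST.read_text())[name]
    template = TEMPLATE_STORE / name
    if sha256_of(template) != expected:
        raise RuntimeError(f"template {name} failed its integrity check")
    return template

def rebuild_workload(workload_id: str, template: Path) -> None:
    """Placeholder: a real implementation would call a hypervisor, imaging or
    configuration-management API to destroy the instance and recreate it."""
    print(f"rebuilding {workload_id} from {template}")

def reprovision(workload_id: str, template_name: str) -> None:
    """One SWR cycle: trust the verified template, not the running server."""
    rebuild_workload(workload_id, verified_template(template_name))
```

The important shift is that the manifest and template store become the things we defend and audit, rather than the long-lived servers themselves.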

I’ve just published two research notes for Gartner clients that detail the SWR strategy. The first explains the concept and the second explores the implications and considerations for information security and operations management where SWR is adopted.

Systematic Workload Reprovisioning as a Strategy to Counter Advanced Persistent Threats: Concepts

Systematic Workload Reprovisioning as a Strategy to Counter Advanced Persistent Threats: Considerations

For some curmudgeonly information security and operations professionals, this approach will seem radical. “Take down ostensibly perfectly good server workloads? Heresy!”

However, there is a precedent in human physiology. The human immune system faces a similar challenge with cancer: a situation where the instructions within the body's own workloads (cells) are compromised and cause damage from within. Much like APTs, cancer isn't reliably detectable by the immune system's traditional signature-based mechanisms (antibodies) or its adaptive mechanisms (T cells and B cells).

The human immune system uses apoptosis — programmed cell death — as one of its strategies to counter the advanced and persistent threat of cancer (if apoptosis is inhibited, then cells have a greater chance of becoming cancerous). With apoptosis, all workloads (cells) are autonomically regenerated from a high-assurance set of instructions (DNA) located in the nucleus of the cell or another location within the body, such as the bone marrow for blood cells. Similar to an SWR strategy, apoptosis occurs when cells appear to be damaged, as well as when they appear to be healthy.
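
The infrastructure corollary is that the reprovisioning schedule is driven by age, not by whether monitoring thinks a workload is healthy. A small sketch of that selection policy, with the inventory and the seven-day maximum age invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Workload:
    workload_id: str
    template: str
    provisioned_at: datetime
    healthy: bool  # deliberately ignored by the selection below

MAX_AGE = timedelta(days=7)  # illustrative policy, not a recommendation

def due_for_reprovisioning(inventory: list, now: datetime) -> list:
    """Select workloads by age alone; apparent health is not a reprieve."""
    return sorted(
        (w for w in inventory if now - w.provisioned_at >= MAX_AGE),
        key=lambda w: w.provisioned_at,
    )

if __name__ == "__main__":
    now = datetime.utcnow()
    inventory = [
        Workload("web-01", "web-template", now - timedelta(days=9), healthy=True),
        Workload("web-02", "web-template", now - timedelta(days=2), healthy=False),
    ]
    # web-01 is recycled even though it looks healthy; web-02 is simply too young.
    for w in due_for_reprovisioning(inventory, now):
        print(f"recycle {w.workload_id} from {w.template}")
```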

Why can’t information security take some lessons from the human immune system? Our bodies have been dealing with advanced threats for millions of years and routinely handle threats that have bypassed their perimeter protection mechanisms.

Food for thought.

I’ll be talking about SWR next week at Gartner’s Information Security Summit in Washington DC. I hope to see you there.

 


Category: Beyond Anti-Virus, Next-generation Security Infrastructure, Virtualization, Virtualization Security

3 responses so far ↓

  • 1 Andre Gironda   June 17, 2011 at 4:32 pm

    Service-provisioning prototyping? Sign me up!

    Hasn’t this been the goal of Opsware since Marc Andreessen announced it over a decade ago? It was a shift away from the Tivoli mindset, which was itself a shift away from the Bell Systems / Telcordia TIRKS mindset.

    From my perspective, I’d like to see this integrated with applications more cohesively. Take the differences between Chef and Puppet as an example (DevOps would prefer Chef, while classic Ops would prefer Puppet). There is a huge difference in the philosophies of how to control an environment, from a metastructure-specific vSphere solution to a more tightly app-controlled solution like Vagrant.

    Adversaries maintain control of applications as well as servers in this era of computing. Saving your servers only provides a solution for “one kind of cancer” instead of a more holistic solution that covers a multitude of scenarios. If we are going to suggest solutions in this space, we should consider the consequences of an over-focus on metastructure and an under-emphasis on infostructure.
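
    As a rough sketch (every name and digest below is invented), the reprovisioning manifest would pin the application release alongside the base server image, so a rebuild replaces the infostructure as well as the metastructure:

    ```python
    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Artifact:
        name: str
        expected_sha256: str

    # Illustrative manifest for one tier: the OS template and the application
    # release are both pinned, so neither survives a rebuild unverified.
    WEB_TIER = [
        Artifact("base-os.img", "0" * 64),          # placeholder digest
        Artifact("webapp-1.4.2.tar.gz", "1" * 64),  # placeholder digest
    ]

    def verify(path: str, artifact: Artifact) -> bool:
        """True only if the on-disk file matches the pinned digest."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == artifact.expected_sha256
    ```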

  • 2 Nick Southwell   June 20, 2011 at 4:20 am

    While the concept of periodic re-provisioning from a trusted source is a worthy one, are we not shifting the focus from the edge server back to the template source as the next gold mine for attack? Compromising the fresh spring that feeds all the rivers and wells is surely a more efficient and devastating way of spreading infection?

    To follow the cancer analogy: yes, cells die via apoptosis, but isn’t a main cause of future cancerous cells the misfiring of the DNA template itself, i.e. contamination or corruption of the repository?

    One master source should be easier to contain and manage, but the consolidation of risk and the homogenisation of exposed platforms, especially at the edge, would require careful consideration (one possible mitigation is sketched at the end of this comment).

    Thank you for a thought provoking article.
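
    On the mitigation side, sketched here with entirely invented names: have a separate, offline system sign the repository’s manifest, so provisioning hosts can detect a corrupted master source before rebuilding from it. The shared-key HMAC below is just the shortest standard-library illustration; a detached public-key signature held offline would be stronger.

    ```python
    import hashlib
    import hmac
    import json
    from pathlib import Path

    def manifest_is_trusted(manifest_path: Path, signature_hex: str, key: bytes) -> bool:
        """Check the manifest's HMAC before trusting any template it describes."""
        data = manifest_path.read_bytes()
        expected = hmac.new(key, data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_hex)

    def load_manifest(manifest_path: Path, signature_hex: str, key: bytes) -> dict:
        """Refuse to use the repository at all if its manifest fails verification."""
        if not manifest_is_trusted(manifest_path, signature_hex, key):
            raise RuntimeError("manifest verification failed: repository may be compromised")
        return json.loads(manifest_path.read_bytes())
    ```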

  • 3 Sean Doherty   June 21, 2011 at 3:41 pm

    For me the most obvious and interesting workload that this can be applied to is the user workload. For many, the best practice following suspected compromise of an endpoint has been to wipe, re-provision and profile. To some extent, VDI and Application Virtualization/Streaming technologies are doing this today. Ensuring the integrity of the source “gold image” is always going to be the weakest point; after that, the technology used to re-provision and verify would be the next logical target of attack.

    Assuming that theses issues are manageable, and I believe that they are we should look at how the features of the modern virtualized data center (aka Private Cloud) can be used to increase the security posture when the techniques are applied to server workloads. For example provisioning the service on different physical hardware, within a new network configuration and how the technique can be used as part of the load balancing strategy to re-provision rather than move workloads.