Neil MacDonald

VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…


Virtualization of Security Controls Enables Flexible Data Center Design

by Neil MacDonald  |  February 23, 2009  |  8 Comments

My colleague David Cappuccio recently provided his observations on tiered data center structures. As I read it, I was struck by the similarities between what he was describing in IT operations and what I am seeing in information security.

“Rather than build a tier 4 fully redundant data center that supports all mission critical systems, and everything else, why not build a tier 4 zone that supports mission critical (which may only be 15% of my overall workload), and assign tier 1, 2, or 3 status to other areas in the same building?”

This is true in information security as well. It doesn’t always make sense to apply the strictest security controls and the “best” security possible to every workload. Not all workloads are equally important from a security controls perspective, and not all information is equally sensitive. Just as Dave suggests zoning applications based on power densities and availability requirements, we also need to zone applications and information based on security policy requirements. Most information security organizations use zoning concepts in their networks today.
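
To make the zoning idea concrete, here is a minimal, purely illustrative sketch in Python of the network-centric zoning most security teams practice today. The zone names and rules are my own assumptions, not a reference design:

    # Illustrative only: traditional zoning binds controls to network segments,
    # not to the workloads themselves.
    NETWORK_ZONES = {
        "dmz":      {"allowed_inbound": ["internet"], "controls": ["firewall", "ips"]},
        "app":      {"allowed_inbound": ["dmz"],      "controls": ["firewall"]},
        "database": {"allowed_inbound": ["app"],      "controls": ["firewall", "db_audit"]},
    }

    def inbound_allowed(src_zone: str, dst_zone: str) -> bool:
        """Return True if traffic from src_zone may enter dst_zone."""
        return src_zone in NETWORK_ZONES[dst_zone]["allowed_inbound"]

    print(inbound_allowed("internet", "database"))  # False: no direct path into the database zone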

Thus, we approach the crux of what I see as an emerging issue.

What if the zoning requirements in a next-generation data center for power, SLA requirements and security don’t neatly align? What if an application holding sensitive information doesn’t require the highest levels of availability? What if an application that isn’t handling sensitive information requires 99.999% uptime? And how do we handle changes over time without creating massive management complexity?
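
To see the mismatch, it helps to write the requirements down side by side. The workload names and tier values below are hypothetical, just to illustrate that availability and security are independent axes:

    # Hypothetical workloads: availability tier and security tier do not line up.
    workloads = [
        {"name": "financial_reporting", "availability_tier": 2, "security_tier": "restricted"},
        {"name": "public_web_frontend", "availability_tier": 4, "security_tier": "low"},
        {"name": "order_processing",    "availability_tier": 4, "security_tier": "restricted"},
    ]

    # No single physical zone fits all three cleanly: a tier 4 zone over-provisions
    # availability for the reporting app, yet that app still needs the tightest security controls.
    for w in workloads:
        print(f'{w["name"]}: availability tier {w["availability_tier"]}, security {w["security_tier"]}')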

This is where virtualization, and the virtualization of security and management controls in particular, will have a significant impact on data center architectures over the next several years. The physical layout of the data center can account for physical considerations like power densities and cooling requirements. The logical layout of the data center can be decoupled from the physical layout and incorporate attributes like security and operational policy requirements.

As we move beyond virtualization-for-cost-savings (efficiency), we will start to use virtualization to do things better and in new ways (effectiveness). Over the next several years, security and operational policy will increasingly be tied to the workload and the information itself, not to the physical container (server) holding them, enabling workloads and information to move around as needed – with their associated policy.
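
A minimal sketch of what “policy tied to the workload” could look like, assuming a hypothetical policy object carried in the VM’s metadata rather than configured on any particular host (the class and field names are invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class SecurityPolicy:
        # The policy describes the workload's requirements, not a specific server.
        encryption_required: bool
        allowed_zones: set[str] = field(default_factory=set)
        audit_level: str = "standard"

    @dataclass
    class Workload:
        name: str
        policy: SecurityPolicy

    def can_place(workload: Workload, host_zone: str) -> bool:
        """Because the policy travels with the workload, the same check applies on any host."""
        return host_zone in workload.policy.allowed_zones

    vm = Workload("financial_reporting",
                  SecurityPolicy(encryption_required=True,
                                 allowed_zones={"restricted"},
                                 audit_level="enhanced"))
    print(can_place(vm, "general"))     # False, wherever the VM happens to migrate
    print(can_place(vm, "restricted"))  # True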

It gets better. Once decoupled, we can start to incorporate contextual information into the logical layout decision at run-time – such as the time of day, the time of the year (perhaps the financial reporting applications need tighter auditing at the close of the quarter), the cost of power regionally and so on. This moves beyond efficiency, beyond effectiveness and into the transformational capabilities of virtualization. Exciting stuff.
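
As a hedged illustration of what “contextual” might mean at run-time, here is a small sketch; the quarter-close heuristic, the thresholds and the regional power prices are all invented for the example:

    from datetime import date

    def near_quarter_close(today: date, window_days: int = 5) -> bool:
        """True if we are within a few days of a calendar quarter end (illustrative heuristic)."""
        quarter_ends = [date(today.year, m, d) for m, d in [(3, 31), (6, 30), (9, 30), (12, 31)]]
        return any(abs((today - q).days) <= window_days for q in quarter_ends)

    def required_audit_level(workload_name: str, today: date) -> str:
        # Hypothetical rule: financial applications get tighter auditing near quarter close.
        if workload_name.startswith("financial") and near_quarter_close(today):
            return "enhanced"
        return "standard"

    def pick_region(power_cost_per_kwh: dict[str, float]) -> str:
        """Pick the region with the cheapest power (a contextual, non-security input)."""
        return min(power_cost_per_kwh, key=power_cost_per_kwh.get)

    print(required_audit_level("financial_reporting", date(2009, 3, 30)))  # enhanced
    print(pick_region({"us-east": 0.11, "us-west": 0.09}))                 # us-west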

Back to reality. For 2009, cost savings and efficiencies will be the drivers of virtualization. In the back of your mind, though, realize you are laying the foundation to truly transform your IT capabilities over the next decade.


Category: Next-generation Data Center Virtualization Security

8 responses so far ↓

  • 1 Kent Cook   February 23, 2009 at 2:06 pm

    Makes sense Neil. Virtualization is one element, but as cloud computing grows, this introduces another set of variables and opportunities. As you point out however, this is exciting stuff as it opens the door for truly transformational architectures.

  • 2 Virtualization of Security Controls Enables Flexible Data Center … « Security   February 23, 2009 at 6:42 pm

    [...] Read the original post: Virtualization of Security Control… [...]

  • 3 Neil MacDonald   February 24, 2009 at 8:38 am

    Kent, you raise a good point. The skills and mindset that come with the move to virtualize the data center will help ease the transition to cloud-based services. Once you get comfortable with the notion that a workload and information encapsulated within a VM can reside on any server in any of your data centers (assuming it conforms to the operational and security policies linked to the VM), then it is not a big leap to have those workloads start running off premises on equipment you don’t own. Again, as long as I can ensure that my operational and security policies are enforced, it shouldn’t matter where the workload runs. However, the standards for expressing security and operational policy in a way that spans from the enterprise to the Cloud and across multiple Cloud platform providers are embryonic (at best). I’ll provide more thoughts on this in future posts.

  • 4 VMware Unveils vShield and Raises the Security Bar for all Virtualization Vendors   February 27, 2009 at 11:55 am

    [...] As I have discussed from the beginning, the policy enforcement capabilities of information security technologies like firewalls, intrusion prevention systems, network access control and so on need to be virtualized in order to secure the next-generation adaptive data center. [...]

  • 5 Virtual Appliances are Real   March 9, 2009 at 7:58 am

    [...] previous posts, I have discussed how security controls need to be virtualized to support the next-generation highly virtualized data center. I have also talked about how most of [...]

  • 6 VMware’s Message of Cloud Choice (and Security)   September 2, 2009 at 5:38 pm

    [...] industry – both directly with its capabilities and as an enabler/catalyst for the transition to virtualized security controls in next-generation virtualized data [...]

  • 7 MIke   November 13, 2009 at 9:19 am

    I agree that the tier structure CAN be effective for data protection purposes. BUT it cannot be relied on too heavily or handled carelessly. Many times hackers will infiltrate a system with very high security at its weakest point. Once they have gained entry, it is much easier to slowly navigate vertically than it would be to just enter at the highest level of security.

  • 8 Identity-Awareness Should be a Feature, not a Product   December 16, 2009 at 5:21 pm

    [...] I talked about in this post, as we move to virtualize and secure our next-generation data center infrastructure, security [...]