My colleague David Cappuccio recently shared his observations on tiered data center structures. As I read it, I was struck by the similarities between what he was describing in IT operations and what I am seeing in information security.
“Rather than build a tier 4 fully redundant data center that supports all mission critical systems, and everything else, why not build a tier 4 zone that supports mission critical (which may only be 15% of my overall workload), and assign tier 1, 2, or 3 status to other areas in the same building?”
The same is true in information security. It doesn’t always make sense to apply the strictest controls and the “best” security possible to every workload. Not all workloads are equally important from a security controls perspective, and not all information is equally important either. Just as Dave suggests zoning applications based on power densities and availability requirements, we also need to zone applications and information based on security policy requirements. Most information security organizations already use zoning concepts in their networks today.
Thus, we approach the crux of what I see as an emerging issue.
What if the zoning requirements in a next-generation data center for power, SLA requirements and security don’t neatly align? What if an application holding sensitive information doesn’t require the highest levels of availability? What if an application that isn’t handling sensitive information requires 99.999% uptime? And how do we handle changes over time without creating massive amounts of complexity to manage this?
This is where virtualization, including the virtualization of security and management controls, will have a significant impact on data center architectures over the next several years. The physical layout of the data center can account for physical constraints like power densities and cooling requirements. The logical layout can be decoupled from the physical layout and incorporate attributes like security and operational policy requirements.
As we move beyond virtualization-for-cost-savings (efficiency), we will start to use virtualization to do things better and in new ways (effectiveness). Over the next several years, security and operational policy will increasingly be tied to the workload and the information itself, not to the physical container (server) holding them, enabling workloads and information to move around as needed, carrying their associated policy with them.
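To make the idea concrete, here is a minimal sketch of policy traveling with the workload rather than the server. Everything here is hypothetical for illustration: the zone names, the policy attributes, and the placement function are assumptions, not any vendor's actual API. The point is that placement is decided from the workload's own policy, so a sensitive-but-low-availability workload need not land in the most expensive zone.

```python
from dataclasses import dataclass

# Hypothetical model: policy metadata is attached to the workload itself,
# not to the physical server that happens to host it.
@dataclass(frozen=True)
class Policy:
    sensitivity: str      # e.g. "public", "internal", "restricted"
    availability: float   # required uptime, e.g. 0.99999

@dataclass
class Workload:
    name: str
    policy: Policy

# Illustrative zone catalog: the security level and uptime each zone delivers.
ZONES = {
    "tier4-secure":  {"sensitivity": "restricted", "availability": 0.99999},
    "tier4-general": {"sensitivity": "internal",   "availability": 0.99999},
    "tier2-secure":  {"sensitivity": "restricted", "availability": 0.999},
    "tier2-general": {"sensitivity": "internal",   "availability": 0.999},
}

RANK = {"public": 0, "internal": 1, "restricted": 2}

def place(workload: Workload) -> str:
    """Pick the lowest-tier (cheapest) zone satisfying the workload's policy."""
    candidates = [
        name for name, zone in ZONES.items()
        if RANK[zone["sensitivity"]] >= RANK[workload.policy.sensitivity]
        and zone["availability"] >= workload.policy.availability
    ]
    # Zone names sort so lower tiers come first; take the cheapest match.
    return min(candidates)
```

Note how this answers the misalignment question above: a restricted-data application with modest uptime needs lands in a tier 2 secure zone, while a non-sensitive application needing 99.999% uptime gets tier 4 capacity without tier 4 security overhead.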
It gets better. Once decoupled, we can start to incorporate contextual information into the logical layout decision at run-time – such as the time of day, the time of the year (perhaps the financial reporting applications need tighter auditing at the close of the quarter), the cost of power regionally and so on. This moves beyond efficiency, beyond effectiveness and into the transformational capabilities of virtualization. Exciting stuff.
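A run-time contextual rule might look like the following sketch. The tag name and the quarter-close heuristic are invented for illustration; the idea is simply that the effective policy is evaluated against context (here, the calendar) each time it is needed, rather than being fixed at deployment.

```python
from datetime import date

# Hypothetical context rule: financial workloads need tighter auditing
# near the close of a quarter; otherwise the baseline applies.
QUARTER_END_MONTHS = {3, 6, 9, 12}

def audit_level(workload_tag: str, today: date) -> str:
    """Return the audit level for a workload, evaluated at run time."""
    quarter_close = today.month in QUARTER_END_MONTHS and today.day >= 24
    if workload_tag == "financial-reporting" and quarter_close:
        return "enhanced"
    return "standard"
```

The same pattern extends to other contextual inputs the post mentions, such as time of day or regional power cost, each feeding the placement or policy decision at run time.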
Back to reality: for 2009, cost savings and efficiency will be the drivers of virtualization. But in the back of your mind, realize you are laying the foundation to truly transform your IT capabilities over the next decade.