Gartner Blog Network

Security Shouldn’t Have to be Rationed

by Neil MacDonald  |  April 28, 2009  |  4 Comments

In my daily conversations with clients on virtualization security, one of the issues we frequently discuss is whether they need virtualized security controls like firewalls and intrusion prevention systems to isolate and inspect traffic between virtual machines.

One line of reasoning goes like this: If the workloads in the VMs have similar trust levels, we don’t need these controls between VMs because we don’t have these types of controls between servers of similar trust levels in our physical environments.

While this is true, it assumes that we merely want to replicate in the virtual world what we have done in the physical world. In physical environments, we sprinkle security controls here and there in part because we are constrained by the cost and physical limitations of hardware appliances. But what if you could deploy security controls like firewalls and IPSs with the push of a button, in the form of software-based appliances? What if a virtualized security control cost a tenth or a hundredth of the physical appliance it replaced?
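To make that "push of a button" idea concrete, here's a minimal sketch of what programmatic, per-VM deployment of a software firewall could look like. Everything below is illustrative; the class and function names are my own invention, not any vendor's actual API.

```python
# Illustrative sketch only: a hypothetical interface for instantiating
# virtual security appliances in software rather than racking hardware.
from dataclasses import dataclass

@dataclass
class VirtualFirewall:
    vm_name: str
    allowed_ports: list

def deploy_firewall(vm_name, allowed_ports):
    """Hypothetical one-click deployment: attach a software firewall
    to a VM with an explicit allow-list (default-deny otherwise)."""
    return VirtualFirewall(vm_name=vm_name, allowed_ports=list(allowed_ports))

# When the incremental cost is near zero, one appliance per VM becomes
# economically feasible -- something rarely done with physical boxes.
fleet = {vm: deploy_firewall(vm, [443]) for vm in ["web-01", "web-02", "db-01"]}
print(len(fleet))  # one firewall per VM
```

The point of the sketch is the economics, not the code: the marginal cost of each additional control is a loop iteration, not a purchase order.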

Virtualization offers us a clean slate — a chance to do things differently. Radically differently.

We’ve ended up with today’s security architectures because the limitations and costs of physical controls played a major role in their placement. Virtualization changes the cost economics and lowers the barriers to deployment and adoption. Security architectures will absolutely change — and improve — as a result.

Ask yourself – if firewalls and IPSs were essentially free and could be deployed anywhere they were needed at little or no incremental cost, would you change how you secure your infrastructure?


Category: virtualization-security  

Tags: next-generation-data-center  virtual-appliances  virtualization-security  

Neil MacDonald
VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…

Thoughts on Security Shouldn’t Have to be Rationed

  1. […] Neil MacDonald has a provocative post about the economics of securing virtualized environments here. Neil’s thesis: In a virtual world, security should not have to be rationed on the basis of […]

  2. knujlla says:

    We have been continually told that most of the cost of security is in operationalizing it. You mentioned firewalls and IPS. The challenge with firewalls is that with every application coming online, more and more ports have to be opened, or traffic gets tunneled over a well-known port. For IPS, it's managing the policies and logs, and then the ultimate kicker of false positives.

    In a virtualized compute environment, do you think the likes of VMware, Microsoft, Citrix, etc. can do more than zoning around a trust construct (vShield), or anything beyond making sure the ABI is conformed to (Determina)?

    If not, then security in the virtual world will continue to rhyme with that of the physical world.

  3. Neil MacDonald says:

    Sure. There are two parts to reducing the high cost of today’s information security infrastructure. First, there’s reducing the cost of the security controls themselves — reducing complexity and the number of vendors, and using the virtualization of security controls to introduce a discontinuity that lowers cost.

    Then there’s the care and feeding of the controls themselves, which I believe is your point. I’ll blog on this in more detail in a future entry, but let me give you an idea of what my research shows will happen over the next 3-8 years. In short, why should it require a team of people to program firewall rules when the application already knows what it needs to communicate on the network? We’ve got it backwards. Rather than manually applying these policies as applications are put into production, why not gather the requirements directly from the application (via static analysis of the source code), from the developer (in the form of annotated models/metadata), or from the business analysts (also in the form of models/metadata)? As long as what the application requests conforms to policy, why do we need to enter this manually after the application is developed, when all of this knowledge was available in the development cycle?
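    To make this idea concrete, here is a minimal sketch of generating firewall rules from an application's declared network requirements, gated by organizational policy. The manifest format, field names, and function are invented for illustration; no vendor implements exactly this.

```python
# Hypothetical sketch: derive firewall rules from metadata the application
# (or its developer) already declares, instead of hand-entering rules
# after deployment. The manifest schema below is purely illustrative.

app_manifest = {
    "name": "order-service",
    "listens": [{"port": 8443, "proto": "tcp"}],
    "connects": [{"host": "db-01", "port": 5432, "proto": "tcp"}],
}

# Organizational policy: ports the security team permits applications to use.
policy_allowed_ports = {443, 5432, 8443}

def rules_from_manifest(manifest, allowed_ports):
    """Emit allow rules only for declared flows that conform to policy."""
    rules = []
    for flow in manifest["listens"]:
        if flow["port"] in allowed_ports:
            rules.append(("allow-in", flow["proto"], flow["port"]))
    for flow in manifest["connects"]:
        if flow["port"] in allowed_ports:
            rules.append(("allow-out", flow["proto"], flow["host"], flow["port"]))
    return rules

rules = rules_from_manifest(app_manifest, policy_allowed_ports)
print(rules)
```

    The key property is that the rule set is generated, not transcribed: a declared flow that conforms to policy becomes a rule automatically, and anything undeclared or non-conforming is simply never opened.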

    This note explains the concept in detail and I’d be glad to talk to you about multiple vendors and offerings that are showing this is possible — today.

    On false positives, I’ll defer to my colleagues who cover IPS in detail. However, the technology has matured to the point where out-of-the-box rules and filters (“high-fidelity”) produce very few false positives; ‘vulnerability-facing filters’ are one example of an approach that provides effective protection with few of them. Log monitoring can be consolidated with a SIEM or similar (or outsourced).

  4. […] Security shouldn’t have to be rationed. […]

Comments are closed

Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.