Neil MacDonald

A member of the Gartner Blog Network

VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…

Whitelisting, Meet Virtualization. Virtualization, Meet Whitelisting.

by Neil MacDonald  |  April 10, 2009  |  10 Comments

As I have discussed, x86 hardware virtualization creates a new IT platform that must be securely maintained (e.g. patch, configuration and vulnerability management) like any other IT platform we are responsible for. This layer is extremely sensitive: a compromise here puts all of the hosted VMs at risk.

I’ve also discussed the foundational power of whitelisting, especially when brought to the application level with application control solutions.

What’s really interesting is the intersection of these two: the use of whitelisting in the virtualization layer/platform itself to control which applications are allowed (or not allowed) to execute in this critical layer.

With VMware’s ESX, this is an issue in the Linux-based service console (which can run compatible Linux applications). It is a much more serious issue with Microsoft’s Hyper-V parent partition architecture based on Windows Server 2008 “core” and Xen’s Dom0 based on Linux. With Hyper-V and Xen, a failure or compromise of the parent partition puts all of the child VMs at risk.

From a security perspective, thinner is always better, so ideally we wouldn’t run any additional software in the parent/Dom0 partition. Sounds like a perfect application of application whitelisting, doesn’t it? And since we don’t have pesky end-users downloading and executing arbitrary code at this layer (or browsers, or email, or …), whitelisting is much, much easier to apply.
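To make the idea concrete, here is a minimal sketch of the core whitelisting check (Python; the approved-hash set and the idea of checking at execution time are illustrative assumptions, not any vendor's implementation): hash the binary and allow it to run only if the digest is on an approved list.

```python
import hashlib

# Hypothetical whitelist: SHA-256 digests of the few binaries the
# parent/Dom0 partition is actually supposed to run.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """Default-deny: only binaries on the approved list may run."""
    return sha256_of(path) in APPROVED_HASHES
```

The important property is the default-deny posture: anything not explicitly listed is blocked, which is exactly why a layer with no end-users and a tiny, vendor-known set of legitimate binaries is such a good fit.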

As I discussed in a comment on this post, we should expect our IT platform vendors to provide whitelisting capabilities built into the platforms they sell us. Likewise, basic whitelisting capabilities should be built into our virtualization platforms and should not require that we install an additional agent to achieve this.

(virtualization platform providers, repeat after me: agents at the hypervisor/VMM layer are a bad thing, agents at the hypervisor/VMM layer are a bad thing)

However, none of the virtualization platform vendors that I am aware of provides application control as a standard built-in capability (yet). This is another no-brainer that I would expect to appear as a standard capability in virtualization platform solutions within the next 12-24 months.

Category: Beyond Anti-Virus, Virtualization Security

10 responses so far ↓

  • 1 ricardo   April 10, 2009 at 8:58 am

    you better look at Windows 2008 group policies then…
    by name, hash, etc…
    allowed services…
    gets better with r2

  • 2 Neil MacDonald   April 10, 2009 at 9:11 am

    Ricardo, good point.

    Software Restriction Policies have been around for a long time. They are an option for organizations looking for an Application Control capability. What I need to confirm is that the GPOs work in server core (the parent of Hyper-V) and also in the dedicated version of Hyper-V which is based on a slimmer foundation than even Windows Server core.

    Yes – in the R2 / Windows 7 code base, AppLocker is introduced, which makes this better for Windows. It’s not clear how much of this goes into “core”.

    It would be quite useful for Microsoft to prepopulate the hash list with its own applications along the lines I talk about here: http://blogs.gartner.com/neil_macdonald/2009/04/03/we-need-a-global-industry-wide-application-whitelist/
    rather than have us put in all these names/hashes manually.
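    Pre-building those name/hash entries is mechanically straightforward. A minimal sketch (Python; the directory path and the plain name-to-digest manifest shape are illustrative assumptions, not how a vendor would actually ship a policy) of generating a whitelist manifest from a known-good install tree:

```python
import hashlib
import os

def build_manifest(root):
    """Walk an install tree and record relative name -> SHA-256 digest
    for every file, producing a whitelist manifest that a vendor (or an
    admin) could pre-populate instead of entering hashes by hand."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            manifest[os.path.relpath(path, root)] = h.hexdigest()
    return manifest
```

    A vendor running something like this over its own release image at build time is all it would take to ship the policy pre-populated.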

  • 3 Rishi Bhargava   April 10, 2009 at 5:21 pm

    Neil,
    Great post, and I am very much in line with your thoughts. This is one of the great scenarios where whitelisting makes perfect sense. I want to take the discussion further: not only extending the whitelisting of executables in dom0/console OS/parent partition, but generalizing the discussion to virtualization security overall. What does security for dom0 (read: console OS or parent partition for other solutions) actually mean?

    If the security of dom0 is compromised, the user can get complete control of all other VMs. The security solution should not only implement whitelisting for anything that runs as an application, but also needs to authenticate the kernel modules and drivers that are loaded. The reason is that dom0 has a higher level of access to hypervisor calls, and a new driver/kernel module can take control of all VMs.

    In addition, buffer overflow exploits could potentially cause similar problems by hijacking not only user-land (ring 3) binaries but also kernel modules.

    A complete lockdown solution for dom0 needs to go beyond plain binary whitelisting and include the other pieces as well.

    Thanks for the great discussion board.

  • 4 Wyatt Starnes   April 10, 2009 at 7:13 pm

    Neil,

    I am in total agreement with both your point (virtualization needs whitelisting) and with your observations on methods that either use (or have the ability to leverage) white or “allow” lists (Windows Server 2008/SRP and the announced Windows 7 AppLocker).

    I would also add that Intel has done some very good work to enable whitelisting in the vPro platform, specifically Trusted Execution Technology, or TXT. (Full disclosure: Intel is an investor in SignaCert.)

    In a similar vein, Sun includes an allow-list method called Validated Execution, as well as Secure Containers, in Solaris. See:

    http://opensolaris.org/os/project/valex/Design.pdf

    Thanks for opening this 3rd thread as I agree that the virtualization/cloud services challenge deserves special focus and comment.

    As we transition more to “thinner clients”, hypervisors, cloud services, and streamed applications – whitelists have several important (I would say even crucial) value-add use cases.

    In fact, we believe that virtualization likely IS the *killer app* for whitelisting, but as I have alluded to in other posts, it goes far beyond simple binary-level locking/blocking for security purposes. Think trusted builds and compliance (in a world of dynamic software stack deployment, where awareness of where the stack is physically running in the datacenter is a new challenge).

    How do I prove to my compliance officer that the software instantiation and de-instantiation were compliant? How do I capture a definitive view so that I can reproduce the stack to full integrity and show provenance when the image is no longer running in the environment?

    And in cloud computing we have privacy/sandboxing issues relating to MY applications, code and private information living/running on hardware that I don’t own/control. How do I know that that MY environment (the one I logged off of yesterday) is the same one I signed into today? I don’t.

    So I posit that positive affirmation methods against a known-provenance, trusted-code reference (or references) are *enabling* – not a luxury – for the ICT world of the future. And the future is now.

    And it is much more than *security* as we know it today.

    Also, given that we are changing our compute demarcations dramatically with all of these shifts, what a great opportunity to *fix* the known issues we have struggled with in our legacy systems management world since industry inception.

    Build it in, folks – and then let’s work together as an industry to create standardized, above-platform methods and resources to ensure implicit platform trust and business process integrity through the entire business service delivery cycle.

    Wyatt.

  • 5 Neil MacDonald   April 13, 2009 at 3:12 pm

    Ricardo,

    Yes – I confirmed that SRPs work in the core server configuration of Windows Server 2008 (used to support the role of a Hyper-V parent).

    This is also the case with the dedicated version of Hyper-V, which Microsoft calls “Microsoft Hyper-V Server 2008” (notice the Windows brand name is removed):
    http://www.microsoft.com/hyper-v-server/en/us/how-to-get.aspx

    As I mentioned above, since Microsoft knows what should be running in order to support the Hyper-V parent, it would make sense if they prebuilt these whitelisting policies for us.

    Neil

  • 6 Kirk Larsen   April 15, 2009 at 8:10 pm

    Hi Neil,

    Beyond whitelisting. Overshadow:
    http://www.cs.mtu.edu/~zlwang/virtualmachines/slides/columbia-overshadow-final.pdf

    –Ksl

  • 7 Neil MacDonald   April 15, 2009 at 9:38 pm

    Kirk,

    Thanks for the interesting link. Cool stuff. I am convinced that virtualization can be used to radically reinvent legacy approaches to security and this is an example of just such an approach.
    See these two research notes from 2008:
    http://www.gartner.com/DisplayDocument?id=623340
    and
    http://www.gartner.com/DisplayDocument?id=631312

    In the approach in this presentation, the key of course is trusting the VMM. What mechanisms do we have to know for sure that it hasn’t been compromised? I’m a proponent of TPM-based root of trust measurements of hypervisor/VMM integrity but this is not yet mainstream.
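    For what it’s worth, the measurement primitive behind those TPM-based root-of-trust schemes is just hash chaining. A minimal sketch (Python, using SHA-1 as in TPM 1.2; the component names are purely illustrative) of the PCR “extend” operation, which is why a measured launch sequence can’t be silently rewritten after the fact:

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM-style extend: new_pcr = SHA1(old_pcr || SHA1(measurement)).
    Order matters: the same components measured in a different
    sequence yield a different final PCR value."""
    digest = hashlib.sha1(measurement).digest()
    return hashlib.sha1(pcr + digest).digest()

pcr = b"\x00" * 20  # PCRs start zeroed at platform reset
for component in (b"bootloader", b"hypervisor", b"parent-partition"):
    pcr = pcr_extend(pcr, component)
```

    Verifying hypervisor integrity then reduces to comparing the final register value against the expected value for a known-good launch sequence.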

  • 8 Neil MacDonald   April 15, 2009 at 9:39 pm

    whoops, second link was wrong:
    http://www.gartner.com/DisplayDocument?id=623342

  • 9 Security No-Brainer #4: EV-Certs for ISVs   May 1, 2009 at 5:48 pm

    [...] The second was in reference to the use of whitelisting in the hypervisor/VMM (especially the “pare… [...]

  • 10 Twitter Trackbacks for Whitelisting, Meet Virtualization. Virtualization, Meet Whitelisting. [gartner.com] on Topsy.com   August 24, 2009 at 10:40 am

    [...] Whitelisting, Meet Virtualization. Virtualization, Meet Whitelisting. blogs.gartner.com/neil_macdonald/2009/04/10/whitelisting-meet-virtualization-virtualization-meet-whitelisting – view page – cached [...]