
Will Whitelisting Eliminate the need for AntiVirus?

by Neil MacDonald  |  March 31, 2009  |  21 Comments

You know the saying “everything old is new again”? That’s exactly what comes to mind when I listen to some of the hype around whitelisting and the use of a ‘positive model’ for information security.

The Application Control vendors would have you believe that application whitelisting is the latest (and only) answer to the increasing ineffectiveness of antivirus signatures.

Not exactly:

  • Whitelisting isn’t new. We’ve used a “default deny” approach in firewall functionality for more than a decade. What’s relatively new is trying to extend whitelisting up the stack to control which applications are allowed to execute on an endpoint.
  • Whitelisting is not a silver bullet. The fundamental issue with any whitelisting approach is “who builds and maintains the list?”. This is a significant and potentially culturally explosive issue on end-user desktops. That’s just one of many issues and considerations with the use of Application Control solutions that we have been advising clients on for years.
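
As a rough illustration of the “default deny” idea applied to application execution, here is a minimal sketch in Python. The allow list of SHA-256 hashes and the pre-execution hook are hypothetical; real Application Control products enforce this in kernel-mode drivers and add cataloguing, trusted-updater and policy-management machinery on top.

```python
import hashlib

# Hypothetical allow list: SHA-256 hashes of approved executables,
# built from a known-good image or a change-management process.
APPROVED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # example entry
}

def sha256_of(path):
    """Hash a file's contents in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """Default deny: only binaries whose hash is on the allow list may run."""
    try:
        return sha256_of(path) in APPROVED_HASHES
    except OSError:
        return False  # unreadable or missing file: deny

# A hypothetical pre-execution hook would call may_execute() and block
# anything that returns False, regardless of whether it is "known bad".
```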

Don’t get me wrong. Whitelisting is foundational for comprehensive information security protection and will have an increasingly important role to play in your endpoint protection strategy. However, for the majority of endpoint systems, whitelisting alone is not enough and must be combined with blacklisting (yes, technologies like antivirus) and other protection styles to create an endpoint protection system.


Category: Beyond Anti-Virus, Endpoint Protection Platform

21 responses so far ↓

  • 1 Wes Miller (CoreTrace Corporation)   April 1, 2009 at 11:19 am

    Posted on the day before Conficker is scheduled to come to life (whatever the net effect of it ends up being), your post couldn’t be more timely.

    As you noted, whitelisting isn’t new – “lockdown” has been around for quite some time. What was always missing was the ability for whitelisting to NOT equal lockdown, where your management and patching processes work seamlessly with your whitelisting solution to keep systems patched where you need to (current versions of Windows) and to secure business-critical systems where you can no longer patch (NT/2000, and before too long, XP), all with zero friction (something we have worked diligently to design, and we believe we have built). Your point about “who builds and maintains the list” is spot on for whitelisting vendors who insist on the notion of “cloudlisting” or “crowdlisting”, where we feel latency and poisoning outweigh the benefits of either approach. Starting with the desktops you already have, a whitelist allows you to secure what you know already.

    I have to disagree with your point about blacklisting being a necessity in addition to a whitelisting product. That is only true if you are explicitly talking about whitelisting using either “cloudlisting” or “crowdlisting”, where again the potential for poisoning means that you need to watch the cloud, and latency/line of business apps can mean a painful deployment or painful application upgrades.

    A good whitelisting solution that has comprehensive local list generation as well as memory protection can provide significantly more benefit than any blacklisting approach can ever hope to. Honestly, adding blacklisting to such a whitelisting approach is a lot like paying for aftermarket undercoating when you buy a new car – it does nothing beyond what the existing approach already protects you from. Only whitelisting can protect you from polymorphic, unknown/zero-day, targeted, and social-engineering attacks using malware – all threats that explicit blacklisting can miss. In short, whitelisting is extremely complementary to blacklisting, but blacklisting isn’t really complementary to a properly designed whitelisting product.

    All that said, I’d be interested to get your $0.02 on what threats blacklisting stops better than whitelisting?

  • 2 Rishi Bhargava   April 1, 2009 at 7:24 pm

    Wes,
    You beat me to the response… I agree with every word in the above comment. I think whitelisting is ready for prime time in the enterprise desktop world. I still think there are some challenges on the consumer desktop side, though.

    Rishi

  • 3 Wyatt Starnes   April 1, 2009 at 7:25 pm

    Neil, thanks for the post.

    Indeed we tend to allow our marketing people to stamp “new and improved” on the things we do. Just the nature of the beast. And IMHO (and as you point out), whitelisting is just another important facet of comprehensive information and communications technology (ICT) management and security.

    When you contrast this with other industries facing similar challenges, “whitelisting” is really just as “old” as blacklisting. Think about the age-old challenge of gated and controlled facility access management.

    Look at what the guard uses and you’ll find it is primarily whitelist-based (he knows the people who are authorized to come in because he has their names and pictures, and/or they present their badges/credentials). Multi-factor authentication. The guard usually also has a taped-up picture or two of the “bad guys” he wants to keep out (interestingly, many times former employees). So he works primarily from a whitelist supplemented by a blacklist.

    I raise this view on the one hand to agree – and on the other to emphasize that in ICT we have done a particularly poor job of managing the whitelist opportunities. Especially when it comes to even first-level code authentication of the critical software that runs our business processes. All of them.

    So it is all a bit of a “keen sense of the obvious” from my point of view. The goal remains the same. For security it is about Defense in Depth. For compliance it is about knowing what to “measure” and then measuring it well. For cost-effective systems management it’s about maximizing MTBF and minimizing MTTR – and doing that more cheaply and efficiently than last year (and so it repeats).

    And a “new” one — the lifecycle view of ICT devices. Let’s make sure we get our build/QA/UAT handoffs done well. And let’s “build good, stay good” throughout the IT service delivery cycle. Simple to say, hard to do without managed devices, best practices and whitelists.

    At the end of the day whitelisting is another ICT tool/method. A very important and under-leveraged tool, as we have tended to think it was “too hard” or “too big” a challenge to undertake. I believe that now, as an industry, we are realizing (as with the physical security challenge above) that whitelisting is, in many ways, more *finite* than defending with just blacklisting.

    As for how the eco-system shapes up, I agree with Wes above with the addition of an important key component.

    Yes, “self-whitelisting” or customer/domain specific whitelisting is important – and there are several vendors evolving ways to do this effectively.

    The biggest question you pose (again IMHO) is “who builds and maintains the list?”. I would offer to you that many people will – but until the large software providers and platform vendors get serious about this, it’s all a bit of an academic discussion.

    In advance of this (or in preparation, perhaps) we should work on methods and standards. It is just a matter of time until the ISVs and platform vendors come on board.

    Stay tuned. I agree with your post for the most part. There is no silver bullet, and whitelisting IS indeed foundational to any comprehensive and effective ICT compliance, operational excellence, and lifecycle management strategy.

    And it is useful for security too BTW.

    Wyatt.

  • 4 Neil MacDonald   April 2, 2009 at 8:00 am

    Rishi and Wes, I agree it is time for whitelisting of applications to play an important role in an overall endpoint security strategy. I stated this: “Whitelisting is foundational for comprehensive information security protection and will have an increasingly important role to play in your endpoint protection strategy.”

    What I was saying in the post is that whitelisting alone isn’t sufficient any more than blacklisting alone. In most cases, we need both.

    Full disclosure: CoreTrace is a vendor of a point solution for Application Control.

    On the question of which challenges I am concerned whitelisting can’t handle well — this is the subject of multiple pieces of research, but I’ll point out a few things:

    1) Browsers are just one example of a platform on top of the OS. Just because an application control product can control which applications run at the OS level doesn’t mean it can control which applications (e.g. plugins) are allowed to be used within the browser (a rough sketch of that idea follows at the end of this comment). Ditto for macros and similar constructs in Office. Beyond plugins and macros, what about malicious JavaScript, Flash and other types of scripted code downloaded as objects within web pages?

    2) “Good applications gone bad” are always an issue. Specifically, an attack on a vulnerability in a whitelisted application. You are assuming the attack on a vulnerability manifests itself as a file which is then executed (or blocked); however, this is not always the case. Of course, you can supplement with memory protection technology (a form of blacklisting), but this also doesn’t provide 100% protection.

    3) Attacks to compromise the whitelist itself are always a concern, especially if community-based whitelists and reputation services are used.

    And that’s just scratching the surface…
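
    To make point 1 concrete, here is a minimal sketch of the same default-deny idea applied one layer up the stack, to browser plug-ins rather than OS executables. The extensions-directory layout and the allow-list entries are hypothetical assumptions for illustration; real products hook the browser’s own extensibility mechanisms rather than scanning the file system.

```python
import os

# Hypothetical allow list of plug-in/extension identifiers approved by IT.
APPROVED_EXTENSIONS = {"com.example.gotomeeting", "com.example.pdfviewer"}

def installed_extensions(profile_dir):
    """Treat each subdirectory of a (hypothetical) extensions folder as one plug-in ID."""
    ext_dir = os.path.join(profile_dir, "extensions")
    if not os.path.isdir(ext_dir):
        return []
    return sorted(os.listdir(ext_dir))

def unapproved_extensions(profile_dir):
    """Default deny, one layer up: anything not explicitly approved gets flagged."""
    return [e for e in installed_extensions(profile_dir) if e not in APPROVED_EXTENSIONS]
```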

  • 5 Neil MacDonald   April 2, 2009 at 9:33 am

    Wyatt,

    I also see whitelisting as another tool in our arsenal. It’s not a silver bullet, but it *is* foundational. That’s why I underlined the statement in my original post.

    I look forward to whitelisting becoming a standard part of an endpoint security platform (or of the desktop management platform). Symantec, McAfee, Trend, Sophos, etc. all have whitelist enforcement capabilities. So do BigFix, LANDesk, Altiris (now Symantec) and others from the management side. Microsoft is adding this in Windows 7. The enforcement of a whitelist is a commodity function. The value resides in the management of the list. As I stated, the key is “who manages the list”. Here I will disagree with you and say that (IMHO) this is not an academic question when you are managing end-user desktops and users are complaining that they aren’t able to use a piece of legitimate software functionality they feel they need to do their job because it is not on the “list”. I look forward to the time (hopefully soon) when an industry consortium or worldwide standards effort brings together legitimate ISVs to create a shareable whitelist for all to use.
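
    To make the “who manages the list” point concrete, here is a minimal sketch of how an organization might generate the initial list itself from a known-good reference (“gold”) machine. The directory choices, file extensions and output format are illustrative assumptions only; commercial products layer cataloguing, signing metadata and update workflows on top of anything like this.

```python
import hashlib
import json
import os

def sha256_of(path):
    """Hash a file's contents in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(roots, out_file="whitelist.json"):
    """Walk a known-good machine and record the hash of every executable found."""
    baseline = {}
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower().endswith((".exe", ".dll", ".sys")):
                    path = os.path.join(dirpath, name)
                    try:
                        baseline[path] = sha256_of(path)
                    except OSError:
                        continue  # skip locked or unreadable files
    with open(out_file, "w") as f:
        json.dump(baseline, f, indent=2)
    return baseline

# Illustrative usage on a Windows gold image:
# build_baseline([r"C:\Windows", r"C:\Program Files"])
```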

  • 6 David Thomason   April 2, 2009 at 10:04 am

    Neil,
    Thanks for taking the time to discuss whitelisting. You are correct that “Whitelisting is foundational for comprehensive information security protection”, yet so few organizations are using whitelisting to complement their existing blacklisting solutions. We see antivirus, antispyware, antiadware and a multitude of other “anti-malware” products, but very little adoption of whitelisting solutions.

    In doing a bit of research, I’ve found that the TCO of application whitelisting has historically been prohibitive. Maintaining huge databases, keeping up with the latest changes and then implementing updated code became even more difficult when you added whitelisting. However, with the next generation of whitelisting solutions we are seeing significant improvement. In fact, I’m seeing many organizations replace their antivirus with a whitelisting solution. The whitelist prevents all the viruses, not to mention other malware not even looked at by blacklisting solutions, so keeping the antivirus after a short time with application whitelisting makes no sense economically.

    You bring up some great points in your comments to Wes and Rishi. It’s not a silver bullet, but many of the things you note as not stopped by application whitelisting are also common problem areas for antivirus.

    You are right on the money with the need for application whitelisting as part of the security foundation. And I think now is the time for more organizations to be investing in this technology (whether to enhance or replace antivirus).

    To describe the different application technologies I have researched, I wrote the following: http://www.securityevangelist.com/Home/Blog/Entries/2009/4/1_Whitelisting_Done_Right.html

  • 7 Rishi Bhargava   April 2, 2009 at 11:48 am

    Neil,
    Thanks for the great discussion you have started here. You raise some very good issues on the desktop deployment of whitelisting. I think there are some solutions and compromises that can help enterprises get to much better security (than what they get with AV) with much lower operations overhead. Before I expand on the desktop cases, I would like to hear your thoughts on the use of application control technologies on servers and fixed-function devices like point-of-service terminals or thin clients. These systems are pretty static in nature and don’t run some of the applications you point to as issues. The adoption of application control on these categories of systems is easier, and my belief is it will move much faster.

    Now on the desktop use cases…

    1) Totally agree with you on the challenges of malicious code within the browser, etc. This is something that application control would not handle, and IMHO the solution to these is browser sandboxing. If you are able to use application control and add browser sandboxing to it, the malicious JavaScript and other controls won’t be able to do harm. Windows 7 already provides some rudimentary capabilities for sandboxing applications, and Google Chrome does too.

    2) On good applications gone bad, I believe memory protection techniques can do a whole lot. I think of these as whitelisting of running code. In the spirit of full disclosure, Solidcore’s product has four different memory protection techniques, which follow the fundamental rule of not letting any foreign code run, even if it is injected into the process of a valid application.

    3) I also want to reiterate that I don’t think a central whitelist is the way to go (it has poisoning issues, etc.), so I would say an enterprise can create a local whitelist and maintain it. This is not an expensive or cumbersome operation if updating the whitelist is tied to the change management process and provisioning tools like Altiris, SMS, etc.
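
    As a rough illustration of tying whitelist updates to change management, the sketch below adds the binaries delivered by an approved change (say, a patch package that has already gone through the normal approval process) to an existing local whitelist. The file layout and whitelist format are hypothetical; a real deployment would have the provisioning tool invoke something like this as the last step of an approved change window.

```python
import hashlib
import json
import os

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def approve_change(whitelist_file, payload_dir):
    """Add every file delivered by an approved change to the local whitelist."""
    with open(whitelist_file) as f:
        whitelist = json.load(f)
    for dirpath, _dirs, files in os.walk(payload_dir):
        for name in files:
            path = os.path.join(dirpath, name)
            # A real system would also record which change ticket authorized the entry.
            whitelist[path] = sha256_of(path)
    with open(whitelist_file, "w") as f:
        json.dump(whitelist, f, indent=2)

# Illustrative: approve_change("whitelist.json", r"C:\Staging\ApprovedPatch")
```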

    To summarize, I agree with some of the challenges you raise, but I believe that if the enterprise is willing to make some process changes, then whitelisting is a much more secure and cost-effective approach for enterprises. On the consumer desktop, it is a much tougher issue.

  • 8 Wes Miller (CoreTrace Corporation)   April 2, 2009 at 4:44 pm

    Thanks for continuing the conversation, Neil.

    Rishi’s post echoes our sentiment as well. In general, I agree with your reply, Neil – which is why we have built the product we did. To explain specifically how we think those questions are answered, see the following:

    1) Agree, to a point – we completely control what _executable_code_ runs on a system (as any comprehensive whitelisting solution should, or it is ineffective), whether it is an ActiveX control or a Browser Helper Object. Yes – for browsers that have a JavaScript-based extensibility model, we don’t protect against new plug-ins – however, that is not executable code, and generally does not have access to OS resources. Indeed, if it does attempt to get such access, it will either need to exploit a buffer overflow (see 2, below) or drop new executable code onto the system – showing the exact benefit that ONLY whitelisting can provide. In such a scenario, with custom-crafted or zero-day exploits (as with the C-level exploits that occurred last year), the systems are now owned if you are only protecting them with blacklist-based software.

    As to macros, Microsoft years ago moved to a “default to off” model for macros in Office, only letting digitally signed macros run. As a result, macros have today been cordoned off to the point of being almost a non-existent security threat, unless a customer chooses _explicitly_ not to follow Microsoft’s security best practices.

    Almost all other browser-borne exploits, such as those carried through Flash, use a buffer overflow – the most common way to infect a Windows system without using on-disk code (see http://www.google.com/search?q=Flash+buffer+overflow for an impressive collection of URLs). Thus, see point 2.

    2) Absolutely. A whitelist solution is only as good as its ability to protect up the stack. If you only let on-disk code you know run, great. But yes, you are still vulnerable to memory-borne exploits. Which is why we have spent a considerable amount of time building memory protection, as any comprehensive whitelisting product should. To be clear, most blacklisting applications _don’t_ include comprehensive memory protection. In fact, we use one of the primary blacklisting AV vendors in our demo to show how they don’t stop a _two_year_old_ exploit in a popular software product, but we do (as well as many much newer ones – including brand-new, completely unknown buffer overflows).

    3) Absolutely. Poisoning (fundamentally compromising the whitelist) and cloudlist/crowdlist updating latency (causing work downtime, system failure, or delays in deploying patches and updates) are the exact reasons why we have considered – but abandoned – any approach that uses an ethereal list to “define” the security of our whitelist.

    Wes Miller (CoreTrace Corporation)

  • 9 Neil MacDonald   April 3, 2009 at 9:50 am

    Interesting how a form of blacklisting — buffer overflow protection (blocking something that is known to be bad during execution) — is used to address a major weakness in whitelisting (“good apps gone bad”). Also interesting how most of the comments come from vendors that specialize in whitelisting solutions.

    Rishi, yes – embedded systems and servers that don’t change often are excellent candidates for a whitelisting approach supplemented with memory protection, for a variety of reasons which I discuss in the research. Full disclosure – Solidcore has a form of whitelisting protection technology.

    Wes, don’t underestimate the importance of the browser as a platform. It is not a trivial issue to support the whitelisting and blacklisting of plugins and BHOs in a way that is meaningful to the end user.

    Two more quick comments:
    1) Web 2.0 applications and “cloud-based” applications are a challenge. They don’t execute in ways that traditional application control products can exert policy control over. I mentioned JavaScript above (the “J” in AJAX Web 2.0 applications), but don’t overlook that a company might want to restrict access to, for example, social networking sites, Gmail, or any of the hundreds of cloud-based application services where the code executes on their machines, not yours.

    2) Notice how the political, cultural and process challenges were pretty much ignored in this stream of comments. We’ve also written quite a bit of research advising clients on how to manage a transition to a whitelist-based model. If you are moving from an environment where end users can do anything to an environment where they can only do what has been whitelisted for them, you will almost certainly encounter issues. In most organizations I talk with, employees want more freedom to innovate, not less. IT can become too heavy-handed with whitelisting policy in the name of security and risk interfering with the legitimate work of employees. Not a good idea.

    Back to what I said in the post: Don’t get me wrong. Whitelisting is foundational. It will become increasingly important for endpoint protection. I wouldn’t buy an endpoint protection platform solution that didn’t offer this capability. Just don’t expect a silver bullet.

  • 10 We Need a Global Industry-wide Application Whitelist   April 3, 2009 at 11:57 am

    [...] ← Will Whitelisting Eliminate the need for AntiVirus? [...]

  • 11 Wes Miller (CoreTrace Corporation)   April 3, 2009 at 3:30 pm

    Thanks again for continuing the conversation, Neil.

    Actually, our buffer overflow protection isn’t blacklist-derived. It uses the same list of what is trusted on the system to decide whether we should trust the source of the buffer overflow or DLL injection attempt – since there actually can be valid reasons for both – but you want to make sure that anything coming into memory did so only from executables that would have been allowed to run from disk.
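
    A minimal sketch of that idea (purely illustrative, and not a description of any vendor’s actual implementation): when new code is about to enter a process, trace it back to the on-disk file it came from and ask whether that file would have been allowed to execute in the first place. The hook and the attribution of in-memory code to a source file are hypothetical; real products do this inside kernel-mode drivers.

```python
import hashlib

# The same allow list used for on-disk execution decisions.
APPROVED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # example entry
}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def allow_code_into_memory(source_path):
    """
    Hypothetical check called when new code (e.g. an injected DLL or a newly
    mapped image) enters a process: trust it only if the file it came from is
    on the same whitelist that governs on-disk execution.
    """
    try:
        return sha256_of(source_path) in APPROVED_HASHES
    except OSError:
        return False  # cannot attribute the code to an approved file: deny
```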

    I think the responses have come primarily from whitelisting-related vendors because most of us feel rather strongly that, designed properly, whitelisting _can_ secure systems by itself far better than blacklisting, and in fact largely nullifies the need for “classic” blacklist-based approaches. Blacklist vendors probably wouldn’t be inclined to comment, because this post questions their very existence.

    I’m not trivializing the browser as a platform, or at least I’m not intending to. :-) Today, we block any executable content – whether it’s an OCX or a BHO, or any other type of executable. That’s the rub. For an upcoming release, we are encompassing the browser as a mechanism of Trusted Change, so within the scope your IT group deems it acceptable on the whitelist, new ActiveX content can be downloaded using IE’s Install On Demand (IOD) functionality without the user knowing any differently. Thus keeping them secure, yet not getting in the way of workflow when they need to download a new OCX for GoToMeeting, LiveMeeting, etc.

    I think you’ve brought up an important point, Neil. Protecting “from the cloud/in the cloud” is important; but it’s also important to bear in mind that the exploits occurring at that level are fundamentally different. Today’s Windows exploits exist both to compromise systems in order to create a botnet, steal data, create an extortion target, etc. Largely, to (literally) take advantage of the resources on/of that computer.

    The security of an “application”, or frankly of any data “in the cloud”, is a fundamental problem that more and more of us are going to have to start thinking about as critical information begins to seep out onto the Internet whether you want it to or not. But that seepage occurs at layers 1-7 of the OSI model – so approaches to securing against exploits that are “cloud-flavored” are literally a world apart from securing the resources of an enterprise. For organizations wise enough to avoid cloudifying critical data until they can be sure it comes closer to being STRIDE-proofed ( http://blogs.msdn.com/larryosterman/archive/2007/09/04/threat-modeling-again-stride.aspx ), the combination of whitelisting, DLP, and FVE does a good job of truly securing _their_ systems, _their_ data, and _their_ local network resources. Arguably, securing your information while it is on Gmail is both your responsibility and Google’s. Thus my personal belief that any organization that outsources critical IT resources to the cloud without seeing the full security model that backs up the talk is making a _very_ poor decision – but that’s another discussion for another day.

    I couldn’t agree more with your second comment. In fact, my comment on your next blog post mirrors that exact sentiment. Blacklisting got where it is today by being “good enough”. Well, it isn’t anymore, and that’s why we’ve built what we built – a whitelisting approach that does everything it can to approach zero friction from both a systems management and a daily user perspective. Letting employees do what they need to do – within IT-defined guidelines (see my comment on the next blog post for more).

  • 12 Wyatt Starnes   April 6, 2009 at 10:31 am

    Neil,

    I didn’t mean to avoid the social, cultural and usability question that you raised in the thread.

    I agree with you that there are some really tricky issues here with any kind of active filtering – whitelist or blacklist. It is clear that there has routinely been a one-for-one trade-off between risk and usability. We believe that whitelisting presents an opportunity to change this for the better.

    At the end of the day it is really about managing the signal-to-noise ratio. In the case of platform management – we need to pump up the signal and dampen the noise. The question is how.

    Noise comes from many sources – but largely from ambiguous data validation, whether it is the attempt to filter “bad” or undesired code or the process of creating “allow lists” for trusted code. Once we have precisely detected what we want and don’t want, we can employ policy to effect the decisions. Policy must be “quieter” with its decisions in order to meet the goal of a better user experience.

    To the extent that the data sources used for positive and negative filtering are more accurate, we should be able to create better policies, leading to an enhanced user experience.

    I would also add here that there is the question of whether “third-party” agents are (in the long-term) the best way to handle active filtering and policy. Shouldn’t more of the safety and user-experience method be built into the platform? It is in the physical world – why not in cyber?

    I would offer that as we make the transition to drive new benefit from whitelists, we close large blind spots in our platform awareness. I also believe we should revisit the question of “do we really need to add yet another agent to the compute platform to get the full benefit of whitelisting?”

    Why shouldn’t/doesn’t the platform have implicit ability to ask “should I run this code or not?” — I think it should.

    Effectively implemented platform instrumentation coupled with known-provenance, high-value software measurements should improve all of the major feature metrics (security, compliance, better opex, lifecycle stability) all while reducing the load on users to make manual policy decisions, or worse yet – to be “locked down” because we’re “not sure” we are making good measurements and policy decisions.

    IMHO, yet another reason to get the platform vendors and ISVs on board.

    Wyatt.

  • 13 Neil MacDonald   April 9, 2009 at 1:31 pm

    Wyatt,

    You ask, “Why shouldn’t/doesn’t the platform have implicit ability to ask ‘should I run this code or not?’”

    Yup. Completely agree. Microsoft is adding this to Windows 7 with a feature called “AppLocker” (think software restriction policies 2.0). Many mobile devices have this capability. The browser is another platform and it should also have this capability. Ditto for SOA, scripting, etc etc – *any* IT platform should have basic whitelisting enforcement capabilities built in, including emerging x86 virtualization platforms.

  • 14 Whitelisting, Meet Virtualization. Virtualization, Meet Whitelisting.   April 10, 2009 at 8:50 am

    [...] also discussed the foundational power of whitelisting, especially when brought to the application level with application control [...]

  • 15 Gartner and Whitelists « IT in Transition   April 11, 2009 at 3:16 pm

    [...] http://blogs.gartner.com/neil_macdonald/2009/03/31/will-whitelisting-eliminate-the-need-for-antiviru… [...]

  • 16 We now Have a Quorum: Blacklists Aren’t Cutting it.   September 14, 2009 at 5:55 pm

    [...] that it needs to do more at the application level. Rather than take an approach solely rooted in whitelisting or building a global whitelist, Symantec is instead using the Quorum technology to focus on the [...]

  • 17 What did the Infoworld survey on whitelisting not cover? « Circular Insanity   November 10, 2009 at 11:32 am

    [...] Niel McDonald’s @ Gartner has an interesting blog article and discussion about the same http://blogs.gartner.com/neil_macdonald/2009/03/31/will-whitelisting-eliminate-the-need-for-antiviru… [...]

  • 18 Dustin Smyth   January 21, 2010 at 8:36 am

    I must say this is very informative and well guided article to insert and filter data using custom objects in C#. It has made simpler especially for beginners to implement the concept. This blog is awesome and I must congratulate author for sharing the knowledge with us.

  • 19 bluedragon99   February 6, 2010 at 1:16 am

    “Browsers are just one example of a platform on top of the OS. Just because an application control product can control which applications run at the OS level doesn’t mean they can control what applications (i.e. plugins) are allowed to be used within the browser. Ditto for macros and similar constructs in Office. Beyond plugins and macros, what about malicious Javascript, Flash and other types of scripted code downloaded as objects within web pages?”

    When would that be? So you expect the user to keep that browser open (in the browser’s memory, remember) for an entire attack goal to be achieved? It doesn’t work that way in the real world; any exploit analyst could tell you that. They are using droppers. Every time.

    You are fighting the inevitable; there is no other way to stop the malware madness. You can’t even compare whitelisting to a typical AV solution; the difference in a real-world scenario is night and day, protection-wise.

  • 20 Neil MacDonald   February 6, 2010 at 3:09 pm

    Actually, the better application control products can enforce whitelisting within the browser today.

    There is no silver bullet, but that doesn’t mean we give up either. A combination of whitelisting, blacklisting and behavioral protection will provide the best protection. That’s why we no longer publish a Magic Quadrant for AV providers. Please see my research on endpoint protection platforms starting in 2007 – the research is referenced in this blog post:

    http://blogs.gartner.com/neil_macdonald/2009/03/04/defense-in-depth-doesnt-mean-spend-in-depth/

  • 21 Cross-Site Scripting (XSS): Attack Vectors and Defenses | janhenrik dot com   March 29, 2010 at 9:45 am

    [...] 12. MacDonald, Neil. Will Whitelisting Eliminate the need for AntiVirus? Gartner. [Online] Gartner Inc., March 31, 2009. [Cited: December 1, 2009.] http://blogs.gartner.com/neil_macdonald/2009/03/31/will-whitelisting-eliminate-the-need-for-antiviru…. [...]