Neil MacDonald

A member of the Gartner Blog Network

Neil MacDonald
VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…


We Have a Quorum: Blacklists Aren’t Cutting It.

by Neil MacDonald  |  September 14, 2009  |  7 Comments

Symantec recently announced the latest release of its consumer protection technology, which includes a new malware protection technology code-named “Quorum”. Essentially, the technology uses visibility (or lack thereof) into the behavior of executable code across a community to help determine whether a given piece of code is “good” or “bad”. We are working on our full analysis and recommendations for our enterprise clients, but here are my initial high-level observations.

Despite Symantec’s rhetoric, the idea of using visibility of executable code across a community for better security decision making isn’t new. Prevx (which I wrote up in Gartner research as a Cool Vendor in 2006 because of its community approach to endpoint intrusion prevention) has been using “herd” intelligence across its community for years. McAfee’s Artemis, announced more than a year ago, uses a similar approach.

The good news is that Symantec understands that signature-based detection alone is increasingly ineffective and that it needs to do more at the application level. Rather than take an approach rooted solely in whitelisting or building a global whitelist, Symantec is instead using the Quorum technology to focus on the vast greyspace between blacklists (which can’t keep up) and whitelists (which also struggle to keep up and are too restrictive for many end-user desktops – especially for consumers, who have no IT department to manage the whitelist).

By using visibility into code behavior (usage, propagation patterns, prior user history, system calls and so on) across a larger population, Symantec can build more accurate models of whether a given piece of code is “good” or “bad”. No behavioral modeling approach to security is perfect, but the more data points you have, the better the model you can build – and the fewer false negatives and, more importantly, false positives that result when the model is used to make security decisions. Quorum taps into Symantec’s large installed base for precisely this reason.
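To make this concrete, here is a minimal sketch of how community telemetry might be folded into a “good”/“bad”/“unknown” verdict. This is my own illustration – the data fields, thresholds and function names are assumptions, not Symantec’s actual Quorum implementation.

# Hypothetical illustration of community-reputation scoring (not Symantec's
# actual Quorum algorithm); the fields and thresholds are made up for clarity.
from dataclasses import dataclass

@dataclass
class CommunityStats:
    machines_seen: int          # endpoints in the community that have seen this file hash
    days_since_first_seen: int  # age of the file within the community
    infection_reports: int      # endpoints that later reported this hash as malicious

def reputation_verdict(stats: CommunityStats) -> str:
    """Classify a file as 'good', 'bad' or 'unknown' from community telemetry."""
    # Widely seen, long-lived and never implicated: treat as good.
    if stats.machines_seen > 100_000 and stats.days_since_first_seen > 90 and stats.infection_reports == 0:
        return "good"
    # A meaningful rate of infection reports across the community: treat as bad.
    if stats.infection_reports / max(stats.machines_seen, 1) > 0.01:
        return "bad"
    # Rare, brand-new binaries fall into the greyspace: defer to deeper analysis.
    return "unknown"

# A file seen on only a handful of machines comes back "unknown" and warrants extra scrutiny.
print(reputation_verdict(CommunityStats(machines_seen=3, days_since_first_seen=1, infection_reports=0)))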

There are no silver bullets in security, but Quorum is a welcome innovation in endpoint protection, which has fallen woefully behind the bad guys by relying too heavily, for too long, on an increasingly ineffective blacklisting-based protection model at the application level.
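Put another way, the layered model described above boils down to: consult a whitelist first, then a blacklist, and route everything left over – the greyspace – to reputation or behavioral analysis. The sketch below is illustrative only; the hashes and the analyze_greyspace() helper are hypothetical placeholders, not any vendor’s product logic.

# Illustrative whitelist / blacklist / greylist decision flow; the hashes and
# the analyze_greyspace() helper are hypothetical placeholders.
KNOWN_GOOD = {"a1b2c3"}  # hashes of approved executables (whitelist)
KNOWN_BAD = {"d4e5f6"}   # hashes of known malware (blacklist)

def analyze_greyspace(file_hash: str) -> str:
    # Placeholder for community-reputation or behavioral scoring of unknown code
    # (see the earlier sketch); here it simply quarantines and monitors.
    return "quarantine-and-monitor"

def decide(file_hash: str) -> str:
    if file_hash in KNOWN_GOOD:
        return "allow"   # whitelist hit: let it run
    if file_hash in KNOWN_BAD:
        return "block"   # blacklist hit: stop it
    # Everything else is the greyspace neither list can keep up with.
    return analyze_greyspace(file_hash)

print(decide("ffffff"))  # an unknown hash falls through to greyspace handling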


Category: Beyond Anti-Virus, Endpoint Protection Platform, Next-generation Security Infrastructure

7 responses so far ↓

  • 1 Wes Miller (CoreTrace Corporation)   September 15, 2009 at 1:21 pm

    Good post, Neil;

    There is no doubt that blacklist anti-virus simply isn’t providing protection for endpoints anymore, and we need change. Our CEO, Toney Jennings, blogged about this yesterday in a post titled “Anti-virus’ days are numbered”: http://www.coretraceblogs.com/2009-09/anti-virus-days-are-numbered/

    As you might imagine, one thing I’d like to point out is that whitelisting doesn’t have to mean maintaining a master list of approved applications in the cloud. That is one approach, but one that we at CoreTrace think is flawed for the exact reasons you point out.

    Using a whitelist automatically built from each client and then managing change in an efficient way is a significant advance over blacklisting and doesn’t suffer the false positive problems that characterize a behavioral or heuristic approach.

    Thanks for furthering this discussion on the future of endpoint security.

  • 2 Doug Finley (Naknan, Inc.)   September 15, 2009 at 5:17 pm

    There are today several whitelist vendors with different approaches to overcoming some of the problems usually associated with whitelisting. Of course, we think we have the best solution (Security Assistant), but we’re all progressing rapidly.

    Herd intelligence, or whatever you want to call it, can’t tell you if a piece of software is authorized to run on a particular device; whitelisting can, reliably, and without management headaches.

    I suspect the market’s preparing for a gigantic shift away from blacklisting because (1) it is so ineffective, and (2) there are now viable options. You’ve just heard from two of them.

  • 3 Neil MacDonald   September 16, 2009 at 11:25 pm

    Our published analysis and advice regarding the Symantec announcement is located here:
    http://www.gartner.com/DisplayDocument?id=1180713

  • 4 Neil MacDonald   September 16, 2009 at 11:31 pm

    @Wes,

    Yes, managing change from a known good starting point is an alternative approach that helps with some of the issues of whitelisting. Even then, I talk with many clients with mobile users who feel they want the ability to make changes themselves – changes that don’t fit neatly within the notion of managed change. You may choose to enable this as well (with logging, of course) – but at that point we’re back to pretty much “install and run anything you want.”

    As we state in the research – the preferred approach for most end-user desktops will be a combination of whitelisting, blacklisting and greylisting techniques for handling executable code.

    I’ve stated in previous blog postings that a pure whitelisting approach is *exactly* the right approach for embedded devices and some types of users with structured work environments.

  • 5 Neil MacDonald   September 16, 2009 at 11:34 pm

    @Doug,

    It depends on the nature of the worker and the workspace you are trying to protect. See the response above – whitelisting is no more a silver bullet than blacklisting was. For many end-user desktops, a combination of whitelisting, blacklisting and greylisting is the best approach. For most servers and fixed function / embedded devices, a pure whitelisting approach at the application executable level is the best approach.

  • 6 Doug Finley (Naknan, Inc.)   September 18, 2009 at 12:45 pm

    Neil:

    How well whitelisting fits a specific setting depends to a great extent on how you define whitelisting, and on the supporting features/capabilities the whitelisting solution offers. If all we offer is a static whitelist that can be updated only with great effort or by significantly compromising security, there are few environments it would be suited for. In most business settings, for example, frequent changes are either a great headache or a show-stopper if your “solution” doesn’t accommodate them without jeopardizing security. On the other hand, if keeping your whitelist in sync with patch, update, and application deployments adds no (or no appreciable) work for IT staff, then whitelist management becomes a non-issue and whitelisting becomes a viable alternative to everything else.

    Our challenge, as whitelist vendors, is to deliver clearly better security solutions, but in order to be clearly better we must preserve the classic whitelisting level of security (no unauthorized software will execute) in a nearly friction-free package regardless of the environment in which it is deployed. That means we have to recognize and accept the world as it is, flaws and all, and make our solutions work in the presence of those flaws (users will visit infected web sites; they will open infected email attachments; techs and sysadmins will inflict configuration errors on the systems under their control; IT staff will delay some patch deployments and miss some machines completely) without creating additional workload for IT staff.

    The sticking point in all this is cultural. Whitelisting means control. If users are permitted to disable or work around these solutions, we’re back where we started. There’s too much at stake to permit users to install whatever software they want, or think they need. Mobile users may “feel that they want the ability to make changes themselves” whether it impacts security or not, but I would argue that anything that they cannot make a business case for should be disallowed and anything they can make a business case for should be allowed. Of course, some users must have great flexibility, and our solutions must accommodate that, but the vast majority have no such legitimate requirement. And it isn’t an either/or choice. Whitelisting done well can offer a wide range of alternatives, tailored to each individual user. If management wants security, and in every enterprise they really should, they need to cause a cultural change. The whitelist vendors can take care of the rest of it.

  • 7 Wyatt Starnes   September 19, 2009 at 3:24 pm

    Neil,

    Thanks for continuing to cover this subject. I agree with your post, and many of the comments. Ultimately the best scenario will be a combination of whitelist, blacklist and even some modest use of greylist.

    At the end of the day, this whole discussion involves an “old” notion of differentiating signal from noise. We hosted a seminar in Washington DC last week where we zoomed in on this with the standard-setters and architects at both NIST and NSA.

    Essentially both whitelists and blacklists are “noise filters” — where the goal is to use the best knowledge and intelligence that we have available to make sure that the “platform” (any device that runs code) is both protected (via blacklist) and “in state” (using whitelisting).

    When I founded and built Tripwire, I think we took “self-learned” whitelisting (aka Integrity Monitoring) to its technical limit. Since I left around 5 years ago, Tripwire has continued to refine the notion of deriving the reference from the target platform.

    In practice (learned from tens of thousands of installations and thousands of customers), we have found that there are many limitations to “self-learned or platform/provisioning-derived” whitelisting.

    Most notably:

    How do I definitively know that I am creating a trusted baseline from a platform in the wild?

    And how do I scale to large installations when I need to carry one image for every platform under management?

    And if an exception occurs on a platform, how do I turn that noise into signal (i.e., which package changed)? This is also called “intelligent correlation” by some.

    So IMHO a multi-tiered architecture is required, where application and individual code origin is established via cloud services populated with “measurements” directly traceable to the authoring ISVs.

    So, to differ with you, Wes: CLOUD SERVICES ARE KEY.

    (These known-provenance measurements can also be used by other “self-learned” methods to lower the prevalence of false positives.)

    At the domain level these measurements can serve as the foundation for establishing the “trusted reference” in parallel to domain-specific software and configuration settings.

    And on the device or platform any “agent” can be used depending on the specific interest and use case of our customers.

    So welcome aboard, Symantec! Let’s go solve some real customer needs now!

    Wyatt.