
Thought for Thursday: Extending Whitelisting to Information Access

by Neil MacDonald  |  September 9, 2010  |  7 Comments

I’ve written multiple times on the power of whitelisting (default deny) for applications running on end-user workstations and servers. I am convinced that whitelisting should be foundational in our strategy for securing endpoints.

So far, the application control vendors have focused on whitelisting which applications are allowed to run. This is straightforward in concept, but more difficult in practice. It works well for servers and for those users whose compute environment is relatively static, but it becomes harder in environments where applications change frequently or where end users frequently need to modify and extend their computing environments.

Is there another approach?

What if we instead placed mandatory access controls, with a default deny approach, on the data?

The thinking would be to allow any application to run, but to limit access to sensitive data stored on the endpoint to only those applications that require it (the whitelist). Instead of whitelisting which applications are allowed to run, we whitelist which applications are allowed to see a given set of sensitive data. How? Likely with some type of cryptographic protection mechanism in which only the whitelisted applications are given the key.

To be clear, not all data access on the endpoint would need to be whitelisted – only the sensitive data (including the OS system files).

Any application can run. Even malware. But without being whitelisted for sensitive data access, it can't get at the sensitive information. The vast majority of malware is rendered harmless.
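
As a rough sketch of how such a mechanism might work – the local key broker, the hash-based application identity and the "finance-reports" label below are illustrative assumptions, not a description of any particular product:

```python
# Sketch only: sensitive data is encrypted at rest, and a local broker releases the
# decryption key only to applications whose binary hash is whitelisted for that data.
import hashlib
from cryptography.fernet import Fernet  # assumes the third-party 'cryptography' package

# Whitelist: data label -> SHA-256 hashes of the application binaries allowed to see that data
DATA_WHITELIST = {"finance-reports": set()}

# One key per sensitive data label, held by the broker rather than by the applications
_KEYS = {label: Fernet.generate_key() for label in DATA_WHITELIST}

def _app_identity(path):
    """Identify an application by the SHA-256 hash of its executable."""
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

def whitelist(app_path, data_label):
    """Administrator action: approve an application for a given set of sensitive data."""
    DATA_WHITELIST[data_label].add(_app_identity(app_path))

def read_sensitive(app_path, data_label, ciphertext):
    """Default deny: the key is released only to applications whitelisted for this data.
    Anything else - including malware that was allowed to run - never gets the key."""
    if _app_identity(app_path) not in DATA_WHITELIST.get(data_label, set()):
        raise PermissionError(f"{app_path} is not whitelisted for '{data_label}'")
    return Fernet(_KEYS[data_label]).decrypt(ciphertext)
```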

Food for thought.


Category: Beyond Anti-Virus, Endpoint Protection Platform, Information Security

7 responses so far ↓

  • 1 Saqib Ali   September 9, 2010 at 11:11 am

    hmm interesting concept. Sort of like OAuth/UMA for controlling access to data in web-based (clouded) apps……

  • 2 Rishi Bhargava   September 9, 2010 at 11:56 am

    Neil,
    Brilliant thought. I support it, and there are some other very interesting extensions of your idea that could make whitelisting mainstream. An example is controlling network access based on the whitelisted applications, and/or controlling access to other system resources like internal critical servers.

    I think of the current whitelisting technology as restricting CPU cycles to whitelisted apps only. This concept can be extended to other system resources.

    Regards,
    Rishi

  • 3 Swaroop Sayeram   September 9, 2010 at 1:11 pm

    Absolutely agree with you. Whitelisting products should provide features to read/write protect sensitive data. In fact, that would provide a utility beyond security. It could be extended to protect sensitive configuration from accidental changes.

  • 4 Neil MacDonald   September 9, 2010 at 2:41 pm

    There are really two points being made:

    1) Whitelisting which applications can access which data can be done in conjunction with application whitelisting, further refining which applications and processes can access which files (a small sketch of this combined policy follows below). Some of the application control vendors do this already. So not only can we control which applications run, we can also control what data/folders they access.

    However, there is another way to look at this:

    2) This could be used as an alternative to application control/whitelisting: let any application run, but only allow specified applications to access sensitive data.

    There is value in both approaches.
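
    A minimal sketch of what the combined policy in (1) might look like – the application names, path patterns and simple matching are hypothetical, for illustration only:

    ```python
    # Illustrative only: an application whitelist plus, for each allowed application,
    # the data/folders it may touch. Names and paths are hypothetical.
    from fnmatch import fnmatch

    RUN_WHITELIST = {"winword.exe", "payroll.exe"}   # which applications may run at all

    DATA_ACCESS = {                                  # which data each allowed app may access
        "payroll.exe": [r"C:\Finance\*"],
        "winword.exe": [r"C:\Users\*\Documents\*"],
    }

    def may_access(app, path):
        # Default deny: the app must be allowed to run AND the path must match one of its grants
        return app in RUN_WHITELIST and any(fnmatch(path, pat) for pat in DATA_ACCESS.get(app, []))

    print(may_access("payroll.exe", r"C:\Finance\Q3.xlsx"))  # True  - allowed to run and granted
    print(may_access("malware.exe", r"C:\Finance\Q3.xlsx"))  # False - not on the run whitelist
    print(may_access("winword.exe", r"C:\Finance\Q3.xlsx"))  # False - runs, but no grant for Finance
    ```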

    Neil

  • 5 Tom Murphy   September 9, 2010 at 5:36 pm

    Bit9 has implemented both application whitelisting (default deny) and granular file integrity controls … controlling the processes that can read/write sensitive data files.

    We call the capability File Integrity Control. Unlike File Integrity Monitoring, which is passive, File Integrity Control defines the processes that can access or modify sensitive data or directories.

    This capability was introduced in V6.0 of Bit9 Parity … earlier this year.

    The same concept of granular control is applied to registry changes, portable storage device usage, memory protection and operating system files … define the processes that can read/write/modify these resources.

  • 6 Kishore Y   September 9, 2010 at 6:35 pm

    Neil,

    I think the technology that can address the use case you are talking about existed long before whitelisting came along, in the form of access control lists based on process.

    The question you are asking is whether that approach actually solves the security problem. Most of the malware you see nowadays doesn't run in its own context. As you know, the dangerous malware runs in the context of approved applications like system services or browsers. Unless the approved application's context is guaranteed to be preserved (memory protection?), one cannot guarantee that only whitelisted applications can access sensitive data.

    The other side is determining what counts as critical data on the system. Is the registry hive? System files? Memory? The network? Doesn't it smell like another configuration nightmare?

  • 7 Neil MacDonald   September 9, 2010 at 7:00 pm

    @Kishore,

    Agree that the notions of default deny and mandatory access controls aren't new, but I don't see the ability to assign access controls based on process built into the most common OS, Windows.

    On the positive side, Microsoft added Windows Service Hardening in Vista and Windows 7, which provides some of this. However, only Windows services have their own SIDs; the capability was not extended to every Windows application or process.
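
    For example (a hedged sketch – the service name and file path are hypothetical), a file ACL can reference a per-service SID through the NT SERVICE\<name> account, which is exactly the per-process granularity that ordinary applications lack:

    ```python
    # Windows Vista+ only, sketch: grant read access on a sensitive file to a service's
    # own SID ("NT SERVICE\<name>"). Service and file names are hypothetical.
    import subprocess

    SERVICE = "MyBackupService"              # hypothetical service
    SENSITIVE_FILE = r"C:\Data\finance.db"   # hypothetical sensitive file

    # Make sure the service's access token actually carries its per-service SID
    subprocess.run(["sc.exe", "sidtype", SERVICE, "unrestricted"], check=True)

    # Add a read ACE for the service SID to the file's ACL (icacls ships with Windows)
    subprocess.run(["icacls", SENSITIVE_FILE, "/grant", rf"NT SERVICE\{SERVICE}:(R)"], check=True)
    ```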

    Most Windows vulnerabilities (90% or so) allow malware to run in the context of the user. If the user is running with admin rights – yup – everything is attackable. If they are running with reduced rights, the OS-based separation should hold.

    The final point is a good one – any whitelisting approach suffers from the fundamental problem of “who builds and updates the whitelist?” But which is the harder configuration problem:

    identifying which applications a user should be allowed to run and keeping this up to date over time as applications change,
    or
    identifying which sensitive data is accessed by which applications and keeping this up to date over time?