Neil MacDonald

A member of the Gartner Blog Network

Neil MacDonald
VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…

Apple’s iOS 7 is a Significant Step Forward

by Neil MacDonald  |  August 14, 2013  |  2 Comments

From a security perspective, I’ve been keeping a close eye on iOS and Android. From what I’ve seen so far, iOS 7 is a significant step forward.

To get deeper insight into the changes, I've asked my colleague, Gartner VP and Distinguished Analyst Ken Dulaney, to provide a guest post. Here's what Ken has to say:

Sources say that iOS 7 will be released to the market and on new devices on September 10, 2013. While consumers will be excited by this release, there are many improvements for the enterprise, an area that has been in desperate need of more features from Apple. Gartner has produced a new note for clients on iOS 7 for the enterprise titled "iOS 7 Offers Major Improvements for the Enterprise". Since the release of iOS 4, where Apple expanded its Mobile Device Management (MDM) APIs, Apple has provided only limited enhancements to enterprise security and manageability. This has changed with iOS 7. In this release, Apple – through its single sign-on, background processing and per-app VPN improvements – has enabled enterprises, in cooperation with a licensed MDM solution, to dramatically change how iOS devices are managed.

Today, most devices are managed either by limited policies (e.g., device wipe, password enforcement), by containers that users must log into to access business applications, or by images where the entire disk is managed through a single console. BlackBerry introduced a new idea, called managed communities, when BlackBerry 10 devices were introduced with BlackBerry Balance. Apple's approach is also classified as a managed community.

Managed communities assume that applications start off in the consumer realm but can join, at the user's option, an enterprise community governed by an MDM tool. Once an application becomes a member, it is granted or denied certain privileges to work with the other application members of the community. This provides separation of business applications from consumer applications but doesn't require the user to log into a container to access business applications. Consumer and business applications are equally accessible, but the business applications are governed by advanced policies supported by the MDM tool.

One of the key features in iOS 7 addresses the issue that, when an attachment is opened, the file to be viewed must be copied into the sandbox area controlled by the reading application (all iOS applications are sandboxed, with true separation of data to prevent viruses from spreading), and that reading application may in turn be a consumer application. This flow of enterprise information from a protected to an unprotected area troubled many iOS supporters. With the new features, this flow can be controlled.
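To make the managed-community idea concrete, here is a minimal sketch of the kind of open-in decision an MDM-governed device makes. This is illustrative Python with hypothetical names and flags, not Apple's actual MDM schema:

```python
from dataclasses import dataclass

# Hypothetical model of an iOS 7-style "managed open-in" decision.
# Class names, app names and policy flags are illustrative, not Apple's MDM schema.

@dataclass
class App:
    name: str
    managed: bool  # True if the app was installed/claimed by the MDM tool

@dataclass
class OpenInPolicy:
    allow_managed_to_unmanaged: bool = False  # enterprise data may not leave the community
    allow_unmanaged_to_managed: bool = True   # personal data may flow into managed apps

def may_open(document_owner: App, target_app: App, policy: OpenInPolicy) -> bool:
    """Decide whether target_app may open a document held by document_owner."""
    if document_owner.managed and not target_app.managed:
        return policy.allow_managed_to_unmanaged
    if not document_owner.managed and target_app.managed:
        return policy.allow_unmanaged_to_managed
    return True  # flows within the same realm are unrestricted

# Example: a corporate mail attachment opened in a personal PDF reader is blocked.
mail = App("Corporate Mail", managed=True)
reader = App("Personal PDF Reader", managed=False)
print(may_open(mail, reader, OpenInPolicy()))  # False
```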

There are many more enterprise features coming and many implications for how you manage iOS 7 devices that we explore in detail in the full research note for Gartner clients.

Bottom line: Start adapting your mobile device security and management strategy to include iOS 7 now.


Category: Mobile security

Virtualization, Containers and Other Sandboxing Techniques Should be on Your Radar Screen

by Neil MacDonald  |  March 16, 2013  |  Comments Off

 

The idea of "sandboxing" potentially malicious content and applications isn't new, but interest in this type of approach on Windows desktops is growing. Further, the increasing variety of virtualization and abstraction techniques available on Windows creates isolation that can be used to provide security separation – aka "sandboxing".

Given the innovation around virtualization techniques and the decreasing effectiveness of signature-based approaches to protect us from advanced targeted attacks and advanced persistent threats, we believe that there will be a renaissance in sandboxing/virtualization/container technologies on Windows and mobile devices.

The idea is compellingly simple: define a core set of OS components and applications as "trusted". Then, if you need to handle a piece of unknown content or an unknown application, by default treat it as untrusted and isolate its ability to damage the system, access enterprise data and launch attacks on other enterprise systems.
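As a rough illustration of that default-deny posture, here is a minimal sketch. The application names and placement labels are hypothetical, not any vendor's actual product logic:

```python
# Illustrative default-deny placement decision; application names and labels
# are hypothetical, not any vendor's actual product logic.

TRUSTED_APPS = {"outlook.exe", "winword.exe", "excel.exe"}  # the defined "trusted" core

def placement_for(executable: str, content_origin: str) -> str:
    """Decide where something runs: on the trusted host or in an isolated container."""
    if executable in TRUSTED_APPS and content_origin == "internal":
        return "host"                # trusted application handling trusted content
    return "isolated-container"      # everything else is contained by default

print(placement_for("winword.exe", "internal"))        # host
print(placement_for("unknown_download.exe", "web"))    # isolated-container
print(placement_for("winword.exe", "web"))             # trusted app, untrusted content
```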

In reality, it is harder than this. There is no silver bullet in information security. Isolation can be powerful, but has its drawbacks.

One issue is that in the real world, in order to be useful, you can’t completely lock out all content and applications. There will be cases where trusted applications need to handle untrusted content. There will be cases where end users want to download and run new, untrusted applications and they will want these applications to handle trusted content. Untrusted content and applications may have a need to persist on the file system and survive a reboot. All of these use cases involve risk, especially if end users are called upon to make decisions as to when and where untrusted content and applications can be “trusted”. An analogy will make this clearer. Even the strongest prison needs doors in order to be useful and those same doors can be used to escape.

Another issue is that hackers will target the containment mechanism itself (e.g., the prisoners cut a hole in the fence, tunnel out, or someone from the outside flies a helicopter over the fence and gets them out). The highly publicized recent Java zero day was a direct result of a breach of containment. Bromium (one of the solution providers in the virtualization containment space) recently presented on this topic at Black Hat EU, demonstrating how to break the containment of several leading sandboxing solutions. Interestingly, rather than attack the walls/doors of the containment mechanism directly, their breaches originated by attacking the OS kernel underneath. In our analogy, it's the equivalent of saying "I don't care how thick your walls and roof are, or what they are made of – these containment structures are built on a foundation with a bunch of holes".

To help clients cut through the hype, I've just published a research note for clients titled "Technology Overview for Virtualization and Containment Solutions for Advanced Targeted Attacks". In the note, we provide a framework for evaluating these virtualization/containment/sandboxing solutions and then use the framework to take a close look at the pros and cons of Bromium's solution.

There are many emerging alternatives at all layers in the stack. Make sure you understand the pros/cons of the solutions and approaches before you buy.


Category: Beyond Anti-Virus, Endpoint Protection Platform, Next-generation Security Infrastructure, Virtualization, Virtualization Security

This Just In: Signature-based Protection Ineffective Against Targeted Attacks

by Neil MacDonald  |  January 31, 2013  |  1 Comment

 

Seriously, is anyone surprised?

I’m sure you’ve seen the news about Chinese infiltration at the New York Times:

http://www.nytimes.com/2013/01/31/technology/chinese-hackers-infiltrate-new-york-times-computers.html

According to the article:

Over the course of three months, attackers installed 45 pieces of custom malware. The Times — which uses antivirus products made by Symantec — found only one instance in which Symantec identified an attacker’s software as malicious and quarantined it, according to Mandiant.

Signature-based protection alone hasn’t been enough to protect endpoints for years – see this post titled “Is antivirus obsolete?”. That’s why Gartner dropped its antivirus magic quadrant in 2006.

Further, as with other advanced attacks, application control (also referred to as whitelisting) solutions would likely have stopped this attack in its tracks – see this post from 2010.

Unfortunately, application control has a historical reputation of not being deployable or manageable for end-user systems. The reality is that application control can and will be successfully deployed for end-user systems, and it provides excellent protection from these types of attacks. I just published a research note for Gartner clients on this topic titled "How to Successfully Deploy Application Control" that provides specific guidance on adopting this approach.
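At its core, application control is an allow-by-exception execution policy. Here is a minimal sketch of the idea; the approved-hash entry is a placeholder:

```python
import hashlib

# Minimal sketch of hash-based application control (whitelisting).
# The approved-hash value below is a placeholder, not a real policy entry.

APPROVED_HASHES = {
    "<sha-256 of an approved binary, distributed via policy>",
}

def may_execute(path: str) -> bool:
    """Allow execution only if the binary's hash appears in the approved list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES

# Custom malware dropped by an attacker will not match any approved hash,
# so it is blocked by default -- no signature of the malware itself is required.
```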

Why aren’t you deploying this type of approach, at least for some segments of your user population?


Category: Beyond Anti-Virus, Endpoint Protection Platform

Software Defined Data Centers and Security–What’s in a Name?

by Neil MacDonald  |  January 29, 2013  |  2 Comments

Last fall, I wrote a research note for Gartner clients titled “The Impact of Software-Defined Data Centers on Information Security” that explored the impact of software defined infrastructure on security – and the evolution of information security infrastructure to become software-defined itself.

Today, I saw that NetCitadel had announced an offering in this emerging space and had used both the “software defined security” and “security policy orchestration” terms.

Many vendors have jumped on the “software defined X” bandwagon (just like “Cloud” a few years ago) including:

  • software defined networking
  • software defined storage
  • software defined security
  • software defined infrastructure
  • software defined data centers

But, what does “software defined” really mean?

A common misconception is that “software defined” means that everything is accomplished in software. That’s not correct. Even within software defined networking, ultimately something has to connect to a wire and forward packets in the data plane. The same is true with security policy enforcement.

Here’s what I propose: “Software defined” is about the capabilities enabled as we decouple and abstract infrastructure elements that were previously tightly coupled in our data centers: servers, storage, networking, security and so on.

I believe that, to truly be "software-defined", these four foundational characteristics must be in place:

  • Abstraction – the decoupling of a resource from the consumer of the resource (also commonly referred to as virtualization when talking about compute resources). This is a powerful foundation as the virtualization of these resources should enable us to define ‘models’ of infrastructure elements that can be managed without requiring management of every element individually.
  • Instrumentation – opening up of the decoupled infrastructure elements with programmatic interfaces (typically XML-based RESTful APIs).
  • Automation – using these APIs to wire up the exposed elements with scripts and other automation tools, removing "human middleware" from the equation (see the sketch after this list). This is an area where traditional information security tools are woefully inadequate.
  • Orchestration – beyond script-based automation, automating the provisioning of data center infrastructure through linkages to policy-driven orchestration systems where the provisioning of compute, networking, storage, security and so on is driven by business policies such as SLAs, compliance, cost and availability. This is where infrastructure meets the business.
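As a sketch of what the instrumentation and automation characteristics look like in practice, here is a minimal example of script-driven provisioning. The controller URL, endpoint, payload fields and token are hypothetical, not any particular vendor's API:

```python
import requests

# Hypothetical example of script-driven provisioning against a RESTful
# management API -- the URL, endpoint, payload fields and token are
# illustrative only, not any particular vendor's interface.

CONTROLLER = "https://sdn-controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

def provision_firewall_policy(workload_id: str, tier: str) -> None:
    """Attach a default-deny firewall policy to a newly provisioned workload."""
    policy = {
        "workload": workload_id,
        "rules": [
            {"action": "allow", "port": 443, "source": f"{tier}-web"},
            {"action": "deny", "port": "any", "source": "any"},  # default deny
        ],
    }
    resp = requests.post(f"{CONTROLLER}/firewall/policies",
                         json=policy, headers=HEADERS, timeout=10)
    resp.raise_for_status()

# An orchestration system could call this automatically whenever a new VM is
# provisioned, driven by business policy (SLA, compliance, data sensitivity),
# with no "human middleware" in the loop.
```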

If those are the four characteristics, what is the goal of software defined infrastructure?

To me, it’s the capabilities enabled by the 4 characteristics above that are really driving the interest in “software defined everything”:

  • Agility – speed to respond by removing human middleware, speeding the ability of infrastructure to be provisioned.
  • Adaptability – the ability to change infrastructure usage to dynamically meet changing requirements and changing context – such as location, the sensitivity of the data being handled and so on. Also, the ability to adapt to changes in the infrastructure elements underneath without changing the models being managed (new hardware, new vendors, etc.).
  • Accuracy – by removing the human middleware component, reducing the chance of misconfiguration and mistakes by making infrastructure "programmable" and tying this into automation systems.
  • Assurance – confidence that what is deployed accurately meets your policy and compliance requirements

These four characteristics and the four capabilities that arise from being "software defined" are the key to all software defined infrastructure, including security. So when you hear the hype about "software defined X", see if it delivers against the above characteristics and capabilities.

Ignore the hype and navel-gazing arguments on the definition of “software defined”. It’s all about the capabilities enabled.


Category: Cloud Security, Next-generation Security Infrastructure, Software Defined Data Center, Virtualization Security

Virtual Firewalls or Physical? Wrong Question.

by Neil MacDonald  |  November 5, 2012  |  2 Comments

I still see people getting bogged down in rather meaningless arguments as to whether or not firewalls will be virtualized.

They will (and, in fact, are).

The bigger trend is the shift from proprietary hardware to software running on commodity hardware (in almost all cases, x86). That’s the big shift. Whether or not a given security control is packaged as a virtual machine is a matter of requirements (and to some extent preference).

Some information security curmudgeons prefer to see a separate box perform the security policy enforcement. They like the sense of "strong" separation of duties that comes with security controls being embedded in a separate physical appliance. The mistake here is equating physical separation with logical separation of duties, or an outdated belief that "infrastructure can't protect infrastructure". Absolutely, we must maintain separation of security policy formulation, but it is a mistake to assume that the policy enforcement can't take place within the infrastructure itself.

Indeed, a recent 2012 Gartner survey (full copy for clients here) showed a preference for virtualized security controls over external security controls run outside the VM environment for virtualization/private cloud projects. Further, respondents indicated a preference for integrating security controls (e.g., VMware's vShield) within the platform – in other words, integrated into the infrastructure.

Figure 9. Virtualization vs. Cloud

But again, an argument as to whether or not these should be run as virtual machines / virtual appliances misses the broader shift to software-based security controls that can be placed in physical appliances, in virtual appliances, installed as software on commodity hardware or in the cloud (as IaaS-based virtual machines).

In a recent research note for clients titled "The Impact of Software Defined Data Centers on Information Security", I outlined many of the drivers toward software-based security controls in the context of the switch to software-defined security:

A common misconception with the shift to software-defined security (SDSec) is that all security controls must move to software. There are cases where this makes sense, and cases where it does not. The security data plane (where packets and flows are inspected) can benefit from the processing power of hardware-based inspection. Like SDN, hardware has a role to play in SDSec, especially when high throughput is needed. However, there are cases where SDSec policy enforcement is useful, such as:

  • To scale out (as opposed to hardware scale-up) and parallelize the enforcement of security controls.
  • To potentially reduce the cost, as compared with the use of physical appliances.
  • To speed the provisioning of security controls by making their provisioning as easy as provisioning a new VM.
  • To enable flexible placement of security controls. For example, as most data center appliances shift to standardized hardware, the security controls may be placed onto the infrastructure platforms — for example, embedded within a storage array or as a software blade in a router. Another example would be placement within the SDN controller.
  • To provide visibility into blind spots, such as inter-VM communications, without requiring all traffic to be routed to physical appliances.

Virtual or physical? The answer is both!

The myth is that this is an either/or question … and add to this – on branch routers and in the cloud and on software-based SDN controllers and on commodity hardware and in converged servers – all within a consistent management console and policy management framework. The future of information security is in software.

You choose what makes the most sense for your workloads, not the vendor. Favor vendors that get this and support your journey.


Category: Cloud, Cloud Security, Next-generation Security Infrastructure, Virtualization, Virtualization Security

What the Most Recent Zero Day in IE Should Teach Us

by Neil MacDonald  |  September 22, 2012  |  Comments Off

 

I saw yesterday that Microsoft had released the out-of-band patch for Internet Explorer, as it had committed to do. Certainly, Microsoft's motivation to quickly release the patch out of band was affected by calls from various enterprises and governments to ban the use of IE until the issue was resolved.

What can we learn from this incident? This is not the first zero day in Internet Explorer, and it will not be the last.

Google Chrome has had them. So has Firefox.

When will we learn? The answer isn't to switch browsers; the answer is to standardize on more than one browser.

After a similar zero-day incident with IE in 2009, I worked with my colleagues David Smith and Ray Valdes to put together a research note for clients in early 2010 titled "Organizations Should Still Say No to Standardizing on One Browser". The research note provides multiple justifications for enterprises to standardize on two or more browsers. In the research, we specifically called out the zero-day scenario:

Offsetting the reduced patching of a single version, the recommended approach of not standardizing on one browser would have provided immediate alternatives for those who were looking to take action during any of the recent zero-day security issues with IE. By avoiding dependency on a single supplier, an enterprise provides itself more agility in the event that the supplier exits the market or fails to adequately protect and secure its product. If an enterprise officially supported or enabled multiple browsers, it could simply instruct users to use the other browser in case of such an event (and temporarily block the use of the vulnerable browser). Instead, what results is often panic, scrambling and overreactions, such as some calling to ban IE entirely (which is impossible, because it is part of the Windows OS). Because all browsers contain yet-to-be-discovered vulnerabilities, such an overreaction doesn't solve anything, and simply moves the issue to another browser and another vendor.

That pretty much describes what we saw recently (again) with IE.

Don’t let this happen again. Standardize on two or more browsers for users.


Category: Microsoft, Microsoft Security, Windows 7, Windows 8

Is Antivirus Obsolete?

by Neil MacDonald  |  September 13, 2012  |  3 Comments

I blogged about this question years ago, but a recent blog on CSO got me thinking once again. Has anything changed?

Thoughts:

1) The question "Has antivirus outlived its value?" is wrong. AV hasn't been AV for years. Gartner stopped calling the market "AV" back in 2006. Modern Endpoint Protection Platforms (EPP – the term Gartner has used since 2006 to describe the market) include a variety of protection styles – signature-based and non-signature-based – to protect machines.

In other words, AV has been obsolete since 2006, but signature-based antimalware protection still lives on as part of a defense-in-depth strategy to protect endpoints.

2) Here’s how I’ve explained this in a graphic for the past five years:

(Graphic: a pyramid of protection styles, with whitelisting at the base)

Reactive, signature-based detection mechanisms alone are not enough. A market-leading EPP should provide a variety of protection styles that combine whitelisting, blacklisting and heuristics-based protection approaches in a system where these elements work together.

3) If you have a signature, by all means use it. Signatures (if you have them) can be much more efficient and have a lower chance of introducing a false positive for a user than behavioral heuristics. Think of it this way: would you rather have your children inoculated against measles and smallpox, or have them get infected and let their adaptive immune system identify it and respond?

4) Whitelisting-based approaches are at the bottom of the pyramid and are a more foundational and critical approach to protecting any type of endpoint – desktop, mobile device or server. Why doesn't iOS require antimalware? Two big reasons – reduced user rights and the Apple App Store functioning as an implicit whitelist. If we can bring this model to PCs, the need for antimalware will be reduced as well. By the way, this is exactly the model Microsoft is using on the WinRT side of Windows 8. Unfortunately, the legacy Windows desktop side is still there and must be secured, so traditional EPP protection is still needed (see this recent research note on Windows 8 security for clients).
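Here is a simplified sketch of how those protection styles might be layered in a single decision path. The hash values and heuristic engine are hypothetical stand-ins, not any particular vendor's implementation:

```python
# Simplified sketch of layering protection styles in an EPP; the hash values
# and heuristic engine below are hypothetical stand-ins for real products.

KNOWN_GOOD = {"<sha-256 of an approved application>"}   # whitelist
KNOWN_BAD = {"<sha-256 of known malware>"}              # signature blacklist

def heuristic_score(file_bytes: bytes) -> float:
    """Stand-in for a behavioral/heuristic engine; returns a suspicion score 0..1."""
    return 0.9 if b"suspicious-marker" in file_bytes else 0.1

def verdict(file_hash: str, file_bytes: bytes) -> str:
    if file_hash in KNOWN_GOOD:                # known good: cheapest, most reliable decision
        return "allow"
    if file_hash in KNOWN_BAD:                 # known bad: if you have a signature, use it
        return "block"
    if heuristic_score(file_bytes) > 0.8:      # unknown: fall back to behavioral heuristics
        return "block"
    return "allow-with-monitoring"             # unknown and not obviously bad

print(verdict("<sha-256 of an approved application>", b""))  # allow
```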

It comes down to this – if you have a general-purpose OS where users run around with full admin rights and can download arbitrary executable code from anywhere, then signature-based antimalware protection, as part of an integrated EPP solution, should be a requirement to protect the user and the information being handled on the device.

That is the state of today's Windows PC. And yes, for those who are still in denial, Apple's Mac OS too.

Oh, and Android, by the way – this is one of the factors behind Google's recently announced acquisition of VirusTotal.


Category: Beyond Anti-Virus, Endpoint Protection Platform, Information Security, Next-generation Security Infrastructure, Windows 8

If a Tree Falls in the Forest, is it Encrypted?

by Neil MacDonald  |  September 6, 2012  |  1 Comment


There's a story behind the title of this blog post.

Recently, I had a discussion with a client regarding Microsoft's BitLocker. One of the issues I call out in my research on BitLocker is that (unlike competing third-party products), Microsoft doesn't offer an option to synchronize the pre-boot PIN with the Windows login credential. In a securely deployed system leveraging the TPM, the end user enters a PIN to unlock the drive, Windows boots, and then the user is prompted for their Windows credential.

Net/net, the user is prompted twice and must enter two different credentials. Competitors' solutions for full-drive encryption enable the Windows password to be synched to the preboot environment so that the credential entered to unlock the drive also logs the user into Windows – a "single sign-on", so to speak, where only one credential is entered.

Clearly, there are many enterprises that prefer the latter scenario. However, Microsoft considers the synching of the Windows credential a potential security risk, so it doesn't support this option.

However, many organizations licensed under Software Assurance on the Windows OS get rights to BitLocker with the Enterprise version of Windows 7 (in other words, it appears to be "free").

Hmmmm. What to do?  End user acceptance versus security versus free.

The client had decided to implement BitLocker with no preboot authentication, thus the end-user would only be presented with the Windows login prompt.

Essentially, while the drive is technically encrypted, there are no controls on the retrieval of the encryption keys. As soon as you boot, the drive is unlocked.

So I asked them “If an encrypted drive automatically unlocks on boot with no checking of credentials, is it really encrypted?” (kinda like “if a tree falls in the forest and no one is there to hear it, does it make a sound?”)

I explained to them that this was a grey area, and that I could not endorse the deployment of BitLocker in this way. I'm not a lawyer, but it sure seemed to me that the drive can't be considered encrypted if anyone (including the bad guys, for example if the device is lost or stolen) can boot it and have the drive unlock itself.

So I asked them if they had consulted with their legal counsel.

Good news — they had.

Bad news — their legal counsel took the same position I had taken. Deploying BitLocker in this way didn’t enable them to claim compliance with a requirement they had to encrypt their laptop drives.

By the way, does Windows 8 change this?

No and Yes – see this research note for clients

No – Microsoft doesn’t support the synching of Windows credentials to the preboot environment. So the same problem remains for laptop users.

Yes – For fixed device scenarios, Microsoft now supports a form of automatic network-based authentication that can be used as a form of authentication credential instead of a user-entered PIN. This works well for fixed desktops and servers encrypted with BitLocker, but requires Windows 8 and Windows Server 8.

If you’ve deployed BitLocker on Windows 7 with no preboot authentication, you might want to check with your legal counsel to get their position on whether or not this meets internal and/or regulatory requirements to encrypt hard drives.


Category: General Technology, Information Security, Microsoft, Microsoft Security, Windows 7

Five Myths and Realities of Virtualization Security

by Neil MacDonald  |  September 6, 2012  |  3 Comments

I’ve been researching the intersection of virtualization and security since 2007 and find myself continually running into these myths pertaining to virtualization and security:

1) Myth: Physical is better than virtual.

Reality: Define "better". Software and virtual appliance-based security controls are more adaptable to the rapidly changing infrastructure requirements of a modern, virtualized data center. A recent case study presented by Intuit at VMworld documented that its time to secure a newly provisioned VM dropped from 30 days to 30 minutes using software-based, automatically provisioned security controls.

2) Myth: Physical security controls provide better separation of duties than virtual controls.

Reality: This confuses physical separation with logical separation. Role-based access control to security control functionality, as well as the use of a separate security and management control plane, provides the necessary separation of duties. A related myth is that "infrastructure can't protect infrastructure". Sure it can – and quite well.

3) Myth: Physical security appliances are faster than virtual implementations.

Reality: Yes and no. Physical appliances look faster if you think of security as the serialized application of security policy enforcement at 'choke' points in the network (like placing an IPS at the perimeter of the enterprise or a next-generation firewall at the perimeter of the data center). The mistake in this thinking is the rationing of security policy enforcement based on physical network topology. Some of this is caused by the cost of physical appliances. Some of this is a byproduct of physical network topology. In both cases, challenge the assumption that placing big boxes at aggregation points is the best architecture. Parallelize the security policy enforcement closer to the workloads they are protecting using hypervisor-based or virtual appliance-based security controls.

4) Myth: Virtual security appliances won't achieve 40 Gbps of inline IPS inspection speed.

Reality: True, at least in the next few years – at least in a single box. The myth is in the need for 40 Gbps of inline speed – related to #3 above – the future of information security (like IT in general) is scale-out, not scale-up. Bigger and bigger proprietary boxes that consume an ever-increasing amount of our budget are not the way forward. Ask the Unix vendors. Commodity x86 computing cycles are the future. Four Intel-based servers, each providing 10 Gbps of inspection speed, can do this today (see virtual firewalling benchmark data from Juniper/Altor). Our current security architectures, based on the rationing of security controls at network choke points, are a historical artifact, not necessarily the best path forward.

5) Myth: The future of information security is all virtual

Reality: The future is hybrid – physical and virtual security controls working cooperatively. Both will be used. Physical versus virtual is an enterprise deployment option, not a vendor dictate. Vendors should let you choose. Oh, and add cloud to this as well – for example, placing the security control with workloads when consuming Infrastructure as a Service.


Category: Cloud Security, Virtualization Security

What We Need is the Equivalent of Apple for Enterprise Data Centers

by Neil MacDonald  |  August 29, 2012  |  Comments Off

I’ve spent the last three days in Silicon Valley – some of it at VMworld and some of it with a client. With the flight out and back to the West Coast, I’ve had some time to do some thinking.

Clearly, there's a perception that hardware is commoditizing and that there's little or no premium left in hardware. It's a commodity. It's just a provider of x86 compute cycles (servers). It's just a provider of packet pumping (networking). It's just a provider of bit persistence (storage). No differentiation, swappable, commoditized.

For example, at VMworld there was a big push on the "software defined data center" and the shift of value out of hardware (like network switches) and into software (i.e., software-defined networking), which further propagates the perception that hardware is a commodity.

But are servers, networking and storage truly going the way of corn (undifferentiated and commoditized)?

I could say the same things about MP3 players. They are an undifferentiated commodity. Here's a 4GB unit with a color screen for less than US$25.

But that's not exactly the case. Apple has shown that customers place a value on the seamless experience that is a synthesis of hardware, software and services. They pay a premium for hardware in the context of an overall solution that delivers a seamless experience. They value what this seamless experience enables them to do (and, likewise, what they don't have to do, like the simplicity, automation and ease of purchasing music or backing up the device).

What we need is the equivalent of Apple for enterprise data centers, giving enterprise IT a low-friction, seamless experience that enables the next generation of IT-enabled business requirements – agile, responsive, rapid time to value and so on. That is definitely NOT the state of most enterprise IT systems today.

Which vendor with the necessary hardware, software and services capabilities will step up to deliver this?

VMware? It focuses on the software and pretty much avoids services. Ditto for Microsoft.

IBM? HP? Dell?  They have the pieces, but can they deliver the experience? The one that figures this out will quickly find that servers, storage, networking  (and security for that matter) are NOT a commodity when consumed in the context of an overall seamless solution.

IT systems management and security are far too complex today. Give me a seamless experience and I’ll pay a premium for the solution – and for the hardware that enables it.


Category: Cloud Security, General Technology, Information Security, Next-generation Data Center, Next-generation Security Infrastructure, Virtualization Security