by Neil MacDonald | March 16, 2013 | Comments Off
The idea of “sandboxing” potentially malicious content and applications isn’t new, but interest in this type of approach on Windows desktops is growing. Further, the increasing variety of virtualization and abstraction techniques available on Windows creates isolation that can be used to provide security separation – aka “sandboxing”.
Given the innovation around virtualization techniques and the decreasing effectiveness of signature-based approaches to protect us from advanced targeted attacks and advanced persistent threats, we believe that there will be a renaissance in sandboxing/virtualization/container technologies on Windows and mobile devices.
The idea is compellingly simple: define a core set of OS and applications as “trusted”. Then, if you need to handle a piece of unknown content or an unknown application, by default treat it as untrusted and isolate its ability to damage the system, access enterprise data and launch attacks on other enterprise systems.
In reality, it is harder than this. There is no silver bullet in information security. Isolation can be powerful, but has its drawbacks.
One issue is that in the real world, in order to be useful, you can’t completely lock out all content and applications. There will be cases where trusted applications need to handle untrusted content. There will be cases where end users want to download and run new, untrusted applications and they will want these applications to handle trusted content. Untrusted content and applications may have a need to persist on the file system and survive a reboot. All of these use cases involve risk, especially if end users are called upon to make decisions as to when and where untrusted content and applications can be “trusted”. An analogy will make this clearer. Even the strongest prison needs doors in order to be useful and those same doors can be used to escape.
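The default-deny trust decision described above can be sketched in a few lines. This is a hypothetical illustration only – the hash values, names and policy actions are invented for the example, not taken from any real product:

```python
from enum import Enum

class Trust(Enum):
    TRUSTED = "trusted"
    UNTRUSTED = "untrusted"

# Hypothetical baseline: a small set of known-good application hashes.
TRUSTED_HASHES = {"a1b2c3", "d4e5f6"}

def classify(app_hash: str) -> Trust:
    """Default-deny: anything not on the trusted list is untrusted."""
    return Trust.TRUSTED if app_hash in TRUSTED_HASHES else Trust.UNTRUSTED

def handle(app_hash: str) -> str:
    """Trusted content runs normally; everything else is isolated."""
    if classify(app_hash) is Trust.TRUSTED:
        return "run directly"
    # Untrusted content is contained: no enterprise data access,
    # no lateral movement to other enterprise systems.
    return "run inside sandbox"
```

The hard part, as the paragraph above notes, is not this decision logic – it is the exceptions: the “doors” through which untrusted content must sometimes be promoted to trusted.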
Another issue is that attackers will target the containment mechanism itself (e.g. the prisoners cut a hole in the fence, tunnel out or someone from the outside flies a helicopter over the fence and gets them out). The highly publicized recent Java zero day was a direct result of a breach of containment. Bromium (one of the solution providers in the virtualization containment space) recently presented on this topic at Black Hat EU, demonstrating how to break the containment of several leading sandboxing solutions. Interestingly, rather than attack the walls/doors of the containment mechanism directly, their breaches originated by attacking the OS kernel underneath. In our analogy, it’s the equivalent of saying “I don’t care how thick your walls and roof are, or what they are made of — these containment structures are built on a foundation with a bunch of holes”.
To help clients cut through the hype, I’ve just published a research note for clients titled “Technology Overview for Virtualization and Containment Solutions for Advanced Targeted Attacks”. In the note, we provide a framework for evaluating these virtualization/containment/sandboxing solutions and then use the framework to take a close look at the pros/cons of Bromium’s solution.
There are many emerging alternatives at all layers in the stack. Make sure you understand the pros/cons of the solutions and approaches before you buy.
Category: Beyond Anti-Virus, Endpoint Protection Platform, Next-generation Security Infrastructure, Virtualization, Virtualization Security Tags: APTs, Beyond Anti-Virus, Browser Security, Defense-in-Depth, Endpoint Protection Platform, Lockdown, Virtualization, Virtualization Security, Whitelisting, Windows
by Neil MacDonald | January 31, 2013 | 1 Comment
Seriously, is anyone surprised?
I’m sure you’ve seen the news about Chinese infiltration at the New York Times:
According to the article:
Over the course of three months, attackers installed 45 pieces of custom malware. The Times — which uses antivirus products made by Symantec — found only one instance in which Symantec identified an attacker’s software as malicious and quarantined it, according to Mandiant.
Signature-based protection alone hasn’t been enough to protect endpoints for years – see this post titled “Is antivirus obsolete?”. That’s why Gartner dropped its antivirus magic quadrant in 2006.
Further, as with other advanced attacks, application control (also referred to as whitelisting) solutions likely would have stopped this attack in its tracks – see this post from 2010.
Unfortunately, application control has a historical reputation of not being deployable or manageable for end-user systems. The reality is that application control can and will be successfully deployed for end user systems and provides excellent protection from these types of attacks. I just published a research note for Gartner clients on this topic titled “How to Successfully Deploy Application Control” that provides specific guidance on adopting this approach.
Why aren’t you deploying this type of approach, at least for some segments of your user population?
Category: Beyond Anti-Virus, Endpoint Protection Platform Tags: APTs, Best Practices, Beyond Anti-Virus, Defense-in-Depth, Endpoint Protection Platform, Lockdown, Whitelisting
by Neil MacDonald | January 29, 2013 | 2 Comments
Last fall, I wrote a research note for Gartner clients titled “The Impact of Software-Defined Data Centers on Information Security” that explored the impact of software defined infrastructure on security – and the evolution of information security infrastructure to become software-defined itself.
Today, I saw that NetCitadel had announced an offering in this emerging space and had used both the “software defined security” and “security policy orchestration” terms.
Many vendors have jumped on the “software defined X” bandwagon (just like “Cloud” a few years ago) including:
- software defined networking
- software defined storage
- software defined security
- software defined infrastructure
- software defined data centers
But, what does “software defined” really mean?
A common misconception is that “software defined” means that everything is accomplished in software. That’s not correct. Even within software defined networking, ultimately something has to connect to a wire and forward packets in the data plane. The same is true with security policy enforcement.
Here’s what I propose: “Software defined” is about the capabilities enabled as we decouple and abstract infrastructure elements that were previously tightly coupled in our data centers: servers, storage, networking, security and so on.
I believe that to truly be “software-defined”, these foundational characteristics must be in place:
- Abstraction – the decoupling of a resource from the consumer of the resource (also commonly referred to as virtualization when talking about compute resources). This is a powerful foundation as the virtualization of these resources should enable us to define ‘models’ of infrastructure elements that can be managed without requiring management of every element individually.
- Instrumentation – opening up of the decoupled infrastructure elements with programmatic interfaces (typically XML-based RESTful APIs).
- Automation – using these APIs, wiring up the exposed elements using scripts and other automation tools to remove “human middleware” from the equation. This is an area where traditional information security tools are woefully inadequate.
- Orchestration – beyond script-based automation, automating the provisioning of data center infrastructure through linkages to policy-driven orchestration systems where the provisioning of compute, networking, storage, security and so on is driven by business policies such as SLAs, compliance, cost and availability. This is where infrastructure meets the business.
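The four characteristics above can be sketched together in a toy example. This is purely illustrative – the `FirewallModel` class, rule format and sensitivity policy are hypothetical, not any vendor’s API:

```python
# Hypothetical sketch of the four characteristics: a firewall "model"
# (abstraction), exposed programmatically (instrumentation), driven by
# a script (automation) under a business policy (orchestration).

class FirewallModel:
    """Abstraction: one managed model instead of per-device management."""
    def __init__(self):
        self.rules = []

    def add_rule(self, src: str, dst: str, action: str) -> dict:
        """Instrumentation: a programmatic interface, not a console."""
        rule = {"src": src, "dst": dst, "action": action}
        self.rules.append(rule)
        return rule

def provision_workload(fw: FirewallModel, workload: dict) -> list:
    """Automation + orchestration: a business policy (data sensitivity)
    drives security provisioning with no human middleware in the loop."""
    rules = [fw.add_rule("any", workload["ip"], "allow-https")]
    if workload["sensitivity"] == "high":  # policy-driven decision
        rules.append(fw.add_rule(workload["ip"], "internet", "deny"))
    return rules
```

In a real deployment the `add_rule` call would go over the kind of XML- or REST-based API mentioned above; the point is that the model, not the box, is what gets managed.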
If those are the four characteristics, what is the goal of software defined infrastructure?
To me, it’s the capabilities enabled by the 4 characteristics above that are really driving the interest in “software defined everything”:
- Agility – speed to respond by removing human middleware, speeding the provisioning of infrastructure.
- Adaptability – the ability to change infrastructure usage to meet dynamically changing requirements and changing context – such as location, sensitivity of the data being handled and so on. Also the ability to adapt to changes in the infrastructure elements underneath without changing the models being managed (new hardware, new vendors, etc.).
- Accuracy – by removing the human middleware component, reducing the chance for misconfiguration and mistakes by making infrastructure “programmable” and tying this into automation systems.
- Assurance – confidence that what is deployed accurately meets your policy and compliance requirements
These 4 characteristics and 4 capabilities that arise from being “software defined” are the key to all software defined infrastructure, including security. So when you hear the hype about “software defined X”, see if it delivers against the above characteristics and capabilities.
Ignore the hype and navel-gazing arguments on the definition of “software defined”. It’s all about the capabilities enabled.
Category: Cloud Security, Next-generation Security Infrastructure, Software Defined Data Center, Virtualization Security Tags: Adaptive Security Infrastructure, Context-aware Security, Next-generation Data Center, Next-generation Security Infrastructure, Reducing Complexity, Software Defined Security, VMware
by Neil MacDonald | November 5, 2012 | 2 Comments
I still see people getting bogged down in rather meaningless arguments as to whether or not firewalls will be virtualized.
They will (and, in fact, are).
The bigger trend is the shift from proprietary hardware to software running on commodity hardware (in almost all cases, x86). That’s the big shift. Whether or not a given security control is packaged as a virtual machine is a matter of requirements (and to some extent preference).
Some information security curmudgeons prefer to see a separate box perform the security policy enforcement. They like the sense of “strong” separation of duties that comes with security controls being embedded in a separate physical appliance. The mistake here is equating physical separation with logical separation of duties or an outdated belief that “infrastructure can’t protect infrastructure”. Absolutely we must maintain separation of security policy formation, but it is a mistake to assume that the policy enforcement can’t take place within the infrastructure itself.
Indeed, a recent 2012 Gartner survey (full copy for clients here) showed that there is a preference for virtualized security controls over external security controls run outside the VM environment for virtualization/private cloud projects. Further, respondents indicated a preference for security controls integrated within the platform (e.g. VMware’s vShield) – in other words, integrated into the infrastructure.
Increasingly, security controls will be delivered as virtual appliances, installed as software on commodity hardware or in the cloud (as IaaS-based virtual machines).
In a recent research note for clients titled “The Impact of Software Defined Data Centers on Information Security”, I outlined many of the drivers to software-based security controls in the context of the switch to software-defined security:
A common misconception with the shift to software-defined security (SDSec) is that all security controls must move to software. There are cases where this makes sense, and cases where it does not. The security data plane (where packets and flows are inspected) can benefit from the processing power of hardware-based inspection. Like SDN, hardware has a role to play in SDSec, especially when high throughput is needed. However, there are cases where software-based SDSec policy enforcement is preferable, such as:
- To scale out (as opposed to hardware scale-up) and parallelize the enforcement of security controls.
- To potentially reduce the cost, as compared with the use of physical appliances.
- To speed the provisioning of security controls by making their provisioning as easy as provisioning a new VM.
- To enable flexible placement of security controls. For example, as most data center appliances shift to standardized hardware, the security controls may be placed onto the infrastructure platforms — for example, embedded within a storage array or as a software blade in a router. Another example would be placement within the SDN controller.
- To provide visibility into blind spots, such as inter-VM communications, without requiring all traffic to be routed to physical appliances.
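The scale-out point in the list above can be made concrete with a small sketch: instead of one scale-up appliance at a choke point, flows are deterministically spread across a pool of virtual enforcement points near the workloads. The appliance names and pool size here are assumptions for illustration:

```python
import hashlib

# Hypothetical pool of virtual firewall appliances (scale-out, not scale-up).
APPLIANCE_POOL = ["vfw-1", "vfw-2", "vfw-3", "vfw-4"]

def assign_flow(src_ip: str, dst_ip: str) -> str:
    """Map a flow to one virtual appliance. Sorting the endpoints makes
    the key direction-independent, so both directions of the same flow
    are inspected by the same enforcement point (needed for stateful
    inspection)."""
    key = "|".join(sorted([src_ip, dst_ip]))
    digest = hashlib.sha256(key.encode()).hexdigest()
    return APPLIANCE_POOL[int(digest, 16) % len(APPLIANCE_POOL)]
```

Adding capacity then means adding another commodity VM to the pool, rather than buying a bigger box (a real system would also need consistent hashing or flow-state migration when the pool changes – omitted here for brevity).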
Virtual or physical? The answer is both!
The myth is that this is an either/or question. It isn’t: security controls will run virtually and physically – and add to this on branch routers, in the cloud, on software-based SDN controllers, on commodity hardware and in converged servers – all within a consistent management console and policy management framework. The future of information security is in software.
You choose what makes the most sense for your workloads, not the vendor. Favor vendors that get this and support your journey.
Category: Cloud, Cloud Security, Next-generation Security Infrastructure, Virtualization, Virtualization Security Tags: Adaptive Security Infrastructure, Best Practices, Defense-in-Depth, Next-generation Security Infrastructure, Software Defined Security, Virtual Appliances, Virtualization Security, VMware
by Neil MacDonald | September 22, 2012 | Comments Off
I saw yesterday that Microsoft had released the out of band patch for Internet Explorer as they had committed to do. Certainly, Microsoft’s motivation to quickly release the patch out of band was affected by calls from various enterprises and governments to ban the use of IE until the issue was resolved.
What can we learn from this incident? This is not the first time this has happened with Internet Explorer and it will not be the last.
Google Chrome has had them. So has Firefox.
When will we learn? The answer isn’t to switch browsers, the answer is to standardize on more than one browser.
After a similar zero day incident with IE in 2009, I worked with my colleagues David Smith and Ray Valdes to put together a research note for clients in early 2010 titled “Organizations Should Still Say No to Standardizing on One Browser”. The research note provides multiple justifications for enterprises to standardize on two or more browsers. In the research, we specifically called the scenario of a zero day out:
Offsetting the reduced patching of a single version, the recommended approach of not standardizing on one browser would have provided immediate alternatives for those who were looking to take action during any of the recent zero-day security issues with IE. By avoiding dependency on a single supplier, an enterprise provides itself more agility in the event that the supplier exits the market or fails to adequately protect and secure its product. If an enterprise officially supported or enabled multiple browsers, it could simply instruct users to use the other browser in case of such an event (and temporarily block the use of the vulnerable browser). Instead, what results is often panic, scrambling and overreactions, such as some calling to ban IE entirely (which is impossible, because it is part of the Windows OS). Because all browsers contain yet-to-be-discovered vulnerabilities, such an overreaction doesn’t solve anything, and simply moves the issue to another browser and another vendor.
That pretty much describes what we saw recently (again) with IE.
Don’t let this happen again. Standardize on two or more browsers for users.
Category: Microsoft, Microsoft Security, Windows 7, Windows 8 Tags: Browser Security, Defense-in-Depth, Microsoft, Microsoft Security, Windows
by Neil MacDonald | September 13, 2012 | 3 Comments
I blogged about this question years ago, but a recent blog on CSO got me thinking once again. Has anything changed?
1) The question “Has antivirus outlived its value?” is wrong. AV hasn’t been AV for years. Gartner stopped calling the market “AV” back in 2006. Modern Endpoint Protection Platforms (EPP – the term Gartner has used since 2006 to describe the market) include a variety of protection styles – signature- and non-signature-based – to protect machines.
In other words, AV has been obsolete since 2006, but signature-based antimalware protection still lives on as a part of defense-in-depth strategy to protect endpoints.
2) Here’s how I’ve explained this in a graphic for the past five years:
Reactive, signature-based detection mechanisms alone are not enough. A market-leading EPP should provide a variety of protection styles that combine whitelisting, blacklisting and heuristics-based protection approaches in a system where these elements work together.
3) If you have a signature, by all means use it. Signatures (if you have them) can be much more efficient and have a lower chance of introducing a false positive onto a user than behavioral heuristics. Think of it this way: would you rather have your children inoculated against measles and smallpox, or have them get infected and let their adaptive immune system identify it and respond?
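The layered model described in points 2 and 3 – cheap, precise checks first, expensive heuristics only for what remains unknown – can be sketched as follows. All names, hashes and verdicts are hypothetical, invented for illustration:

```python
# Hypothetical layered EPP decision: whitelist (known good) at the base,
# signatures (known bad) next, heuristics only for the leftover unknowns.

TRUSTED = {"winword.exe"}      # whitelist: implicit allow, lowest risk
SIGNATURES = {"evil.exe"}      # blacklist: efficient, low false positives

def heuristic_scan(name: str) -> str:
    """Stand-in for behavioral analysis; slower and more
    false-positive-prone than the layers above it."""
    return "suspicious" if name.endswith(".scr") else "unknown"

def verdict(name: str) -> str:
    if name in TRUSTED:
        return "allow"           # foundational whitelisting layer
    if name in SIGNATURES:
        return "block"           # signature layer: use it if you have it
    return heuristic_scan(name)  # heuristics: last resort for unknowns
```

The ordering is the point: signatures resolve the easy cases cheaply, so the costlier, riskier heuristic layer only ever sees what the lower layers could not classify.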
4) Whitelisting-based approaches are at the bottom of the pyramid and are a more foundational and critical approach to protecting any type of endpoint – desktop, mobile device or server. Why doesn’t iOS require antimalware? Two big reasons – reduced user rights and the Apple App Store functioning as an implicit whitelist. If we can bring this model to PCs, the need for antimalware will be reduced as well. By the way, this is exactly the model Microsoft is using on the WinRT side of Windows 8. Unfortunately, the legacy Windows Desktop side is still there and must be secured, and so traditional EPP protection is still needed (see this recent research note on Windows 8 security for clients).
It comes down to this – if you have a general-purpose OS where users run around with full admin rights and can download arbitrary executable code from anywhere, then signature-based antimalware protection as a part of an integrated EPP solution should be a requirement to protect the user and the information being handled on the device.
That’s the state of today’s Windows PC. And yes, for those that are still in denial, Apple’s Mac OS.
Oh, and Android by the way – this is one of the factors in Google’s recently announced acquisition of VirusTotal.
Category: Beyond Anti-Virus, Endpoint Protection Platform, Information Security, Next-generation Security Infrastructure, Windows 8 Tags: Adaptive Security Infrastructure, Apple, Beyond Anti-Virus, Defense-in-Depth, Endpoint Protection Platform, Information Security, Microsoft, Microsoft Security, Windows
by Neil MacDonald | September 6, 2012 | 1 Comment
There’s a story behind the title of this blog.
Recently, I had a discussion regarding Microsoft’s BitLocker with a client. One of the issues I call out in my research on BitLocker is that (unlike competing third-party products) Microsoft doesn’t offer an option to synchronize the pre-boot PIN with the Windows login credential. In a securely deployed system leveraging the TPM, the end user enters a PIN to unlock the drive, Windows boots and then they are prompted for their Windows credential.
Net/net: the user is prompted twice and must enter two different credentials. Competitors’ solutions for full-drive encryption enable the Windows password to be synched to the preboot environment so that the credential entered to unlock the drive also logs the user into Windows – a “single sign-on” so to speak, where one credential is entered.
Clearly, there are many enterprises that prefer the latter scenario. However, Microsoft considers the synching of the Windows credential a potential security risk, so it doesn’t support this option.
However, many organizations licensed under Software Assurance on the Windows OS get rights to BitLocker with the Enterprise version of Windows 7 (in other words, it appears to be “free”).
Hmmmm. What to do? End user acceptance versus security versus free.
The client had decided to implement BitLocker with no preboot authentication, thus the end-user would only be presented with the Windows login prompt.
Essentially, while the drive is technically encrypted, there are no controls on the retrieval of the encryption keys. As soon as you boot, the drive is unlocked.
So I asked them “If an encrypted drive automatically unlocks on boot with no checking of credentials, is it really encrypted?” (kinda like “if a tree falls in the forest and no one is there to hear it, does it make a sound?”)
I explained to them that this was a grey area, and that I could not endorse the deployment of BitLocker in this way. I’m not a lawyer, but it sure seemed to me that the drive can’t be considered to be encrypted if anyone (including the bad guys, including if the device is lost or stolen) can boot it and have the drive unlock itself.
So I asked them if they had consulted with their legal counsel.
Good news — they had.
Bad news — their legal counsel took the same position I had taken. Deploying BitLocker in this way didn’t enable them to claim compliance with a requirement they had to encrypt their laptop drives.
By the way, does Windows 8 change this?
No and Yes – see this research note for clients
No – Microsoft doesn’t support the synching of Windows credentials to the preboot environment. So the same problem remains for laptop users.
Yes – For fixed-device scenarios, Microsoft now supports a form of automatic network-based authentication that can be used as an authentication credential instead of a user-entered PIN. This works well for fixed desktops and servers encrypted with BitLocker, but requires Windows 8 and Windows Server 2012.
If you’ve deployed BitLocker on Windows 7 with no preboot authentication, you might want to check with your legal counsel to get their position on whether or not this meets internal and/or regulatory requirements to encrypt hard drives.
Category: General Technology, Information Security, Microsoft, Microsoft Security, Windows 7 Tags: Best Practices, Endpoint Protection Platform, Information Security, Microsoft, Microsoft Security, Windows
by Neil MacDonald | September 6, 2012 | 3 Comments
I’ve been researching the intersection of virtualization and security since 2007 and find myself continually running into these myths pertaining to virtualization and security:
1) Myth: Physical is better than virtual.
Reality: Define “better”. Software- and virtual appliance-based security controls are more adaptable to the rapidly changing infrastructure requirements of a modern, virtualized data center. A recent case study presented by Intuit at VMworld documented that their time to secure a newly provisioned VM dropped from 30 days to 30 minutes using software-based, automatically provisioned security controls.
2) Myth: Physical security controls provide better separation of duties than virtual ones.
Reality: This confuses physical separation with logical separation. Role-based access control to security control functionality, as well as the use of a separate security and management control plane, provides the necessary separation of duties. A related myth is that “infrastructure can’t protect infrastructure”. Sure it can – and quite well.
3) Myth: Physical security appliances are faster than virtual implementations.
Reality: Yes and no – if you think of security as the serialized application of security policy enforcement at “choke” points in the network (like placing an IPS at the perimeter of the enterprise or a next-generation firewall at the perimeter of the data center), then physical appliances are faster. The mistake in this thinking is the rationing of security policy enforcement based on physical network topology. Some of this is caused by the cost of physical appliances. Some of this is a byproduct of physical network topology. In both cases, challenge the assumption that placing big boxes at aggregation points is the best architecture. Parallelize the security policy enforcement closer to the workloads it is protecting using hypervisor-based or virtual appliance-based security controls.
4) Myth: Virtual security appliances won’t achieve 40Gbps of inline IPS inspection speed.
Reality: True, at least in the next few years – at least in a single box. The myth is in the need for 40Gbps of inline speed in one place – related to #3 above, the future of information security (like IT in general) is scale-out, not scale-up. Bigger and bigger proprietary boxes that consume an ever-increasing amount of our budget are not the way forward. Ask the Unix vendors. Commodity x86 computing cycles are the future. Four Intel-based servers, each providing 10Gbps of inspection speed, can do this today (see virtual firewalling benchmark data from Juniper/Altor). Our current security architectures, based on the rationing of security controls at network choke points, are a historical artifact, not necessarily the best path forward.
5) Myth: The future of information security is all virtual
Reality: The future is hybrid – physical and virtual security controls working cooperatively. Both will be used. Physical versus virtual is an enterprise deployment option, not a vendor dictate. Vendors should let you choose. Oh, and add cloud to this as well – for example, placing the security control with workloads when consuming Infrastructure as a Service.
Category: Cloud Security, Virtualization Security Tags: Adaptive Security Infrastructure, Next-generation Security Infrastructure, Virtual Appliances, Virtualization Security, VMware
by Neil MacDonald | August 29, 2012 | Comments Off
I’ve spent the last three days in Silicon Valley – some of it at VMworld and some of it with a client. With the flight out and back to the West Coast, I’ve had some time to do some thinking.
Clearly, there’s a perception that hardware is commoditizing and that there’s little or no premium left in hardware. It’s a commodity. It’s just a provider of x86 compute cycles (servers). It’s just a provider of packet pumping (networking). It’s just a provider of bit persistence (storage). No differentiation, swappable, commoditized.
For example, at VMworld, there was a big push on the “software defined data center” and the shift of value out of hardware (like network switches) and into software (ie software-defined networking) that further propagates the perception that hardware is a commodity.
But are servers, software and storage truly going the way of corn (undifferentiated and commoditized)?
I could say the same things about MP3 players. They are an undifferentiated commodity. Here’s a 4GB unit with a color screen for less than US$25.
But that’s not exactly the case. Apple has shown that customers place a value on the seamless experience that is a synthesis of hardware, software and services. They pay a premium for hardware in the context of an overall solution that delivers a seamless experience. They value what this seamless experience enables them to do (and, likewise, what they don’t have to do, like the simplicity, automation and ease of purchasing music or backing up the device).
What we need is the equivalent of Apple for enterprise data centers: a vendor giving enterprise IT a low-friction, seamless experience that enables the next generation of IT-enabled business requirements – agile, responsive, rapid time to value and so on. That is definitely NOT the state of most enterprise IT systems today.
Which vendor with the necessary hardware, software and services capabilities will step up to deliver this?
VMware? It focuses on the software and pretty much avoids services. Ditto for Microsoft.
IBM? HP? Dell? They have the pieces, but can they deliver the experience? The one that figures this out will quickly find that servers, storage, networking (and security for that matter) are NOT a commodity when consumed in the context of an overall seamless solution.
IT systems management and security are far too complex today. Give me a seamless experience and I’ll pay a premium for the solution – and for the hardware that enables it.
Category: Cloud Security, General Technology, Information Security, Next-generation Data Center, Next-generation Security Infrastructure, Virtualization Security Tags: Adaptive Security Infrastructure, Apple, Cloud Security, DC-Summit-NA, Reducing Complexity, Virtualization
by Neil MacDonald | May 25, 2012 | 4 Comments
One of the common misconceptions that I run into is that a public cloud services provider can’t have an on-premises element to their offering and that having this footprint somehow “breaks” the cloud model.
The root of this misconception lies in equating cloud to a location. Cloud is a computing style, not a location.
There are already cloud-based services providers that use an on-premises element in their architecture. For example, Qualys provides security as a service (vulnerability management) using an on-premises physical or virtual appliance from which to launch local scanning. The on-premises appliance preserves significant amounts of bandwidth and provides network connectivity into an organization’s internal networks to perform its scanning services.
So, how is this Cloud? Remember cloud is a computing style. The key is how the appliance is managed by the cloud provider and, more importantly, not managed by the enterprise consuming the service. The on-premises element is just a “black box” to the enterprise. In most cases, they shouldn’t have to pay for or provision the appliance footprint, even if it is a physical piece of hardware. The appliance is just a part of the overall service delivery. Further, the enterprise shouldn’t have to install software on it or perform updates. Essentially, it should be a “lights out” footprint — everything should be handled by the cloud services provider.
Why would an on-premises footprint be important? Multiple reasons:
- To provide network connectivity (e.g. VPN) into protected locations in the enterprise’s internal network, systems and information
- To reduce bandwidth consumption for scanning-related services (vulnerability management, dynamic application security testing, etc.)
- To improve performance and reduce bandwidth requirements through intelligent caching, compression and other bandwidth optimization techniques
- To keep large datasets local for local processing and analysis – again primarily to save bandwidth costs
- To keep sensitive data local
- To keep regulated data local (e.g. geolocation requirements)
The latter two are becoming increasingly important as more critical business information, systems and processes move to the cloud. I’m sure there are more requirements that you could add to the list. The takeaway is to expect more cloud-services providers to offer on-premises extensions of their architectures to address specific usage requirements.
Category: Cloud, Cloud Security Tags: Cloud Security, Virtual Appliances