In my recent “dangerous times” blog post, I posed the question:
“How can the [enterprise security pro] battlefield units in our asymmetrical war [with cybercrime] call in the equivalent of an air strike or a SWAT team when they’re attacked? The international cybersecurity community and the security industry itself must address the issues of certification, attribution, due process, and international cooperation that arise.”
It seemed like an insurmountable problem, and who wants to tilt at windmills? But once I started to write about it, a trickle of ideas began. Here are the first few of, I hope, many more.
Yesterday I enjoyed a long briefing with Kurt Natvig, Righard Zwienenberg, and John Callahan from Norman Defense – an anti-malware vendor whose malware analysis product Norman Sandbox is sold to the usual high-security verticals such as government, defense, financial services, ISPs, and telecommunications (as well as other security vendors). Norman said that the Sandbox is also selling into more enterprise niches such as higher education and pharmaceuticals. Indeed, I used to know a full-time malware researcher at a high-tech manufacturing company, so they do exist at enterprises. Norman says it has one pharmaceutical company with 20 (!) full-time malware researchers.
The apparent surge in malware research interest is justified by the increase in targeted attacks on enterprises, like Operation Aurora. Norman Sandbox Analyzer and Sandbox Analyzer Pro take the advanced security research facilities that other security vendors have and put them in the hands of enterprise security pros.
The functionality of Sandbox and other security research tools may be similar, but there are advantages to running the tools in the enterprise environment. First, some targeted malware won’t fully reveal itself until it scents the target – that is, until it finds something like a customer-specific application file or a PC with the CEO’s name. To get the malware to fully decrypt itself, you have to recognize and supply the artifact fragment it seeks.
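The gating technique described above can be sketched in a few lines. This is a hypothetical illustration, not actual malware or any Norman Sandbox internals: the machine name `CEO-JSMITH-PC` and the payload are invented, and real samples use stronger ciphers than XOR. The point is only that the decryption key is derived from a target-specific artifact, so a sandbox that doesn't supply the artifact never sees the payload.

```python
import hashlib

def derive_key(artifact: bytes) -> bytes:
    # The key is a hash of an artifact the malware expects to find
    # only on the victim's machine (a hostname, a file, a registry value).
    return hashlib.sha256(artifact).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy repeating-key XOR; stands in for whatever cipher a real sample uses.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The attacker encrypts the payload against the victim's artifact...
artifact = b"CEO-JSMITH-PC"  # assumed target machine name (invented)
payload = xor_crypt(b"malicious payload", derive_key(artifact))

# ...so a generic sandbox recovers only gibberish, while a sandbox
# seeded with the right artifact recovers the payload.
print(xor_crypt(payload, derive_key(b"GENERIC-SANDBOX")))  # gibberish
print(xor_crypt(payload, derive_key(artifact)))            # b'malicious payload'
```

This is why running the analysis in the enterprise matters: only someone who knows the environment can recognize which artifact the sample is probing for and feed it back in.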
The other critical difference between running malware research in-house and using today’s typical vendor research service is time. Send your virus sample to a security vendor, and it will likely take a few days to respond. Run it through your own full-time virus researcher with a good in-house analysis product and you’ll know what the malware sample does in minutes. How’s that for empowering battlefield units?
Just a start, actually. Norman and other security vendors should be doing more to help enterprise security pros connect the dots between the malware, the attacker, the target, and potential vulnerabilities. Initial analysis of a targeted malware sample may tell you it was going to install a run key in the Windows registry and communicate with a botnet, but it generally doesn’t reveal the identity of the attacker and may not reveal the ultimate target.
What malware is doing and how it does it may be only the first step to neutralizing (let alone prosecuting) an attacker and defending against future, related attacks. As I wrote in my report “Threat Assessment Guidance for Dangerous Times,” the security industry has a huge blind spot here. Vendors and customers alike conflate the word “threat” (which should refer to the criminals themselves) with the word “malware” (which should refer only to the threat agent).
To get out of the trenches and take our defense to the next level, we must tear cybercrime up by the roots, not just treat its symptoms. This starts with learning the threat’s motivation, capabilities, and intent. Only then can we hope to neutralize the threat and inform a risk assessment of how to protect all likely attack paths to the targeted asset by future threats.
Let’s play “what if” for a minute. I’ll start by quoting an actual exchange from Norman’s demo of emulated malware attempting to connect to and password-authenticate itself with a list of 20 botnet controllers. I asked:
“Is there a button I could push to send each botnet controller’s address and password to the hosting ISP and the host country police force?”
See how attribution and collective intelligence aren’t even features of an important security analysis product? To be fair, customers aren’t demanding them (yet). I understand why: the typical enterprise is in the trenches, hoping not to be attacked and hoping not to spend too many hours interacting with a law enforcement system that only occasionally resolves the issue. And few enterprises want the negative publicity of having been attacked. So I suggested:
“I’d like to be able to push a button and send all this to the police in one of three source attribution modes: as an identified Norman user, anonymously so that even Norman can’t track me, or anonymously with the option to respond to a dialogue request at my discretion.”
“These days it’s difficult to be anonymous.”
“Really? Hackers do it all the time.”
All of us – analysts, enterprise security pros, security vendors, and law enforcement – must work to end the perverse incentive system that’s making us less effective against cybercrime. John Callahan from Norman did say that Norman shares information from its sandboxes with CERTs and cybersecurity police all the time, and that they genuinely want to solve the problem.
Enlightened self-interest is wonderful, but an incentive system that rewards security intelligence sharing would be even better.
What if a vendor built a malware analysis product with automated community information exchange and the three levels of source attribution that I suggested to Norman? What if they awarded customers discounts for reporting malware analysis results to the cybersecurity authorities (ISPs, police, and other interested parties)? Wouldn’t this improve the quantity and quality of the research community? Wouldn’t the improvement in the customer’s ability to connect the dots increase the value of the malware analysis product? Wouldn’t crowdsourcing malware analysis reports also enable the vendor to get more revenue from cybersecurity police and others that use the analysis feeds and databases?
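To make the idea concrete, here is a minimal sketch of the shape such a community report might take, with the three source-attribution modes suggested above. This is purely hypothetical: no such Norman (or other vendor) API exists, and every field name, address, and value here is invented for illustration.

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum

class Attribution(Enum):
    IDENTIFIED = "identified"            # report as a named customer
    ANONYMOUS = "anonymous"              # even the vendor can't trace the source
    ANONYMOUS_REPLY = "anonymous_reply"  # anonymous, but open to follow-up dialogue

@dataclass
class MalwareReport:
    sample_sha256: str
    c2_controllers: list  # botnet controller address/password pairs the sandbox observed
    behaviors: list       # e.g. registry run keys installed, outbound protocols
    attribution: Attribution

    def to_wire(self) -> str:
        # Serialize for submission to an ISP, CERT, or police feed.
        d = asdict(self)
        d["attribution"] = self.attribution.value
        return json.dumps(d)

report = MalwareReport(
    sample_sha256="d41d8cd9...",  # placeholder digest, invented
    c2_controllers=[{"host": "198.51.100.7", "password": "hunter2"}],
    behaviors=["HKCU Run key install", "IRC beacon"],
    attribution=Attribution.ANONYMOUS_REPLY,
)
print(report.to_wire())
```

The “push a button” feature from the demo exchange would amount to emitting one such record per discovered controller and routing it to the hosting ISP and the host country’s police, with the attribution mode chosen by the enterprise.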
Norman and other security vendors could set up these services to crowdsource malware analysis today; there’s no law against it that I know of. Either they haven’t thought of it (but I’ll try to make sure they do), or they don’t know how they could make money doing it.
It’s a fair question whether the cybersecurity police would pay enough to make it profitable for vendors to crowdsource malware analysis. Perhaps as taxpayers that fund the police and as enterprises that lobby the politicians we could have something to say about that. We need to lobby for effective action against cybercrime. Crowdsourcing malware analysis is one idea whose time may come.
To reach its full potential, the vision should ultimately globalize and transcend individual vendors. What if there were a thousand crowdsourced malware analysis services? Not all of them need to be products; some could be cloud computing services with near-real-time response and electronic teleconferencing back to the (non-full-time) security pro. Many such products and services could be syndicated together, with incentives to share attack and threat information in a highly efficient, automated manner among all the authorized users of the civilized world. Radio stations do something like this with music royalties; why can’t security pros?
Then maybe those windmills of international cybercrime wouldn’t look so big anymore.