In the past month I've given both a Security Summit talk and a webinar about application security. The gist of the presentations – at least what I wanted customers to take away – is that we can't sell application security to developers and architects by perpetuating the train-test-fix cycle of pain. It feels, though, like many organizations, vendors, and others in the industry are doing exactly that. Making pain less painful in the hopes that it becomes bearable.
My proposition? Externalization and standardization. Take care of security (as much as possible) FOR the developers, not BY the developers. It's 2012 and I believe we shouldn't have to look at vendor stats that show 25%+ XSS and SQLi incidence for the applications they test. And at the same time I believe developers shouldn't actually have to worry about those two specific things every time they create or modify code either. And I'm strongly convinced that we can eat our cake and have it too on this one. But it may require some changing of minds. A change that says bolting on is OK, as long as you planned to do so.
It's a discussion that I think needs to take place more often. Jack Daniel actually just posted about this article that covers my recent Security Summit talk. In it I see a valid complaint that the article feels like it too heavily favors WAF over ever fixing code, and I agree that it could be misinterpreted to say don't ever worry about this code stuff again (which I most definitely don't want anyone to do). But along with that I also see a perpetuation of the notion that WAF is a band-aid, with which I most definitely do not agree.
You see, ideally many elements of security would be taken care of in the application code. It could be that the platform does not expose certain vulnerabilities (e.g., in the case of buffer overflows), or it could be that a module or framework is available to assist the developer in taking care of them (e.g., OWASP ESAPI, Log4j/Log4net, or the MS anti-XSS module). But unfortunately this doesn’t always work. Aside from the obvious “it’s off-the-shelf,” not all environments are (relatively) safe and many don’t have these frameworks and modules available for them. And if they are available, security teams are often either not aware of them or not working with development teams to get them in place. They instead still try to sell application security with these long application testing reports … and the pain continues.
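To make the framework point concrete, the modules mentioned above (OWASP ESAPI, the MS anti-XSS library) largely come down to contextual output encoding done for the developer rather than by hand on every line. A minimal sketch of the idea, using Python's standard-library `html.escape` as a stand-in for a real encoder (`render_comment` is a hypothetical helper, not from any of those libraries):

```python
from html import escape


def render_comment(user_input: str) -> str:
    """Build an HTML fragment, encoding untrusted input for the HTML context.

    Real encoders like ESAPI's cover more contexts (attributes, JavaScript,
    URLs); the point is that the encoding rule lives in one shared helper,
    not in every developer's head.
    """
    return '<p class="comment">' + escape(user_input, quote=True) + "</p>"
```

With the helper in place, a script payload comes out inert: `render_comment('<script>alert(1)</script>')` returns `<p class="comment">&lt;script&gt;alert(1)&lt;/script&gt;</p>`.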
But there's another aspect of externalization: sometimes you might not want to build something into the code. That is, some other component in the architecture takes care of it. Single sign-on support would be an easy pick here, but even something like anti-CSRF tokens might make a lot of sense to deploy consistently and independently of the application. And I think this is where some adjustment may be required. Seeing WAF less like an IDS (a patch) and more like an XML GW (an application architectural component). I believe that doing so – and taking advantage of the non-security benefits of these technologies as well – will make the application security diet a lot more appetizing.
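To illustrate the anti-CSRF example, here is a sketch of the logic a mediating component could apply without any application change: issue a token bound to the session, then require it on state-changing requests. All names are illustrative, and a real gateway would also inject the token into outgoing forms and manage secret rotation:

```python
import hashlib
import hmac
from typing import Optional

# Secret held by the mediating component (gateway/WAF), not by the application.
SECRET = b"gateway-side secret"


def issue_token(session_id: str) -> str:
    """Derive a CSRF token bound to the session, computed at the gateway."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()


def request_allowed(method: str, session_id: str,
                    submitted_token: Optional[str]) -> bool:
    """Gate state-changing requests on a valid token; let safe methods pass."""
    if method in ("GET", "HEAD", "OPTIONS"):
        return True
    if submitted_token is None:
        return False
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(submitted_token, issue_token(session_id))
```

The application behind the gateway never sees the token at all, which is exactly the "FOR the developers, not BY the developers" point.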
The economics of fixing can be a factor, too. Should we filter input in an application with method X or filter it at the WAF or XML GW with the same method instead? To me this is not about always fixing code or always falling back on WAFs, but about knowing where and when you can effectively use and manage each. Just like you can misconfigure a WAF you can make a mistake in the code fix – and absent industry figures on what's more likely to succeed or fail you just have to figure out where your strong points lie, and test, test, and test some more.
So let me be clear: WAFs or any other mediating technology shouldn't and won't fix everything. But a balanced and appetizing application security diet – one that developers and architects will be able to live on – can't be made of custom code (and custom code fixes) alone. I prefer frameworks for much of this, but operational components need to fill in where gaps exist.