Neil MacDonald
VP & Gartner Fellow
15 years at Gartner
25 years IT industry

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…

Security No-Brainer #9: Application Vulnerability Scanners Should Communicate with Application Firewalls

by Neil MacDonald  |  August 19, 2009  |  27 Comments

If a web application security testing tool tells me I have a vulnerability in an application, what do I do? “Fix it” is the right answer, but not always so easy if my development organization is backlogged or, worse, I don’t have access to the source code. Another answer is to shield the application from attacks on the vulnerability using an application-level firewall – in this example a web application firewall.

Why can’t the web application security testing tool simply exchange knowledge of the vulnerability with the firewall in a standardized way? Then the firewall could detect and block attacks on this known vulnerability. Seems like a no-brainer. However, attempts to standardize this have failed. Application Vulnerability Description Language (AVDL) is a defunct, XML-based standard for the exchange of application vulnerability information between vulnerability assessment tools and other products — typically shielding products, such as application firewalls, that could proactively protect the application from exploitation of the vulnerability. AVDL was adopted as a standard in 2004 by the Organization for the Advancement of Structured Information Standards (OASIS); however, the AVDL technical committee was officially closed by OASIS in January 2006.

In Gartner’s 2008 Hype Cycle for Data and Application Security, I marked AVDL as “obsolete”. In this year’s Data and Application Security Hype Cycle, I dropped it altogether.

Even if no successor to AVDL appears, proprietary linkages will suffice. Multiple web application scanners and web application firewalls provide this capability today through explicit partnerships. The value is too compelling. It’s time to start requiring this capability from our web application security testing tool providers via partnerships with web application firewall vendors.
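
As a purely illustrative sketch (the finding format, field names, and rule structure below are hypothetical — this is not AVDL and not any particular vendor’s API), the linkage amounts to translating a scanner finding into a narrowly scoped “virtual patch” rule that the firewall can enforce until the code is fixed:

from dataclasses import dataclass

@dataclass
class ScannerFinding:
    """One vulnerability reported by a web application scanner (hypothetical format)."""
    vuln_class: str   # e.g. "sql_injection" or "xss"
    url: str          # page where the vulnerability was found
    parameter: str    # the specific input field affected

def finding_to_virtual_patch(finding: ScannerFinding) -> dict:
    """Translate a single finding into a narrowly scoped WAF rule.

    The rule targets only the vulnerable page and parameter, so the rest of
    the site's traffic is left alone while development works on the real fix.
    """
    patterns = {
        "sql_injection": r"('|--|;|\bunion\b|\bselect\b)",
        "xss": r"(<script|onerror\s*=|javascript:)",
    }
    return {
        "action": "block",
        "url": finding.url,              # scope: one page
        "parameter": finding.parameter,  # scope: one input field
        "pattern": patterns[finding.vuln_class],
        "note": f"virtual patch for {finding.vuln_class} pending a code fix",
    }

if __name__ == "__main__":
    finding = ScannerFinding("sql_injection", "/account/search", "q")
    print(finding_to_virtual_patch(finding))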

Category: Application Security, Next-generation Security Infrastructure

27 responses so far ↓

  • 1 Andre Gironda   August 20, 2009 at 12:30 am

    Can you go over the developer backlog and lack-of-source-code issues in detail? I think it’s better to find a way to fix the code, rather than using application firewall products, in a far larger majority of cases than the industry purports.

    The runtime tools need to change because they are becoming obsolete. At the very least, they could take into account HTTP server configuration files. Look at what Microsoft is doing with the Security Runtime Engine for ASP.NET controls through an IIS IHttpModule. They are going to stamp out XSS and SQLi, a feat which no WAF, not even mod-security, can claim today. Even Ryan Barnett said that mod-security, at best, can only protect against first-order XSS — yet the industry has more commonly seen second-order and DOM-based XSS attacks in the past 4 years. Reflected, or first-order, XSS has been a thing of the past since phishing made use of it in the 2001-2004 years.

    These types of changes don’t require AVDL or anything like it. Another great example is the SPF work from Gotham Digital Science. Individual crafty attackers, would-be adversaries, and hackers are scared to even touch protection toolsets like SPF. They primarily concentrate on hobbyist equivalents, such as PHP-IDS. However, SPF is Enterprise-class, open-source software — in the same way that mod-security is.

    Today’s appsec/datasec industry needs fewer products and more of a strategy. We need to move faster than these products allow us to. The only way to make this happen is by increasing every organization’s understanding of the appsec/datasec space through proper communication and risk maps. Experts are necessary to perform this activity. There is a limit on the number and quality of experts in these spaces, so it would make sense to outsource some, but not all, expertise to at least two application security consulting companies.

    So, out with the web application firewall vendors and in with the appsec consulting companies. My picks would largely be experts who have been performing expert security reviews since before the dawn of web application security scanners, web application firewalls, and source code security analysis tools. Names come to mind such as Aspect Security, Gotham Digital Science Security, NGS Software, Cigital, iSecPartners, Stach & Liu, Casaba Security, Leviathan Security Group, Corsaire, Consciere, Security Compass, Security Innovation, ClearNetSec, Korelogic, AsTech Consulting, Denim Group, HP ASC, Fortify Software On-Demand, Cenzic Click2Secure, Verisign, McAfee, Booz Allen, Accenture, E&Y, IBM/ISS, Verizon Business, Matasano, Independent Security Evaluators, possibly even Accuvant, Coalfire, Trustwave SpiderLabs, Neohapsis, Secure State, Sensepost, Matta, n.runs, Tech Mahindra, IOActive, Inguardians, and last (but very least of all, because of their shortsightedness and focus on low-quality SaaS instead of high-quality services), WhiteHatSec.

  • 2 Jeff Laurinaitis   August 20, 2009 at 12:32 am

    If you use the best-of-breed tools that you mentioned, this is not only feasible — some security vendors have had the same vision and have built integration points. WhiteHat Security is (in my & RKON’s opinion) the industry leader in web application vulnerability assessments (it combines traditional scanners with humans/experts reviewing the business logic). They have worked with F5 & Imperva to integrate their solutions to enable a “virtual” application patch and thus provide a layer of security while the developers have time to fix the source code.

    > http://www.whitehatsec.com/home/partner/imperva.html

    > http://www.whitehatsec.com/home/partner/f5.html

  • 3 Neil MacDonald   August 20, 2009 at 7:03 am

    Andre, agree – “Fix it” is the right answer as I stated up front.

    However, I regularly talk with clients starting up their application security testing programs that have a portfolio of hundreds of untested web applications. They are on a fast path to get these tested using internal and external resources. The backlog problem in development is real. Development already has a pipeline. One client estimated it would take more than a year — with a complete halt of all new development and changes — to go back and fix the applications they had that were vulnerable. With many commercial web-enabled applications, access to the source code isn’t made available to make the fix, or modifying it voids the support contract.

    Sometimes the right answer is to shield a vulnerable application until we can get it fixed. By all means, fix it. But don’t go unprotected in the interim.

  • 4 Andre Gironda   August 20, 2009 at 2:06 pm

    I’d be interested to hear more about the kinds of frameworks, third-party components, and other development artifacts in use at those clients. If test harnesses already exist for any of these hundreds of apps, then it may be easier to get them tested, albeit in a fashion not “officially” approved by the industry.

    The client that estimated a year may be a bit short-sighted, depending on the types and kinds of vulnerabilities. I recently heard a story about a large development environment that was looking at a certain 1.5 MLOC application. This application was riddled with the types of web application vulnerabilities you’d expect — every one of them. It took only 4 developers 2 days to stamp out the SQL-injection-related bugs — that whole class of vulnerabilities solved!

    A lot of commercial web-applications are written in ASP.NET. In this case, I’d rather see the Security Runtime Engine utilized, as mentioned previously. Similar protection capabilities and solutions are available. If the developers understand the coding issues, usually they can provide an interim fix that at least covers the whole application and prevents the actual attacks. Getting them involved in this process immediately prevents chasing them down later and wholly curtails the IT/Ops vs. AppDev competition.

    If AppDev continues to treat IT/Ops or InfoSec as a service, instead of a partner, apps will always have security issues.

    As for access to source code and contractual agreements, this becomes a risk management situation. In many risk scenarios, I could see either doing nothing while documenting a legal case against the application provider for negligence — or simply retiring the application in the not-so-distant future. How can IT/Ops support an application that can never be fixed? I’ve heard of situations where source code went into escrow. A risk management exercise may deem that the source code needs to come out of escrow and be made available to the buyer of the application. I doubt any ISV selling binary-only web application packages will get return business from any enterprise if they are unwilling to provide fixes and/or source code, especially if the consequences are dire.

  • 5 Mandeep Khera   August 20, 2009 at 2:17 pm

    Neil/Andre

    I agree with some of your comments above. We are finding that most of our customers have hundreds of vulnerabilities with limited time to fix them. Our quantitative score for each vulnerability helps them prioritize which ones to fix first. However, it still might take them a few weeks or months to fix the most critical ones. That’s where our integration with WAFs helps. We never got on the AVDL train, and a simple XML-level integration is working fine. WAFs by themselves have a lot of issues with false positives, and configuring rules based on the vulnerabilities found makes a lot of sense.

    I do believe that consulting companies like ts/sci can add a lot of value, because a lot of developers are still struggling to find the right processes and expertise to remediate a lot of vulnerabilities. So, IMHO, the right approach is: use scanners/SaaS to find vulnerabilities, prioritize based on criticality, configure WAF rules until they are fixed, and use consulting companies to help with remediation and streamline ongoing SDLC processes…

    Mandeep Khera

  • 6 Andre Gironda   August 20, 2009 at 4:08 pm

    Mandeep,

    You said, “it still might take them a few weeks or months to fix the most critical ones”. Do you agree that it might take them a few hours or days to fix the most critical ones? I have found that “hundreds of vulnerabilities” can be fixed just as easily as a handful of vulnerabilities when centralizing security controls using well-known application components, such as the ones included in popular frameworks (e.g. JEE, .NET, PHP, Python, RoR, Perl, Classic ASP, JSP, and ISAPI) or less-popular component add-ons (e.g. OWASP ESAPI).
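
    To make “centralizing security controls” concrete, here is a minimal, purely illustrative sketch (the helper class is hypothetical; it uses Python’s standard sqlite3 module): every query in the application goes through one helper that enforces parameterization, so the whole SQL-injection bug class is addressed in one place rather than input by input.

import sqlite3

class Db:
    """Single choke point for database access; all application code uses query()."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        # User input is passed only through bind parameters; the driver keeps
        # it out of the SQL text, so a stray quote cannot change the query.
        if not isinstance(params, (tuple, list)):
            raise TypeError("bind parameters must be a tuple or list")
        return self.conn.execute(sql, params).fetchall()

db = Db()
db.query("CREATE TABLE users (name TEXT)")
db.query("INSERT INTO users (name) VALUES (?)", ("o'malley",))  # the quote is harmless
print(db.query("SELECT name FROM users WHERE name = ?", ("o'malley",)))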

    Do you see the possibility of coming up with an interim fix besides placement of a WAF? What value do you or your customers get out of black-box web-application penetration-testing, whether done by a scanner or SaaS? I feel that these activities will always produce low value, in both the short and long term. Can’t the appsec consulting shops also provide penetration-testing services, including SaaS or cloud-based ones, say, after a code review (more below on this)?

    IMHO, the right approach is to define an appsec policy with the help of appsec consultants. When approaching a specific-app risk assessment, first get an understanding of what the application is used for in its business context, who its users are, and what kinds of data are produced/consumed. This can be done by performing a crawl of the application while mapping URLs to source code.

    In some situations, however, it may be easier to apply an OWASP ASVS L2B or L3 verification solely using the development artifacts, which may or may not include a limited amount of documentation. For very large apps, the documentation could include UML class/sequence diagrams or simple domain models, or these can be elicited/reverse-engineered. Using a model, risky sections of code can be identified and focused on, in the same way that centralizing security controls provides similar focus.

    I find that there is little value in performing OWASP ASVS L1 or L2A verifications alone, even to build a risk map like you describe (i.e. prioritizing based on criticality). L2A verifications are almost always best performed following a L2B verification. In other words, it’s always best to code review before pen-test.

    Also, please note that the “ts/sci security blog” is not a consulting company, nor a company at all. We’re merely industry analysts, much like Gartner.

    Developers do need to find the right processes and expertise. Typically, developers that practice high-quality hygiene and behaviors have no problem integrating centralized security controls into legacy and/or new applications. In other words, if the developers follow any common Agile paradigm (e.g. MSF Agile, Scrum, XP) — and these processes are integrated with application lifecycle management platforms (e.g. Microsoft VSTS, Sun/Java JFeature, HP Quality Center, IBM Rational Quality Manager, or other portfolios such as the ones from Coverity, Atlassian, or Parasoft) — then it’s extremely easy to modify that hygiene and ALM platform to support all application security needs.

    Developer teams that do not have a sense of hygiene around their quality improvement process must first build that in. I can understand that for some organizations, this can take a considerable amount of time. Usually, this is because of a lack of expertise in quality control, especially developer-testing patterns. Today’s ALM platforms are turn-key solutions, often bootable from CD (as is the case with Buildix, which is an open-source and free bootable CD) or loaded as a virtual appliance into a virtual infrastructure (in the case of Microsoft VSTS and TFS VHDs, which have 60-day trials from the Microsoft download center).

    Without this developer hygiene towards both quality and security issues, I believe it is impossible to stamp out any class of bugs — security or not — in any organization. There is extremely high ROI attached to these projects. Can an application security SaaS solution offer similar ROI? Can application penetration-testing alone offer similar ROI? Can a web application firewall offer similar ROI?

    Most importantly, what classes of security bugs can appsec SaaS (or app pen-testing) plus a WAF stamp out entirely? Can this solution prevent second-order SQL injection (or LDAPi, XPathi, XMLi, SMTPi), especially in, say, the case where stored procedures are executing system commands? Can this solution prevent second-order Cross-Site Scripting (XSS)? Can this solution prevent Remote File Inclusions, or Local File Inclusions that may cause remote execution of code through a second-order attack (e.g. log files)? Can it prevent insecure direct object references?

    While I realize that WAFs can prevent CSRF attacks, I don’t understand why an expensive, performance- and path-affecting solution should be utilized in place of free, open-source, turn-key alternatives such as OWASP CSRFGuard, Codeplex SPF or AntiCSRF, mod-security (a la Ryan Barnett’s talk from BlackHat DC 2009), and similar.

    This covers the OWASP Top Ten, categories A1-A5. WAFs cannot cover OWASP T10 category A8 (obviously), and while they may provide some coverage of A6-7 and A9-10, these categories can also be solved by free, open-source alternatives including Codeplex SPF or mod-security.

    Please also note that for some organizations, the OWASP Top Ten may be the least of their critical vulnerability problems. Most “very critical” vulnerability problems at the application layer exist at the domain tier (aka “business logic”), or integration tier (e.g. Web Services, Ajax/Flash proxies/remoting, directory services, federation services, et al). Most unfortunately, WAFs are not in the path of these tiers because the WAF can only exist between the client tier and presentation tier.

    Security No-Brainer #10: Gartner was wrong about Security No-Brainer #9.

  • 7 Neil MacDonald   August 21, 2009 at 1:27 pm

    Andre, thanks again.

    It would be more accurate to say that you believe that I (Neil MacDonald) am wrong. This is a personal blog that I maintain.

    A couple clarifications:

    I never said “WAFs are the only solution”. I don’t believe every organization needs web application firewalling capabilities. For Gartner clients, we’ve published a decision framework to help clients understand when a WAF makes sense (in this case, with regard to PCI):

    http://www.gartner.com/DisplayDocument?id=734720

    Sometimes they make sense, sometimes they don’t. We’ve already talked about some of the cases where clients have found them useful, and I’d add a few more: when development is offshored/outsourced, the turnaround can take a long time, during which the organization wants protection. Another case is out-of-support applications (or vendors that have gone out of business) where you can’t directly modify the source code. Another client had an offshored application development contract where the vendor claimed the contract didn’t cover security defects (obviously a badly worded contract, but they were stuck nonetheless).

    What I did say is that web application vulnerability scanners should be able to communicate with web application firewalls and that is still the case. Whether or not you need a WAF is a different discussion.

    I also said “fix it” is the right answer, and if you read my other posts on application security, you’ll find that I also firmly believe that getting developers to write more secure code to begin with must be our priority. However, organizations that are just getting started have a portfolio of already-deployed apps (sometimes in the hundreds), many of which are vulnerable because they were built before secure development processes were implemented. It takes time to get them fixed. For this reason and all of the reasons listed above, sometimes an interim shield is the best approach, especially when the alternative is remaining exposed.

    Finally, I’d ask you to broaden your definition of a WAF. It doesn’t have to require a separate appliance and it doesn’t have to be expensive. ModSecurity is an open-source plug-in that runs on the server. Most Application Delivery Controllers (e.g. F5) have WAF capabilities built in, some deep-packet-inspection firewalls (e.g. Third Brigade) can provide WAF functionality, some XML firewalls can provide this, and for smaller enterprises some multi-function firewalls have this capability built in. There are probably lots more examples, but you get the idea. You can read about the alternatives in detail here:

    http://www.gartner.com/DisplayDocument?id=677008

  • 8 Ryan Barnett   August 21, 2009 at 3:39 pm

    @Andre – one point of clarification with regard to the capabilities of the ModSecurity rules I covered at Blackhat Federal this year. The rules that I showed can actually identify missing output encoding in both 1st- and 2nd-order XSS scenarios. One set of rules looks for 1st order (reflected) by looking in the current page output for any of the flagged input. The other set is for 2nd order (stored): the inbound data that is flagged is also saved off to a persistent global collection, which is then checked against all other response bodies. Technically, 3rd order (DOM) could possibly be addressed through ModSecurity’s Content Injection features (which I showed for CSRF tokens), but that is not something that I have attempted yet.
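
    Conceptually (this is a rough Python sketch of the logic, not ModSecurity rule syntax), the two rule sets behave roughly like this: flagged inbound values are remembered, and every response body is then checked for any of them appearing without output encoding.

import html

flagged_inputs = set()  # stands in for the persistent global collection

def inspect_request(params):
    """Flag inbound values that look like script injection attempts."""
    for value in params.values():
        if "<script" in value.lower():
            flagged_inputs.add(value)

def inspect_response(body, request_params):
    """Report reflected (1st-order) and stored (2nd-order) XSS candidates."""
    findings = []
    for value in request_params.values():      # 1st order: same transaction
        if value in body and html.escape(value) != value:
            findings.append(("reflected", value))
    for value in flagged_inputs:               # 2nd order: earlier transactions
        if value in body:
            findings.append(("stored", value))
    return findings

# toy walk-through: the payload arrives in one request and shows up,
# unencoded, in a later page's response body
inspect_request({"comment": "<script>alert(1)</script>"})
print(inspect_response("<html><script>alert(1)</script></html>", {}))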

  • 9 Andre Gironda   August 21, 2009 at 11:28 pm

    @ Ryan Barnett

    Great clarification. I almost always give mod-security credit for this, in addition to the other things that I do like about it. It’s only partially relevant to this conversation, though, because it doesn’t protect on the outbound; it only provides the ability to monitor. In the case of the Microsoft Security Runtime Engine, 2nd-order XSS is actually prevented on all ASP.NET controls, except for Response.Write and ASP.NET AJAX.

    @ Neil MacDonald

    I often find your supposed “classic examples” of when a WAF applies hard to buy into. In the case of a defunct company… where is the escrowed source code? Shouldn’t that be made available to customers of that defunct company? Poor contractual language can always be changed… it simply sounds like excuses to me. Anyone sold on the “WAF religion” will obviously convince themselves of anything — especially customers of current WAF offerings.

    I would prefer to say that WAFs “Never make sense for protection”, unless we open the definition of WAF to include Microsoft Security Runtime Engine, Gotham Digital Science SPF, or the other Microsoft and OWASP projects centered around AntiCSRF or CSRFGuard protections. I think mod-security can also provide great CSRF protection, so this is the only corner-case where my logic doesn’t apply so far.

    I also have an issue with web application security scanners, in particular, the idea that they should work with web application firewalls, or somehow communicate with each other. My issue is that web application security scanners are typically only useful for OWASP ASVS L1A evaluations, and therefore, they are suspect for any given risk assessment actions.

    Take this example: a web application security scanner scans a website and reports false positives. Those same false positives are then injected into the web application firewall, which could potentially create even more false positives. Now the web application behind this web application firewall has functionality blocked due to the automated nature of this WASS+WAF interaction. Thus, this could create a scenario where IT/Ops has to test and/or remove these production filters in order to restore normal functionality for the web application’s users.

    WAF vendors try to push these “listening and learning” modes, and they also try to push the integration of web application security scanners — but only in an attempt to remain relevant and/or profitable in today’s markets. There is no real reduction in risk (and absolutely zero ROI or cost-benefit favor) from web application firewalls. They are simply too complex to be automated in any way, and always will be. There has to be some mathematical formula or statistical conclusion to be made here: web application firewalls do not work as advertised today, nor will they ever work as advertised.

    For monitoring and program understanding, I will concede that web application firewalls provide some benefit to organizations “just learning the ropes”. I prefer to keep this type of technology in a lab. Production networks preclude the need for these types of path- and performance-affecting technologies.

    Correct me if I’m wrong, but by you saying, “already-deployed apps (sometimes in the hundreds) many of which are vulnerable”, it appears that you are attempting to suggest that hundreds of apps in one data center can be covered by a single WAF pair. This couldn’t be further from the truth, so I wanted to clarify it. Perhaps this type of wording gives the illusion to others that a “silver bullet” can be deployed, which is why I brought this up specifically.

    You also said, “For this reason and all of the reasons listed above, sometimes an interim shield is the best approach especially when the alternative is remaining exposed”. I firmly believe in enacting whatever interim “shields” or “fixes” are possible — given a robust enough environment to do so. Aspect-oriented programming is one of these types of “shields” available, as are basics such as producing valid XHTML for all web pages. There are plenty of server and application hardening tricks that can be applied as interim fixes. There are plenty of situations where incident handling management and planned retirement will settle a risk management scenario around a vulnerable web application. There are numerous options outside of “web application firewalls” that provide nearly the same or better protection and detection capabilities.

    The problem that I have with the definition of WAF, or IPS+WAF, or Firewall+WAF, or “XML gateway + WAF”… is that all of these options are products. There are plenty of options that organizations have available to them outside of products. By going to the “security product vendors” for solutions instead of the more appropriate “appsec consulting services”, an organization is likely to become dependent on, and blinded by, the vendor vision and the product(s) they have in their environment. This leads to a more permanent misunderstanding about risk management, and to poor technical solutions to the application security problem, at the strategic (ROI, cost-benefit), operational (resources involved per time-to-fix vulnerability), and tactical (i.e. it doesn’t stop the numerous attacks that I’ve mentioned above) levels.

    Don’t get me wrong: I understand the ideas around reference architectures that can and cannot use a WAF. I’ve broadened my definition of WAF to include more things than you have, by looking outside of the product and vendor bias.

    Based on my findings, and my deep involvement with the subject, I can see many, many problem areas in going down either the web application security scanner or web application firewall route, whether merged or not. These products simply don’t conform to the OWASP ASVS, outside of web application security scanners providing a limited L1A verification. I can’t see that as a viable alternative for any organization — lack of source code, forced-no-change contracts, legacy, millions of apps with millions of lines of code, or not.

    Furthermore, there are additional problems with web application firewalls that go unnoticed and are not talked about, such as vulnerabilities in these products themselves.

    Read more at: Short-term defenses for web applications, an article I wrote in March 2008.

    You mentioned “PCI”. Did you know that web application firewalls may in fact permanently reduce the ability of an organization to become compliant, including with PCI-DSS 1.2.1?

    Read more at: Decreasing security for perceived security — all in the name of Compliance, an article written by Marcin Wielgoszewski in November 2008.

  • 10 Twitter Trackbacks for Read Gironda's reply...sounds like something from my mouth. Complex issues require complex analysis [gartner.com] on Topsy.com   August 23, 2009 at 8:12 am

    [...] Read Gironda’s reply…sounds like something from my mouth. Complex issues require complex analysis blogs.gartner.com/neil_macdonald/2009/08/19/security-no-brainer-9-application-vulnerability-scanners-should-communicate-with-application-firewalls/ – view page – cached [...]

  • 11 Nathan McFeters   August 23, 2009 at 8:19 pm

    I find it interesting that one suggests using a flawed technology, application vulnerability scanners, to feed into a flawed technology, web application firewalls, to provide security.

    Sounds a good deal more like a marketing ploy to me, but hey, what do I know?

  • 12 Neil MacDonald   August 24, 2009 at 9:50 am

    @Nathan,

    As compared to doing nothing? I’d rather have incomplete testing and some protection than none at all.

    There is no perfect application security testing technology.
    It is impossible to produce applications with zero defects.

    I think the flaw in your reasoning is to assume that any approach would be perfect. There is no silver bullet.

    I think we would agree – and I have stated – that our first goal must be to produce more secure code. And, if defects are found post-production, the right answer is to “fix it”.

    In conversations with clients, it is clear to me that this is not always possible. Changing development processes (and developer behavior) takes time. Fixing a backlog of vulnerable applications takes time. Sometimes the right answer is to shield an application from a known vulnerability until it can be fixed.

  • 13 Are Web Application Security Testing Tools a Waste of Time and Money?   August 25, 2009 at 9:15 am

    [...] previous post on the value of linking web application vulnerability scanning tools with web application firewalls [...]

  • 14 Nathan McFeters   August 25, 2009 at 3:59 pm

    The problem is that, with all the hype spun up around what WAFs are supposed to be able to do, people aren’t buying them as often for the reasons that you are mentioning. More often, they’re buying them as the silver-bullet solution that you mention, and, what’s worse, the devices are often marketed as such.

    My point is not to crush the usage of either tools or WAFs. They have their place. My point is that it seems absurd to me to have a device like AppScan, which will miss some findings, feeding into the WAF, which will miss some protections. Obviously, in the past, WAFs have been applied as a wholesale band-aid over problems… why limit yourself to what your scanning tool can find? Simply put, the scanning tools aren’t perfect, nor are the WAFs, but why limit further what your WAF can do by having it rely on the scanning tool?

    Is using a WAF better than doing nothing at all? Hmm… I’ll contest that in the future when we start seeing memory corruption flaws on these devices.

    -Nate

  • 15 Neil MacDonald   August 25, 2009 at 8:17 pm

    Nate,

    Ah, OK. Perhaps I wasn’t clear.

    WAFs should do more than just what a scanning tool tells them. My point was that if a scanning tool finds a problem, why shouldn’t it be able to tell a WAF precisely how to protect against an exploit of the vulnerability?

    And, as you point out, the WAF’s protection is *not* limited to just this linkage.

  • 16 Nathan McFeters   August 25, 2009 at 8:47 pm

    Thanks for clarifying your point. I suppose if a tool finds an issue, there’s no good reason it shouldn’t make sure the WAF is protecting it after the fact, but then again, if these WAFs do what they are supposed to, then why should we need application scanning tools at all? I’ve never seen a production scanning tool truly capable of catching more than basic attack vectors, which a WAF SHOULD nab anyhow.

    Ultimately, I think people will come to realize that WAFs are just not cutting it, and in fact, might be creating more risk for an organization (see my previous point on memory corruption issues, as these things are typically written in C/C++, sometimes even in kernel land code).

    As far as tools go, a fool with a tool is still a fool (and maybe also a tool).

    -Nate

  • 17 Neil MacDonald   August 26, 2009 at 7:57 am

    The more context a rule has, the more effective the WAF will be.

    In a simplified example, a single quote in some contexts is perfectly legitimate. In other cases, it’s an indication of SQL injection.

    The ability to tie a very specific rule to a specific page and input field can provide very precise protection. Net/net – better context reduces the chance that a given rule creates a false positive, which might occur if we tried to create a blanket rule for the entire site or even an entire page. The scanner can express the detailed information precisely to the WAF, as it knows exactly where it found the vulnerability.
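
    As an illustrative sketch only (the rule logic below is hypothetical, not any particular WAF’s syntax), compare a blanket rule with a rule scoped to the page and field the scanner identified:

import re

SQLI_PATTERN = re.compile(r"'")  # deliberately simplified: a lone single quote

def blanket_rule(request):
    """Block a single quote in any parameter on any page."""
    return any(SQLI_PATTERN.search(v) is not None for v in request["params"].values())

def targeted_rule(request):
    """Block a single quote only on the page/field the scanner flagged."""
    return (request["url"] == "/search"
            and SQLI_PATTERN.search(request["params"].get("q", "")) is not None)

legit = {"url": "/profile", "params": {"surname": "O'Brien"}}  # legitimate apostrophe
attack = {"url": "/search", "params": {"q": "x' OR '1'='1"}}

print(blanket_rule(legit), blanket_rule(attack))    # True True  -> false positive
print(targeted_rule(legit), targeted_rule(attack))  # False True -> only the attack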

  • 18 Nathan McFeters   August 26, 2009 at 11:29 am

    I fail to see how your example assists with this. If the scanning tool finds a SQL Injection vulnerability, then you suggest that it apply a rule preventing single quotes? Couldn’t a single quote be perfectly normal for an application input, but still cause SQL Injection?

    Your concern is that we will end up protecting against false positives, thus creating problems for legitimate functionality. This is certainly a problem with WAFs; however, I fail to see how your solution truly helps, and perhaps the bigger issue that you are missing is that the scanning tool may not catch all of the vulnerabilities, and by applying protections based only on what the scanning tool finds, you could be neutering the WAF.

    Again, I should point out, I don’t believe WAFs to be a great solution except in very, very specific cases, but further, I don’t believe having a scanning tool neuter its capabilities further is a good idea either.

    Simply put, I think people will have to come to the conclusion that you can’t “automagically” secure anything, no matter what tools you are using.

    -Nate

  • 19 Neil MacDonald   August 26, 2009 at 11:39 am

    Thanks for the good discussion.

    You raise a good point. If a DAST tool found a vulnerability (scanning without a WAF in place), I’d try to exploit it first with the WAF in place. The WAF may shield the vulnerability from attack in its existing configuration.

    If it didn’t provide protection, then the DAST-to-WAF linkage could be used to generate the specific rule. Then, I’d test again.

  • 20 Nathan McFeters   August 26, 2009 at 12:42 pm

    Seems to be quite a wash, rinse, repeat cycle there. I understand the linkage, it seems to make logical sense, but I just think that garbage in equals garbage out.

    I think WAFs have a long way to go before they are worthy of being employed in production environments… I’ve just found too many attacks that get around them, and too many attacks against them. Considering the additional performance hit that you face by bottlenecking traffic for all of your apps through one device, I can’t envision anything but a very few scenarios in which using a WAF is sane.

    That all said, I don’t see linking them up with scanning tools as a realistic fix to their problems, even if logically they seem to be puzzle pieces that match up.

    -Nate

  • 21 Jeremiah Grossman   August 31, 2009 at 11:23 am

    @Nate @Neil, I’ve been reading the conversation with great interest and thought it time to toss in a quick comment. As you know, we (WhiteHat) have a lot invested in making this concept work, and let me assure you… it does… work… in the field. Anyway, it appears to me that you guys agree more than you disagree. When you get right down to it… VA+WAF all comes down to scanner accuracy. You cannot fake it or market around that fact. I’ve written about this in the past.

    http://jeremiahgrossman.blogspot.com/2008/06/ultimate-scanner-accuracy-test.html

    I’m with Nate that unverified scanner results fed into a WAF are doomed to fail; that is why we verify findings first. This also gives us the ability to identify/block more than just the basic scanner-found stuff. We’re actively R&D’ing with the WAF guys on how to block more and more vulnerability classes using the intelligence provided by VA. I’m confident many types of business logic flaws can be tackled. Not asking you to believe; we aim to prove. :)

    Also remember, the industry is just at Phase 1 of VA+WAF. Soon WAFs will be able to tell scanners specifically what to test and when, since they can see the traffic and app changes.

  • 22 An Information Security Place » An Information Security Place Podcast – Episode 24   September 3, 2009 at 9:11 am

    [...] – Link 1 / Link 2 [...]

  • 23 Neil MacDonald   September 3, 2009 at 10:35 am

    An Information Security Place discusses this issue in their podcast here:

    http://infosecplace.com/blog/2009/09/03/an-information-security-place-podcast-episode-24/

  • 24 WAF Enthusiast   September 4, 2009 at 12:36 pm

    I would go one further and say that WAFs need to evolve to deal with the issues / strains of cloud environments.

    Foundational security using black, white, and grey listings for application requests and responses must be possible. To make sure pre-set policy enforcements are not activated or deactivated without approval from an administrator, deployment and policy refinement through establishing rulesets must be possible in a shadow monitoring or detection-only mode. Once the shadow monitoring ruleset is stable, only then should it be allowed to be deployed in an enforcement mode on the WAF. This allows complete transparency for the administrator into the real-world effect of the ruleset, while at the same time allowing layered rulesets to be tested without compromising existing policy enforcement. Avoiding false positives and relaxed established defenses is essential for a real-world, usable WAF in a cloud.

    More here:
    http://tinyurl.com/koraum

  • 25 Security Thought for Thursday: With DLP, Don’t Just Treat the Symptoms, Address the Cause   September 24, 2009 at 8:41 am

    [...] after the fact are symptomatic of a faulty development process. For example, we can put up a web application firewall to shield a vulnerable application but we really haven’t solved the problem. To properly address [...]

  • 26 didier   December 3, 2009 at 5:23 am

    Thanks for the interesting article. We have used a similar tool, the SaaS online web application vulnerability scanner from GamaSec: http://www.gamasec.com

    GamaSec identifies application vulnerabilities (e.g. Cross-Site Scripting (XSS), SQL injection, Code Inclusion, etc.) as well as site exposure risk, ranks threat priority, produces highly graphical, intuitive HTML reports, and indicates site security posture by vulnerabilities and threat exposure.

    We were very satisfied with the report, and the technical part of the report provided clear recommendations for closing the vulnerabilities found. The price of the annual service and the on-demand scan scheduler were also good added value to the service of http://www.gamasec.com

  • 27 Dan Cornell   January 16, 2010 at 12:31 pm

    Perhaps take a look at an open source tool we just put together:
    http://vulnerabilitymanager.denimgroup.com/

    This allows you to import results from a variety of static and dynamic scanning tools and auto-generate IDS/IPS/WAF rules. You can also upload logs from the IDS/IPS/WAFs and it will parse out the events associated with the generated rules so you can see which vulnerabilities are under attack.

    Blog post on the initial release with more info is here:
    http://blog.denimgroup.com/denim_group/2010/01/technology-preview-release-of-vulnerability-manager-now-available.html

    –Dan
    @danielcornell