
On LARGE Scale Vulnerability Management

By Anton Chuvakin | October 31, 2011

vulnerability management | security | compliance

Vulnerability management is very easy, really. Get a scanner, scan a system, peruse the report listing all the flaws, then go and fix them. Done! Risk is presumably reduced and/or compliance is restored (e.g., in the case of PCI DSS, by fixing severe vulnerabilities with high CVSS scores).
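
For a single system the loop really is that short. Here is a minimal sketch of that scan-report-fix flow in Python; the run_scan() call is a hypothetical stub standing in for whatever scanner API or CLI you actually use, so the example runs on its own with canned findings:

    def run_scan(host):
        """Hypothetical placeholder for a real scanner invocation."""
        return [
            {"id": "EXAMPLE-0002", "cvss": 4.0, "title": "Information disclosure"},
            {"id": "EXAMPLE-0001", "cvss": 9.3, "title": "Remote code execution"},
        ]

    def scan_report_fix(host):
        # Scan, then review the findings worst-first; actually fixing them is
        # handed off to whoever administers the host.
        findings = sorted(run_scan(host), key=lambda v: v["cvss"], reverse=True)
        for vuln in findings:
            print(f'{host}: {vuln["id"]} (CVSS {vuln["cvss"]}) - {vuln["title"]}')

    scan_report_fix("10.0.0.15")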

Now, imagine the same process attempting to achieve the same result – risk reduction, compliance, whatever – on 100,000 systems. Systems running many platforms, legacy and cutting edge; running many applications exposed to public, partner, or internal audiences; deployed on many different networks, segmented by firewalls and sometimes hanging off low-bandwidth links. On top of this, the systems are owned by many different people, organizational units, or even different organizations. What's more, these systems serve different purposes – some are critically important while others are redundant or even forgotten. Suddenly, scan-report-review-fix becomes an incredibly complex project, with every little detail presenting a separate challenge on many different levels. Even things that are beyond trivial when scanning a simple network or a single system – such as who has the right to press the “scan” button or when to run the scan – become exercises in frustration, organizational politics, and sometimes sheer esoterica.
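
To make that complexity a bit more concrete, here is a rough sketch of the per-asset metadata that scan-report-review-fix suddenly depends on at this scale; the field names and values are illustrative assumptions, not a schema from any particular tool:

    from dataclasses import dataclass

    @dataclass
    class ScanTarget:
        host: str
        owner: str           # team or org unit responsible for remediation
        segment: str         # network zone; decides which scanner can reach it
        exposure: str        # "public", "partner", or "internal"
        criticality: str     # "critical", "standard", or "forgotten"
        low_bandwidth: bool  # schedule lighter or less frequent scans if True

    targets = [
        ScanTarget("db01.example.com", "payments-ops", "pci-zone",
                   "internal", "critical", False),
        ScanTarget("kiosk-remote-07", "retail-it", "branch-wan",
                   "internal", "standard", True),
    ]

    # Group targets by segment so each scanner only sweeps hosts it can reach.
    by_segment = {}
    for t in targets:
        by_segment.setdefault(t.segment, []).append(t.host)
    print(by_segment)

Even deciding who gets to populate and update fields like "owner" and "criticality" is often an organizational challenge rather than a technical one.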

As I speak with organizations that scan hundreds of thousands of systems, as well as with providers of tools that enable such scanning, I am trying to converge the knowledge I am collecting into a common set of practices (some might call them “best practices”) for successfully deploying vulnerability assessment tools on a LARGE scale and running a MASSIVE vulnerability management effort.

I would love to hear from my readers about their experiences with large-scale vulnerability assessment (with high tens of thousands, hundreds of thousands, or even millions of scan targets) and the related vulnerability management. Here are some questions to focus your thoughts:

  • How did you architect your scanning environment – by network, by region, or flat?
  • How did you figure out where to place the scanners and how many to use?
  • How did you handle the organizational politics around scanner deployment, ongoing scanning, and credential access?
  • What is your normal daily workflow around vulnerability management?
  • How well does your tool scale to your needs, both for scanning and for remediation workflow?
  • How do you prioritize what to fix? And then how do you get the administrators to actually fix the systems? (A toy scoring sketch follows this list.)
  • How did you handle the challenges of remote environments, especially those with low-bandwidth links?
  • What was hard, or unexpectedly easy, about your tool and your deployment process?
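
On the prioritization question above, here is a toy sketch of one common approach (not a recommendation of any particular scheme or tool): weight the raw CVSS score by asset criticality and exposure so the fix queue reflects more than scanner severity alone. The weights are made-up illustrative values:

    CRITICALITY_WEIGHT = {"critical": 1.5, "standard": 1.0, "forgotten": 0.5}
    EXPOSURE_WEIGHT = {"public": 1.5, "partner": 1.2, "internal": 1.0}

    def priority(cvss, criticality, exposure):
        return cvss * CRITICALITY_WEIGHT[criticality] * EXPOSURE_WEIGHT[exposure]

    findings = [
        ("web01", 7.5, "critical", "public"),        # customer-facing web server
        ("test-box", 9.8, "forgotten", "internal"),  # worse CVSS, lower real risk
    ]
    for host, cvss, crit, exp in sorted(
            findings, key=lambda f: priority(f[1], f[2], f[3]), reverse=True):
        print(f"{host}: CVSS {cvss} -> priority {priority(cvss, crit, exp):.1f}")

Even a weighting this crude pushes a public, business-critical box with a medium-severity finding above a forgotten internal test box with a critical one.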

Any answers or feedback – or even additional useful questions to ask! – are much appreciated!

BTW, read my previous posts related to my current research effort on vulnerability assessment technology and vulnerability management practices:

  1. On Scanning “New” Environments, which covers scanning virtual, cloud, mobile, and other unusual and emerging IT environments.
  2. On Vulnerability Prioritization and Scoring, which talks about prioritizing vulnerabilities for remediation.

