I am starting my new research project for Q4 2011 (stepping briefly away from PCI DSS compliance): vulnerability management. As I go through existing Gartner coverage of the matter (tools, practices) as well as recent customer calls on the subject, one interesting theme emerges: vulnerability prioritization for remediation presents THE critical problem for many organizations operating the scanning tools. The volume of data from enterprise scanning for vulnerabilities and configuration weaknesses is growing due to both additional network segments and new asset types.
In order to cut through the noise and hype around this issue, I have created a structured approach for analyzing the prioritization approaches observed in practice (sometimes also called “risk scoring” by the tool providers). Specifically, I have broken every dimension of the prioritization decision out into a separate row in the table below.
Note that some of the methods are more likely to be used in combination with others rather than standalone. For example, you can first choose to fix vulnerabilities with CVSS base score > 8 in the US DMZ (thus combining three of the methods from the table: CVSS base score > 8, asset physical location = US, asset network location = DMZ). Or you can fix only issues that are High or Medium on the assets from a specific list of “key” assets, but only High on others (thus combining the good old H/M/L with a specific asset list).
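To make the combination idea concrete, here is a minimal code sketch of such combined rules; all field names and asset names are hypothetical, not taken from any particular scanning tool’s data model:

```python
# A minimal sketch of combining prioritization methods from the table below.
# All field names (cvss_base, severity, country, etc.) are hypothetical.

KEY_ASSETS = {"erp-db-01", "pay-gw-02"}  # the "asset list" method

def fix_first(vuln):
    """Return True if a finding belongs in the first remediation batch."""
    # Rule 1: CVSS base score > 8 in the US DMZ (combines CVSS score,
    # asset physical location, and asset network location)
    if (vuln["cvss_base"] > 8
            and vuln["country"] == "US"
            and vuln["network_location"] == "DMZ"):
        return True
    # Rule 2: High or Medium on "key" assets, High-only elsewhere
    # (combines H/M/L with a specific asset list)
    if vuln["asset"] in KEY_ASSETS:
        return vuln["severity"] in ("High", "Medium")
    return vuln["severity"] == "High"

scan_results = [
    {"asset": "web-01", "cvss_base": 9.3, "country": "US",
     "network_location": "DMZ", "severity": "High"},
    {"asset": "erp-db-01", "cvss_base": 5.0, "country": "DE",
     "network_location": "internal", "severity": "Medium"},
]
first_batch = [v for v in scan_results if fix_first(v)]
```

The point is simply that each row of the table becomes another predicate in whatever rule set an organization settles on.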
Here is what I came up with – any comments, additions, related ideas, feedback?
Please also see my call to action below the table!
| Method label | Prioritization method details |
| --- | --- |
| Asset list | List of select assets where vulnerabilities get fixed |
| Asset network location | Asset location on the network: segment, internal/external |
| Asset physical location | Asset location by country, city, corporate location, etc. |
| Asset role | Business role of an asset |
| Asset value | Assigned or computed business value of an asset; BC/DR asset scores |
| Compliance | Organization-wide or per-asset compliance; regulatory/internal |
| Countermeasures | Delay fixing vulnerabilities on more protected networks |
| CVSS environmental scores | Site-specific CVSS environmental scores |
| CVSS score | CVSS base and (sometimes) temporal scores |
| CVSS vector | CVSS vector components such as access vector, exploitability, etc., in addition to the score |
| Ease of fixing | Ease of fixing the vulnerability |
| Exploitability | Check vulnerability exploitability in the real world |
| H/M/L | High/Medium/Low severity ratings |
| Network topology | Model vulnerabilities across networks with ACL and network topology; use flow information |
| Threat-based | Count attacks on assets or exploits observed |
| Vulnerability popularity | Number of vulnerable assets fixable with a single patch |
| Vulnerability type | Vulnerability type such as overflow, SQLi, XSS, etc. |
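Since several of these rows are often blended into a single composite “risk score,” here is a purely illustrative sketch of such an aggregation; the weights and field names are invented, and no vendor’s actual algorithm is implied:

```python
# An illustrative composite "risk score" blending several rows of the table:
# CVSS score, asset value, exploitability, threat data, and countermeasures.
# Weights and field names are invented for the example.

def priority_score(finding):
    score = finding["cvss_base"]              # CVSS base score, 0-10
    score *= finding.get("asset_value", 1.0)  # asset value multiplier
    if finding.get("exploit_public"):         # exploitability in the wild
        score *= 1.5
    if finding.get("attacks_seen"):           # threat-based: live attacks
        score *= 2.0
    if finding.get("behind_ips"):             # countermeasures: can wait
        score *= 0.7
    return score

findings = [
    {"id": "vuln-a", "cvss_base": 9.0, "asset_value": 1.2,
     "exploit_public": True, "attacks_seen": False, "behind_ips": False},
    {"id": "vuln-b", "cvss_base": 7.5, "asset_value": 0.8,
     "exploit_public": False, "attacks_seen": True, "behind_ips": True},
]
findings.sort(key=priority_score, reverse=True)  # highest priority first
```

Real tools obviously tune and validate such weights; the sketch only shows the shape of the computation.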
So, here is my call to action!
To vendors: please point me to (or send me) your resources where you describe your prioritization algorithms, if any. If you still expect your customers to use only High/Medium/Low for prioritization, please don’t contact me.
To tool users: please let me know (through whatever means) how you are prioritizing what to fix – at the very least, tell me what methods from the above table you use and in what combinations.
11 Comments
FAIR.
I have to agree with the other comment regarding FAIR, or at least a risk-based approach. I didn’t even see “risk” in any of the buckets you created, but you may have assumed risk analysis was performed in some of them.
I recommend a multi-factor risk view when determining what to fix. Make sure to include operational risk.
Re: risk. There is a reason I didn’t just say “risk analysis”: such high-level statements are just not helpful in this context. Risk analysis via what method? Would you do “risk analysis” for 50,000 hosts, each with 50 vulns, scanned weekly? FAIR is a good example, but it is still not applicable in this context.
We are not pondering over ONE vuln here; we are looking at 6- or 7-digit counts across the organization.
The methods in the table can be called “risk assessment methods” (and many vendors do exactly that) since they connect vuln, asset, context, role, etc.
Use of an aggregation of CVSS scoring, asset “value,” and imminent/realized threat information to calculate an asset “Vulnerability Score” is very efficient and accurate. This score gives an algorithmic way to estimate a level of risk without laboring through other factors that would get closer to a full risk analysis.
I totally (did I say totally?) agree with Anton. We are dealing with large, large, large (did I say large?) amounts of vulnerabilities, and we are dealing with orders of magnitude, not onesie-twosies. 99% of the time, the estimated remediation prioritization mirrors more detailed risk analysis results.
Re: asset values – they are great, but IMHO few people can accurately (what does that even mean in this context?!) score all those boxes, systems, etc. If your vuln score uses asset values, there is a slight chance … not everybody is using it.
“99% of the time, the estimated remediation prioritization mirrors more detailed risk analysis results”
I am wondering about the above point – how does it work? Do the quick estimates just coincide with formal risk analysis?
Those factors leave out the critical component of exposure. Adding exposure values via topology modeling and mitigating factors is the only method for coming close to calculating risk. Your business owners should be accountable for assigning value to assets or entities. Your systems then need to be measured against factors mitigating the vulns. You must account for your protections such as firewalls, IPS, host security, and more.
Some orgs are doing it. It is not impossible.
Erik, have you tried reading the table? 🙂
“Model vulnerabilities across networks with ACL and network topology; use flow information” is there to indicate the approaches that your employer and its competitors are taking to solve this problem.
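For readers who have not seen such tools, here is a toy sketch of the core idea behind topology modeling – whether an attacker can actually reach the vulnerable host. The graph and names are invented; real products do far more (ACL parsing, flow data, multi-step attack paths):

```python
# A toy sketch of topology-aware prioritization: a vulnerability matters
# more if an attacker can actually reach the vulnerable segment. The network
# is a directed graph; edges exist only where ACLs permit traffic.
# The topology and segment names below are invented for the example.

from collections import deque

# Traffic allowed from key -> each listed value (per firewall ACLs)
REACHABILITY = {
    "internet": ["dmz-web"],
    "dmz-web": ["app-tier"],
    "app-tier": ["db-tier"],
    "mgmt-net": ["db-tier"],
}

def exposed_to(source, target):
    """Return True if `target` is reachable from `source` via the ACL graph."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in REACHABILITY.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A db-tier flaw is internet-exposed here; a mgmt-net flaw is not:
print(exposed_to("internet", "db-tier"))   # True, via dmz-web -> app-tier
print(exposed_to("internet", "mgmt-net"))  # False
```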
The table lists almost all the possibilities. I would like to add a few points here; hopefully I am not repeating anything already mentioned.
1. Vendor/third-party dependencies – dependencies in terms of applying a patch, or when the device management is outsourced.
2. Contract – when device management is outsourced or when the data is in the cloud, some service providers may have stronger contract terms, which can reduce the (financial) impact.
@nrupak
Thanks a lot for the comment. Indeed, contract issues and other outsourcing (and cloud, in fact!) issues matter in this case. Sometimes you might want to patch, but the 3rd party provider owns the system…
The table has covered almost all the points; it would be helpful to add a rating for ease of exploiting a vulnerability, which would depend on how much effort a threat agent needs to exploit it.
As you had covered Exploitability, is it the same as what I expressed above?
Also, we can calculate Annualized Loss Expectancy (ALE), which can help us prioritize remediation for vulnerabilities.
Thanks for the comment.
Ease of exploitation (and other exploitation parameters) is in the CVSS vector (see http://www.first.org/cvss/cvss-guide.html).
ALE … there is nothing “workplace-safe” that I can mention here on this subject. Let me exercise my newly-found powers of restraint and say: “this is not really relevant here” (and usually elsewhere in infosec, BTW).