I wanted to share another work-in-progress section from my upcoming uber-treatise on operating a SIEM tool effectively. So, I have created a SIEM maturity scale based on dozens of conversations with SIEM vendors, users and consultants, as well as on my own 10+ years of experience with security information and event management tools.
The key purpose of this maturity scale is not just to stand agape at its pure, unadulterated awesomeness (even though I admit to doing just that…), but to evolve your SIEM deployment towards getting more value out of it at higher stages of the scale. You can also use it to make sure that specific operational processes are in place as your deployment evolves from stage to stage. For example, enabling alerts without having an alert triage process and an incident response process is usually counterproductive and ends in frustration.
Please note that all the processes from lower stages must remain in place as SIEM deployment maturity grows.
| Stage # | Maturity stage | Key processes that must be in place |
|---|---|---|
| 1 | SIEM deployed and collecting some log data | SIEM infrastructure monitoring process; log collection monitoring process |
| 2 | Periodic SIEM usage, dashboard/report review | Incident response process; report review process |
| 3 | SIEM alerts and correlation rules enabled | Alert triage process |
| 4 | SIEM tuned with customized filters, rules, alerts and reports | Real-time alert triage process; content tuning process |
| 5 | Advanced monitoring use cases, custom SIEM content | Threat intelligence process; content research and development |
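To make the "key processes must be in place before moving up" idea concrete, here is a minimal sketch of how the table above might be encoded and checked. This is a hypothetical Python illustration, not a product feature or an official checklist; the process names simply mirror the table:

```python
# Hypothetical encoding of the SIEM maturity scale above; names are illustrative.
REQUIRED_PROCESSES = {
    1: {"siem_infrastructure_monitoring", "log_collection_monitoring"},
    2: {"incident_response", "report_review"},
    3: {"alert_triage"},
    4: {"real_time_alert_triage", "content_tuning"},
    5: {"threat_intelligence", "content_research_and_development"},
}

def ready_for_stage(target_stage: int, processes_in_place: set[str]) -> bool:
    """Processes from lower stages remain required as maturity grows."""
    needed = set()
    for stage in range(1, target_stage + 1):
        needed |= REQUIRED_PROCESSES[stage]
    return needed <= processes_in_place

# Example: log collection is monitored, but there is no incident response or
# report review process yet, so the deployment is not ready for stage 2
# (let alone for enabling alerts at stage 3).
current = {"siem_infrastructure_monitoring", "log_collection_monitoring"}
print(ready_for_stage(2, current))  # False
```

The cumulative check reflects the note above: a stage 4 deployment still needs its stage 1 log collection monitoring and stage 2 incident response processes.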
One may also choose to add a stage 0 (“tool deployed, no process”) and possibly higher stages, sometimes seen at “unique”, “type A of type A” organizations (with such exciting activities as a data modeling process, a visual data exploration process, a use case discovery process, etc.). By the way, the maturity table above is NOT an exhaustive list of all SIEM-related processes (I am saving that for my full paper). For example, a process for remediating issues uncovered during security monitoring will likely rely on some incarnation of an enterprise change management process, as well as on escalation/collaboration processes, since it involves multiple teams within the organization.
A key point to keep in mind is that most attempts to utilize more advanced functionality at lower maturity stages (such as randomly enabling untuned alerts and rules in bulk barely after some log collection is turned on) often end in a misplaced perception of “broken products,” even though all that is broken in such cases is a process…
Furthermore, if you insist, the scale can be loosely matched to the Capability Maturity Model (CMM). CMM in general encompasses these stages (source: CMM on Wikipedia):
- “Initial (chaotic, ad hoc, individual heroics) – the starting point for use of a new or undocumented repeat process.
- Repeatable – the process is at least documented sufficiently such that repeating the same steps may be attempted.
- Defined – the process is defined/confirmed as a standard business process, and decomposed to levels 0, 1 and 2 (the latter being Work Instructions).
- Managed – the process is quantitatively managed in accordance with agreed-upon metrics.
- Optimizing – process management includes deliberate process optimization/improvement.”
For example, stage 1 of our SIEM maturity scale matches the “initial” CMM stage, stage 2 starts to get some “repeatable” tasks, while commitment to ongoing SIEM “optimizing” is common at stage 5.
Got any comments or your own SIEM maturity scales to share? Got any comments on the usefulness of this for your organization?
3 Comments
Interesting, anxiously awaiting the full paper.
It’s also nice to see the loose matching to CMM, as it allows for easier adoption in some environments.
Hmm; the CMM is a process metric primarily out of despair – we have no idea how to create quality software, so we latched onto developing a quality development process. That said, we can discuss effective security metrics as well — for example, ratio of events we understand (not threats, stuff we can label) over total events received. I can’t tell from the table here, but I think the advantage of an SIEM process would be to inform you as to whether you were capable of creating metrics for measuring your relative security – awareness of how much you were monitoring vs. how much you could monitor, awareness of how much was entering vs. how much analyzed, awareness of analyst capability to process events.
I don’t want to harp on the CMM analogue, but something closer to CMMI, where you can mix and match individual process areas to develop maturity, may be useful.
Well, I expected that 🙂 People in infosec either love or hate CMM.
Thanks for the insightful comments. Indeed, metrics can be used for a more granular maturity scale. You are also correct that individual process maturity is more useful: such as IR/IH process maturity, report review, etc.
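As a purely illustrative sketch of the coverage-style ratios mentioned in the comment above (all numbers are made up, just to show the arithmetic):

```python
# Hypothetical coverage-style SIEM metrics, in the spirit of the comment above;
# the figures are invented for illustration only, not from any real deployment.
events_received = 1_200_000   # events the SIEM ingested in a period
events_understood = 900_000   # events we can label/explain (not necessarily threats)
sources_monitored = 180       # log sources actually feeding the SIEM
sources_in_scope = 250        # log sources we could/should be monitoring
alerts_triaged = 420          # alerts an analyst actually reviewed
alerts_raised = 600           # alerts the SIEM generated

print(f"understood / received:  {events_understood / events_received:.0%}")   # 75%
print(f"monitored / in scope:   {sources_monitored / sources_in_scope:.0%}")  # 72%
print(f"triaged / raised:       {alerts_triaged / alerts_raised:.0%}")        # 70%
```

Tracking ratios like these over time would give the more granular, metrics-driven view of maturity discussed in this thread.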