Anton Chuvakin

A member of the Gartner Blog Network

Research VP
2+ years with Gartner
14 years IT industry

Anton Chuvakin is a research VP at Gartner's GTP Security and Risk Management group. Before Mr. Chuvakin joined Gartner, his job responsibilities included security product management, evangelist…

My UPDATED “Security Information and Event Management Architecture and Operational Processes” Publishes

by Anton Chuvakin  |  September 15, 2014  |  3 Comments

Finally, I completed an epic update to my 2012 paper “Security Information and Event Management Architecture and Operational Processes.” I think of this paper, interchangeably, as a “SIEM missing manual” or a “SIEM bible” … It now has expanded SIEM process guidance, new detailed use cases, more SIEM metrics, an updated SIEM maturity framework and other fun new stuff – and, of course, a lot of the good old stuff that is still very useful for those planning, deploying and operating SIEM tools. It is LONG – but let me tell you – reading it is way cheaper than hiring 2 knowledgeable SIEM consultants for 2 weeks :-)

Some fun quotes:

  • “Organizations have to monitor complex, ever-expanding IT environments that sometimes include legacy, traditional, virtual and cloud components. Security monitoring in general, and SIEM in particular, become more challenging as the size and complexity of the monitored environments grows and as attackers, driven by improving defenses and organization response, shift to more advanced attack methods.”
  • “Ultimate SIEM program success is determined more by operational processes than by architecture or specific tool choice. SIEM implementations often fail to deliver full value due to broken organizational processes and practices and lack of skilled and dedicated personnel.”
  • “A mature SIEM operation is a security safeguard that requires ongoing organizational commitment. Such commitment is truly open-ended — security monitoring has to be performed for as long as the organization is in business.”
  • “A SIEM project isn’t really a project. It is a process and program that an organization must refine over time — and never “complete” by reassigning people to other things. Running SIEM as a project to “do and forget” often leads to wasted resources and lack of success with SIEM.”


P.S. Gartner GTP access required!

Other posts announcing document publication:

Blog posts related to SIEM research:


Category: announcement security SIEM     Tags:

Challenges with MSSPs?

by Anton Chuvakin  |  September 10, 2014  |  6 Comments

Let’s get this out of the way: some MSSPs REALLY suck! They have a business model of “we take your money and give you nothing back! How’d you like that?” A few years ago (before Gartner) I heard from one MSSP client who said, “I guess our MSSP is OK; it is not too expensive. However, they never call us – we need to call them [and they don’t always pick up the phone].” This type of FAIL is not as rare as you might think, and there are managed security services providers that masterfully create an impression in their clients’ minds along the lines of “security? we’ll take it from here!” and then deliver – yes, you guessed right! – nothing.

At the same time, I admit that I need to get off the high horse of “you want it done well? do it yourself!” Not everyone can boast about their expansive SOC with gleaming screens and rows of analysts fighting the evil “cyber threats”, backed up by solid threat intelligence and dedicated teams of malware reversers and security data scientists. If you *cannot* and *will not* do it yourself, MSSP is of course a reasonable option. Also, lately there have been a lot of interesting hybrid models of MSSP+SIEM that work well … if carefully planned, of course. I will leave all that to later posts as well as my upcoming GTP research paper.

So let’s take a hard look at some challenges with using an MSSP for security:

  1. Local knowledge – be it of their clients’ business, IT (both systems and IT culture), users, practices, etc – there is a lot of unwritten knowledge necessary for effective security monitoring and a lot of this is very hard to transfer to an external party (in our MSSP 2014 MQ we bluntly say that “MSSPs typically lack deep insight into the customer IT and business environment”)
  2. Delineation of responsibilities – “who does what?” has led many organizations astray, since gaps in the whole chain of monitoring/detection/triage/incident response are, essentially, deadly. Unless joint security workflows are defined, tested and refined, something will break.
  3. Lack of customization and “one-size-fits-all” – most large organizations do not look like “a typical large organization” (ponder this one for a bit…) and so benefiting from “economies of scale” with security monitoring is more difficult than many think.
  4. Inherent “third-partiness” – what do you lose if you are badly hacked? Everything! What does an MSSP lose if you, their customer, are badly hacked? A customer… This sounds like FUD, but this is the reality of different position of the service purchaser and provider, and escaping this is pretty hard, even with heavy contract language and SLAs.

In essence, MSSP may work for you, but you need to be aware of these and other challenges as well as to plan how you will work with your MSSP partner!

So, did your MSSP cause any challenges? Hit the comments or contact me directly.

Blog posts related to this research on MSSP usage:


Category: monitoring MSSP security     Tags:

How To Work With An MSSP Effectively?

by Anton Chuvakin  |  September 3, 2014  |  6 Comments

My next research project at Gartner GTP will be about working with managed security services providers (MSSPs). We have great content that compares major MSSPs (such as MSSP Magic Quadrant), but none yet on how to work well with one.

In the past, our team’s work focused on helping people who “do stuff” (as opposed to those who “find people to pay for doing their stuff for them”), and this document would be a bit of a departure from that tradition.

In my effort, I plan to tackle questions such as these:

  • What can an MSSP do well in security monitoring vs just OK vs not at all?
  • How to onboard an MSSP and prepare for an effective joint operation?
  • How to provide the right information for the MSSP to succeed?
  • How to work together with MSSP for improving security?
  • How to learn from the MSSP operations and improve yours?
  • How to define the right SLAs for various security activities?
  • How to build joint workflows with an MSSP?
  • How to MSSP-enhance various security operational practices?
  • How to avoid pitfalls with security monitoring outsourcing?
  • How to run a hybrid MSSP+SIEM operation?

(Got any other ideas? Hit the comments!)

And here is my call to action:

  • Are you at least a semi-decent MSSP and have something useful to say about it? Here is a briefing link … you know what to do! I’d love to hear what advice you give clients on how to succeed with your services.
  • A consultant who advises clients on selecting [or avoiding] MSSPs, care to share your experience?
  • Enterprises, got an MSSP story to share? Both WIN stories and FAIL stories will do fine. Hit the comments or email me privately (Gartner client NDA will cover it, if you are a client).


Category: announcement MSSP security     Tags:

My Top 7 Popular Gartner Blog Posts for August

by Anton Chuvakin  |  September 2, 2014  |  Comments Off

Most popular blog posts from my Gartner blog during the past month are:

  1. SIEM Real-time and Historical Analytics Collide? (SIEM research)
  2. SIEM Magic Quadrant 2014 Is Out! (announcements)
  3. Detailed SIEM Use Case Example (SIEM research)
  4. Popular SIEM Starter Use Cases (SIEM research)
  5. On Comparing Threat Intelligence Feeds (threat intelligence research)
  6. Named: Endpoint Threat Detection & Response (ETDR research)
  7. SIEM and Badness Detection (SIEM research)


Past top posts:


Category: popular security     Tags:

Our “Selecting Security Monitoring Approaches by Using the Attack Chain Model” Publishes

by Anton Chuvakin  |  August 8, 2014  |  2 Comments

A while ago, we embarked on a long and tortuous journey to try to organize all monitoring/detection controls into a coherent whole: a framework for selecting security monitoring controls. The effort took some number of months to stew, and we took a couple of detours, but the result is here.

Behold “Selecting Security Monitoring Approaches by Using the Attack Chain Model”! In the paper’s abstract we say: “Implementing strong security monitoring requires an effective combination of technologies. This document compares monitoring approaches and technologies based on their effectiveness against malicious activities.”

Select fun quotes from the paper:

  • “Timing and layering of monitoring controls — even for covering a single attack type — is generally unavoidable. No single control is 100% effective, and few controls cover more than two of the six attack phases.”
  • “Clients often approach security monitoring from a specific driver, rather than from a larger perspective. This is no surprise, because they are generally trying to address a specific regulation, risk pain point or deal with an incident that just happened, and focus on what is the best and most cost-effective solution for that alone. But this path is dangerous, because it can lead to leaving large gaps in some areas and overspending in others — in part due to a focus on differences, rather than commonalities, in threats and attacks.”
  • “Not all attacks execute the exfiltration phase. Sabotage needs no exfiltration, and snooping or corporate resource misuse can be done without making electronic copies of data. Merely monitoring the exfiltration of data, therefore, does not necessarily create a full “monitor of last resort,” although it is valuable to monitoring information theft. “
  • “Do not buy more monitoring than you need — or can handle. Automated monitoring and response systems can be deployed widely, but many require investment in time and expertise. [...] Gartner research consistently demonstrates that organizations procure much more security control functionality than they can absorb, deploy and operationalize (this challenge applies to all controls but is rampant for SIEM and DLP, in particular).”
  • “Several types of security monitoring technology are not well-suited for immature security organizations or for those with limited security capabilities (NFT and ETDR, in particular). Enterprises should first be competent concerning basic network security technology, such as intrusion detection and prevention, network security zoning, and SIEM.”

Now, please go and read a related post from my co-author Ramon Krikken – he reveals more details on our approach and the attack chain model. And then, of course, go and read the paper [GTP subscription required].

P.S. The paper uses words with the prefix “cyber” a grand total of 7 times. Sorry! :-)

Related blog posts:

Other posts announcing document publication:


Category: announcement monitoring security     Tags:

SIEM Real-time and Historical Analytics Collide?

by Anton Chuvakin  |  July 30, 2014  |  4 Comments

SIEM technology has evolved to a point where conflicting requirements are starting to tear it apart – and I am not the only one to observe that. See here:

  • Just as at its birth in the late 1990s, today’s SIEM must excel at real-time analysis using rule-based correlation and other methods and analyze thousands of events per second streaming from the collectors in order to detect threats affecting the organization.
  • At the same time, SIEM is expected to execute searches and interactive queries, as its users go through historical data, match indicators and run algorithms to extract values from stored pools of data.
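
To make the first bullet concrete, here is a toy sketch of the kind of stateful correlation a SIEM engine runs against the live stream (this is not any vendor's rule language; the event tuple shape, window and threshold are invented for illustration):

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # how long failure state is kept per source (invented threshold)
FAIL_THRESHOLD = 5     # failures before a subsequent success raises an alert

def correlate(events):
    """Toy stateful rule: alert when a source shows FAIL_THRESHOLD or more
    failed logins followed by a success, all within WINDOW_SECONDS.
    `events` is an iterable of (timestamp, source, outcome) in time order."""
    recent_fails = defaultdict(deque)   # source -> timestamps of recent failures
    alerts = []
    for ts, src, outcome in events:
        q = recent_fails[src]
        while q and ts - q[0] > WINDOW_SECONDS:   # expire stale state
            q.popleft()
        if outcome == "fail":
            q.append(ts)
        elif outcome == "success" and len(q) >= FAIL_THRESHOLD:
            alerts.append((ts, src, "brute-force then success"))
            q.clear()
    return alerts
```

A real engine does this across normalized events from many devices at thousands of events per second, keeping per-entity state in memory the whole time – which is exactly why the real-time side is so resource-hungry.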

For years, the dirty truth of SIEM was that most installations stored log data for 7-14 days only inside a SIEM. This limited SIEM’s mission primarily to the first point above – real-time and short-term analysis inside a SOC [short-term historical analysis over, say, 7 days of data is indeed very useful – but does not solve all the same problems as a multi-month one]. Sure, you can reload older data (yuck!) or peek into a connected log management tool that has much more data, but lacks the analytical brain powers [well, unless you build them yourself]. Thus, if you want to go longer AND analyze the data (a key point!), your choices are:

  1. Buy more SIEM at an obscene cost; some vendors’ technology will scale, but your wallet will not. Economic DoS strikes back?
  2. Use log management with limited analysis capabilities (indexed search and eh… actually, that’s it sometimes). New hope?
  3. Build or procure some other tool (big data something or other). The return of BDSA?

One enlightened fellow, upon reading my recent SIEM Evaluation Criteria document, noted that in his view, the criteria are too biased towards real-time, traditional SOC monitoring usage of SIEM at the cost of historical, long-term analytics. Despite the fact that historical algorithms, data exploration and profiling are featured in the report, it is indeed so. SIEM has evolved as primarily a monitoring technology, with investigative use and historical analysis often present, but in an auxiliary role at best. In essence, we have REAL-TIME ANALYSIS (via SIEM) and HISTORICAL AGGREGATION (via log management tools, ELK stack, etc).

And now, many organizations are flocking towards hidden/persistent/advanced threat discovery and longer-term profiling, which calls for longer retention and stresses the data stores with queries that are both wide and deep. For example, read this enlightening thread on SIEM, log management and analytics. “Searching the last “N Days” [especially for large values of “N” – A.C.] of logs is much different than alarming and alerting on logs as they come in – they are very different” is a representative quote. However, while searching over 180 days of data will kill a SIEM [assuming merely having 180 days of data in it hasn’t killed it], actually running algorithms (profiling, clustering, rule learning and other stuff I mentioned here) will be much worse. Back in the day when I was doing it, my not-too-sophisticated profiling computations ran overnight over a mere week of data [and I used an RDBMS, since nothing else was around in 2004] …
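
For contrast, here is a minimal sketch of the batch-profiling side (a toy illustration, not a product algorithm; the tuple shape and thresholds are invented). Note that it has to scan every user's full retention window before it can flag anything, which is why such jobs historically ran overnight:

```python
import statistics
from collections import Counter, defaultdict

def profile_outliers(log_events, sigma=3.0, min_days=5):
    """Toy historical profiling: flag (user, day) pairs whose event volume
    exceeds that user's baseline of mean + sigma * stdev daily events.
    `log_events` is an iterable of (user, day) tuples, one per logged event."""
    daily = Counter(log_events)                  # (user, day) -> event count
    per_user = defaultdict(dict)
    for (user, day), n in daily.items():
        per_user[user][day] = n
    outliers = []
    for user, days in per_user.items():
        volumes = list(days.values())
        if len(volumes) < min_days:              # not enough history to profile
            continue
        mean = statistics.mean(volumes)
        stdev = statistics.pstdev(volumes)
        for day, n in sorted(days.items()):
            if stdev and n > mean + sigma * stdev:
                outliers.append((user, day, n))
    return outliers
```

Even this crude version touches every stored event for every profiled entity; swap in clustering or rule learning and the wide-and-deep query pressure only gets worse.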

Let’s think together about how to balance SIEM’s dual mission today. Please treat this table as more of an “incomplete thought” than a research product, BTW.

|  | Real-time and near-term analysis | Historical analysis |
|---|---|---|
| Object of analysis | Stream of data or a small puddle of data | A huge pile of data |
| Storage | Short term (a few days) | Long term (months to years) |
| Data | Usually structured – logs after normalization | May be unstructured – raw logs, indexed |
| Analysis types | Mostly known patterns, statistics on data fields | Mostly interactive exploration and models |
| Common performance bottlenecks | Process streams: memory, CPU | Store and query: storage, I/O |
| Focus | Detect threats | Discover threats |
| Usage | Utilize found patterns for alerting | Learn about patterns of data |

(also see this table to better understand the difference in usage)

Still, SIEM can actually benefit from its duality; some organizations mine the historical data and then create rules based on patterns that are revealed by algorithms. Others create alerts based on what their analysts have dug out during their threat hunting activities. In the past, I always voted for “first log management, then SIEM”, but now, with increased focus on historical and longer-term analysis, this may change to “log management –> SIEM –> long-term analytics” or even “log management –> long-term analytics –> SIEM.” Let’s think about the choices then:

  1. Want to collect the data and keep it for incident response/compliance? Get log management (commercial or OSS)
  2. Want to set up a SOC and real-time alerting and monitoring, make analyst workflows better? Get a good SIEM (ideally, you should have log management by now)
  3. Want to dig deep into historical data analysis over longer term, match indicators and explore the data? You are in the big data territory now, and are mostly on your own in regards to tools.

There you have it! It came out as a bit of a ramble, but – what the heck – this is a blog, not a research paper :-)

Select recent SIEM blog posts:


Category: analytics monitoring security SIEM     Tags:

SIEM and Badness Detection

by Anton Chuvakin  |  July 24, 2014  |  5 Comments

A long time ago, in a galaxy far far away … at the very dawn of my security career I attended a presentation by somebody who is now a notable incident response expert. Well … who am I kidding? He was a notable IR expert back in 2000, way…way before IR was cool and way before the word “APT” entered common usage. In any case, I don’t recall much from the presentation apart from one point he made: he has never seen a significant intrusion detected by an intrusion detection system (IDS) [another example of the same kind can be found here]. That line has been burned into my brain since that day…

We routinely talk about the prevention/detection/response mantra [which some people, for some strange reason, hear as prevention/prevention/prevention, as if the room is noisy or something… but I digress], but industry research often reminds us that we really suck at detection [BTW, I find calls for “more prevention” to solve this problem to be sheer idiocy].

Still, “deploying a SIEM – as with any detection technology – will result in things being detected. After things are detected then someone will need to respond to it to investigate it.” (source) This post includes a structured look at SIEM detection methods and approaches. By the way, this post explicitly talks about THREAT DETECTION, which implies near-real-time observation, as opposed to THREAT DISCOVERY, which involves digging out traces of threats that persist in your environment. Threat discovery is a very fun topic, and we can talk about it again later.

First, I have to repeat something I think I mentioned a few times over the years: SIEM is not an old-style HIPS that matches vendor-provided character sequences to logs. Well, you can use it as such, for sure. But SIEM’s ability to normalize, enrich with context (users, assets, vulnerabilities, etc.), correlate across log sources, apply algorithms to streams and “pools” of data, and visualize the data for exploration makes it a different technology – and one with a much more difficult mission than a 1997 HIPS.

Here is my quick summary of SIEM detection methods in use today, with select pros/cons of each [NOT a comprehensive list – a longer table may show up in a future paper of mine].

| SIEM Detection Method | Description | Pros | Cons |
|---|---|---|---|
| Human analyst event stream review | An analyst observes a filtered stream of events in the console | None :–) | Does not scale; skilled analyst required |
| Simple log matching rules | “HIPS mode”: if I see string X123 in logs, alert | Simple; specific; light on SIEM resources | Need to know what to match; useless for advanced, multi-stage attacks |
| Vendor-provided cross-device correlation rules | Vendor-provided / default / out-of-the-box correlation rules | Cross-device correlation; no need to write rules | Relevance to customer use cases may be lacking; need to tune the rules |
| Matching events to threat intelligence feed | Match incoming events to collected threat intel data such as “bad” IPs, domains, etc. | Useful detection with minimal tuning; low FPs [given quality TI] | Requires high-quality TI data; timing: TI data needs to be loaded before the event |
| Log to context matching via rules | Match incoming events to context such as user role (user with role X should never do Y, etc.) | Easy policy alerts; site-specific content | Need a clear policy; context data needs to be loaded and kept current in the SIEM |
| Custom-written stateful correlation rules | The ultimate in SIEM detection for years; custom correlation rules enable many scenarios and use cases | Targeted to what the organization needs; can be refined and adapted over time | Rules need to be written and refined by a SIEM content expert; errors in rule logic are often not obvious |
| Real-time event scoring | Algorithms assess event attributes (source, type, time, other metadata) to highlight events of interest | Easy way to highlight potentially interesting events | Prioritization may not match your priorities; only “potentially” interesting |
| Statistical algorithms on stream data | Statistics such as average, standard deviation, skew and kurtosis [yes, really!] | Useful complement to rules; can be used with rules to look beyond single events | Choosing meaningful stats is often harder than writing rules; FPs are common |
| Baseline comparisons | Compare event streams to historical baselines and metrics; related to statistical methods, but uses stored historical baselines | Useful complement to rules; can be used with rules to look beyond single events | Fails to detect when the baseline includes badness or attack traffic is not anomalous; FPs are common |

Note that this is not about the data sources, but about the methods themselves – they can apply to many/all data source combinations. Also, the use of context data (users, assets, application, data, vulnerabilities, etc) is useful to enrich many detection methods as well as improve their accuracy. Next I suspect I need to talk about the data sources enabling various types of detection…
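
One of the methods above, threat-intel matching, is simple enough to sketch in a few lines. The field names (`dst_ip`, `domain`) and the indicator sets are made up for illustration; real SIEMs do this against normalized events from all data sources:

```python
def match_threat_intel(events, bad_ips, bad_domains):
    """Toy threat-intel matching: flag events whose destination IP or domain
    appears in a previously loaded indicator set. Note the timing caveat from
    the table: an indicator loaded AFTER the event streams past is never matched."""
    hits = []
    for event in events:
        if event.get("dst_ip") in bad_ips or event.get("domain") in bad_domains:
            hits.append(event)
    return hits
```

The per-event cost is just a set lookup, which is why this method delivers useful detection with minimal tuning – and why its value is bounded entirely by the quality and freshness of the TI feed.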

As with other functionality, there is probably a maturity curve here somewhere (here!). Who will know how to create statistical models if they have never created basic SIEM rules?

P.S. All of these methods, separately and together, will fail once in a while. You then have two choices:

  1. Wait for the threat to manifest visibly – then go to security incident response.
  2. Go and dig for threats; do threat discovery.

Select recent SIEM blog posts:


Category: analytics security SIEM     Tags:

My Blueprint for Designing a SIEM Deployment Publishes

by Anton Chuvakin  |  July 22, 2014  |  4 Comments

Another new document on SIEM that I wrote just published: Blueprint for Designing a SIEM Deployment. “Planning a distributed enterprise SIEM deployment is challenging for information security teams at many organizations. This Blueprint shows the architecture and timeline for an enterprise security information and event management deployment and highlights key tasks for each stage. “ This is another new Gartner GTP document type called “an architectural blueprint”, and it has distinctly non-Burton’ian length: 2 pages (!), with one taken by a picture. GTP Blueprints make perfect gifts for your favorite IT architect :-)

For reference, here are my other SIEM research papers [access requires Gartner GTP subscription]:

For those without a GTP subscription, some fun SIEM blog posts:


Category: announcement security SIEM     Tags:

“Stop The Pain” Thinking vs the Use Case Thinking

by Anton Chuvakin  |  July 17, 2014  |  3 Comments

“Hello, I am your anti-virus program. Which specific viruses would you like me to kill today? Enter names here: [……..]” While I don’t recall the exact state of the art of anti-virus back in the late 1980s, I do not remember any anti-virus program ever asking such a question. The technology originated in response to a definite threat – malware [collectively called “viruses” at the time]. At no point in this technology’s evolution was the user supposed to steer it towards particular targets. It just “did it.”

OK, Anton, and your point is?

SIEM use cases, naturally (example use case, list of common SIEM starter use cases). I’ve met a few folks who loudly wondered “why SIEM can’t just DO IT.” Here is how they think: anti-virus just does it, firewalls just do it (well, after you write the rules), even NIPS just does it [well, in their minds it does…]

Why oh why can’t SIEM just do it? When the enlightened SIEM vendor offers them a tool and adds “now you need to tell us what use cases you’d like to focus on first,” they complain that the vendor is shifting the burden to them; why can’t their SIEM tool “just do it”?

OK, the enlightened readers of this blog will start to snicker – or even ROFL – just about now. However, let’s scrutinize this delusion.

Back in my SIEM vendor days, we had a situation where a field engineer was asking a customer “what use cases do you want to start with?” with the customer countering with “so, what use cases should I start with?” and this ping-pong going on for a while with both parties getting increasingly frustrated (all the way to “so…wait a second here…you just paid us $740K for a SIEM and you don’t know what you want with it?! WTH!” — “Whaat?! You just sold us a $740K pile of stuff that does not even DO anything” and so on). In the end, they simply said “we want to do with our SIEM what most other people want to do with theirs” — and left it at that…

Intuitively, people feel that SIEM technology, as well as some other technologies, is inherently different from others, but they are unable to spell out how (i.e. SIEM is not like anti-virus). For one, monitoring technologies require an open-ended commitment from the organization wanting to utilize them. Also, successful monitoring nowadays MUST be mission-specific; you are unlikely to succeed if you want to generate a critical alert if “something bad” happens. You have BAD, next-morning BAD, end-of-the-week BAD and of course that scary “wake me up at 3AM” BAD — with the exact priorities depending on YOUR BUSINESS. Not the vendor default correlation rules, not some “security intelligence,” not what Gartner thinks – your business [BTW, DLP is even more so this way]. Contrast this with “I don’t like viruses – please kill them all” seen through the anti-virus lens…

To summarize, a lot of security gear is bought to “plug a hole” (be it an audit finding or a new threat such as malware). However, SIEM is most explicitly NOT of this kind. As I’ve written many times, SIEM is a “force multiplier”, but this definition implies that you have something to multiply. If you have 0 capabilities, a purchase of a SIEM tool will still leave you at – you guessed it!—0. SIEM will make YOUR security monitoring problem-solving better/faster, but it won’t “plug any hole” for you.

And if you somehow cannot transcend “see a hole – buy a box” thinking about security, some expensive education is available…

Related blog posts:

Select recent SIEM blog posts:


Category: philosophy security SIEM     Tags:

More on SIEM Maturity – And Request for Feedback!

by Anton Chuvakin  |  July 14, 2014  |  11 Comments

During my original SIEM architecture and operational practices research (see the paper here and a presentation here), I looked at the topic of SIEM operation maturity. Organizations that purchase and deploy SIEM technologies are at different stages of their IT and information security maturity (such as when measured by Gartner ITScore for Security). Certain security monitoring goals are extremely hard to achieve at lower maturity stages (such as “hunting” when you can barely collect data); they are also frequently unachievable unless the organization climbs all the steps in the maturity ladder to get to that step [so, no jumping stages].

The key purpose of this maturity scale is to evolve a SIEM deployment toward getting more value out of it at higher stages of the scale. Also, SIEM team members can use it to make sure that specific operational processes are in place as the SIEM deployment evolves from stage to stage. For example, enabling alerts without having an alert triage process and incident response process is usually counterproductive and ends in frustration. Still, all the processes from lower stages must remain in place as SIEM deployment maturity grows.

Here is the current version of the table:

Table 7. SIEM Maturity Scale

| Stage No. | Maturity Stage | Key Processes That Must Be in Place (inclusive of previous stages) |
|---|---|---|
| 1 | SIEM deployed and collecting some log data | SIEM infrastructure monitoring process; log collection monitoring process |
| 2 | Periodic SIEM usage, dashboard/report review | Incident response process; report review process |
| 3 | SIEM alerts and correlation rules enabled | Alert triage process |
| 4 | SIEM tuned with customized filters, rules, alerts and reports | Real-time alert triage process; content tuning process |
| 5 | Advanced monitoring use cases, custom SIEM content, niche use cases (such as fraud or threat discovery) | Threat intelligence process; content research and development |

Source: Gartner (January 2013)

SIEM team members may also choose to add a Stage 0 (“tool deployed, no process”) and possibly higher stages, which are sometimes seen at security-mature, “Type A” organizations (with such exciting activities as data modeling process, visual data exploration process, use-case discovery process and so on).

At this point, I’d like to ask for your feedback and improvement suggestions.

Should I add dimensions to the maturity table, such as essential personnel skills, typical tool components deployed and utilized, or use cases common at each stage?

In any case, feel free to suggest them below in the comments, via email or via whatever social media venue you happen to frequent.

Previous version of the maturity table:

Select recent SIEM blog posts:


Category: monitoring security SIEM     Tags: