Anton Chuvakin

A member of the Gartner Blog Network

Research VP
2+ years with Gartner
14 years IT industry

Anton Chuvakin is a research VP at Gartner's GTP Security and Risk Management group. Before Mr. Chuvakin joined Gartner, his job responsibilities included security product management, evangelist…

Do You Want “Security Analytics” Or Do You Just Hate Your SIEM?

by Anton Chuvakin  |  January 26, 2015

Now that I’ve taken a fair number of “security analytics” client inquiries (with wildly different meanings of the phrase), I can share one emerging pattern: a lot of this newly-found “analytics love” is really old “SIEM hatred” in disguise.

A 101% fictional and slightly over-dramatized conversation goes like this:

  • Analyst: you said you wanted security analytics, what specifically do you want?
  • Enterprise: I want to collect logs and some other data, correlate, analyze, report.
  • Analyst: wait a second … that is called “SIEM”, SIEM does that!
  • Enterprise, passive-aggressively: Well, ours doesn’t!!!
  • Analyst: have you tried to… you know… actually use it?
  • Enterprise: as a matter of fact, we did – for 5 years! Got anything else to ask?!

Upon some analysis, what emerges is a real problem that consists of the following:

  1. Lack of resources to write good correlation rules, tune them, refine them and adapt them to changing needs (a minimal sketch of such a rule follows this list)
  2. A degree of disappointment with out-of-the-box rules (whether traditional or baseline-based) and other SIEM content
  3. Lack of ability to integrate some of the more useful types of context data (such as IdM/IAM roles and user entitlements, as well as deeper asset data)
  4. Lack of trust that even well-written rules will let them detect attacker lateral movement, use of stolen/decrypted credentials, preparation for data exfiltration, creation of backdoors, etc.
  5. Occasionally, a lack of desire to understand the multitude of their own monitoring use cases, preferring instead to buy a box for each problem.
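To make item 1 concrete, below is a minimal sketch of the kind of correlation rule that takes real effort to write and keep tuned: flag a burst of failed logins followed by a success from the same source. This is illustrative Python, not any SIEM vendor’s actual rule language; the field names, window and threshold are all assumptions.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed correlation window (a tuning knob)
THRESHOLD = 10                  # assumed failure count before we care (another knob)

failures = defaultdict(deque)   # source IP -> timestamps of recent failed logins

def process_event(event):
    """Process one normalized log event; return an alert string or None.

    event: dict with 'time' (datetime), 'src_ip' (str) and
    'outcome' ('fail' or 'success') -- assumed field names.
    """
    q = failures[event["src_ip"]]
    # expire failures that fell out of the correlation window
    while q and event["time"] - q[0] > WINDOW:
        q.popleft()
    if event["outcome"] == "fail":
        q.append(event["time"])
        return None
    # a success right after a burst of failures is the suspicious pattern
    if len(q) >= THRESHOLD:
        q.clear()
        return "possible brute-force success from " + event["src_ip"]
    return None

# usage example with synthetic events: 12 failures, then a success
start = datetime(2015, 1, 26, 3, 0, 0)
events = [{"time": start + timedelta(seconds=i), "src_ip": "10.0.0.5", "outcome": "fail"}
          for i in range(12)]
events.append({"time": start + timedelta(seconds=13), "src_ip": "10.0.0.5", "outcome": "success"})
for e in events:
    alert = process_event(e)
    if alert:
        print(alert)
```

Even this toy has knobs (window, threshold, field mappings) that drift out of date as the environment changes – which is exactly the resourcing problem item 1 describes.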

So, a few years of such SIEM unhappiness have borne a result … UBA. Some vendors’ UBAs are “SIEM add-ons” (since they rely on a SIEM for collection, normalization and storage), while others are more like a “narrower but smarter SIEM” (since they collect a subset of SIEM logs and maybe other data).

A few can work with DLP and not just a SIEM (as we all know, tuning DLP is often – imagine that! – a bigger pain than tuning a SIEM) in order to create additional insight from SIEM and DLP outputs. As I hypothesize, UBA is where broader-scope security analytics tooling may eventually emerge.

Now, do you need/want analytics or do you just hate your SIEM?

Blog posts on the security analytics topic:

Category: analytics monitoring security SIEM

My Top 7 Popular Gartner Blog Posts for December

by Anton Chuvakin  |  January 15, 2015

Most popular blog posts from my Gartner blog during the past month are:

  1. DLP Without DLP!? (DLP research)
  2. Named: Endpoint Threat Detection & Response (ETDR research) [now called EDR]
  3. On Comparing Threat Intelligence Feeds (threat intelligence research)
  4. Detailed SIEM Use Case Example (SIEM research)
  5. Should I Use “SIEM X” or “MSSP Y”? (MSSP research)
  6. How To Exit an MSSP Relationship? (MSSP research)
  7. Popular SIEM Starter Use Cases (SIEM research)

Enjoy!

Past top posts:

Category: popular security

Security Analytics – Finally Emerging For Real?

by Anton Chuvakin  |  January 12, 2015

Security analytics – a topic as exciting and as fuzzy as ever! My 2015 research year starts with another dive into this area. However, how can I focus on something so fuzzy and … well … defocused? The GTP approach implies that we “get specific” and don’t touch fuzzball topics ….

So, there is still no market called “security analytics”, but there are some areas where specificity is finally emerging (yay!). Below you will see two areas where the label of “security analytics” may actually apply in real life, and not in the realm of marketing wet dreams:

  1. Expanded Network Forensics (NFT) [see our NFT document, and my blog coverage] where the source data is primarily network session metadata (and raw packets, as needed), fused with other activity and context data; quite a few of the vendors renamed their NFT products to “security analytics” or built new platforms for network data analysis (as a sidenote, some vendors artfully mix NFT, ETDR/EDR and threat intel and thus become even less similar to their NFT roots – as it is no longer just network, and no longer just forensics, but also a stream of DPI-decoded data). So, these tools have their own sensors, collect traffic and utilize both stored and stream analysis of network and other data.
  2. User Behavior Analytics (UBA) [see a document on UBA] where the sources are variable (often logs feature prominently, of course), but the analysis is focused on users, user accounts and user identities – and not on, say, IP addresses or hosts. These tools are characterized by some form of SIEM and DLP post-processing, where the primary source data is SIEM and/or DLP output enriched with user identity data and run through analytic algorithms. So, these tools may collect logs and context data themselves or from a SIEM, and utilize various analytic algorithms to create new insight from that data (a toy sketch of this user-centric approach follows this list).
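As an illustration of the user-centric pivot in item 2, here is a deliberately naive sketch: baseline each user account’s daily event volume and flag large deviations from that user’s own history. This is a toy z-score model over assumed data, not any UBA vendor’s actual algorithm.

```python
import statistics

# per-user daily event counts, keyed by user account -- not by IP or host;
# this user-centric pivot is what separates UBA from generic SIEM rules
history = {
    "alice": [40, 35, 42, 38, 41, 39, 44],
    "bob":   [10, 12, 9, 11, 10, 13, 250],  # bob's last day looks anomalous
}

def anomaly_score(counts):
    """z-score of the latest day against the user's own earlier baseline."""
    baseline, latest = counts[:-1], counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against zero variance
    return (latest - mean) / stdev

for user, counts in history.items():
    score = anomaly_score(counts)
    if score > 3.0:  # assumed alerting threshold
        print(f"UBA alert: {user} deviates from own baseline (z={score:.1f})")
```

The point is the pivot, not the math: the same event stream, scored against per-identity baselines, produces alerts that no static, IP-centric correlation rule would.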

As a result, in my opinion, “children of NFT” and “evolved UBAs” (as described above) are probably where REAL security analytics will emerge. At the very least, this functionality seems to be converging on common needs (as I lamented in this post).

Of course, more broadly focused data analysis tools (whether centered on IT data search or entity analytics) have been used for security data analysis as well, usually by the Enlightened Few. These may also steal some of the security analytics thunder in the coming years.

And here is a trick question: how many of these #1 and #2 tools are adopted en masse today, beyond the “Type A of Type A” security elites? Yup, exactly :-)

Now, my traditional call to action:

  • Vendors, got anything to say about using big data methods for security and/or about whatever you consider security analytics? Here is a briefing link … you know what to do [reminder: to brief an analyst you do not need to be a Gartner client – so it is free]!
  • Enterprises, got an “advanced algorithms and/or big data helps security” story – either a WIN story or a FAIL story – to share? Hit the comments or email me privately (Gartner client NDA will cover it, if you are a client).
  • Consultants focused on analytics, got a fun security analytics story (maybe inspired by your recent project) to share? I’d love to hear it and can use or NOT use [if you so desire] the example in my upcoming paper.

For those with a GTP subscription, here are existing documents about the topic:

For those without a GTP subscription, here are the blog posts from my past research projects on …

Security analytics topic:

Network forensics topic:

Category: analytics announcement monitoring network forensics security

Should I Use “SIEM X” or “MSSP Y”?

by Anton Chuvakin  |  December 16, 2014

Lately I’ve been surprised by some organizations’ decision-making as they think about their sourcing choices for security monitoring. Specifically, some organizations want to decide between “SIEM Brand X” and “MSSP Brand Y” before they decide on the model – staffed in-house, managed, co-managed, outsourced, etc. While on some level this makes sense (specifically, on the level of “spend $$$ – get a capability,” whether from a vendor tool run by employees/consultants or from a service provider), it still irks me for some reason.

Let’s psychoanalyze this! IMHO, in real life nobody decides between “BART or Kia” or “Uber or BMW” – people first think “should I buy a car or use public transportation?” and then decide on a vehicle or the most convenient mode of transportation. In one case, your money is used to buy a tool, a piece of dead code that won’t do anything on its own and requires skilled personnel to run. In the other case, you are essentially renting a tool from somebody and paying for their analysts’ time. As a sidenote, occasionally I see a request for something that looks and behaves as BOTH a SIEM and an MSSP, such as a request for a managed SIEM contract (“If you write an RFP for a car AND for a bus pass as one document, you’d get an RFP for a chauffeured limo, with that price,” as some anonymous, but unquestionably wise, CSO has said).

So, to me, deciding whether to own a tool or to rent time from others is The Decision, but which brand of tool or MSSP to procure is secondary.

  1. PICK THE MODEL: SIEM, MSSP or hybrid (such as staff augmentation, co-managed, or even both SIEM and MSSP).
  2. PICK THE BRAND(S) to shortlist.

Admittedly, some hybrid models are fairly mixed (“MSSP for perimeter, but Tier 3 alert triage in-house; internal network monitoring with a SIEM staffed by consultants, and internal user monitoring by FTEs” is a real example, BTW) and you may not have 100% certainty if going for a hybrid. Still, clarity on the degree of externalization is a must.

Otherwise, IMHO, you end up with a lot of wasted time evaluating choices that simply cannot work for you, for example (a toy encoding of this filter follows the list):

  • If you know you cannot hire, don’t look at SIEM [SIEM needs people!]
  • If you cannot move your data outside the organization, don’t look at MSSPs
  • If you cannot hire AND cannot move data out, go with the “managed SIEM”
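Encoded as a toy decision filter (my coarse-grained reading of the rules above, not a formal Gartner framework; the wording of the returned recommendations is mine):

```python
def monitoring_model(can_hire: bool, can_move_data_out: bool) -> str:
    """Coarse-grained first cut: settle the model before shopping for brands."""
    if can_hire and can_move_data_out:
        return "SIEM or MSSP: decide on the desired degree of externalization"
    if can_hire:
        return "SIEM run in-house (data stays inside the organization)"
    if can_move_data_out:
        return "MSSP (no in-house monitoring staff required)"
    return "managed SIEM (tool on premises, operated by a provider)"

print(monitoring_model(can_hire=False, can_move_data_out=False))
# -> managed SIEM (tool on premises, operated by a provider)
```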

Therefore, I think it helps to narrow down the options using the coarse-grained model filter and then go sort out the providers/vendors.

Am I wrong here? Can you intelligently choose between a bunch of SIEM vendors, MSSPs and consulting firms doing managed SIEM if you don’t first settle on the model?

P.S. If you call us at Gartner with another “What is better, MSSP X or SIEM Y?” question, we will undoubtedly help you regardless of the above. Still, I think the model for monitoring/management should precede the brand…

Blog posts related to this research on MSSP usage:

Category: monitoring MSSP security SIEM

How To Exit an MSSP Relationship?

by Anton Chuvakin  |  December 12, 2014

Let me touch on a painful question: when to leave your managed security services provider? While we have research on cloud exit criteria (see “Devising a Cloud Exit Strategy: Proper Planning Prevents Poor Performance”), wouldn’t it be nice to have a clear, agreed-upon list of factors for when to leave your MSSP?

For example, our cloud exit document has such gems as “change of internal leadership, strategy or corporate direction”, “lack of support”, “repeated or prolonged outages” and even “data, security or privacy breach” – do you think these apply to MSSP relationships as well?

And then there is that elephant in the room…


FAILURE TO DETECT AN INTRUSION. Or, an extra-idiotic version of the same: failure to detect a basic, noisy pentest that uses commodity tools and makes no pretense of stealth?

[BTW, this is only an MSSP failure if the MSSP was given access to necessary log data; if not, it is a client failure]

Not enough? How about systematically failing to detect attacks before the in-house team (that… ahem… outsourced attack detection to said MSSP) actually sees them?

Still not enough? How about gross failures on system change SLAs (e.g. days instead of hours), failure to detect attacks, failure to refine rules (leading to excessive alerting) and failure to keep the client’s regulated data safe?

In any case, when signing a contract, think “how can you terminate?” When onboarding a provider, think “how can you off-board?” A detailed departure plan is a must for any provider relationship, but the MSSP case also has unique twists…

Any thoughts? Have you left your MSSP in the dust over these or other reasons? Have you switched providers or brought the processes in-house? What would it take for you to leave?

Blog posts related to this research on MSSP usage:

Category: MSSP security

DLP Without DLP!?

by Anton Chuvakin  |  December 3, 2014

“Titanic” was a big ship (it also was compliant) and it was probably prestigious to be seen on its deck. However, if somebody were to tell you that it would sink soon, you would rapidly develop a need to part ways with it…. Now, I am NOT saying that the data loss prevention (DLP) market is the “Titanic” – far from it. Our market measurements, done elsewhere at Gartner, still show projected growth (although this states that “DLP segment growth rates have been reduced by 9.7% and 9.9% for 2014 and 2015, respectively” due to issues like “complexity in deploying companywide DLP initiatives, value proposition realization failures and high costs”).

As we update our GTP DLP research, I think I have noticed a disturbing trend – organizations planning what is essentially a data loss prevention project without utilizing DLP technology.

Think about it! In some cases, the sequence of events is truly ridiculous and goes like this:

  1. DLP technology is purchased and deployed
  2. The organization is breached and data stolen
  3. Anti-data breach project is initiated.

Say what?! Has Anton lost his mind while vacationing in Siberia?

I assure you that this seemingly idiotic sequence of events is real at some organizations. At others, I observed that a project to “detect exfiltration”, “gain network visibility” or even directly “stop data losses” is initiated, and DLP technology is not considered central to it or even involved. In essence, they do DLP without DLP! This has seemingly caught some vendors between the desire to be present in the DLP market and the readiness to jump off (such as towards an adjacent market or even into the blue ocean of new market creation) upon seeing the first signs of an iceberg…

What does a DLP-less data loss project look like? As mentioned above, it may focus on exfiltration detection, network forensics/visibility (with a focus on outbound data transfers) or other network-centric security analysis. Indeed, if Sony really did lose 11TB of valuable data, the challenge is not with fancy content inspection, but with basic network awareness. Even a good SIEM consuming outbound firewall logs and an analyst watching the console will be very useful here, and will allow one to detect massive data losses – sometimes in time to stop the damage…
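To make the “basic network awareness” point concrete, here is a minimal sketch: sum outbound bytes per internal host from (already parsed) firewall log records and flag hosts moving abnormal volumes. The record layout and the 1 GB/day threshold are assumptions; a real deployment would tune thresholds per host role.

```python
from collections import Counter

# parsed outbound-allow firewall log records: (internal source IP, bytes sent)
records = [
    ("10.1.1.20", 120_000),
    ("10.1.1.21", 80_000),
    ("10.1.1.22", 9_500_000_000),  # ~9.5 GB outbound in one day: worth a look
    ("10.1.1.20", 300_000),
]

DAILY_LIMIT = 1_000_000_000  # assumed: 1 GB of outbound traffic per host per day

outbound = Counter()
for src_ip, nbytes in records:
    outbound[src_ip] += nbytes

for src_ip, total in outbound.items():
    if total > DAILY_LIMIT:
        print(f"possible exfiltration: {src_ip} sent {total / 1e9:.1f} GB outbound")
```

No content inspection anywhere – just counting bytes per host would surface a multi-terabyte loss long before it finishes.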

Thoughts? Have you seen any “DLP without DLP” lately?

Posts related to DLP research in 2013:

Category: DLP philosophy security

My Top 7 Popular Gartner Blog Posts for November

by Anton Chuvakin  |  December 1, 2014

Most popular blog posts from my Gartner blog during the past month are:

  1. On Comparing Threat Intelligence Feeds (threat intelligence research)
  2. Detailed SIEM Use Case Example (SIEM research)
  3. MSSP: Integrate, NOT Outsource! (MSSP research)
  4. SIEM Magic Quadrant 2014 Is Out! (announcements)
  5. Popular SIEM Starter Use Cases (SIEM research)
  6. MSSP Client Onboarding – A Critical Process! (MSSP research)
  7. Named: Endpoint Threat Detection & Response (ETDR research)

Enjoy!

Past top posts:

Category: popular security

Our DDoS Papers Publish

by Anton Chuvakin  |  November 19, 2014

Our papers on Denial of Service (DoS/DDoS) protection have just been published. Together with Patrick Hevesi, our newest team member, we have updated and expanded our DDoS coverage.

  • “DDoS: A Comparison of Defense Approaches” is our flagship DDoS document that states that “distributed denial of service attacks have risen in complexity, bandwidth and number of occurrences targeting enterprises. Organizations must architect their defenses with both cloud and on-premises defenses along with integrating DDoS responses into the current incident response process.” Now the document features much more vendor/provider coverage and updated architecture guidance, as well as more content on modern attacks and defenses.
  • “Blueprint for Mitigating DDoS Attacks and Protecting Data Centers and Hybrid Cloud” is a quick document that “defines a DDoS defense architecture for enterprises with a mission-critical website or e-commerce site and that have multiple ISPs connected into their data centers and corporate centers, and that use public IaaS.” This document features a sample DDoS defense architecture using cloud and on-premises components.

Enjoy!

P.S. Gartner GTP subscription required for access.

Other posts announcing document publication:

Category: Denial of Service security

MSSP Client Onboarding – A Critical Process!

by Anton Chuvakin  |  November 14, 2014

Many MSSP relationships are doomed at the on-boarding stage, when the organization first becomes a customer. Given how critical the first 2-8 weeks of your MSSP partnership are, let’s explore this stage a bit.

Here are a few focus areas to note (this, BTW, assumes that both sides are in full agreement about the scope of services and can quote from the SOW if woken up at 3AM):

  • Technology deployment: unless MSSP sensors are deployed and able to capture logs, flows, packets, etc, you don’t yet have a monitoring capability. Making sure that your devices log – and send logs to the MSSP sensor – is central to this (this also implies that you are in agreement on what log messages they need for their analysis – yes, I am talking about you, authentication success messages :-)). A crude log-delivery test sketch follows this list.
  • Access methods and credential sharing: extra-critical for device management contracts, no amount of SLA negotiation will help your partner apply changes faster if they do not have the passwords (this also implies that you log all remote access events by the MSSP personnel and then send these logs to …. oops!)
  • Context information transfer: lists of assets (and, especially, assets considered critical by the customer), security devices (whether managed by the MSSP or not), network diagrams, etc all make a nice information bundle to share with the MSSP partner
  • Contacts and escalation trees: critical alerts are great, but not if the only person whose phone number was given to the MSSP is on a 3-week Caribbean vacation… Escalation trees and multiple current contacts are a must.
  • Process synchronization: now for the fun part: your risk assessment (maybe) and incident response (likely) processes may now be “jointly run” with your MSSP, but have you clarified the touch points, dependencies and information handoffs?
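On the technology deployment point, even a crude end-to-end test beats assuming that logs flow. Below is a minimal sketch that fires one test syslog message at the MSSP sensor so both sides can confirm receipt; the sensor hostname and port are hypothetical, and delivery only proves the network path, not that the MSSP parses your logs correctly.

```python
import socket
from datetime import datetime

SENSOR = ("mssp-sensor.example.com", 514)  # hypothetical MSSP collector address

def send_test_syslog(message: str) -> None:
    """Send one RFC 3164-style UDP syslog message to the MSSP sensor."""
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    # PRI 13 = facility 'user' (1) * 8 + severity 'notice' (5)
    payload = f"<13>{timestamp} onboard-test: {message}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("ascii"), SENSOR)

send_test_syslog("MSSP onboarding log-path check, please confirm receipt")
```

Repeating this kind of check per log source during on-boarding is exactly the “are you actually receiving our logs?” conversation that prevents the pentest embarrassment described below.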

If you, the MSSP client, fail to follow through with these, the chance of success is severely diminished. Now, as my research on MSSP progresses, the amount of sad hilarity I am encountering piles on – and you don’t want to be part of that! For example, an MSSP asks a client: “To improve our alert triage, can we please get the list of your most critical assets?” The client response? “Damn, we’d like to know that too!” When asked for their incident response plan, another client sheepishly responded that they don’t have it yet, but can we please create it together – that is, only if it doesn’t cost extra…. BTW, if your MSSP never asked you about your IR plans during on-boarding, run, run, run (it is much better to actually walk thru an incident scenario together with your MSSP at this stage).

In another case, a client asked an MSSP “to monitor for policy violations.” When asked for a copy of their most recent security policy, the client responded that it has not been created yet. On the other hand, a sneaky client once scheduled a pentest of their network during the MSSP onboarding period – but after their sensors were already operational. You can easily imagine the painful conversations that transpired when the MSSP failed to alert them…. Note that all of the above examples and quotes are fictitious, NOT based on real clients and are entirely made up (which is the same as fictitious anyway, right? Just wanted to make sure!)

Overall, our recent poll of MSSP clients indicated that most wished they’d spent more time on-boarding their MSSPs. Expect things to be very much in flux for at least several weeks – your MSSP should ask a lot of questions, and so should you! While your boss may be tempted by the promises of fast service implementation, longer on-boarding often means better service for the next year. Of course, not every MSSP engagement starts with a 12-week hardcore consulting project involving 4 “top shelf” consultants, but such a timeline for a large, complex monitoring and management effort is not at all offensive. In fact, one quality MSSP told me that they can deploy the service much faster than it takes their clients to actually fulfill their end of the bargain (share asset info, contacts, deploy sensors, tweak the existing processes, etc).

Blog posts related to this research on MSSP usage:

Category: MSSP security

MSSP: Integrate, NOT Outsource!

by Anton Chuvakin  |  November 5, 2014

Security outsourcing! While the concept makes many managers happy (“Phew… no need to do security anymore” — yeah, right!), I have noticed that some smart MSSP leaders avoid the “O word.” If we are to believe Wikipedia, “outsourcing” implies “contracting out of a business process to another party.” On the surface, it sounds like “security monitoring” and “security device management” are perfectly fine business processes.

However, where does your security monitoring process end? If you think that it ends with some alert being triggered, then you have indeed been outsourcing the entire process. On the other hand, if you consider what happens after that alert is produced by an MSSP security analyst to also be part of your monitoring, then you ultimately INTEGRATE the processes (yours and the MSSP’s) rather than OUTSOURCE yours to an MSSP.

My early research conversations with both MSSP customers and providers themselves reveal a theme: those who think “integrate, NOT outsource” usually get much more value out of the MSSP relationship. In a dramatic break from my personal “policy” of not linking to vendor content from my Gartner blog (motivated by my utter lack of desire to waste time fighting idiotic accusations of ‘vendor favoritism’), here is a great example of integrated security operations with an MSSP:

[Infographic: “Are you maximizing the value from your managed security services provider (MSSP) relationship?” – source: IBM, via this blog]

Vendor-produced or not, I can recognize awesomeness when I see it. Thanks to @mikebsanders for an excellent resource.

Now, what does it all mean?

This means that for the MSSP to work well for you, process integration must be carefully planned. Here we talked about the alert response integration (and here about the SLAs), but the same applies to device management (integrate with your change management and reporting), incident response (integrate with your IR) and many other processes.

This also means that this focus on integration allows you to vary the degree of security ‘outsourcing’ or externalization. If your plan – monitor – triage – respond – refine chain is well defined, you can almost painlessly engage external resources (MSSP, consultants, etc) at whatever stage: need more help with cleaning up the mess? Call that IR consultant. Want to shift some perimeter monitoring duties outside? Go get that MSSP. Want to bring specific application security monitoring tasks in-house? Do exactly that. Some process chunks will externalize well, some poorly [and some not at all], but at least you will have a predictable map of what goes where and who does what…

Blog posts related to this research on MSSP usage:

Category: monitoring MSSP security