

NTA: The Big Step Theory

by Anton Chuvakin  |  October 25, 2018  |  4 Comments

Let’s come back from the world where the endpoint won the detection and response wars to this one. As we ramp up our NTA (but, really, broader NDR for network-centric detection and response) research, one mystery has to be resolved.

What motivates some organizations to actually deploy NTA (usually one particular NTA vendor technology built on a large island off the coast of Europe) before any other detection and monitoring controls, and sometimes even before basic logging?

Let’s put aside the offensive jokes and appeals to the principle that “90% of people aren’t in the top 10 percentile.” Seriously, this question somehow bothered me, but I think I cracked it.

I call my explanation THE BIG STEP THEORY.

Imagine a real “Mongolian clusterluck” [no FoaaS for this one, sorry] of a network, where endpoints are unmanaged or clown-managed, switches and routers are poorly configured, nothing is segmented, changes are random, VPN users are dropped into the main LAN and nothing logs anything. In other words, not the networks that my esteemed readers’ employers have, but what other people have …

What is a single step one can take to know what the hell is going on in their IT environment? Where threats lurk? What users cause mischief in the name of “just doing their jobs”? What goes out, in and sideways?

I’d argue that it is NOT SIEM, NOT EDR, NOT NIDS or NGFW – but actually NTA.

SIEM needs several log types to be enabled (!) and collected – then correlated into a coherent picture. EDR needs agents nearly everywhere for decent visibility. NIDS will only get you known attacks, unless you write additional signatures.

NTA, however, will give you a passable picture of activities after you deploy just one box (typically on egress or in some other central location) that sniffs traffic. For sure, for this you need a layer 7 NTA, not a 1990s-style layer 3 one.
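
To make the layer 3 vs. layer 7 point concrete, here is a rough sketch, not any vendor’s product: flow-style (L3/L4) records give you addresses, ports and byte counts, while an L7 sensor on the same tap can also pull application metadata such as HTTP user agents. It assumes Python with scapy (2.4.3 or later, which ships an HTTP dissector) and a SPAN/tap interface named eth0, both of which are assumptions for illustration; real NTA does vastly more (reassembly, many protocols, TLS metadata).

# Illustrative only: compare what flow-level (L3/L4) visibility gives you
# with the extra application (L7) metadata available from the same tap.
# Assumes scapy >= 2.4.3 and a mirror/SPAN interface called "eth0".
from scapy.all import sniff
from scapy.layers.inet import IP, TCP
from scapy.layers.http import HTTPRequest  # importing binds HTTP dissection to port 80

def show(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return
    ip, tcp = pkt[IP], pkt[TCP]
    line = f"{ip.src}:{tcp.sport} -> {ip.dst}:{tcp.dport}"  # the "1990s-style" flow view
    if pkt.haslayer(HTTPRequest):                           # the extra L7 context
        req = pkt[HTTPRequest]
        host = (req.Host or b"?").decode(errors="replace")
        ua = (req.User_Agent or b"?").decode(errors="replace")
        line += f"  host={host}  user-agent={ua}"
    print(line)

sniff(iface="eth0", filter="tcp port 80", prn=show, store=False)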

Hence, in this view, NTA represents the biggest ONE STEP jump to situational awareness from near-total unawareness.

Agreed? Other views?

P.S. A post-scriptum aimed at the “all of the above” or “layered defense” crowd that always pedantically reminds us that “in security, the only correct answer is ALL OF THE ABOVE” since you need “oh so many layers of defense.” We know! However, not everybody will do this, almost nobody in smaller and mid-sized organizations will do it, and, frankly, even large enterprises won’t deploy all layers at the same time.

Blog posts related to NTA, NDR and this research:

Category: detection  monitoring  network  network-forensics  nta  security  



Thoughts on NTA: The Big Step Theory


  1. Andre Gironda says:

    Nah, Infocyte, CyLR, and F-Response (N.B., none of these are EDRs, but they are endpoint tools) are the better platforms to uncover threats in chaotic environments. Only F-Response requires an agent, but it’s the combination of these 3 and their underlying techniques that often provides the most bang per use.

    NTA isn’t going to uncover persistence with stealth, especially not volatile persistence with stealth. It’s not great at uncovering stealth by itself, either. You say layer 3 and layer 7, so does that mean you’re completely missing layer 2 transports such as slarpc/slarpd and SMB over named pipes?

    Even FastIR_Collector is going to dump named pipes (and the rest) to give responders a quick view in seconds. You really need to get on the endpoint and perform memory acquisitions. You need to know what’s running in the environment: even when it’s behind a plain-looking TLS chain with an easily-fingerprinted SSL/TLS library and your org’s (or other too-obvious) subject identifiers and other variables.

    If you want to go and see, then you will need to do more than pin down a network with an NTA. It might be nice to have one, but the data is a lot of garbage to go through if you don’t find any beaconing or fumbling — and even then you’re going to hit a lot of false positives. I’d say you create more work with NTA than you find outcomes, compared to a non-EDR response tool such as Infocyte.
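
To illustrate the “beaconing” idea mentioned here, and why the false positives pile up: a toy check (plain Python, with a made-up connection-record format and an arbitrary jitter threshold) that flags source/destination pairs whose connections arrive at suspiciously machine-regular intervals. Plenty of legitimate software (updaters, NTP, SaaS agents) trips exactly the same logic, which is where the garbage-to-go-through problem comes from.

# Toy beacon check over connection records (timestamp, src, dst).
# The record format and the 10% jitter threshold are assumptions for
# illustration; real traffic will flag updaters, NTP, SaaS agents, etc.
from collections import defaultdict
from statistics import mean, pstdev

def beacon_candidates(conns, min_gaps=10, max_jitter=0.1):
    by_pair = defaultdict(list)
    for ts, src, dst in conns:
        by_pair[(src, dst)].append(ts)
    for pair, times in by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) < min_gaps:
            continue
        avg = mean(gaps)
        # low variation relative to the average gap ~ machine-like regularity
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            yield pair, avg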

    • Thanks a lot for the comment.

      As a minor BTW, I think Infocyte is an agentless EDR (as they essentially don’t fit anywhere else) – I always treat them as such.

      About other stuff – I suspect you are right. This is why some people may think that “the endpoint approach has won.”

      However, what to do with the machines your agentless tool cannot access (no credentials or no OS [OT/IoT] or no remote access methods)? Won’t we use NTA-like tech for that?

      Re: layer 3 vs 7, I mostly meant UP TO L3 and UP TO L7. L3 didn’t mean ignore anything below, but more like “blind to anything above” – cannot get the user agent from flow data, no? :-)

      Re: garbage from NTA — can’t argue here :-)

      Re: create more work with NTA than you find outcomes compared to […] <- I'd say even compared to EDR (agentless or not), so yes, agree here.

      BUT less mature orgs that are mentioned in this post as "liking NTA" often don't have ANY way to use an agentless tool, as the client team simply does not let the sec team do it… OOPS!

      • Andre Gironda says:

        Yes, thank you for the correction. I meant to correct the agentless EDR point myself, but we absolutely agree on how to categorize it.

        For OT/IoT/IIoT, the stakes are higher for the people, the process, and the technology. Certainly where you’d normally see more and more CrowdStrike in Enterprise, you’ll see more and more Digital Guardian in these OT/IIoT environments. The DFIR/CERT and CTI teams must know the human-element consequences in addition to, or on top of, the existing technology problems, including cybersecurity. Thus, a heavier set of controls around the human and process elements. These orgs live and die (and perhaps humans, animals, etc. live and die) by their standards, by their value chains.

        I also think token-authenticated events are a huge part of OT, IoT, and IIoT — whether the org is a utility, construction outfit, chemical plant, etc. (i.e., often considered Critical Infrastructure) servicing a large community or not. Thus, log management and SIEM will play a huge part in the technology platform pieces for cybersecurity here. I guess a special hat tip to Splunk HEC, but certainly AWS Thing Shadow and many other components. Perhaps we need an extra special hat tip to Splunk for Industrial IoT here as well, but I don’t want to go too crazy.
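
Since Splunk HEC comes up: a minimal sketch of shipping one device/OT event to an HTTP Event Collector endpoint over its documented JSON interface. The hostname, token, sourcetype and event fields below are placeholders, the third-party requests library is assumed, and this is not a recommendation of any particular pipeline.

# Minimal sketch of sending one device/OT event to a Splunk HTTP Event
# Collector (HEC). Hostname, token, sourcetype and event fields are
# placeholders; verify=False is only acceptable against a lab instance
# with a self-signed certificate.
import requests

HEC_URL = "https://splunk.example.internal:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

payload = {
    "sourcetype": "iot:telemetry",      # hypothetical sourcetype
    "host": "plc-042",                  # hypothetical device name
    "event": {"sensor": "valve-7", "state": "open", "auth": "token"},
}
resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=payload,
    timeout=5,
    verify=False,
)
resp.raise_for_status()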

        Another problem with NTA is useful baselines. I can get extremely useful baselines once I can get a process-level memory dump, let alone a whole-system memory dump. Every chip on earth probably speaks some variation of JTAG, SWD, gdbserver, KGDBoC, KGDBoE, et al. Even when endpoint memory insight doesn’t outright solve cybersecurity, it at least cuts the problem into 2 solvable halves. NTA can’t do that — it’s stuck in permanent garbage space.
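
A much shallower stand-in for the memory-level baselining described above, but it shows the baseline-then-diff idea in a few lines: snapshot running processes and their executable hashes on a known-good day, then diff a later snapshot against it. The third-party psutil library is assumed; a real workflow would of course work from memory acquisitions, not just the process table.

# Simplified baseline/diff sketch (NOT memory forensics): record running
# processes and executable hashes, then compare a later snapshot.
# Assumes the third-party psutil library is installed.
import hashlib
import json
import psutil

def snapshot():
    procs = {}
    for p in psutil.process_iter(attrs=["pid", "name", "exe"], ad_value=None):
        exe = p.info.get("exe")
        digest = None
        if exe:
            try:
                with open(exe, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable or vanished binary; record as unknown
        procs[f"{p.info['name']}:{exe}"] = digest
    return procs

def diff(baseline, current):
    # anything new or changed since the baseline was taken
    return {k: v for k, v in current.items() if baseline.get(k) != v}

if __name__ == "__main__":
    # e.g. save snapshot() to disk on a known-good day, diff during triage
    print(json.dumps(diff({}, snapshot()), indent=2))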

        Now, all said, I would definitely consider adding Network Forensics to even a low-value input chain org at either low maturity or combing for good early-win finds when starting out with hunter teaming. Does it need an NTA capability? I would say it needs to be prepared for one. Would I rather see Bro (now Zeek) Intelligence Framework in use long before modeling with NTAs? Yes, I would.

        Other times we’re going to see the answer as both (NTA and EDR) or all 3: NTA, EDR, SIEM. Toolchains like Sysdig Falco and Microsoft’s SRUM could easily add NTA or fully move in this direction (they’re already at least halfway there now).

        As for the people problems, I’ve seen many a NetEng group deny taps and port mirrors in the same fashion that sysadmins deny WMI or PS Remoting. Typically, you are going to see many varieties of support and handoff. Yes, CyLR or Infocyte could be right out — but that same org might allow F-Response instead. Any good DFIR/CERT team is going to have deep CTI to forecast significant issues and be prepared for how to handle them. Even crappy DFIR/CERT teams should know how to run ir-rescue in order to cdqr the artifacts into skadivm. Heck, even Redline can do this for Win7 machines with 8GB of DRAM and 80GB of storage or less.

        In all seriousness, a Teensy or Bash Bunny running ir-rescue should be given to power-hungry sysadmins and helpdesks. Make it too easy for them. The difficult part will be convincing anyone to use a real EDR like FireEye HX or CbER that can do honest containment via host-network isolation. Once you’re at that stage, then you can actually DFIR/CERT. NTA is not gonna help you get there. You don’t get to skip a stage, sorry.

        The other nice thing about evolving the endpoint story for DFIR/CERT teams working with sysadmins is that eventually you can get to the EMET/ExploitGuard, SysMon, and DeviceGuard (sub AppLocker, et al) conversations. Maybe the red teaming analysis conversations can start to happen around this time as well.

  2. Thanks again for a super-insightful comment. Let me touch on one of the fun mysteries there.

    >Now, all said, I would definitely consider adding Network Forensics to even a low-value input chain org at either low maturity or combing for good early-win finds when starting out with hunter teaming.

    We see network forensics (=saving packets and very rich metadata) as MUCH less popular than NTA (=anomaly detection). Few people seem to want to buy or build it.

    Now, you and I would do it, but it seems like we are in the minority :-(
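
For what it’s worth, the “saving packets” half of network forensics can be shown in a few lines: a rotating full-packet capture that writes a new pcap every hour. This is a toy (scapy, a made-up interface name, no disk-space management); real deployments use dedicated capture appliances or at least tcpdump-class ring buffers, plus the rich metadata layer on top.

# Toy rotating full-packet capture: one new pcap file per hour.
# Assumes scapy and a mirror/SPAN interface called "eth0"; no disk-space
# management and no metadata extraction -- illustration only.
import time
from scapy.all import sniff
from scapy.utils import PcapWriter

ROTATE_SECONDS = 3600

def capture(iface="eth0"):
    while True:
        fname = time.strftime("capture-%Y%m%d-%H%M%S.pcap")
        writer = PcapWriter(fname, sync=True)  # flush each packet to disk
        # capture one rotation window, then start a new file
        sniff(iface=iface, prn=writer.write, store=False, timeout=ROTATE_SECONDS)
        writer.close()

if __name__ == "__main__":
    capture()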

    >I’ve seen many a NetEng group deny taps and port mirrors in the same fashion that sysadmins deny WMI or PS Remoting.

    Yes, +100! We see battles for logs, battles for agents and battles for taps. Not sure which kind is most fierce…



