
Network Anomaly Detection Track Record in Real Life?

By Anton Chuvakin | October 15, 2018 | 6 Comments

security, network forensics, network monitoring, detection, NTA

As I alluded to here, my long-held impression is that no true anomaly-based network IDS (NIDS) has ever been successful commercially and/or operationally. There were some bits of success, to be sure (“OMG WE CAN DETECT PORTSCANS!!!”), but in total, they (IMHO) don’t add up to SUCCESS for the approach.

In light of this opinion, here is a fun question: do you think the current generation of machine learning (ML) – and “AI”-based (why is AI in quotes?) systems will work better? Note that I am aiming at a really, really low bar: will they work better than – per the above statement – not at all? But my definition of “work” includes “work in today’s messy and evolving real life networks.”

This is actually a harder question than it seems. Of course, ML and “AI” aficionados (who, as I am hearing, are generally saner than the blockchain types … those are more akin to clowns, really) would claim that of course “now with ML, things are totally different”, because “cyber AI” and “next next next generation deep learning just works.”

On the other hand, some of the rumors we are hearing mention that in noisy, flat, poorly managed networks anomaly detection devolves … no, really! … to signatures and fixed activity thresholds where humans write rules about what is bad and/or not good.

Before we delve into this, let’s think about the meaning of the term ANOMALY. In the past, “anomaly-based” was about silly TCP stack protocol anomalies and other “broken packets.” Today it seems that the term “anomaly” applies to mathematical anomalies in longer-term activity patterns – and not merely packets like in the 1990s.
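
To make that distinction concrete, here is a minimal sketch (my toy illustration, not any vendor’s actual algorithm) of an “anomaly in longer-term activity patterns”: score each host’s hourly outbound volume against that host’s own rolling baseline and flag large deviations. The window size, threshold and traffic numbers are all made up for the example.

```python
# Toy sketch: flag a host's hourly outbound byte count as anomalous when it
# deviates strongly from that host's own rolling baseline (a z-score test).
# Illustration only -- the window, threshold and traffic numbers are invented.
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW = 24          # hours of history kept per host (assumed)
THRESHOLD = 3.0      # z-score above which we call it an anomaly (assumed)

history = defaultdict(lambda: deque(maxlen=WINDOW))

def is_anomalous(host: str, bytes_out: int) -> bool:
    """Return True if this hour's outbound volume is far outside the host's baseline."""
    baseline = history[host]
    anomalous = False
    if len(baseline) >= 8:                      # need some history before judging
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma > 0 and (bytes_out - mu) / sigma > THRESHOLD:
            anomalous = True
    baseline.append(bytes_out)                  # update the rolling baseline
    return anomalous

# Ten normal-ish hours, then a sudden ~100x spike (e.g. possible exfiltration).
volumes = [4_500_000, 5_200_000, 4_900_000, 5_600_000, 5_100_000,
           4_700_000, 5_400_000, 5_000_000, 4_800_000, 5_300_000,
           500_000_000]
for hour, volume in enumerate(volumes):
    if is_anomalous("host-10.1.2.3", volume):
        print(f"hour {hour}: anomalous outbound volume {volume}")
```

Note how much depends on the baseline being meaningful: in a noisy, flat, poorly managed network the “normal” history is itself garbage, which is exactly how such systems quietly degrade into the fixed thresholds and hand-written rules mentioned above.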

So, will it work? This cannot really be answered without asking “work to detect what?”

Let’s go through a few examples we are hearing about:

  • C2/C&C connection from malware to an UNKNOWN [for known, signatures and TI work well, no need to ML it] piece of attacker infrastructure – this was reported to work by some people, and it is not a stretch to imagine that anomaly detection can work here, at least some of the time (a toy beaconing sketch follows after this list)
  • Connection to some malicious domain [UNKNOWN to be bad at detection time, see above] – DGA domain detection is now baby’s first ML, so it does work [with some “false positives”, but then again, this is a separate question] (a toy DGA classifier sketch also follows the list)
  • Internal recon such as a port scan – it works, but then again, this is probably the only thing where the old systems also worked [but with false alarms too]
  • Stolen data exfiltration by an attacker – we’ve heard some noises that it may work, but then again – we’ve heard the same about DLP. IMHO, the jury is still out on this one… Let’s say I think anomaly detection may detect some exfiltration some of the time with some volume of “false positives” and other “non-actionables”
  • Lateral movement by the attacker – the same as above, IMHO, the jury is still out on this one and how effective it can be in real life. I’d say we’ve heard examples where it worked, and some where it was too noisy to be useful or failed outright.
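
As promised for the first bullet, here is a toy beaconing sketch. The intuition behind detecting C2 to unknown infrastructure is that malware tends to call home on suspiciously regular timers, while humans do not. The heuristic and cutoffs below are my illustration only, not any product’s actual detection logic.

```python
# Toy beaconing heuristic: for each (source, destination) pair, look at the
# gaps between connection timestamps. Many connections with very regular
# gaps (low coefficient of variation) look like automated C2 check-ins
# rather than human browsing. Thresholds below are illustrative guesses.
from statistics import mean, pstdev

MIN_CONNECTIONS = 10   # assumed: need enough events to judge regularity
MAX_CV = 0.1           # assumed: stddev/mean of gaps below this looks machine-like

def looks_like_beacon(timestamps: list[float]) -> bool:
    if len(timestamps) < MIN_CONNECTIONS:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(gaps)
    return avg > 0 and (pstdev(gaps) / avg) < MAX_CV

# A connection every ~300 seconds with tiny jitter: flagged.
beacon = [i * 300 + j for i, j in enumerate([0, 2, -1, 3, 0, 1, -2, 2, 0, 1, -1, 2])]
print(looks_like_beacon(beacon))   # True
# Irregular, human-looking timing: not flagged.
print(looks_like_beacon([0, 40, 940, 1000, 2500, 2600, 4000, 7000, 7200, 9000, 9100, 12000]))  # False
```

Real traffic adds jitter, sleep randomization and proxies, which is where the “some of the time” qualifier comes from.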
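
And here is the toy DGA classifier sketch for the second bullet: character-level features of the domain label plus a basic classifier really is “baby’s first ML.” The scikit-learn choice and the tiny hard-coded training set are mine, purely to make the example runnable; real systems train on large labeled domain feeds.

```python
# Toy DGA-domain classifier: character n-grams of the domain label fed to a
# simple logistic regression. The tiny hard-coded training set exists only to
# make the example self-contained; it is nowhere near a real model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

benign = ["google", "wikipedia", "github", "microsoft", "salesforce", "nytimes"]
dga    = ["xjw3kqpzv", "qhzt8rkwma", "plvmz2qxjd", "wk9qzjvtxr", "zqxhvjm3pl", "vjqz7xkmtr"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),  # character bigrams/trigrams
    LogisticRegression(max_iter=1000),
)
model.fit(benign + dga, [0] * len(benign) + [1] * len(dga))

for label in ["linkedin", "rkz8qvjxw"]:
    print(label, model.predict([label])[0])   # 0 = labeled benign, 1 = labeled DGA-like
```

With a training set this small the predictions mean nothing, of course; the point is the shape of the approach, and why a steady trickle of “false positives” comes along with it.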

Apart from that, I’ve seen some naïve attempts to use supervised ML to train systems to learn good/bad traffic in general. IMHO, this is a total lost cause. It worked brilliantly for binaries (pioneered by “Vendor C”, for example), but IMHO this is 100% hopeless for general network traffic.

Finally, if the above detection benefits do not materialize for you, we are back in the “dead packet storage” land (albeit with metadata, not packets).




6 Comments

  • Nilesh says:

    Netflix is the answer.

  • Nilesh says:

Sorry, netflow is the best answer.

  • Andre Gironda says:

    You correctly listed beaconing, fumbling, exfil, and access expansion as properties that ML and DL can detect.

    However, you did not list entry-point access, account takeover, or fraud. Why?

    Additionally, I don’t see a single one of these as anomalies. These are critically-malicious, known-unknowns at the worst.

    Anomalies aren’t detected by ML or DL. You’re using the wrong reference class again. Anomalies are discovered with tools like Etsy Skyline or Twitter BreakoutDetection against time-series data.

    You’re also assuming that supervised ML would be a good way to go about playing with netflow or pcap data, but you’re not specifying anything else.

    We know, through projects such as Apache Spot, that engaging the data with natural language processors can be done with statistical techniques, machine learning, and deep learning (not necessarily all 3, could be just stats without any learning) — but, certainly combined with newer meta including deep reinforcement learning. However, this requires the infrastructure and expertise to support this through a sharing alliance, not in a vacuum as most Enterprises have attempted so far.

Can a vendor like Darktrace pull this together for their customer sets? Yes, but again, they are using a lot less than supervised learning, and even the malware data science folk are using a lot less than supervised learning. The other question is: have they done it? That’s a mixed answer, and I will tell you that if they haven’t, then it’s because their customers are resisting or spoiling the brew. Thus, it becomes a data quality issue, and unless you have data engineers on both sides, you’ll never actually figure out if the data is lying to you or whatnot.

    I am finding it really difficult to respond thoughtfully to this because you’re just using the wrong language, so we don’t have a common place to start the conversation from. Know that the quants are laughing here though. It’s equally easy as it is complex to answer these questions, to solve these problems — but you have to know the basics like choosing the right wording and using the right reference classes.

    • Thanks for an insightful comment.

Some responses to your comments below:

      >However, you did not list entry-point
      >access, account takeover, or fraud. Why?

I don’t see account takeover or fraud as use cases for network analytics. They are better served by log-based analytics (SIEM, UEBA and others). This is why they are not mentioned here. In fact, some of the vendors of network analytics still use flows, hence they cannot see account activity at all (no L7 – no account info).

      >I don’t see a single one of these as anomalies. These are critically-malicious, known-unknowns at the worst.

This is a big one! And it perhaps justifies a longer discussion. I do agree that they are “known unknowns”, but perhaps not “known bad” (e.g. C&C to a not-noted-as-bad IP/name).

      >You’re also assuming that supervised ML would be a good way to go about playing with netflow or pcap data…

I said EXACTLY THE OPPOSITE: that it is useless.

  • Reposted comment from LinkedIn (for posterity):

    No. No. And no.

    C&C connection use case: yes
    Malicious domains: yes
    Port scans: seriously?
    Data exfil: Not today
    Lateral movement: Not today.