Gartner Blog Network


Deception vs Analytics, or Can Analytics Catch True Unknown Unknowns?

by Anton Chuvakin  |  December 7, 2018  |  11 Comments

This is a debate post, not a position post. The question alluded to therein (hey… I said “alluded to therein” to sound like Dan Geer, no?) has been bugging us for some time, perhaps for 2+ years.

However, we deferred this debate and hid behind the fact that most organizations don’t really compare broad security approaches like “do deception” or “do analytics” (or even “do network” or “do endpoint” for detection) in furtherance of a particular goal. The extra-large enterprises always click “all of the above” while others just want to compare vendors.

But I think the time has come to tackle this, given that this quarter we are looking at both deception tools/practices and network analytics tools.

First, it is very clear that there are sets of security problems where the question of “how to handle it?” or “how to detect it?” can be answered in fundamentally different ways.

Let’s take many people’s recent favorite: attacker’s lateral movement detection (ATT&CK link).

So far, we’ve seen organizations use these approaches for detecting attacker movement in their environment:

  1. Network-centric: NTA (flow-based or L7 [better!]), and NSM as an approach.
  2. Endpoint-centric: EDR or various endpoint interrogation tools.
  3. Log-centric: SIEM or UEBA with relevant logs (network, endpoint, DNS, etc.).
  4. Deception-centric: decoys, lures and other juicy honey-tools and deception methods.

[of course, there is still a “WHAT LATERAL? WE WILL STOP THEM AT THE BORDER!” crowd, but I am not talking about those people today]

Ok, so far, nobody is running away screaming “WROOOOONG!” Fine!

But one method is not like the others. As we alluded to in Better Data or Better Algorithms? back in 2016, methods #1–#3 above work like this:

  1. Gather lots of data from network, endpoint, logs, or a combination thereof.
  2. Think up a sneaky method to glean the insight you need from the data you just collected.
  3. When this insight is gleaned, it is shown to an appropriate human, who then runs out and takes the attacker out (not out on a date, mind you, but out with the trash).
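For concreteness, here is a minimal sketch of that three-step pipeline, using a toy log-volume baseline (the data, threshold, and function names are all illustrative, not any real product’s method):

```python
import statistics

def baseline(events):
    """Step 2 prep: model 'normal' as mean and standard deviation of history."""
    return statistics.mean(events), statistics.pstdev(events)

def is_anomalous(count, mean, std, z_threshold=3.0):
    """Flag an observation that deviates strongly from the baseline."""
    if std == 0:
        return count != mean
    return abs(count - mean) / std > z_threshold

# Step 1: gather data -- toy daily counts of internal connections per host
history = [12, 15, 11, 14, 13, 12, 16, 14]

# Step 2: glean the insight -- compare today's count against the baseline
mean, std = baseline(history)
today = 85  # a sudden burst of lateral connections

# Step 3: show the insight to an appropriate human
if is_anomalous(today, mean, std):
    print(f"ALERT: {today} connections vs baseline {mean:.1f} +/- {std:.1f}")
```

Note that the quality of the result hinges entirely on steps 1 and 2: collect the wrong data, or tune the threshold poorly, and the insight never appears.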

However, #4 (deception) does not work like this; it works more like this:

  1. Think up and prepare a bunch of traps for the attacker.
  2. Spread them all over the environment, then hope the attacker discovers them.
  3. When the attacker touches one of the traps, you go and take them out.
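By contrast, the trap-based flow needs no baseline and no tuning at all; a minimal sketch, where every trap name is hypothetical (no real tool or environment is implied):

```python
# Step 1: think up and prepare traps -- honeytoken credentials, a decoy
# share, and a decoy host (all names here are made up for illustration)
TRAPS = {
    "cred:svc_backup_admin",        # account no legitimate process should use
    "share:\\\\fs01\\finance_old",  # decoy file share
    "host:10.0.9.99",               # decoy server with no business purpose
}

# Step 2: spread them across the environment and wait
def trap_touched(event: str) -> bool:
    """Any interaction with a trap is an alert -- a single event suffices."""
    return event in TRAPS

# Step 3: when a trap is touched, send the analyst straight to the attacker
observed = ["cred:real_user", "host:10.0.4.21", "cred:svc_backup_admin"]
alerts = [e for e in observed if trap_touched(e)]
if alerts:
    print(f"ALERT: trap touched: {alerts}")
```

The design choice is the mirror image of analytics: instead of modeling everything normal and flagging deviations, you plant things that have no legitimate use, so any touch is signal.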

See the difference?

Now, we can argue:

  • WHICH OF THE APPROACHES IS BETTER FOR DETECTING THE TRUE “UNKNOWN UNKNOWNS”?
  • WHICH OF THE APPROACHES IS BETTER FOR DETECTING THE TRUE “UNKNOWN UNKNOWNS” GIVEN MOST ORGANIZATIONS’ LIMITED RESOURCES?




Thoughts on Deception vs Analytics, or Can Analytics Catch True Unknown Unknowns?


  1. Sanjay Kalra says:

In the case of deception, you have to build traps everywhere and make them look real for them to be effective. In the case of analytics, if you capture all the relevant data without sampling and create a baseline, it’s a lot more effective at catching unknowns in cloud/DC environments than deception.

    • Thanks for the comment and for your vote for analytics. I think analytics wins on comprehensiveness but loses on bias toward known threats, or on noise from many anomalies.

    • Lance Spitzner says:

      Sanjay, twenty years ago your comments were spot on, but with today’s deception technology / virtualization you can auto-populate your entire network at literally the touch of a button.

  2. Haroon Meer says:

    I’m obviously biased, but not just because we build Thinkst Canary. I’m biased because I’ve now seen dozens and dozens of instances where companies who have waited years for the ideal log management / SIEM setup to roll out, achieve tangible wins by deploying Canaries.

    It’s a common trope that you need to deploy deception “everywhere” or that the decoys need to blend in perfectly. A relatively small number of birds can cover huge swaths of space (since you are aiming to detect lateral movement), and often even Canaries that don’t fit in are “touched” (exactly because they don’t fit in).

    You should have analytics and you should instrument things meaningfully… but in the time it took to write this, you could have deployed 5 Canaries. Why wouldn’t you do both?

  3. Charl says:

    I’m inclined to agree with Haroon, which is ironic since I run an MDR outfit. Deception is a great high-fidelity indicator of compromise. Analytics adds additional value by:
    a. providing the additional context needed to investigate a deception trigger;
    b. promising detection for behaviours where deception is difficult or not possible;
    c. generally providing visibility in a more comprehensive and general way;
    d. collecting and storing data historically for forensics or investigations.

  4. Tony says:

    Obviously I’m a bit biased since I recently moved over to work in the deception field (again). Some great comments above; however, one thing I’ve found is that many defenders in the industry have outdated knowledge of deception technology or have only experienced it in the open-source world. Today, realistic deception platforms are running in large Fortune 500 companies around the globe, finding things that most legacy systems have missed. If you have limited resources, deception can be easily deployed and help your organization mature quickly with actual high-fidelity alerts. If your organization is mature, it allows you to dive deeper into adversary activity and can integrate with and feed your existing toolsets, making them more effective as well. Large, well-resourced organizations should do both; less mature orgs with fewer resources will find a much higher return with deception. There’s a reason my company is #31 on Deloitte’s Fast500 this year. Deception works.

  5. Lance Spitzner says:

    To be honest, it’s not so much which one is better (they both have advantages and disadvantages); however, I believe most orgs vastly overestimate analytics and underestimate deception. Deception has come light years from the original Honeynet Project days. I strongly feel orgs need to take a hard look at what today’s deception brings, including greatly simplified detection/hunting and the ability to capture new tools/techniques.

  6. Pete says:

    Well Anton… As always, you pose the interesting ones! Not least because this comes up a lot, but because realistically there is no way to tell which is more effective; there is no perfect test bed to test any of this, and what the hell is ‘Analytics’ anyway!

    My advice in these instances is simple: if you are tackling security use cases with a whole host of tools, you are hugely successful, and you just want more, then security analytics products are for you. If you want to regularly catch your more ‘inquisitive’ staff, or that compromised account, then deception is best. But honestly, this is like the dealer asking the customer when buying a car, “Do you want the engine or the gearbox?”

    A healthy security operation uses a variety of methods to detect threats, then tries to reduce the overhead of repeatedly detecting these threats time and time again; this means signatures, rules, correlation, analytics, deception… and even AI and ML! (Maybe.)

    The limited-staff issue is a good twist. Of course deception will be far less noisy than anything else… but do you care about those particular use cases more than the ones your analytics platform picks up? Maybe, maybe not.

    Bottom line: there is no quick way to success here. Make sure it’s relevant to protecting your reputation or your bottom line, try every technique that makes sense, but for the love of God make sure you know what it is you’re trying to protect and why.

  7. Fernando says:

    I am biased, as I have worked on analytics products and now on deception. Analytics is very good but can be fooled. A solid deception program with full coverage of decoys, like endpoints, servers, and specialized equipment, can fool malicious actors and Red Teams alike and help cut dwell time significantly.

  8. Ilya O. says:

    In my opinion, the most fundamental difference is that for a deception tool to generate an alert, just one action by an attacker is required, whereas analytics tools usually require a statistically significant change to occur before alerting.

    Therefore, deception could be more sensitive to a malicious entity in the network, while potentially also generating more FPs. On the other hand, analytics could sense something is wrong but might take a longer time to discover it and, more importantly, requires that noticeable change to be present in the data.

  9. Ofer Israeli says:

    I’m thrilled that you posed this super important question Anton.

    In my humble opinion, the answer for detecting unknown unknowns is clearly deception. The reason lies in analytics models’ inherent trade-off: detect anything that is anomalous and generate tons of false positives, or fine-tune the models to detect only very clearly anomalous behaviour and miss the stealthy attackers.
    The unknown unknowns are typically associated with a sophisticated threat actor, who is very well aware of the analytics running behind the scenes and ensures that the steps taken stay below the radar, just below the tuning level that the analytics models use to avoid bombarding the organization with far too many false positives.

    Deception, on the other hand, derives from a very different approach, as you noted in the process, and given its very low false-positive rate, it does not need to make those compromises and stays effective for all threats.

    I do think there is merit to analytics-based approaches, but it’s also important to understand what they are good at solving and what they are not; they are tuned more for commodity threats than for the high-end ones.

    As for limited resources, the answer there just becomes clearer. The implementation of sensors across the network, at different sites, geographies, etc., and/or the collection of *all* data that could be relevant is practically impossible, as so many organizations have found out over the last few years. So you end up with a limited deployment and a limited dataset to work off of (after you’ve invested a heck of a lot of man-years), and you hope that the attacker will move through what you see.
    Deception today, as Lance pointed out, is super easy to implement and maintain, so you don’t have that limiting factor as analytics does.

    I can tell you that at Illusive, we’ve been engaged in a very substantial number of Red Team exercises where many other security solutions were deployed, analytics included, and we have been the only solution to alert on the activity. So while not quite mathematical proof, these are hard real-world experiments that strengthen the above.




Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.