This is a debate post, and not a position post. The question alluded therein (hey… I said “alluded therein” to sound like Dan Geer, no?) has been bugging us for some time, perhaps for 2+ years.
However, we deferred this debate and hid behind the fact that most organizations don’t really compare broad security approaches like “do deception” or “do analytics” (or even “do network” or “do endpoint” for detection) in furtherance of a particular goal. The extra-large enterprises always click “all of the above” while others just want to compare vendors.
But I think the time has come to tackle this, given that this quarter we are looking at both deception tools/practices and network analytics tools.
First, it is very clear that there are sets of security problems where the question of “how to handle it?” or “how to detect it?” can be solved in several fundamentally different ways.
Let’s take many people’s recent favorite: attacker’s lateral movement detection (ATT&CK link).
So far, we’ve seen organizations use these approaches for detecting attacker movement in their environment:
- Network-centric: NTA (flow-based or L7 [better!]), and NSM as an approach.
- Endpoint-centric: EDR or various endpoint interrogation tools.
- Log-centric: SIEM or UEBA with relevant logs (network, endpoint, DNS, etc.).
- Deception-centric: decoys, lures and other juicy honey-tools and deception methods.
[of course, there is still a “WHAT LATERAL? WE WILL STOP THEM AT THE BORDER!” crowd, but I am not talking about those people today]
Ok, so far, nobody is running away screaming “WROOOOONG!” Fine!
But one method is not like the others. As we alluded in Better Data or Better Algorithms? back in 2016, methods #1 – #3 above work like this:
- gather lots of data from network, endpoint, logs, or some combination thereof;
- think up a sneaky method to glean the insight you need from the data you just collected;
- when this insight is gleaned, show it to an appropriate human who then runs out and takes the attacker out (not out on a date, mind you, but out with the trash).
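The three-step loop above can be sketched in a few lines of Python. This is a toy illustration with made-up names and thresholds, not any product's actual logic:

```python
from collections import Counter

def analytics_detection(events, baseline_rate, threshold=3.0):
    """Toy analytics loop: collect events, compare volumes to a baseline,
    and return the hosts a human analyst should look at."""
    counts = Counter(e["host"] for e in events)    # 1. gather lots of data
    alerts = []
    for host, n in counts.items():                 # 2. glean insight from it
        if n > baseline_rate * threshold:          #    (here: a crude volume check)
            alerts.append(host)
    return alerts                                  # 3. hand the insight to a human

# Hypothetical event stream: one noisy workstation, one quiet one.
events = [{"host": "ws-42"}] * 40 + [{"host": "ws-07"}] * 3
print(analytics_detection(events, baseline_rate=5))  # → ['ws-42']
```

The point of the sketch is the shape of the loop: the insight only exists after the data has been collected and mined.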
However, #4 (deception) does not work like this; it works more like this:
- think up and prepare a bunch of traps for the attacker
- spread them all over the environment and then hope they are discovered by the attacker
- when the attacker touches one of the traps, you go and take them out.
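The deception loop above can be sketched just as briefly. Again a toy illustration; the trap names are invented for the example:

```python
# Toy deception loop: plant traps, then treat ANY touch as an alert.
# The decoy share and decoy account names below are hypothetical.
PLANTED_TRAPS = {r"\\fileserver\hr-payroll", "svc-backup-admin"}

def on_access(resource, source_host):
    """A single interaction with a planted trap is, by construction,
    suspicious -- no baseline or data mining is needed."""
    if resource in PLANTED_TRAPS:
        return f"ALERT: {source_host} touched decoy {resource}"
    return None  # real resources are not watched by this mechanism

print(on_access("svc-backup-admin", "ws-13"))  # a single touch fires the alert
```

Note that there is no analysis step at all: the insight was baked in when the trap was planted.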
See the difference?
Now, we can argue:
- WHICH OF THE APPROACHES IS BETTER FOR DETECTING THE TRUE “UNKNOWN UNKNOWNS”?
- WHICH OF THE APPROACHES IS BETTER FOR DETECTING THE TRUE “UNKNOWN UNKNOWNS” GIVEN MOST ORGANIZATIONS’ LIMITED RESOURCES?
Posts related to deception:
- Our “Applying Deception Technologies and Techniques to Improve Threat Detection and Response” Paper is Published (2016)
- APT-Ready? Better Threat Detection vs Detecting “Better” Threats?
- Better Data or Better Algorithms?
- Tricky: Building a Business Case for A Deception Tool?
- It Is Happening: We Are Starting Our Deception Research!
- “Deception as Detection” or Give Deception a Chance?
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
In the case of deception, you have to build traps everywhere and make them look real for them to be effective. In the case of analytics, if you capture all the relevant data without sampling and create a baseline, it is a lot more effective at catching unknowns in cloud/DC environments than deception.
Thanks for the comment and for your vote for analytics. I think analytics wins on comprehensiveness but loses on biases in favor of known threats OR noise due to many anomalies.
Sanjay, twenty years ago your comments were spot on, but with today’s deception technology / virtualization you can auto-populate your entire network with literally the touch of a button.
I’m obviously biased, but not just because we build Thinkst Canary. I’m biased because I’ve now seen dozens and dozens of instances where companies that waited years for the ideal log management / SIEM setup to roll out achieved tangible wins by deploying Canaries.
It’s a common trope that you need to deploy deception “everywhere” or that you need it to blend in perfectly. A relatively small number of birds can cover huge swaths of space (since you are aiming to detect lateral movement), and often even Canaries that don’t fit in are “touched” (exactly because they don’t fit in).
You should have analytics and you should instrument things meaningfully… but in the time it took to write this, you could have deployed 5 Canaries. Why wouldn’t you do both?
I’m inclined to agree with Haroon, which is ironic since I run an MDR outfit. Deception is a great high-fidelity indicator of compromise. Analytics adds additional value by:
a. providing the additional context needed to investigate a deception trigger;
b. promising detection for behaviours where deception is difficult or not possible;
c. generally providing visibility in a more comprehensive and general way;
d. collecting and storing data historically for forensics or investigations.
Obviously I’m a bit biased, since I recently moved over to work in the deception field (again). Some great comments above; however, one thing I’ve found is that many defenders in the industry have outdated knowledge of deception technology, or have only experienced it in the open source world. Today, realistic deception platforms are running in large Fortune 500 companies around the globe, finding things that most legacy systems have missed. If you have limited resources, deception can be easily deployed and can help your organization mature quickly with actual high-fidelity alerts. If your organization is mature, it allows you to dive deeper into adversary activity and can integrate with and feed your existing tool-sets, making them more effective as well. Large, well-resourced organizations should do both; less mature orgs with fewer resources will find a much higher return with deception. There’s a reason my company is #31 on Deloitte’s Fast500 this year. Deception works.
To be honest, it’s not so much which one is better (they both have advantages and disadvantages); however, I believe most orgs vastly overestimate analytics and underestimate deception. Deception has come light years from the original Honeynet Project days. I strongly feel orgs need to take a hard look at what today’s deception brings, including greatly simplified detection/hunting and the ability to capture new tools/techniques.
Well Anton… as always, you pose the interesting ones! Not least because this comes up a lot, but because realistically there is no way to tell which is more effective: there is no perfect test bed for any of this, and what the hell is ‘Analytics’ anyway!
My advice in these instances is simple: if you are tackling security use cases with a whole host of tools, you are hugely successful, and you just want more, then security analytics products are for you. If you want to regularly catch out your more ‘inquisitive’ staff, or that compromised account, then deception is best. But honestly, this is like the dealer asking the customer buying a car, “do you want the engine or the gearbox?”
A healthy security operation uses a variety of methods to detect threats, and then tries to reduce the overhead of repeatedly detecting those threats time and time again; this means signatures, rules, correlation, analytics, deception… and even AI and ML! (maybe).
The limited staff issue is a good twist; of course deception will be far less noisy than anything else… but do you care about those particular use cases more than the ones your analytics platform picks up? Maybe, maybe not.
Bottom line: there is no quick way to success here. Make sure it’s relevant to protecting your reputation or your bottom line, try every technique that makes sense, but for the love of God make sure you know what it is you’re trying to protect and why.
I was disappointed that you said “because realistically there is no way to tell which is more effective”, and before I could quite figure out why Ofer’s reply sums it up nicely. Talk to red teams who’ve emulated real world adversary TTPs, so hopefully as unconstrained by limitations of time and budget as the genuine attackers, and ask them if Analytics or Deception has caught them out most often…
Thanks a lot for the comments and sorry for a much delayed response.
>Talk to red teams who’ve emulated real world adversary TTPs, so hopefully as unconstrained by limitations of time and budget as the genuine attackers, and ask them if Analytics or Deception has caught them out most often…
We did. Analytics wins by a wide margin when the question is asked that particular way. But to me this is not the only way to study this…
I am biased, but I have worked on analytics products and now deception. Analytics is very good but can be fooled. A solid deception program with full coverage of decoys (endpoints, servers, and specialized equipment) can fool malicious actors and red teams alike, and help cut dwell time significantly.
In my opinion, the most fundamental difference is that for a deception tool to generate an alert, just one action by an attacker is required, whereas analytics tools usually require a statistically significant change to occur before alerting.
Therefore, deception could be more sensitive to a malicious entity in the network, while potentially also generating more FPs. On the other hand, analytics could sense something is wrong but might take longer to discover it and, more importantly, requires that noticeable change to be present in the data.
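This difference can be made concrete with a toy sketch (illustrative numbers and thresholds, not any vendor's detection logic): the decoy check fires on a single touch, while the analytics check stays silent until the change clears a z-score threshold.

```python
import statistics

def decoy_alert(touches):
    # Deception: a single action against a trap is enough to alert.
    return touches >= 1

def analytics_alert(history, today, z_threshold=3.0):
    # Analytics: alert only when today's value is a statistically
    # significant outlier relative to the learned baseline.
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against a flat baseline
    return (today - mean) / stdev > z_threshold

history = [10, 12, 11, 9, 10, 13, 11]  # toy baseline: daily logon counts
print(decoy_alert(1))                   # True: one touch of a decoy fires
print(analytics_alert(history, 14))     # False: a small bump stays under the bar
print(analytics_alert(history, 40))     # True: a large spike is clearly significant
```

A stealthy attacker only has to keep `today` near `mean` to evade the second check; the first check has no such knob to hide under.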
I’m thrilled that you posed this super important question Anton.
In my humble opinion, the answer for detecting unknown unknowns is clearly deception. The reason lies in the inherent trade-off of analytics models: detect anything anomalous and create tons of false positives, or fine-tune the models to detect only very clearly anomalous behaviour and miss the stealthy attackers.
The unknown unknowns are typically associated with a sophisticated threat actor, who is very well aware of the analytics running behind the scenes and ensures that the steps taken stay below the radar: just below the tuning level that the analytics models use to avoid bombarding the organization with far too many false positives.
Deception, on the other hand, derives from a very different approach, as you noted in the process description, and given its very low false-positive rate, does not need to make those compromises and stays effective against all threats.
I do think there is merit to analytics-based approaches, but it’s also important to understand what they are and are not good at solving: they are tuned more for commodity threats than for the high-end ones.
As for limited resources, the answer there just becomes clearer. Implementing sensors across the network, at different sites and geographies, and/or collecting *all* the data that could be relevant is practically impossible, as so many organizations have found out over the last few years. So you end up with a limited deployment and a limited dataset to work off of (after you’ve invested a heck of a lot of man-years), and hope that the attacker moves through what you see.
Deception today, as Lance pointed out, is super easy to implement and maintain, so you don’t have that limiting factor as analytics does.
I can tell you that at Illusive, we’ve been engaged in a very substantial number of red team exercises where many other security solutions were deployed, analytics included, and we were the only solution to alert on the activity. So while not quite mathematical proof, these are hard real-world experiments that strengthen the points above.