After much agonizing, we (Augusto and I) have settled on the following list of UEBA / UBA use cases for our upcoming UEBA technology comparison. Here they are:
- Compromised account detection: this is a “classic UBA” usage – study account authentication and usage information to conclude that the account is being used by a malicious party [or, at least, by a different party]; this is often powered by some logic to relate multiple accounts to the same person and/or logic to build sessions of user activity.
- Compromised system/host/device detection: by detecting things like attacker lateral movement, C&C activity, access to bad domains [unknown ones, not just via TI!], various telling host and network anomalies, etc., we reach the conclusion that a system is under malicious control; this use case covers many data sources and a lot of fairly dissimilar methods of “sense-making”
- Data exfiltration detection: notably, DLP has not fully solved this one (ha!), but it has since become a popular UEBA data source; detecting data theft (“exfil”, if you want to sound cool!) by trusted insiders and outsiders (as well as their malware) is a common UEBA tool use case.
- Insider access abuse: this is the fuzziest one on the list; it focuses on detecting malicious and risky behavior by legitimate trusted insiders, and includes all forms of privileged access abuse and misuse, among other things; it is typically powered by user profiling, outlier detection and risk scoring of the results
- Gaining additional insight about the environment: a broad use case where UEBA tools are used for gaining better situational awareness; this also includes improved alerts prioritization and support for triage and investigation activities (yes, if you have to ask, hunting too)
- Custom use case: a good UEBA tool should be able to address a weird, client-specific user behavior analytics scenario, ideally without requiring coding or [much] data science knowledge on the part of the client.
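To make the “user profiling, outlier detection and risk scoring” pattern from the insider-abuse use case concrete, here is a minimal, hypothetical sketch: each user’s activity is baselined against their own history, and deviations are converted into a bounded risk score. All names, thresholds and the scoring formula are illustrative assumptions, not any vendor’s actual logic.

```python
# Hypothetical sketch of per-user baseline profiling with simple
# z-score outlier detection and risk scoring. Real UEBA products use
# far richer features and models; this only illustrates the shape
# of the approach.
from statistics import mean, stdev

def risk_score(history, today):
    """Score today's activity count against the user's own baseline.
    Returns 0-100; higher means more anomalous."""
    if len(history) < 2:
        return 0  # not enough baseline to judge yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        sigma = 1.0  # avoid divide-by-zero on a perfectly flat baseline
    z = (today - mu) / sigma
    # only score upward deviations, and squash into a bounded 0-100 value
    return round(min(max(z, 0.0) * 25, 100))

# Example: a user who normally touches ~10 sensitive files a day
baseline = [9, 11, 10, 12, 8, 10, 11]
print(risk_score(baseline, 10))  # typical day  -> 0
print(risk_score(baseline, 60))  # sudden spike -> 100
```

A rule-based detector would instead hard-code a fixed threshold for everyone; the point of profiling is that the threshold is learned per user, which is exactly why these tools need a baselining period before they can be evaluated.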
As a side note, to those of you who object to the above because “these are too high-level”…OK…sure, they are. So you can treat them as use case groups or use case types. Or, clusters, if you want to sound smart and “data-science-y”…
BTW, compare/contrast to top SIEM use cases (there are of course many more use cases for SIEM than the top ones mentioned).
Thoughts? Ideas? Additions? Complaints? Silly remarks about “but we have AI!!!”? 🙂
Related blog posts about security analytics:
- UEBA Clearly Defined, Again?
- Comparing UEBA Solutions (by Augusto)
- What Should Your UEBA Show: Indications or Conclusions?
- UEBA Shines Where SIEM Whines?
- The Coming UBA / UEBA – SIEM War!
- Next Research: Back to Security Analytics and UBA/UEBA
- Sad Hilarity of Predictive Analytics in Security?
- Security Analytics Webinar Questions – Answered
- On Unknown Operational Effectiveness of Security Analytics Tooling
- My “Demystifying Security Analytics: Sources, Methods and Use Cases” Paper Publishes
- Now That We Have All That Data What Do We Do, Revisited
- Killed by AI Much? A Rise of Non-deterministic Security!
- Those Pesky Users: How To Catch Bad Usage of Good Accounts
- Security Analytics Lessons Learned — and Ignored!
- Security Analytics: Projects vs Boxes (Build vs Buy)?
- Do You Want “Security Analytics” Or Do You Just Hate Your SIEM?
- Security Analytics – Finally Emerging For Real?
- Why No Security Analytics Market? <- important read for VCs and investors!
- More On Big Data Security Analytics Readiness
- 9 Reasons Why Building A Big Data Security Analytics Tool Is Like Building a Flying Car
- “Big Analytics” for Security: A Harbinger or An Outlier?
This is a solid summary at the high level. I think the biggest challenge for UBA buyers is that use cases #1-#3 can only be simulated late in a POC, once baselines have been established, while #4 (insider access abuse) is a little more tractable if you know who deserves privilege, and #5 differs in its definition for every organization.
This all makes it difficult to compare solutions prior to purchase.
At least they all have AI to answer these questions for you :-)!
Sorry for the delayed response. Indeed, I aimed at a solid “high-ish” level summary. IMHO, #1-#4 all require profiling, but frankly it seems to me that the insider use case requires MORE profiling, unless the vendor “cheats” and falls back to all-rule-based detection…
And of course AI FTW 🙂
We’ve seen a number of POCs where clients asked that we run on a past incident and present findings. In several cases we identified that the compromise had extended beyond the assets that had been cleaned up – lateral movement, anyone?
Running on historical data is not a problem if the UEBA solution can fetch historical data from a log management system.
Thanks for the comment — indeed, “use historical data to test” will be one of the pieces of advice we will give in the paper.