So, you decided! That “Vendor M”, “Vendor C” or “Vendor E” tool is going to all of your servers and endpoints (all 100, 1,000 or – pity your soul! – all 20,000 of them). You push that button on your SCCM server and 20,000 agents drop on those unsuspecting Windows machines (ha! I wish it were that easy, really…but I digress). A shiny new management and data access console lights up on your own laptop. You can now see every process launch, every DLL injection, every registry change, every connection, every PDF opening …
What’s next? Use cases: specific interactions with the new system aimed at solving specific security problems.
Here is my early look at tool use cases, for my current research paper:
- Data search and investigations
- Current (or last known) system state search: search all systems (as they are now) for a running process or other attribute relevant to an incident investigation
- Historical search: search the past state of systems for a file by size or other parameter, relevant to figuring out the initial compromise (for example)
- Indicator checks: periodically scan all systems for a list of known artifacts, derived from threat intelligence or other sources
- Suspicious activity detection
- Activity scoring and anomaly detection: algorithmically compute an anomaly/concern/risk score, show rarely launched processes, discover malware-like behavior, etc.
- Rule-based detection: alert the analyst when a specific process launches, or fire other alerts based on rules typically created from past investigations
- Threat intelligence-based detection: similar to the indicator checks and rule-based use cases above, but performed in real time on the system (it may well be the same use case, but the goal and implementation may differ somewhat)
- Data exploration
- Anomaly reports: summarize and review collected system data, external threat data, shared intelligence data, etc., and then use the results as “threads to pull” for additional searches.
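To make the indicator check use case concrete, here is a minimal sketch in Python of how such a scan might work conceptually. This is not any vendor’s implementation; the file paths, contents, and the indicator itself are all made up for illustration – the idea is simply “hash what’s on the endpoint, compare against a known-bad list from threat intelligence.”

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical indicator list, as it might arrive from a threat intel feed.
# (Here we derive it from a placeholder payload so the example is self-contained.)
bad_hashes = {sha256_hex(b"malicious payload bytes")}

# Simulated snapshot of files found on one endpoint (path -> content).
endpoint_files = {
    "C:/temp/dropper.exe": b"malicious payload bytes",
    "C:/Windows/notepad.exe": b"benign content",
}

def indicator_scan(files: dict, indicators: set) -> list:
    """Flag any file whose SHA-256 matches a known-bad indicator."""
    return sorted(p for p, data in files.items() if sha256_hex(data) in indicators)

hits = indicator_scan(endpoint_files, bad_hashes)
print(hits)  # ['C:/temp/dropper.exe']
```

In a real tool the same comparison could run either on the endpoint itself or against a central data repository – which, as noted below, changes what the architecture must support.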
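Similarly, the “rarely launched processes” flavor of anomaly scoring can be sketched in a few lines: count how many machines each process appears on, and treat low prevalence across the fleet as a crude anomaly score. Again, the hostnames, process names, and the 25% threshold are invented for illustration only.

```python
from collections import Counter

# Hypothetical fleet telemetry: machine -> set of observed process names.
fleet = {
    "host01": {"svchost.exe", "explorer.exe", "outlook.exe"},
    "host02": {"svchost.exe", "explorer.exe"},
    "host03": {"svchost.exe", "explorer.exe", "evil_rat.exe"},
    "host04": {"svchost.exe", "explorer.exe", "outlook.exe"},
}

def rare_processes(fleet: dict, max_prevalence: float = 0.25) -> dict:
    """Return processes seen on at most max_prevalence of the fleet,
    mapped to the hosts running them -- rarity as a crude anomaly score."""
    counts = Counter(p for procs in fleet.values() for p in procs)
    n = len(fleet)
    return {p: sorted(h for h, procs in fleet.items() if p in procs)
            for p, c in counts.items() if c / n <= max_prevalence}

print(rare_processes(fleet))  # {'evil_rat.exe': ['host03']}
```

Real tools layer much more on top (scoring models, behavior heuristics), but the core “what is rare across my environment?” question looks like this.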
It should be noted that some of the use cases require that the tool centralize the data in near-real time, or perform some of the checks on the endpoint itself rather than on the central data repository. Other functions can be performed with the last-captured data set. BTW, notice something peculiar? A parallel to network forensics tool use cases!
P.S. This type of tool still needs a generic name. Vendors, please think harder about the name 🙂 We all know that a category name can make or break the space…. At this point I am leaning towards “endpoint visibility” as my label for these tools in the paper.