On Threat Intelligence Management Platforms

by Anton Chuvakin  |  March 31, 2014

I was writing this post on threat intelligence (TI) management platform requirements (TIMP? Do we need another acronym?), and I really struggled with it since most such information I have comes from non-public sources that I cannot just dump on the blog.

In a fortunate turn of events, Facebook revealed their “ThreatData framework” (which is, essentially, a TIMP for making use of technical TI data) and I can now use this public reference to illustrate some of the TIMP requirements without breaking any NDAs and revealing deep secrets of the Universe.

The FB article says “The ThreatData framework is comprised of three high-level parts: feeds, data storage, and real-time response.” So, in essence your TIMP should be able to:

  1. Allow you to collect TI data (feeds in standard formats such as OpenIOC or STIX, formatted data such as CSV, manual imports, etc.) – you can go easy and support nicely formatted inputs only, or you can go fancy and have your pet AI parse PDFs and hastily written emails with vague clues.
  2. Retain TI data for search, historical analysis and matching against the observables (past and current) [the FB approach has a twist in its use of tiered storage, with each tier optimized for a different type of analysis, which I find insanely cool]
  3. Normalize, enrich and link collected TI data (automatically and manually by collaborating analysts) to create better TI data to be used in various security tools for forward- and backward-looking matching [along the lines of what was described here]
  4. Provide a search and query interface to actually look at the data manually, in the course of IR, threat actor research, etc.
  5. Distribute / disseminate the cleaned data to relevant tools: network indicators to NIPS for real-time matching and to NFT for historical matching, various indicators to SIEM for historical and real-time matching, host indicators to ETDR for matching, etc. [roughly as described here]. Of course, sharing the data (sanitized, aggregated and/or filtered) with other organizations happens here as well. (Rough code sketches of these steps follow below.)
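To make this a bit more concrete, here is a minimal, hypothetical sketch (in Python; not from the FB post – the schema, field names and file layout are purely my own illustration) of steps 1, 2 and 4 in toy form: collect a CSV indicator feed, normalize each record, retain it in a small local store, and query it by observable value.

    # Minimal, hypothetical TIMP sketch (names and schema are illustrative only):
    # collect a CSV indicator feed, normalize it, retain it in SQLite, query it.
    import csv
    import sqlite3
    from datetime import datetime, timezone

    DB = sqlite3.connect("timp.db")
    DB.execute("""CREATE TABLE IF NOT EXISTS indicators (
        type TEXT, value TEXT, source TEXT, first_seen TEXT, context TEXT,
        PRIMARY KEY (type, value, source))""")

    def store_indicator(ind_type, value, source, context=""):
        """Normalize and retain: trim/lowercase the value, record first-seen time."""
        DB.execute("INSERT OR IGNORE INTO indicators VALUES (?, ?, ?, ?, ?)",
                   (ind_type, value.strip().lower(), source,
                    datetime.now(timezone.utc).isoformat(), context))
        DB.commit()

    def ingest_csv_feed(path, source):
        """Collect: parse a feed file with columns type,value,context."""
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                store_indicator(row["type"], row["value"], source, row.get("context", ""))

    def lookup(value):
        """Search/query: has this observable been reported, and by whom?"""
        return DB.execute("SELECT type, value, source, first_seen, context "
                          "FROM indicators WHERE value = ?",
                          (value.strip().lower(),)).fetchall()

    if __name__ == "__main__":
        store_indicator("domain", "EVIL.example.com", "manual-import", "phishing campaign")
        print(lookup("evil.example.com"))

A real TIMP would obviously layer deduplication across sources, indicator aging/expiration, enrichment and analyst collaboration on top of this, but the collect / retain / query skeleton is the same.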

Keep in mind that their system seems to be aimed at technical TI only. Some other organizations actually feed strategic TI into such systems as well, especially if such TI is linked to technical indicators (or simply to have it all in one place for text keyword searching).

Also, keep in mind that some organizations prefer to have ONE system for TI and local monitoring (think of it as using a SIEM as your TIMP – but a SIEM that you wrote yourself based on Hadoop, for example). Or, you can keep your TIMP separate (as described in this post) and then reformat and distribute TI from the TIMP to the infrastructure components that actually do the monitoring and matching, including SIEM, NFT, NIPS, WAF, SWG, ETDR, whatever.
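For the "separate TIMP" option, the dissemination step can be as mundane as reformatting the stored indicators into whatever each downstream tool ingests. Continuing the toy SQLite store from the sketch above (the output formats below are illustrative placeholders, not any vendor's actual import format):

    # Hypothetical dissemination step: reformat stored indicators for downstream
    # tools (output formats are illustrative placeholders, not real product formats).
    import csv
    import sqlite3

    DB = sqlite3.connect("timp.db")

    def export_csv_for_siem(path):
        """Dump all indicators as a flat CSV lookup table for a SIEM watchlist."""
        rows = DB.execute("SELECT type, value, source, context FROM indicators").fetchall()
        with open(path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["type", "value", "source", "context"])
            writer.writerows(rows)

    def export_ip_blocklist(path):
        """Emit a plain-text list of IP indicators for a NIPS/firewall blocklist."""
        rows = DB.execute("SELECT value FROM indicators WHERE type = 'ip'").fetchall()
        with open(path, "w") as fh:
            fh.writelines(ip + "\n" for (ip,) in rows)

    if __name__ == "__main__":
        export_csv_for_siem("ti_siem_lookup.csv")
        export_ip_blocklist("ti_ip_blocklist.txt")

The point is that the matching itself still happens in the SIEM, NIPS, ETDR, etc.; the TIMP only curates the indicators and pushes them out.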

BTW, here is one more public, first-hand, non-vendor data source on threat intelligence management platforms – The MANTIS Cyber-Intelligence Management Framework.

Enjoy!!

BTW, I am starting to hear some whining that lately I’ve only been writing stuff useful for the 1%-ers (NFT, ETDR, big data analytics, advanced IR). Said whining will be addressed head-on in an upcoming blog post : – )

