Anton Chuvakin
A member of the Gartner Blog Network

Research Director
1 year with Gartner
12 years in the IT industry

Anton Chuvakin is a research director in Gartner's IT Security and Risk Management group. Before Mr. Chuvakin joined Gartner, his job responsibilities included security product management, evangelism…

SIEM Webinar Questions – Answered

by Anton Chuvakin  |  April 14, 2014  |  4 Comments

Last year, I did this great SIEM webinar on “SIEM Architecture and Operational Processes” [free access to recording! No Gartner subscription required] and received a lot of excellent questions. This is the forgotten post with said questions.

The webinar abstract read: “Security information and event management (SIEM) is a key technology that provides security visibility, but it suffers from challenges with operational deployments. This presentation will reveal a guidance framework that offers a structured approach for architecting and running an SIEM deployment at a large enterprise or evolving a stalled deployment.”

Before the attendee Q&A, I asked one question myself:

Q: Are you satisfied with your SIEM deployment?

[Poll results image: attendee responses on SIEM deployment satisfaction]

You make your own conclusions from that one.

And here is the attendee Q&A:

Q: Do you have tips for starting log management (for SIEM) in a heavily outsourced environment, i.e. one where most servers, routers, firewalls, etc. are managed by 3rd parties?

A: Frankly, the central problem in such an environment is making changes to systems. Can you change that /etc/syslog.conf or that registry setting when needed, quickly and efficiently? Beyond that, I’ve seen outsourced IT with good log monitoring and I’ve seen traditional IT with bad log monitoring, so success is not chained to the delivery model. If anything, I’d watch the outsourced environment more closely, since “if you cannot control what they do, at least monitor them.”

 

Q: To what degree do you think it is realistic, and to what degree useful, to collect and analyze logs from Windows workstations (endpoints), rather than just servers?

A: It used to be rare, and it is still not that common, but more organizations are doing it. In fact, ETDR tools have emerged to collect even more security telemetry from the endpoints, including plenty of activities that cannot be logged. In general, desktop/laptop [and soon mobile?] logging is much more useful now than it used to be. Also, SIEM tool scalability (in both raw EPS and logging devices/sources) is better now and thus enables more user endpoint logging.
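For readers weighing that scalability point, here is a minimal back-of-the-envelope sketch in Python. The per-endpoint event volume and average event size are illustrative assumptions, not benchmarks; substitute figures measured in your own environment.

    # Rough EPS / volume estimate for adding workstation logging to a SIEM.
    # The numbers passed in below are made up purely for illustration.

    def estimate_eps(endpoints, events_per_endpoint_per_day, avg_event_bytes=500):
        """Return (sustained events per second, GB per day) for a given endpoint count."""
        events_per_day = endpoints * events_per_endpoint_per_day
        eps = events_per_day / 86400.0                       # seconds in a day
        gb_per_day = events_per_day * avg_event_bytes / 1e9  # uncompressed estimate
        return eps, gb_per_day

    if __name__ == "__main__":
        # Example: 5,000 Windows workstations, ~2,000 security events each per day.
        eps, gb = estimate_eps(endpoints=5000, events_per_endpoint_per_day=2000)
        print(f"~{eps:.0f} EPS sustained, ~{gb:.1f} GB/day uncompressed")

Even this crude math shows why endpoint logging strains a SIEM sized only for servers and network devices, and why better scalability changes the picture.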

 

Q: For a mid-size company, what percent of time would a typical SIEM analyst spend on monitoring / management of the tool versus outstanding incident management?

A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want to have one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc) since it would be either one busy person or one really talented one. Overall, you want to spend a small minority of time on the management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.

 

Q: How do you reconcile (at a high level) an overall SIEM effort with a Syslog or “IT search” type tool selection? We have enterprise architects who say our Operational Intelligence should include SIEM, but Ops and Security aren’t on that same page.

A: Nowadays, we see both “SIEM for security, general log management for ops” and “single system for both ops and security.” Organizations may choose to run a separate system for operational logging (as well as a SIEM) or choose to run one system and feed the logs from one place into different components. Many, many organizations are still very siloed and would actually prefer to do separate log collection in separate tools. Theoretically, this is unhealthy and inefficient, but for many organizations this is also the only way they can go…

 

Q: What kind of role do you see “Security Analytics” or the new generation SIEM solutions playing versus the traditional SIEM solutions? What kind of market adoption are you seeing of these new solutions versus the traditional SIEM ones?

A: In our recent paper on the topic, we tried to predict the same evolution as well as reconcile such SIEM evolution with new tools leveraging big data technologies and new analytic algorithms. At this point, new analytic approaches remain for the “Type A of Type A” organization with the most mature, well-funded and dedicated security operations teams (example). Many organizations can barely operate a SIEM and are nowhere near ready for the big data-style tools and methods. In essence, “if you think SQL is hard, stay outside of a 5 mile radius from Hadoop.” See this post for additional predictions and this one for a funnier take on this topic.

 

Q: Is SIEM dead or going to die? If yes, what other tools can you use for these SIEM-type use cases?

A: Not at all! SIEM is alive and happy, growing and evolving.

 

There you have it, with a slight delay : – )

Posts related to SIEM:

Category: analytics logging monitoring security SIEM

If You Use Windows XP – You Are NOT PCI DSS Compliant!

by Anton Chuvakin  |  April 10, 2014  |  3 Comments

It should be *painfully* obvious to anybody that in a few short weeks [or maybe now, depending on how you interpret it] any merchant using Windows XP systems or devices inside the cardholder data environment (CDE) will NOT be PCI DSS compliant – unless they use stringent compensating controls.

Now, do I wish there was a nicer way to put it? Do I wish I had some great news to those merchants? Sure …. but I DO NOT. Neither does anybody else [at least not anybody else honest].

Use of Windows XP with no ability to obtain and apply security updates violates at least these PCI DSS requirements (quoted from PCI DSS v3):

  • “6.2 Ensure that all system components and software are protected from known vulnerabilities by installing applicable vendor-supplied security patches.” [of course, the fact that the vendor no longer publishes said patches does NOT absolve you of this responsibility!]
  • “11.2.1 Perform quarterly internal vulnerability scans and rescans as needed, until all “high-risk” vulnerabilities (as identified in Requirement 6.1) are resolved.” [of course, there will be high-risk vulnerabilities in XP post its sunset date … just you wait!]
  • “11.3.3 Exploitable vulnerabilities found during penetration testing are corrected and testing is repeated to verify the corrections” [as a side note, a pentester who cannot break into a vulnerable XP box probably isn’t one]

In addition, the systems will NOT be able to achieve a passing external vulnerability scan from your ASV [now, why you’d expose an XP box to the outside is beyond me, but stupider things have happened. One word: telnet].

UPDATE: as my readers correctly pointed out, there are two exceptions to this:

  • Windows XP Embedded (used in some devices) will still be supported until January 12, 2016
  • Microsoft does offer “custom support” for Windows XP that organizations can buy (it is expensive though – even though Microsoft is lowering the maximum cap they charge)

UPDATE2: PCI Council does have an official FAQ entry on this topic and I really should have included a link [thanks for pointing this out!] So: “Are operating systems that are no longer supported by the vendor non-compliant with the PCI DSS?” [the answer is of course "No, not without compensating controls"]

Does it mean that it is absolutely impossible to be compliant-while-using-XP? No, of course not! PCI DSS and the Magic of Compensating Controls can make you compliant in no time [eh…actually, it would take some time and a fair amount of work, but it sure sounded great, didn’t it? : – )].

What are some of the possible compensating controls for “not patching” and running vulnerable systems in general? [Of course, you should not take any control advice from a blogger, analyst and PCI DSS book author, but only from your QSA : – )]

  • Host IPS (HIPS)
  • Application whitelisting
  • Some fancy virtualization isolation (?)

Frankly, I don’t believe that NIPS and better network segmentation will do, but feel free to ask that QSA to be sure. For more vulnerability mitigation advice, also see “Solution Path: Vulnerability Assessment, Mitigation and Remediation” and “Vulnerability Assessment Technology and Vulnerability Management Practices” documents [Gartner GTP access required]

Additional research on retiring Windows XP can be found here [Gartner access required]:

Posts related to PCI DSS:

Category: compliance patching security

On Threat Intelligence Management Platforms

by Anton Chuvakin  |  March 31, 2014  |  6 Comments

I was writing this post on threat intelligence (TI) management platform requirements (TIMP? Do we need another acronym?), and I really struggled with it since most such information I have comes from non-public sources that I cannot just dump on the blog.

In a fortunate turn of events, Facebook revealed their “ThreatData framework” (which is, essentially, a TIMP for making use of technical TI data) and I can now use this public reference to illustrate some of the TIMP requirements without breaking any NDAs and revealing deep secrets of the Universe.

The FB article says: “The ThreatData framework is comprised of three high-level parts: feeds, data storage, and real-time response.” So, in essence, your TIMP should be able to do the following (a tiny illustrative sketch follows the list):

  1. Allow you to collect TI data (standard feeds such as OpenIOC or STIX, formatted data such as CSV, manual imports, etc) – you can go easy and support nicely formatted inputs only, or you can go fancy and have your pet AI parse PDFs and hastily written emails with vague clues.
  2. Retain TI data for search, historical analysis and matching against the observables (past and current) [the FB approach has a twist: tiered storage, with each tier optimized for a different type of analysis, which I find insanely cool]
  3. Normalize, enrich and link collected TI data (automatically and manually by collaborating analysts) to create better TI data to be used in various security tools for forward- and backward-looking matching [along the lines of what was described here]
  4. Provide a search and query interface to actually look at the data manually, in the course of IR or threat actor research, etc.
  5. Distribute / disseminate the cleaned data to relevant tools such as network indicators for NIPS real-time matching and NFT for historical matching, various indicators to SIEM for historical and real-time matching, host indicators to ETDR for matching, etc [roughly as described here]. Of course, sharing the data (sanitized, aggregated and/or filtered) with other organizations happens here as well.
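To make those five capabilities a bit more concrete, here is a deliberately tiny Python sketch of a TIMP skeleton. It is a toy under stated assumptions: the CSV field names, the in-memory “storage” and the export format are invented for illustration, and a real platform would use proper feed parsers (STIX, OpenIOC, CIF) and tiered storage along the lines of what FB describes.

    import csv
    import io
    from datetime import datetime, timezone

    class MiniTIMP:
        """Toy TI management platform: collect, store, query, export. Illustrative only."""

        def __init__(self):
            self.indicators = []   # "storage" -- a real TIMP would use a (tiered) database

        def collect_csv(self, csv_text, source):
            """1. Collect: import a simple feed of the form indicator,type,description."""
            for row in csv.DictReader(io.StringIO(csv_text)):
                self.indicators.append({
                    "indicator": row["indicator"].strip().lower(),  # 3. normalize
                    "type": row["type"],
                    "description": row.get("description", ""),
                    "source": source,                               # enrichment: provenance
                    "collected_at": datetime.now(timezone.utc).isoformat(),
                })

        def search(self, value):
            """4. Search/query interface for analysts (exact match only in this toy)."""
            value = value.strip().lower()
            return [i for i in self.indicators if i["indicator"] == value]

        def export_for_tool(self, ind_type):
            """5. Disseminate: dump one indicator type as a flat list for a SIEM/NIPS/ETDR."""
            return sorted({i["indicator"] for i in self.indicators if i["type"] == ind_type})

    # Example use with a hypothetical two-line feed (IP and domain are RFC example values):
    feed = "indicator,type,description\n198.51.100.7,ip,known C2\nevil.example.com,domain,phishing"
    timp = MiniTIMP()
    timp.collect_csv(feed, source="community-feed-A")
    print(timp.search("EVIL.example.com"))   # analyst lookup
    print(timp.export_for_tool("ip"))        # feed a NIPS / SIEM watchlist

Requirement 2 (retention) is then just a matter of keeping the collected records around and re-running the matching over old observables.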

Keep in mind that their system seems to be aimed at technical TI only. Some other organizations actually feed strategic TI into such systems as well, especially if such TI is linked to technical indicators (or simply to have it all in one place for text keyword searching).

Also, keep in mind that some organizations prefer to have ONE system for TI and local monitoring (think of it as using a SIEM as your TIMP – but a SIEM that you wrote yourself based on Hadoop, for example). Or, you can keep your TIMP separate (as described in this post) and then reformat and distribute TI from the TIMP to the infrastructure components that actually do the monitoring and matching, including SIEM, NFT, NIPS, WAF, SWG, ETDR, whatever.

BTW, here is one more public, 1st hand, non-vendor data source on threat intelligence management platforms – The MANTIS Cyber-Intelligence Management Framework.

Enjoy!!

BTW, I am starting to hear some whining that lately I’ve only been writing stuff useful for the 1%-ers (NFT, ETDR, big data analytics, advanced IR). Said whining will be addressed head-on in an upcoming blog post : – )

Posts related to this research project:

Category: analytics monitoring security standards threat intelligence

Gartner GTP Catalyst Europe Conference Reborn

by Anton Chuvakin  |  March 27, 2014  |  4 Comments

After some hiatus, the Gartner for Technical Professionals (GTP) conference – Catalyst – is coming back to Europe (June 17-18 in London). I am NOT speaking there this time, but several of my teammates are. Here are some of the event highlights:

Main page: www.gartner.com/eu/catalyst

Brief description: this Catalyst event focuses on ‘how to’ advice for attendees to practically implement [and, of course, to secure!] their key initiatives of mobile, cloud and big data/analytics, and on how these initiatives integrate together.

Agenda: here [PDF]

Select fun security talks:

  • To The Point: Insights on Mobile Security: What Works for Mobile Strategy: During a field research project at the end of 2013, Gartner identified important insights from organizations with successful mobile strategies. End user organizations discussed the technology used and how policy was implemented, how business units used apps and how culture impacted mobile security.
  • Security and SDN: Implementation Considerations: Software-defined Networking (SDN) is being discussed as the future for data center networking. It impacts not just the network infrastructure equipment and the virtualized network topology, but also how enterprises implement network security controls. In this session, learn how to properly implement security controls within an SDN.
  • Private Cloud Security, The Deja Vus, The Surprises and The Pitfalls: In private clouds, new security requirements stem from the cloud paradigm that makes forklifting security from the physical world into the virtual world insufficient. The deployed security measures need to match the new paradigm that involves for example elasticity, portability, self-service and API Keys.
  • To the Point: Next Steps in Malware Protection: Traditional malware protection fails to protect against some of the latest threats. New technologies are available for additional malware protection in the network and on endpoints. This talk examines the strengths and weaknesses of the latest technologies in endpoint and network malware protection.

Also, see this blog post from Joerg Fritsch from our team who IS speaking at the event.

Just for reference, my recent and upcoming Gartner event speaking:

Category: announcement conference security

How to Use Threat Intelligence with Your SIEM?

by Anton Chuvakin  |  March 26, 2014  |  10 Comments

SIEM and Threat Intelligence (TI) feeds are a marriage made in heaven! Indeed, every SIEM user should send technical TI feeds into their SIEM tool. We touched on that subject several times, but in this post we will look at it in depth. Well, in as much depth as possible to still make my future paper on the topic a useful read :–)

First, why are we doing this:

  • Faster detection – alerting on TI matches (IPs, URLs, domains, hashes, etc) is easier than writing good correlation rules
  • Better context – alert triage and incident investigation becomes easier and information is available faster
  • Threat tracking and awareness – combining local monitoring observations, external TI and [for those who are ready!] internal TI in one place.

What log data do we need?

  • Most common log data to match to TI feeds: firewall logs (outbound connection records … my fave logs nowadays!), web proxy logs
  • Also used: netflow, router logs or anything else that shows connectivity
  • NIDS/NIPS (and NBA, if you are into that sort of thing) data (TI matching here helps triage, not detection)
  • ETDR tools can usually match to TI data without using a SIEM, but local endpoint execution data collected in one place marries well to TI feeds.

Where would TI data come from (also look for other TI sources):

  • SIEM vendor: some of the SIEM vendors are dedicating significant resources to the production of their own threat intelligence and/or TI feed aggregation, enrichment and cleaning
  • Community, free TI feeds: CIF format comes really handy here, but CSV can be imported just as well (some lists and information on how to compare them)
  • Commercial packaged feeds from the TI aggregator (it may even have pre-formatted rules ready for your SIEM!)
  • Commercial TI providers of original threat intelligence.

Obviously, using your SIEM vendor TI feeds is the easiest (and may in fact be as easy as clicking one button to turn it on!), but even other sources are not that hard to integrate with most decent SIEM tools.

Now, let’s review the main uses of TI data inside a SIEM (a small matching sketch follows the list):

  • Detect owned boxes, bots, etc that call home when on your network (including boxes pre-owned when not on your network) and, in general, detect malware that talks back to its mothership
  • Validate correlation rules and improve baselining alerts by upping the priority of rules that also point at TI-reported “bad” sources
  • Qualify entities related to an incident based on collected TI data (what’s the history of this IP?)
  • Historical matching of past, historical log data to current TI data (key cool thing to do! resource intensive!)
  • Review past TI history as key context for reviewed events, alerts, incidents, etc (have we seen anything related to this IP in the past TI feeds?)
  • Review threat histories and TI data in one place; make use of SIEM reports and trending to analyze the repository of historical TI data (create poor man’s TI management platform)
  • Enable [if you feel adventurous] automatic action due to better context available from high-quality TI feeds
  • Run TI effectiveness reports in a SIEM (how much TI leads to useful alerts and incidents?)
  • Validate web server log source IPs to profile visitors and reduce service to those appearing on bad lists (uncommon)
  • Other uses of TI feeds in alerts, reports and searches, and as context for other monitoring tasks
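Here is what the first (and most popular) use case above boils down to, as a small Python sketch; the log records and the indicator list are invented examples, and in a real SIEM this matching is done by the correlation/lookup engine rather than by hand-written code.

    # Match connection logs (e.g. firewall "outbound allowed" records) against
    # TI network indicators. All values below are invented example data.

    ti_bad_ips = {"198.51.100.7", "203.0.113.99"}   # destination IPs from your TI feeds

    firewall_logs = [
        {"ts": "2014-03-26T10:02:11Z", "src": "10.1.2.3", "dst": "198.51.100.7", "action": "allow"},
        {"ts": "2014-03-26T10:05:42Z", "src": "10.1.2.9", "dst": "192.0.2.15", "action": "allow"},
    ]

    def match_ti(logs, bad_ips):
        """Yield an alert for every outbound connection to a TI-listed destination."""
        for rec in logs:
            if rec["dst"] in bad_ips:
                yield {"reason": "destination on TI list", **rec}

    for alert in match_ti(firewall_logs, ti_bad_ips):
        print(alert)

Run over the live event stream, this is real-time detection of boxes calling home; run over archived logs with freshly arrived indicators, it becomes the “historical matching of past log data to current TI data” mentioned above.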

So, if you are deploying a SIEM, make sure that you start using threat intelligence in the early phases of your project!

Posts related to this research project:

Category: analytics collective incident response logging monitoring security SIEM threat intelligence

Speaking at Gartner Security & Risk Management Summit 2014

by Anton Chuvakin  |  March 24, 2014  |  7 Comments

For those attending Gartner 2014 Security and Risk Management Summit (June 23-26, 2014 in Washington, DC), here is what I am presenting on:

  1. SIEM Architecture and Operational Processes
  2. Network and Endpoint Visibility for Incident Response
  3. Security Incident Response in the Age of the APT

The sessions in detail:

SIEM Architecture and Operational Processes

Security information and event management (SIEM) is a key technology that provides security visibility, but it suffers from challenges with operational deployments. This presentation will reveal a guidance framework that offers a structured approach for architecting and running an SIEM deployment at a large enterprise or evolving a stalled deployment.

Key Issues:

  • How to plan for a SIEM deployment?
  • How to deploy and expand your SIEM architecture?
  • What key processes and practices are needed for a successful SIEM implementation?

BTW, this session was SUPER-popular at the 2013 Summit and so I am rerunning it more or less intact, with some new data. It is based on my paper “Security Information and Event Management Architecture and Operational Processes.”

Network and Endpoint Visibility for Incident Response

As preventative controls keep failing to defend organizations, a new emphasis on comprehensive visibility across networks and endpoints is emerging. This presentation will cover network forensics tools (NFT) and practices as well as endpoint threat detection and response tools (ETDR) and their use for detecting and investigating threats.

Key Issues:

  • How to use network forensics tools (NFT) for detecting and investigating threats?
  • How to use endpoint detection and response tools (ETDR) for detecting and investigating threats?
  • What are the key processes related to these tools?

This presentation is based on my papers “Network Forensics Tools and Operational Practices” and “Endpoint Threat Detection and Response Tools and Practices.”

Security Incident Response in the Age of the APT

Increased complexity and frequency of attacks, combined with reduced effectiveness of preventative controls, elevate the need for enterprise-scale security incident response. This presentation covers ways of executing incident response in the modern era of cybercrime, APT and evolving IT environments.

Key Issues:

  • How to prepare for enterprise security incident response?
  • What tools, skills and practices are needed for APT IR?
  • How to evolve security IR into “continuous IR” or hunting for incidents?

This presentation is based on my paper “Security Incident Response in the Age of APT.”

Come see me at the Summit!

My past Gartner speaking:

Category: announcement conference ETDR incident response network forensics security SIEM

On Internally-sourced Threat Intelligence

by Anton Chuvakin  |  March 20, 2014  |  7 Comments

At the very top of the very top of the pyramid, practically in the upper stratosphere, sit organizations that produce their own threat intelligence (TI), sourced from local artifacts and their own intelligence gathering activities.

In my view, this “internal TI” label applies to four types of activities:

  1. Local team collects technical TI all over the Internet (from outside of the organization’s perimeter)

  2. Local team collects non-technical TI (from the outside)

  3. Local team creates better TI by refining received data with local context and local analysis (such as finding hidden relationships between threat profiles or enriching TI data)

  4. Local team sources TI from locally discovered artifacts (from the inside)

The first two items are similar to what was discussed in my blog post on TI sources since they are close to what commercial TI providers do for their clients. The third is related to a discussion in my post on TI fusion. Thus, in this post, we will look at locally sourced TI, the ultimate in home cooked, organic, locally grown juicy TI, that is actually good for you – if you can afford it.

But first, we have a small issue of definitions (don’t we always?). I keep pondering the question of where TI begins and monitoring/IR (and continuous IR) ends. Frankly, this is a hard one, and I am not entirely happy with the answer I came to: if it is primarily focused on directly protecting the environment (such as to alert about the intrusion), it is monitoring. And if it is primarily focused on learning about the threats and expanding your knowledge base about actors, it is internal TI. Clearly, an activity can easily be both…

So, locally sourced TI may come from locally caught malware (static analysis, reversing, executing/detonating in sandboxes, etc). It may come from local disk/memory images collected during an incident investigation. In some organizations, local TI may come from locally deployed honeypots. Sometimes, it may come from artifacts discovered on partner/vendor/customer networks. It may also come from processing locally sourced monitoring data, but this is the area where security monitoring, incident response and threat intelligence truly fuse into one security visibility and analysis bundle. And it is actually a good thing, since what matters is that these tasks are done, not that they are properly classified.
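As a tiny illustration of only the simplest slice of this, here is a Python sketch that turns a directory of locally caught malware samples into file-hash indicators ready for import into a TIMP or SIEM. The directory path and the output fields are assumptions made up for the example, and real static analysis or reversing obviously goes far beyond hashing.

    import hashlib
    import json
    import os
    from datetime import datetime, timezone

    def hash_samples(sample_dir):
        """Produce SHA-256 file-hash indicators from locally caught malware samples."""
        indicators = []
        for name in os.listdir(sample_dir):
            path = os.path.join(sample_dir, name)
            if not os.path.isfile(path):
                continue
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            indicators.append({
                "type": "file-sha256",
                "value": digest,
                "first_seen": datetime.now(timezone.utc).isoformat(),
                "context": f"local sample: {name}",   # tie the indicator back to the artifact
            })
        return indicators

    if __name__ == "__main__":
        # Hypothetical quarantine directory; point it at wherever your samples live.
        print(json.dumps(hash_samples("/srv/malware-quarantine"), indent=2))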

Specific internal TI activities may include:

  • Detailed analysis of locally caught malware, looking for ways to detect it in the future, possible connections to other malware and past incidents, relating to threat actor profiles, etc
  • Detailed analysis of disk images, memory images for indicator extraction and learning about the attacker behavior while on your network
  • Compiling and expanding threat actor profiles based on local data, such as logs, packet captures, malware, incident data, local honeypots, etc
  • Detailed analysis of artifacts shared by other organizations
  • Fusing local data with shared data (see TI fusion post for details) to produce more durable threat intelligence and extract new intelligence from existing data.

To summarize, a lot of the material discussed herein should NOT be tried by less enlightened organizations. Admittedly, it can be very useful, but IT IS HARD WORK!

P.S. Extensive local TI activities usually require a TI management platform (TIMP?) of some kind. My research path has led me into the dark woods of defining the requirements for such platforms…. Next blog post?

Posts related to this research project:

Category: analytics incident response monitoring security threat intelligence

Our Team Is Hiring Again: Join Gartner GTP Now!

by Anton Chuvakin  |  March 18, 2014  |  6 Comments

It is with great pleasure that I am announcing that our team is HIRING AGAIN! Join Security and Risk Management Strategies (SRMS) team at Gartner for Technical Professionals (GTP)!

Excerpts from the job description:

  • Create and maintain high quality, accurate, and in depth documents or architecture positions in information security, application security, infrastructure security, and/or related coverage areas;
  • Prepare for and respond to customer questions (inquiries/dialogues) during scheduled one hour sessions with accurate information and actionable advice, subject to capacity and demand;
  • Prepare and deliver analysis in the form of presentation(s) delivered at one or more of the company’s Catalyst conferences, Summit, Symposium, webinars, or other industry speaking events;
  • Participate in industry conferences and vendor briefings, as required to gather research and maintain a high level of knowledge and expertise;
  • Perform limited analyst consulting subject to availability and management approval;
  • Support business development for GTP by participating in sales support calls/visits subject to availability and management approval;
  • Contribute to research planning and development by participating in planning meetings, contributing to peer reviews, and research community meetings

In essence, your job would be to research, write, guide clients (via phone inquiries/dialogs) and speak at events. Also, we do list a lot of qualifications in the job req, but you can look at my informal take on them in this post.

So APPLY HERE!

P.S. If the link above fails, go to https://careers.gartner.com and search for “IRC26388”

P.P.S. If you have questions, feel free to email me – I cannot promise a prompt response, but I sure can promise a response.

Related posts:

Category: announcement hiring

Delving into Threat Actor Profiles

by Anton Chuvakin  |  March 14, 2014  |  6 Comments

Threat actor profiles – an expensive toy or a necessity for some? [if you see some excited vendor who says that everybody should create and maintain threat actor profiles, please ask whether a gas station at the corner should!]

In any case, our discussion of threat intelligence would be woefully incomplete without the discussion of threat actor profiles. Sure, if your organization thinks that TI data feeds are basically the same as AV or NIPS sig updates, threat actor discussions in general and threat actor profiles in particular are not for you. By the way, in the good old debate about APT being a WHO or a WHAT, threat actor is a WHO.

In fact, let’s briefly chat about this.

Plenty of organizations seem to want to forget about the threats and focus all their energies on patching holes in walls and building better walls (prevention), watch towers (detection) and maybe training firefighters (response). This always reminds me of this post from Richard.

So, yes, barbarians may be at their gate, but some people would rather wish them away than study them. However, how’s that working out for the industry? All in all, I think many signs point to us spending more time trying to understand and classify the threat actors we face.

Let’s take a look at a sample STIX threat actor profile (heavily abbreviated to show only top level schema items):

[Schema images omitted: sample STIX threat actor profile, top-level items (description from STIX materials)]

In essence, the profile contains information such as the following (a toy sketch follows the list):

  • Threat actor name, description, etc
  • Type of an actor, their goals (if known)
  • Common tools, methods and even mistakes (often rolled into TTPs)
  • Involvement in past activities, campaigns (localized information)
  • Relationship to other actors
  • Reliability, accuracy and confidence of available information
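To make the list above more tangible, here is a toy Python container loosely echoing those items; the field names are simplified illustrations and are NOT the actual STIX schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ThreatActorProfile:
        """Toy profile loosely modeled on the top-level items listed above (not real STIX)."""
        name: str
        description: str = ""
        actor_type: str = ""                                # e.g. "eCrime", "state-sponsored"
        goals: List[str] = field(default_factory=list)
        ttps: List[str] = field(default_factory=list)       # common tools, methods, mistakes
        campaigns: List[str] = field(default_factory=list)  # involvement in past activities
        related_actors: List[str] = field(default_factory=list)
        confidence: str = "unknown"                          # reliability/accuracy of the data

    # Invented example, not describing any real actor:
    profile = ThreatActorProfile(
        name="ACTOR-ALPHA",
        actor_type="eCrime",
        goals=["payment card theft"],
        ttps=["spear phishing", "RAM-scraping POS malware"],
        confidence="medium",
    )
    print(profile)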

Where does the information come from? Threat intelligence activities, of course [your own and those of others; in particular, profiles received from others may be enriched with your own information, as discussed in the TI fusion post]

Let’s figure out how having all this information will actually help you secure your organization. We are not doing it for fun, you know :-) Well, not only for fun….

Threat actor profiles can be used by a fledgling threat intelligence operation to organize their knowledge about who is “out to get them” and who they observe on their network. Such knowledge organization helps prioritize incident response and alert triage activities. In addition, threat actor profiles may have some predictive powers. Specifically, if you observe behavior X (as indicated by indicators X1, X2 and X3 – these may be accounts used, captured malware hashes and favorite tools used for recon) and this behavior can be matched to a threat actor profile (that you built from shared, purchased and your own data) then other data from the threat profile becomes relevant as well and can be used for predicting what may happen next or what has in fact already happened while you were not watching.
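To make that “predictive” claim concrete, here is a small Python sketch of matching observed indicators against locally maintained actor profiles; every name and value is invented for the illustration, and a real profile would carry far more context (TTPs, campaigns, relationships, confidence).

    # Toy attribution helper: score observed indicators against threat actor profiles.

    actor_profiles = {
        "ACTOR-ALPHA": {"indicators": {"bad1.example.com", "a1b2c3d4"},  # domains, hashes, accounts...
                        "expected_next": "lateral movement via stolen admin accounts"},
        "ACTOR-BETA":  {"indicators": {"203.0.113.99", "dropper.exe"},
                        "expected_next": "data staging on an internal file server"},
    }

    def attribute(observed):
        """Rank actors by how many of the observed indicators appear in their profiles."""
        scores = []
        for name, profile in actor_profiles.items():
            overlap = observed & profile["indicators"]
            if overlap:
                scores.append((len(overlap), name, overlap, profile["expected_next"]))
        return sorted(scores, reverse=True)

    observed = {"bad1.example.com", "a1b2c3d4", "10.0.0.5"}
    for score, actor, overlap, next_step in attribute(observed):
        print(f"{actor}: {score} matching indicator(s) {overlap}; watch for: {next_step}")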

BTW, did I just say “attribution”? Well, I didn’t, but this is pretty much what we did when we matched observed indicators to the actor profile. In essence, attribution is not something that only trained telepaths inside Project Stargate can do. For our purposes, this is simply hypothesizing which threat actor(s) performed an act that you observed on your network based on threat intelligence data. So, yes, you do need threat actor profiles to do attribution… And, no, attribution is not magic. Or magick :-)

On the other hand, what is the utility of threat actor profiles for a “pure consumer of TI” with no TI fusion or internal TI creation? In my view, NONE. However, I’d be very happy to be corrected here. So, I hereby call upon the Enlightened Ones to correct me…

Posts related to this research project:

Category: security threat intelligence

On Threat Intelligence Sources

by Anton Chuvakin  |  February 26, 2014  |  2 Comments

Where does threat intelligence (TI) come from? Yes, we all know that little gnomes living in the clouds make threat intelligence out of seagulls’ poo. But apart from that, where DOES it come from?

Let’s delve into this, starting from the high level. Some types of TI are simply the results of somebody else’s security monitoring. They see it, they share it, you pay them to see it too. The proverbial “they” may have bigger networks, better tools or simply be in that business for cold, hard cash. Other TI comes from people who actively go out and crawl for threats, browse sites for malware, run honeypots, spam traps, sinkholes, etc or even actively “engage” the threat actors and/or infiltrate their ranks. Different processes – different TI (all fitting our broad TI definition).

In general, categorizing the TI sources as “technical” and “human” is a bit artificial, since many of the sources are in essence produced by a tight collaboration of humans AND machines (e.g. a smart TI analyst armed with data visualization and entity linking software) with various fractions of contributions from each. Still, mostly technical sources include:

  • Network monitoring (NIDS, NBA, etc)
  • Server and client honeypots [some vendors want to say they use “next gen honeypots” which really means that they use … honeypots :-) ]
  • Spam (and phishing email) traps
  • Live botnet connections
  • Link crawling for malware and exploit code
  • Malware reversing and observation
  • Social network monitoring
  • BGP observation [ooooh…fun!]
  • Tor usage monitoring

However, some of the juicier intel does not come from automated tools, but from people (admittedly, equipped with tools). For example, you don’t simply wget a Russian cybercrime forum for future analysis; a human needs to actively work to get in. Similarly, you don’t simply sniff IRC traffic or intercept emails between attackers; a lot of “very human” work is involved before this task can be completed. Some TI vendors whisper about infiltration and “aggressive techniques” (as I suspect, some hacking across state borders aimed at malicious infrastructure may in fact be legal [or at least unlikely to ever be charged] in the source country).

This set of sources includes examples such as:

  • Public internet, IRC, newsgroup monitoring [crawlers + human analysts/translators]
  • Attacker online community infiltration (web forum, “private” social media, IRC, etc)
  • Leaked data about attacker’s infrastructure (such as leaks by other attackers)
  • Direct compromise of attackers’ systems.

Furthermore, if you are buying/receiving TI from somebody else (government, regional CERT, sharing club, security vendor, TI vendor) the intel you receive is ultimately produced by a blend of the above sources. Of course, you may also be producing your very own TI from local incident artifacts – a subject of the next post in this series…

Have I missed any major threat intel sources?

Or, better, can you think of a better way to classify and organize them?

Posts related to this research project:

Category: security threat intelligence