by Anton Chuvakin | May 23, 2013
It is with GREAT excitement that I am pre-announcing my next area of research focus – security incident response.
In brief, here is what I have in mind for the next few months:
- Host and Malware Forensics Tools and Practices (title tentative), an assessment of the endpoint investigation tool scene (to complement my just-finished report on network forensics)
- Incident Response in the Age of APT (title tentative), guidance on doing incident response (from tools to teams!) in the modern era of industrial cyber-crime, APT, and cloud/virtual/mobile environments.
Some of the vendors I am speaking with or planning to speak with are CrowdStrike, Mandiant, Guidance Software, Carbon Black, and some anti-malware/EPP vendors (the ones who actually think rather than milk). And of course, as with all Gartner GTP research, I am planning to have lots of conversations with enterprise CIRTs, other end users and whatever other sources of current IR wisdom…
Category: incident response security Tags: incident response, security
by Anton Chuvakin | May 20, 2013 | 5 Comments
Is alert-driven security workflow “dead”?! It is most certainly not.
However, it is being challenged at some enlightened organizations that deploy SIEM, network forensics or other analytics technologies (notice how elegantly I am avoiding the marketer-corrupted term “big data” ).
A fellow member of the SIEM literati once called it using a “tech support workflow” for security incident response – and, let me tell you, he didn’t like it much. Many users of network forensics tools (NFT) have discovered that their tools are not alert-centric at all (as discussed here), but require active data exploration. One NFT team manager even went as far as to say “we don’t hire alert responders here.” He meant that in his team he doesn’t want people to wait for alerts, but to go and explore, to “hunt” for insights rather than “gather” alerts. Starting from a hypothesis, a “thread to pull”, a question rather than an alert is characteristic of this newer way of approaching security.
Here is how I am thinking about it:
| Alert-driven workflow | Exploration-driven (“hunting”) workflow |
| --- | --- |
| Alert comes in -> you respond | You go out -> you find actionable info -> you act |
| Like tech support | Like QA (thanks for this idea!) |
| Context to decide on the alert | Context to explore wider/deeper |
| Triage THIS entity | Explore in THIS direction |
| Want to be “done” with the alert | Want to know what is really going on, not be “done” |
| Operations – alert volume | Research – insight usefulness |
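To make the contrast concrete, here is a minimal Python sketch of the two workflows; all data, field names and thresholds are made up for illustration and do not represent any particular tool:

```python
# Hypothetical connection-log events; in reality these would come from
# your NFT / SIEM / flow data store.
events = [
    {"host": "pc-17", "dest": "203.0.113.9", "bytes_out": 48_000_000, "hour": 3},
    {"host": "pc-04", "dest": "198.51.100.2", "bytes_out": 12_000, "hour": 14},
]

def respond_to_alert(alert):
    """Alert-driven: triage THIS entity, then be 'done' with the alert."""
    print(f"Triaging {alert['host']}; closing alert.")

def hunt(events, hypothesis):
    """Exploration-driven: start from a question and keep pulling the thread."""
    findings = [e for e in events if hypothesis(e)]
    for f in findings:
        print(f"Lead: {f['host']} -> {f['dest']} ({f['bytes_out']} bytes out)")
    return findings  # each finding suggests the next direction to explore

# The hypothesis, not an alert, drives the query:
# "is anything sending large volumes out at odd hours?"
hunt(events, lambda e: e["bytes_out"] > 1_000_000 and e["hour"] < 6)
```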
In any case, hopefully this comparison is insightful and useful for your security analytics / SIEM / SOC thinking and planning.
And, hey, vendors – don’t assume that security monitoring is ALL about alert-driven workflows… The smartest of your tool users already don’t.
Category: analytics monitoring network forensics security SIEM Tags: analytics, network forensics, security, security monitoring
by Anton Chuvakin | May 6, 2013 | Comments Off
We again interrupt our regular programming (on network forensics and security data sharing this quarter) to delve into a subject far removed from the exciting world of APT fighting, “kill chain” swinging, state-sponsored attacking and stealthy custom malware-ing. My esteemed colleagues (like here) sometimes like to use the labels of “security haves” vs. “security have-nots.”
Hereby we are transported into the land of the unfortunate “have-nots” from the land of the enlightened “haves.”
Although patching has been “a solved problem” for many years, even decades, a lot of organizations struggle with it today – and struggle mightily. While patching the Windows OS on desktops monthly is indeed solved everywhere but in the darkest woods of IT, patching third-party applications on the desktop remains a significant challenge for many organizations. Patching server OSs (Windows and Linux/UNIX) and third-party server applications also remains challenging due to the fragility of many server environments. Add virtualization to the mix – and you have a full-blown slow-cooking disaster. And then you have Java … <dramatic pause> … a security disaster in a league of its own.
In a medium-sized organization (say, up to 10,000 PC endpoints), the problem seems to be even more painful due to a double whammy of already-high complexity and still-low resources. Java, Adobe Reader and Flash, Firefox, Oracle fat clients, as well as many vertical and business-specific applications, are often patched MUCH later than Windows and Office. Quite a few organizations have actually found success using their Microsoft tools for patching third-party applications. Microsoft WSUS and its older brother SCCM rule the roost there, but there are challenges with this approach as well (such as MSI package creation and all sorts of compatibility testing).
We have created a lot of great research on the subject of patch management as a part of overall configuration management (example). Similarly, patching as a method of remediating vulnerabilities is an obvious part of vulnerability management (example). However, we still get calls about patch management planned and operated in isolation from either configuration management or vulnerability management (“no-no-no, we just need to know about patching!”).
One subject that presents a particular challenge is patching time frames (and, ultimately, SLAs). Across organizations and system types, they vary wildly: literally from hours to years (!). A critical, known-to-be-exploited vulnerability in a DMZ Windows web server often gets fixed in 24-48 hours, while an internal database “medium” severity issue is likely to languish for years (exploited and re-exploited by literally generations of attackers lounging on your network).
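One way to make such time frames explicit is to write them down as a severity-by-exposure matrix. Here is a minimal sketch; the SLA values are entirely made up for illustration, since real numbers come from your own risk policy:

```python
# Hypothetical SLA matrix: (severity, exposure) -> days to patch.
PATCH_SLA_DAYS = {
    ("critical", "dmz"): 2,        # e.g., exploited DMZ web server: 24-48 hours
    ("critical", "internal"): 14,
    ("medium", "dmz"): 30,
    ("medium", "internal"): 365,   # in practice, often "years", as noted above
}

def patch_deadline_days(severity: str, exposure: str) -> int:
    """Return the agreed time-to-patch, defaulting to the slowest lane."""
    return PATCH_SLA_DAYS.get((severity, exposure), 365)

print(patch_deadline_days("critical", "dmz"))  # -> 2
```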
These SLAs are often affected dramatically by the extent of testing done to make sure that the patch does more good than harm. At this point, I’d venture a guess that testing is the major bottleneck – and one that can never be fully automated for fragile and complex server environments. Yes, you can have a nice staging environment, but not an exact clone of every system you plan to patch, complete with active users. As a result, the users utilizing the patched system are the only real test, so deploying patches to waves of users has become popular as a form of “test-ployment”, i.e. testing combined with deployment.
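A minimal sketch of that wave-based “test-ployment” logic follows; the function names, wave size and failure threshold are all hypothetical:

```python
def deploy_in_waves(hosts, patch, wave_size=500, max_failure_rate=0.05):
    """Roll a patch out in waves; halt the rollout if a wave fails too often."""
    for n, i in enumerate(range(0, len(hosts), wave_size), start=1):
        wave = hosts[i:i + wave_size]
        failures = [h for h in wave if not patch(h)]  # patch() returns success
        rate = len(failures) / len(wave)
        print(f"Wave {n}: {len(failures)}/{len(wave)} failed ({rate:.0%})")
        if rate > max_failure_rate:
            print("Halting rollout; the 'real user' test failed.")
            return False
    return True

# Usage, with a hypothetical patch function:
# deploy_in_waves(all_desktops, apply_java_patch)
```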
These challenges are, incredibly, leading some organizations to reduce the effort they put into vulnerability scanning. After all, if every monthly scan gives you 10,000 findings, but you only have enough bandwidth to fix 300 (a real example), won’t you be at least a bit demotivated to scan? If you are going to assume you are owned, what’s one unpatched vulnerability between friends? In any case, please treat this point as Anton thinking aloud and NOT as a recommendation to ditch your scanner. However, when you do scan, you need to become MUCH smarter in picking what to fix first, second, third, etc. … as well as last and never.
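If you can only fix 300 of 10,000 findings, ranking them beats scanning less. Here is a crude sketch of such triage; the weights and field names are invented for illustration, not any real scoring model:

```python
def priority_score(finding):
    """Crude risk score from severity, known exploitation, and exposure."""
    score = {"critical": 10, "high": 7, "medium": 4, "low": 1}[finding["severity"]]
    if finding["known_exploited"]:
        score *= 3          # known-to-be-exploited trumps raw severity
    if finding["exposure"] == "dmz":
        score *= 2          # internet-facing assets first
    return score

def pick_what_to_fix(findings, capacity=300):
    """Rank all findings and take only the slice you can actually remediate."""
    return sorted(findings, key=priority_score, reverse=True)[:capacity]
```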
Finally, please keep in mind that patching must have a purpose (darn, I feel stupid even saying this) – you patch to reduce a risk or to enhance application functionality. Patching for the sake of patching is just not a useful mindset; it dulls this activity into an endless, mindless grind. Oh… wait.
Category: patching security vulnerability management Tags: patching, security, vulnerability management
by Anton Chuvakin | April 29, 2013 | 4 Comments
Here is my collection of favorites and highlights from the Verizon 2013 Data Breach Investigations Report [PDF]:
- “If your organization is indeed a target of choice, understand as much as you can about what your opponent is likely to do and how far they are willing to go ” <- a REALLY key point!
- “State-affiliated actors tied to China are the biggest mover in 2012. Their efforts to steal IP comprise about one-fifth of all breaches in this dataset.” <- 1/5 is HUGE!!! I expected a lot, but not that much, to be honest.
- “Collect, analyze, and share tactical threat intelligence, especially Indicators of Compromise (IOCs), that can greatly aid defense and detection.” <- a great recommendation indeed!
- “In a streak that remains unbroken, direct installation of malware by an attacker who has gained access to a system is again the most common vector. ” <- however, this does NOT mean that “threat = malware”!
- “State-affiliated actors often use the same formula and pieces of multifunctional malware during their campaigns, and this is reflected in the statistics throughout this report.” <- this means that even when specific signatures fail, detecting higher level patterns of activity will work well!
- “more than 95% of all attacks of this genre [= espionage] employed phishing as a means of establishing a foothold in their intended victims’ systems.” <- sure, why change if this works well for them?
- “With respect to mobile devices, obviously mobile malware is a legitimate concern. Nevertheless, data breaches involving mobile devices in the breach event chain are still uncommon.” <- keep this in mind before freaking out over “MOBILE THREATS!!!”
- “Some interpret attack difficulty as synonymous with the skill of the attacker, and while there’s some truth to that, it almost certainly reveals much more about the skill and readiness of the defender.” <- NO COMMENT
- “Approximately 70% of breaches were discovered by external parties who then notified the victim. This is admittedly better than the 92% observed in our last report” <- I am pretty sure that a token optimist on the team inserted this statement in the report …
- “Matching this [collected from various sources] IOC library with victim-side evidence kick starts an investigation and allows for much quicker and more effective progress.” <- please print this and post in your cube (a tiny sketch of this idea follows below)
- “As history has shown, focusing on finding specific vulnerabilities and blocking specific exploits is a losing battle.” <- planning to buy a new/better scanner? Are you sure? CAN you patch as fast as the scanner can scan? NO!
Finally, at the risk of quoting too much – my favorite table from the report is shown on the right.
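On the IOC-matching point above, here is a minimal sketch of the core idea; the indicator data and field names are hypothetical, and a real IOC library would obviously be much richer:

```python
# Hypothetical IOC library, collected from various sharing sources.
ioc_library = {
    "hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
    "domains": {"evil-c2.example.com"},
}

# Hypothetical victim-side evidence pulled from one host.
host_evidence = {
    "hashes": {"d41d8cd98f00b204e9800998ecf8427e", "ffffffffffffffffffffffffffffffff"},
    "domains": {"update.example.org", "evil-c2.example.com"},
}

def match_iocs(library, evidence):
    """Intersect each indicator type; any hit kick-starts the investigation."""
    return {kind: library[kind] & evidence.get(kind, set()) for kind in library}

print(match_iocs(ioc_library, host_evidence))
# -> {'hashes': {'d41d8cd9...'}, 'domains': {'evil-c2.example.com'}}
```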
Category: security Tags: security
by Anton Chuvakin | April 23, 2013 | Comments Off
Just FYI, I will be presenting at our Gartner Catalyst Conference (July 29 – August 1, 2013, in San Diego, CA).
My sessions are:
- Top 7 Most Effective Incident Detection and Response Practices and Tools (network and host investigations, tips, practices, “counter-APT” response and other fun stuff)
- To the Point: Deception-Based Security is Baaaaack! (deception has crept back into infosec thinking, from honeypots to counter-intel tricks)
- Stop That Leak! Finding, Watching and Controlling Where Your Data Goes (some fun ideas and practices based on my recent DLP research)
- Roundtable: How to Monitor the Security Of Your Cloud Assets (participant roundtable)
See you there!
Category: announcement security Tags: security
by Anton Chuvakin | April 18, 2013 | Comments Off
My second paper (out of 2) on Data Loss Prevention (DLP) just went up – enjoy “Enterprise Content-Aware DLP Solution Comparison and Select Vendor Profiles.” Meant to be read together with the first paper (“Enterprise Content-Aware DLP Architecture and Operational Practices”), it focuses on the state of Data Loss Prevention technology, functional capabilities, DLP use cases, future DLP trends, risks of DLP, profiles of five DLP vendors, etc.
Quick summary: “Content-aware data loss prevention has grown up and is on the verge of becoming a standard part of security architecture. A small set of vendors dominate a majority of enterprise DLP deployments. Challenges remain with planning, deployments and operations of large-scale DLP implementations.”
A few highlights from the 70-page document:
- “DLP use for regulated data protection, in general, is simpler than its use for corporate secrets and intellectual property because regulated data is ultimately the same for every organization that is covered by a particular regulation.”
- “DLP represents a different model for information security: information-centric security. Unlike other controls that operate based on context and metadata, DLP policies apply to the content itself. This presents a surprisingly difficult shift for many organizations.” (a toy sketch of content-aware matching follows this list)
- “DLP duality — as an enforcement and education technology — reflects the deeper truth behind DLP: Both automation (that is, blocking and encryption) and education are mandatory for data security program success.”
- “As with many other security controls, content-aware DLP may bring additional risks to an organization.”
- “Gartner research consistently demonstrates that organizations procure much more DLP functionality than they can absorb and have deployed.”
- “DLP vendors are working on data security controls for emerging IT models. However, it appears that IT delivery models are evolving faster than DLP tools.”
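To illustrate the “policies apply to the content itself” point above, here is a toy sketch of content-aware detection for one kind of regulated data: card-number-like strings validated with the Luhn check. This is my illustration of the concept, not how any particular DLP product works:

```python
import re

# Flag card-number-like strings that pass the Luhn test, wherever they
# appear: the decision is based on content, not file name or location.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum over a string of digits."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_pan_like(text: str):
    for m in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            yield digits

print(list(find_pan_like("invoice attached, card 4111 1111 1111 1111, thanks")))
```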
P.S. If you think 70 pages is too long, the paper can definitely be read piece by piece (e.g., check out sections like “Customer Perspectives”, “Forces Shaping the DLP of the Future”, “Use-Case Comparison” and others).
Enjoy the paper! (sorry, Gartner GTP subscriber access only)
Category: data DLP security Tags: data security, DLP, security
by Anton Chuvakin | April 17, 2013 | Comments Off
So, we are hiring (UK role, AU role), but WHO are we hiring? Job reqs don’t always do this justice, so let me explain it in plain terms. Think of this as a “prospect profile” for our team here.
In no particular order, we need somebody who:
- Can analyze and structure information and see patterns and trends in various types of often imperfect and noisy data
- Can write/present well about information security technology
- Is known in the industry (maybe due to your presentations, a blog, books, or whatever excellent work that you have done)
- Is a doer, not a talker (you will need to do a lot of talking, but it needs to be based on doing!)
What does that mean?
You might be an enterprise security architect, a security vendor technical product manager or a strategist (or “an evangelist” – but one connected to reality!). You might also be a consultant now, preferably one dealing with technical issues. Hey, you can even be an analyst at another firm (maybe one frustrated with too much talking and not enough architecting, implementing and operationalizing).
Any of these backgrounds may make you a good Gartner for Technical Professionals analyst. Clear now? Go apply while the roles are open!
Category: hiring security Tags: security
by Anton Chuvakin | April 15, 2013 | 2 Comments
Here is how building an enterprise security analytics “big data” capability is like building a flying car:
- You can buy a car from a lot of suppliers, but no one will sell you a flying car
- It makes little sense to build your own *regular* car, since there are so many to buy
- Some people/firms have demonstrated workable flying cars (example and a few more here)
- There are no best practices for building and operating flying cars … yet
- Lots of people think they want one, but few can explain why and how exactly they’d use it
- There are very few people who will give you useful advice on this topic, but there are many who will quote science fiction works that mention flying cars
- From the scientific point of view, flying cars are very simple. However, practical engineering challenges make building one really hard
- Nobody offers training for flying car drivers/pilots, and it is not clear what skills such people need to possess
- Flying cars are REALLY cool!!!
The above is “inspired by a true story”, of course. However, all resemblance to real characters is purely coincidental.
Category: analytics big data security SIEM Tags: analytics, big data, security, SIEM
by Anton Chuvakin | April 12, 2013 | 1 Comment
While listening to the keynote by Vice Admiral Gerald Beaman at a recent SINET ITSEF event, the following occurred to me: now, in 2013, all the “hottest” security thinking has military roots … again. Kill chain, defense, intelligence, adversary, TTPs, campaigns, engaging [the adversary], even the whole cyber thing: all of these have roots in the DoD, the DIB or the surrounding community.
Here are a few more examples observed recently:
- Incident response practices are moving away from the automatic “find badness – flatten the box” approach and (in some cases!) have started to include deception (such as honeypot redirection, much discussed for a decade, but done VERY rarely – until now) and attacker observation in live production environments [please recover from your shock quickly]. To fight the adversary, you need to know what they are likely to do next, and not just “make them go away” for now. Winning the battle [at this one PC] is not winning the war. (A toy honeypot sketch follows this list.)
- Goals of some security programs have shifted to mission resilience/survivability, as opposed to control compliance and pondering “are we secure?” [I totally get that this is old news for The Enlightened Few aka “security-haves” - the point is not that there is this dude who can now say “I told ya so”, but that it is actually happening!] This change has obvious military roots, IMHO, as few wars were won because all tanks had proper maintenance done …
- Focus on the threats, long predicted by some people and dismissed by the majority, is slowly replacing the view that “we can do nothing about the threats, let’s focus on vulnerabilities.” If threats are assumed to be persistent, doing something about them becomes an unavoidable priority, since one cannot fix all the vulnerabilities, all the time. Obviously, there was no age in history when the military ONLY perfected armor and ignored the gun (spear, bow, missile, laser, malware, whatever).
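Since honeypot redirection came up above, here is a toy sketch of the most basic building block: a decoy listener that logs whoever touches it. Real deception deployments (redirection, decoy content, counter-intel) are far more involved; the port and names here are arbitrary:

```python
import socket
from datetime import datetime, timezone

def honeypot(port: int = 2222):
    """Listen on a decoy port and log whoever knocks; observe, do not engage."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    print(f"Decoy listening on port {port}")
    while True:
        conn, (ip, src_port) = srv.accept()
        print(f"{datetime.now(timezone.utc).isoformat()} touch from {ip}:{src_port}")
        conn.close()  # nothing legitimate should ever connect here

# honeypot()  # runs forever; try it in a lab, not in production
```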
In the same speech, it was also quite interesting to observe the admiral take a very long-term view of information security and put some of its current challenges in the context of a technology gap (= the US has fewer people to field a military than some other countries, thus it must maintain a technology advantage). Quite a few other organizations can benefit from the same focus on the mission [= business] and its survival. BTW, here is a fun fact: this excellent paper [PDF] treats “a zero-day wielding professional attacker” as … threat Tier 3 of 6. Who is Tier 6? Well, read the paper.
On top of this, I have recently noticed the following phenomenon, a few times already: Google for “site:new_security_vendor.com compliance” and see NO RESULTS. Despite that, “compliance is not dead”, and a long list of security basics needs to be executed first – and executed well! However, the cutting edge is most definitely no longer anywhere near compliance …
P.S. Yes, this post is a bit of a rant or an incomplete thought. Well, that is why it is tagged philosophy
Category: philosophy security Tags: security
by Anton Chuvakin | April 11, 2013 | 2 Comments
Here is a useful resource on SIEM that has been recently updated by Mark Nicolett and Kelly Kavanagh: the SIEM RFP Toolkit.
“Organizations that need to improve their log management, compliance reporting or real-time security event management capabilities can benefit from a security information and event management (SIEM) technology deployment. The SIEM project team should engage compliance stakeholders, security operations, network operations, and other groups that will ultimately use SIEM reports and monitoring functions. The SIEM project team can then produce an RFP that translates the organization’s needs into a list of requirements that the SIEM technology solution must meet.”
It goes without saying that this is for you to CUSTOMIZE (!!!) – NOT to use verbatim!
BTW, according to recent Gartner research, “Data loss prevention and SIEM were the fastest-growing segments” of the overall security market.
Category: announcement SIEM Tags: SIEM