In a very distant past, security monitoring was a simple activity. A single person would grab logs from the firewall, the IDS and maybe an authentication system and look for bad stuff, usually with some scripts or slightly more sophisticated tools. It quickly evolved into log management and SIEM, where logs from different technologies are aggregated and correlated in real time, then presented to a team of analysts looking for potential security incidents. For a long time, the challenge of effective security monitoring was dealing with increasing event volumes, false positive rates and additional log sources to consider.
With increasing computational power and the adoption of data science concepts by security, researchers and start-ups have been creating many new tools to improve security monitoring. Some of those tools capture data directly from the network, while others pull data from the SIEM or another centralized event repository to apply their techniques. The interesting aspect of these technologies is that they are loosely coupled to SIEM systems, when connected at all. Organizations can leverage them even without a SIEM in place; in fact, many organizations are deploying these tools to compensate for their SIEM blind spots. Because the tools don't need to be integrated, they can also be operated independently. There is no need to talk to the SOC team running the SIEM when deploying your cool new UEBA (Gartner access required), NBA or NFT.
However, as with any defense strategy, coordination of efforts is key to success. There are many examples, in both the physical and cyber worlds, of things being detected by one group only to be forgotten in another group's queue. Doing things in isolation is a recipe for disaster, and that has been proven over and over again. Does that mean security systems must be implemented and controlled by a single group, always acting as just another event source for the top-of-the-pyramid SIEM?
I don't think so. Some of the new tools require different skills to operate, so trying to keep everything under the same group may be too complex (maybe impossible?). There may be benefits to having separate monitoring groups looking for different threats or applying different methods. But one thing can't be left behind: coordination. Security information has to flow between the groups doing security monitoring, and suspicious activities must be reported to those in charge of incident investigation and response.
Security information flow is just like working with threat intelligence; some organizations are pretty good at obtaining and using externally provided TI. With multiple security monitoring groups, each one must be prepared to consume and provide TI to its peers. It doesn't have to be centralized; it can follow a federated model. Instead of everything operating as an event source for a SIEM, the groups could keep operating independently while communicating through something like a Threat Intelligence Platform (TIP).
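To make the federated model concrete, here is a minimal sketch of the exchange pattern: each monitoring group publishes indicators to a shared hub and subscribes to what its peers produce, without any group's tooling being integrated into another's. The names `TIPHub` and `Indicator` are hypothetical and purely illustrative; a real deployment would use an actual TIP product and a standard format such as STIX.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Indicator:
    """A minimal threat indicator shared between monitoring groups."""
    source_group: str  # which monitoring team produced it
    ioc_type: str      # e.g. "ip", "domain", "hash"
    value: str
    context: str       # why it is considered suspicious

class TIPHub:
    """Federated exchange point: groups publish and subscribe here,
    but keep detecting and operating independently."""
    def __init__(self) -> None:
        self._subscribers: List[Callable[[Indicator], None]] = []

    def subscribe(self, handler: Callable[[Indicator], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, indicator: Indicator) -> None:
        # Fan the indicator out to every peer group that subscribed.
        for handler in self._subscribers:
            handler(indicator)

# Example: a UEBA team shares a suspicious IP; the SOC running the
# SIEM picks it up without the two tools being wired together.
hub = TIPHub()
soc_queue: List[Indicator] = []
hub.subscribe(soc_queue.append)  # the SOC's intake queue
hub.publish(Indicator("ueba-team", "ip", "203.0.113.7",
                      "anomalous login pattern"))
```

The point of the pub/sub shape is that adding a new monitoring group means one more subscriber, not another point-to-point integration with every existing tool.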
If security monitoring cannot be centralized and operated by a single team, consider a federation model. It works decently for law enforcement (in places where law enforcement works ;-)), and it could also work for cyber security.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.