One of the key, absolutely key, factors in the sharing of security-relevant information (be it IOCs, custom malware, detection specifics, or breach costs) between organizations is TRUST. Even if an organization is promised the world in exchange for sharing its information with a particular group, sharing is unlikely in the absence of trust. Value may drive the interest in sharing, but trust drives the act of sharing itself!
The model of trust seems to be at the center of the success or failure of any data sharing “club.” The most successful model, therefore, is based on person-to-person (P2P) trust. I trust “John” to make good use of the information, but I don’t necessarily trust “John’s” employer (no P2Org trust). Similarly, my employer may not trust “John” (no Org2P trust either). The same logic applies when sharing with a group: ultimately, it comes down to trusting each member of the group individually. On the other hand, this model is neither very scalable nor easy to engineer, since trust needs to be built over time. This trust model also helps explain why relatively few like to share security data with “the government” – the ultimate faceless, impersonal entity (Pro tip I heard the other day: try not to share a malware sample developed by government X with the law enforcement arm of said government).
Another dimension of trust for security data sharing is the chain of trust, or transitive trust. I may trust “John,” but do I trust all the people “John” trusts? My malware sample may end up in the hands of people I personally don’t trust, because I may not even know them. Furthermore, if “Sam” entrusts his malware sample to me, does it automatically mean I can share it with “John,” whom I trust? Thus, the chain of trust looks different for a malware sample I found myself vs. one that was entrusted to me. Furthermore, trust is really the only thing that stops such “re-sharing.” After all, the most private email can get forwarded, the most secret conversations retold… unless there is justified trust that it won’t happen.
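The re-sharing logic above can be sketched in code. This is a purely hypothetical toy model (the names, the graph structure, and the `can_reshare` rule are my illustrative assumptions, not any real system): P2P trust is a directed graph, and a holder may only re-share data if the *original* source trusts the recipient directly, because transitive trust cannot be assumed.

```python
# Hypothetical sketch: person-to-person (P2P) trust as a directed graph.
# All names and rules here are illustrative assumptions, not a real protocol.

# Direct trust edges: truster -> set of people they trust
trust = {
    "me":   {"john"},
    "john": {"me", "alice"},
    "sam":  {"me"},
}

def trusted_by(source):
    """People the source trusts directly (no transitivity assumed)."""
    return trust.get(source, set())

def can_reshare(source, recipient):
    """Re-sharing is allowed only if the ORIGINAL source of the data
    trusts the recipient directly -- trust does not chain."""
    return recipient in trusted_by(source)

# Sam entrusted a sample to me; I trust John, but Sam does not:
print(can_reshare("sam", "john"))   # False: Sam never trusted John
# A sample I found myself follows MY trust edges:
print(can_reshare("me", "john"))    # True
```

The key design choice mirrored here is that the check always runs against the original source, not the current holder – which is exactly why a sample I found myself has a different sharing perimeter than one entrusted to me.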
One of the cases where the lines of trust get most strained is the sharing of advance warning of an attack. The famous example of this (Churchill and Coventry), whether mythical or real, demonstrates the challenges of sharing such intelligence. A case I’ve recently heard of is organization X capturing a beta version of malware designed for a targeted attack on organization Y (presumably, the sample either escaped from the “evil QA lab” or was purposefully released into the wild as a routine part of QA testing). Should organization X call organization Y? You go ponder that …
Thus, different levels of trust apply to different types of data: I may trust “John” with my prized collection of custom malware, but not with the methods I used to accumulate it in the first place. In other cases, the information itself can be shared, but the source of the information cannot (and, presumably, cannot be easily guessed either). In yet another set of cases, only sanitized information (a subset that can be reliably sanitized, i.e. not pcaps) can be shared with the wider audience, while “full content” may only be shared with “John,” whom I trust fully.
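These per-data-type rules can be made concrete with a small sketch. The table below is entirely illustrative (the data types, audience labels, and rules are my assumptions, loosely inspired by TLP-style markings rather than any actual standard): each artifact type maps to the audiences it may reach.

```python
# Hypothetical sketch of per-data-type sharing rules. The categories and
# audience names are illustrative assumptions, not a real marking scheme.

SHARING_RULES = {
    "sanitized_ioc":     {"wider_audience", "trusted_peer"},  # reliably sanitizable
    "full_pcap":         {"trusted_peer"},                    # full content: trusted peer only
    "malware_sample":    {"trusted_peer"},
    "collection_method": set(),                               # shared with no one
}

def may_share(data_type, audience):
    """Return True if this data type may be shared with this audience."""
    return audience in SHARING_RULES.get(data_type, set())

print(may_share("sanitized_ioc", "wider_audience"))      # True
print(may_share("full_pcap", "wider_audience"))          # False
print(may_share("collection_method", "trusted_peer"))    # False
```

Even a toy policy table like this makes the asymmetry explicit: the same pair of parties can be inside the trust perimeter for one artifact type and outside it for another.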
Finally, no well-designed XML protocol or set of APIs will create this person-to-person trust. However, it may make sharing easier where such trust already exists. In later posts, I will describe some examples of these protocols and their associated trust models, such as The Honeynet Project hpfeeds system, CIF and others. BTW, PPD-21 (and its “fact sheet”) does not say anything about what information could be entrusted to which parties, private or public. Presumably, that would be defined by NIST at later stages.
- On Security Data Sharing Research
- On Security Data Sharing
- Our Log Standards Paper Publishes (mentions select data sharing standards)
- More on DoS and Shared Security