Ben Tomhave

A member of the Gartner Blog Network

Ben Tomhave
Research Director
1 year at Gartner
19 years IT Industry

Ben is conducting research within the Security and Risk Management Strategies team under Gartner for Technical Professionals.

Updating GTP’s DLP Coverage

by Ben Tomhave  |  November 12, 2014

It’s been a couple of years since the last update of our DLP coverage. In the process of updating it this go-round, I’ll be taking the reins from Anton Chuvakin and picking up primary coverage of DLP for the SRMS team. In addition to revising the existing documents (Enterprise Content-Aware DLP Solution Comparison and Select Vendor Profiles, and Enterprise Content-Aware DLP Architecture and Operational Practices; GTP subscription required), we’ll also be spinning off a foundational document that can be referenced when getting started with a project.

Overall, we have some very hard questions for the vendors to answer, such as whether DLP is viable and valuable going forward, or whether it’s instead too expensive and difficult to maintain and operate. We’re also seeking out stories from our end-user clients (good or bad!) about DLP projects and implementations. If you have a success or failure story, we’d love to hear about it (it doesn’t need to be cited by name!). Feel free to leave a comment on this post, email me, or message me on Twitter.

I’m looking forward to this research process and would love to hear about your experiences!

Category: Research, Uncategorized

Recent GTP Security Research

by Ben Tomhave  |  November 12, 2014  |  Comments Off

Before resuming my philosophical meanderings about infosec and info risk mgmt, I wanted to first highlight some recent research for you all. All of the following require a GTP subscription (go here to contact us if you’re interested in getting access).

2015 Planning Guide for Security and Risk Management
02 October 2014 G00264325
Analyst(s): Ramon Krikken

– Scaling security practices is difficult and generally unsustainable.
– Breaches are inevitable, so it’s how well you monitor and respond that matters most.
– The regulatory environment is still volatile, and we expect that to only get worse in the near term.

Implement Information Classification to Enhance Data Governance and Protection
23 October 2014 G00267242
Analyst(s): Heidi L. Wachs

– Information classification is powerful and useful.
– Get a program underway, earning some quick wins to demonstrate value.
– Don’t hit paralysis – classify and move on to remediation!

Approaches for Content and Data Security in Microsoft SharePoint Server
21 October 2014 G00265848
Analyst(s): Ben Tomhave

– SP security is primarily about content protection.
– Understanding SP scale is key in determining the best class of security solution.
– SP has a lot of built-in capabilities!

Interested in more information? We look forward to speaking with you! :)

Category: Uncategorized

Writer’s Block and the Long Summer

by Ben Tomhave  |  November 5, 2014  |  Comments Off

It’s been several months since I last posted here, and for good reason. For one thing, June-October tends to be a blur. It started off with Gartner’s Security & Risk Management Summit over in National Harbor, MD, followed by a couple weeks of vacation, a week of sales support, a couple weeks of catch-up, then Gartner’s Catalyst Conference out in San Diego. A couple weeks later school resumed, which always keeps my household busy, followed by a handful of smaller events and a lot more sales support.

To top things off, this analyst hit the wall and got stuck in a good old-fashioned rut of writer’s block. Oh, the horror – as someone expected to produce written content on a regular basis – to enter a period where one’s ability to coherently put thoughts into writing simply wasn’t functional (some might argue that’s still the case ;)). Thankfully, having finally gotten all caught up from travel, speaking, sales support, etc., this analyst is finally getting back into the swing of things and getting writing underway.

Things will likely continue to be stop-n-go for the next couple months, but rest assured, this blog spot isn’t dead and will be revived in the coming weeks.

Category: Uncategorized

Things That Aren’t Risk Assessments

by Ben Tomhave  |  July 24, 2014  |  2 Comments

In my ongoing battle against the misuse of the term “risk,” I wanted to spend a little time here pontificating on various activities that ARE NOT “risk assessments.” We all too often hear just about every scan or questionnaire described as a “risk assessment,” and yet when you get down to it, they’re not.

As a quick refresher, to assess risk, you need to be looking at no fewer than three things: business impact, threat actor(s), and weaknesses/vulnerability. The FAIR definition talks about risk as being “the probable frequency and probable magnitude of future loss.” That “probable frequency” phrase translates to Loss Event Frequency, which is comprised of estimates that a threat community or threat actor will move against your org, that they’ll have a certain level of capabilities/competency, and that your environment will be resistant to only a certain level of attacks (thus representing weakness or vulnerability).
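To make the FAIR framing concrete, here’s a minimal Monte Carlo sketch (in Python, with entirely hypothetical numbers) that combines a sampled loss event frequency with a sampled loss magnitude to estimate annualized loss. The function name and input ranges are my own illustration, not part of FAIR itself:

```python
import random

def simulate_annualized_loss(lef_min, lef_max, loss_min, loss_max, trials=10_000):
    """Monte Carlo sketch of the FAIR view of risk: sample an uncertain
    loss event frequency (events per year) and an uncertain loss
    magnitude per event, then average the total annual loss."""
    total = 0.0
    for _ in range(trials):
        events = random.randint(lef_min, lef_max)        # probable frequency
        total += sum(random.uniform(loss_min, loss_max)  # probable magnitude
                     for _ in range(events))
    return total / trials

# Hypothetical inputs: 0-4 loss events per year, $10k-$250k per event.
random.seed(42)
estimate = simulate_annualized_loss(0, 4, 10_000, 250_000)
print(f"Estimated annualized loss exposure: ${estimate:,.0f}")
```

Note that even this toy model forces you to state business impact (the loss range) explicitly, which is exactly what a bare scan finding never does.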

Oftentimes, that “probable magnitude” component is what is most lacking from alleged “risk-based” discussions. And, of course, this is where some of these inaptly described tools and capabilities come into play…

Questionnaires

A questionnaire is just a data gathering tool. You still have to perform analysis on the data gathered, supplying context like business impact, risk tolerance/capacity/appetite, etc, etc, etc. Even better, the types of questions asked may result in this tool being nothing more than an audit or compliance activity that has very little at all to do with “risk.” While I realize that pretty much all the GRC platforms in the world refer to their questionnaires as “risk assessments,” please bear in mind that this is an incorrect characterization of a data gathering tool.

Audits

The purpose of an audit is to measure actual performance against a desired performance. Oftentimes, audits end up coming to us in the form of questionnaires. Rarely, if ever, do audits look at business impact. And, one could argue that this is ok because they’re really not charged with measuring risk. However, we need to be very careful about how we handle and communicate audit results. If your auditors come back to you and start flinging the word “risk” around in their report, challenge them on it (hard!!!) because dollars-to-donuts, they probably didn’t do any sort of business impact assessment, nor are they even remotely in the know on the business’s risk tolerance, etc.

Vulnerability Scans and AppSec Testing

My favorite whipping-boy for “risks that aren’t risks” is vulnerability scans. I had a conversation with a client last week where they had a very large (nearly 100-page) report dropped onto them that was allegedly a “risk assessment,” but in reality was a very poor copy-n-paste of vuln scan data into a template that had several pages of preamble on methodology, several pages of generic explanatory notes at the end, and did not appear to do any manual validation of findings (the list of findings also wasn’t deduplicated).

Despite the frequent use of “risk” in these reports, they most often describe “ease of exploit” or “likelihood of exploit.” However, even then, their ability to estimate likelihood is pretty much nonsensical. Take, for instance, an internal pentest of a walled-off environment. They find an open port hosting a service/app that may be vulnerable to an attack that could lead to remote root/admin and is easy to exploit. Is that automatically a “high likelihood” finding? Better yet, is it a “high risk” finding? It’s hard to say (though likely “no”), but without important context, they shouldn’t be saying anything at all about it.

Of course, the big challenge for the scanning vendors has always been how to represent their findings in a way that’s readily prioritized. This is why CVSS scores have been so heavily leveraged over the years. However, even then, CVSS does not generally take into account context, and it absolutely, positively is NOT “risk.” So, again, while potentially useful, without context and business impact, it’s still just a data point.
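To illustrate (and only to illustrate) why context matters, here’s a hypothetical weighting function. The factors and their scales are invented for this sketch and are not part of CVSS or any scanner’s actual scoring:

```python
def contextual_priority(cvss_base, asset_criticality, exposure):
    """Hypothetical illustration only: scale a CVSS base score (0-10) by
    business-context factors (each 0.0-1.0). The result is a triage aid,
    NOT risk -- it still says nothing about business impact in dollars."""
    return round(cvss_base * asset_criticality * exposure, 1)

# The same CVSS 9.8 finding lands very differently once context is applied:
print(contextual_priority(9.8, 1.0, 1.0))  # internet-facing, business-critical
print(contextual_priority(9.8, 0.2, 0.1))  # walled-off internal test box
```

Even with context factors bolted on, the output is still just a prioritized data point, not a statement about probable loss.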

I could rant endlessly about some of these things, but I’ll stop here and call it a day. My exhortation to you is to challenge the use of the word “risk” in conversations and reports and demand that different language be used when it’s discovered that “risk” is not actually being described. Proper language is important, especially when our jobs are on the line.


Category: Risk Management

Three Epic “Security” Mindset Failures (“ignorance is bliss”)?

by Ben Tomhave  |  May 6, 2014  |  2 Comments

I don't care if it's a bug or a feature as long as Nagios is happy

I saw this graphic a couple of weeks ago while trolling Flickr for CC BY 2.0-licensed images, and it got me thinking… there are a number of mindset failures that lead us down the road to badness in infosec.

Consider this an incomplete list…

  • As long as [monitoring/SIEM] is happy, you’re happy.
  • As long as [auditor/checklist] is happy, you’re happy.
  • As long as [appsec testing / vuln scanner] is happy, you’re happy.

I’m sure we could all come up with a few dozen more examples, but for a Tuesday, this is probably enough to start a few rants… :) Part of what triggered this line of thinking for me was the various reports after the retail sector breaches about tens of thousands of SIEM alerts that were presumed to be false positives, and thus ignored. Kind of like trying to find a needle in a haystack.

(Image source (CC BY 2.0): Noah Sussman)


Category: Uncategorized

New Research: Security in a DevOps World

by Ben Tomhave  |  April 30, 2014  |  Comments Off

Hot off the presses: new research from Sean Kenefick and me titled “Security in a DevOps World,” which is available to Gartner for Technical Professionals subscribers.

Some of the key takeaways from this research include:

  • Automation is key! AppSec programs must find ways to integrate testing and feedback capabilities directly into the build and release pipeline in order to remain useful and relevant.
  • Risk triaging is imperative to help differentiate between apps with lower and higher sensitivity. We recommend leveraging a pace-layered approach to assist with building a risk triage capability.
  • Engineer for resilience. It’s impossible to stop all the bad things. Instead, we need to build fault-tolerant environments that can rapidly detect and correct incidents of all varieties.

We have also found that there continues to be some confusion around DevOps, and much trepidation as to the ever-shifting definition. Within security, this uncertainty translates into a negative perception of DevOps and, typically, asinine resistance to change. To that end, I wish to challenge five incorrect impressions.

DevOps doesn’t mean…

  • …no quality. QA doesn’t go away, but shifts to being more automated and incremental. This can be a Good Thing ™ as it can translate to faster identification and resolution of issues.
  • …no testing. Patently untrue. There should be testing, and likely lots of testing at different levels. Static analysis should occur on nightly code repository check-ins, and possibly at code check-in time itself. Dynamic analysis should occur as part of standard automated build testing. For high-risk apps, there should be time built-in for further detailed testing as well (like manual fuzzing and pentesting).
  • …no security. Security should be involved at the outset. Ideally, developers will work from a pre-secured “gold image” environment that already integrates host and network based measures. Further, as noted above, appsec testing should be heavily integrated and automated where possible. This automation should free appsec professionals to evolve into a true security architect role for consulting with developers and the business.
  • …no responsibility. On the contrary, DevOps implies empowerment and LOTS of shared responsibilities.
  • …no accountability. Perhaps one of the more challenging paradigm shifts is the notion that developers are now held directly accountable for the effects of their code. This accountability may not be as dire as termination, but it should include paging devs in the middle of the night when their code breaks something in production. The telephone game must end. This doesn’t mean removing ops and security from the loop (quite the opposite), but it changes the role of ops and security (IRM).
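The tiered testing described above (baseline automated gates for everyone, extra scheduled time for manual work on high-risk apps) could be sketched roughly like this; the gate names are illustrative, not a real tool’s API:

```python
def security_gates(risk_tier):
    """Hypothetical mapping from an app's risk tier to the appsec gates
    wired into its build pipeline (names are illustrative)."""
    gates = ["static_analysis_on_checkin",   # runs at code check-in / nightly
             "dynamic_analysis_in_build"]    # part of automated build testing
    if risk_tier == "high":
        # High-risk apps get scheduled time for deeper manual work.
        gates += ["manual_fuzzing", "pentest_window"]
    return gates

print(security_gates("high"))
```

The point of the sketch is simply that security testing is tiered and automated by default, not eliminated.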

I hope you enjoy this new research!

“Security in a DevOps World” (published April 29, 2014)

Category: Uncategorized

Where I’ll Be: Spring/Summer 2014 Events

by Ben Tomhave  |  March 27, 2014  |  Comments Off

A quick post… I’ll be traveling a bit this Spring and Summer to speak at a number of events. For non-Gartner events, we’re actively looking for GTP sales opportunities, so if you’ve been thinking about getting a subscription to Gartner for Technical Professionals, this could be your chance to meet face-to-face to discuss! :) For Gartner events, I will be available for 1-on-1s, as well as sales support as needed.

Here’s what I have scheduled through August:

Please drop me a note if you’ll be at any of these events and we can arrange to meet up. Also, if you’re interested in exploring a GTP subscription, please contact us and a sales rep will reach out to help coordinate. We love meeting with clients!

Hope to see you soon!

Category: Uncategorized

Discussing RA Methods with CERT

by Ben Tomhave  |  March 26, 2014  |  Comments Off

In follow-up to our paper, “Comparing Methodologies for IT Risk Assessment and Analysis” (GTP subscription required), Erik Heidt and I were given the wonderful opportunity to be guests on the CERT Podcast to discuss the work.

You can listen to the episode, as well as view program notes and a full transcript, at the CERT Podcast page here.

Category: Uncategorized

Incomplete Thought: The Unbearable “Bear Escape” Analogy

by Ben Tomhave  |  March 20, 2014  |  Comments Off

“You don’t have to run faster than the bear to get away. You just have to run faster than the guy next to you.”

The problem with this analogy is that we’re not running from a single bear. It’s more like a drone army of bears, which are able to select multiple targets at once (pun intended). As such, there’s really no way to escape “the bear” because there’s no such thing. And don’t get me started on trying to escape the pandas…

So… if we’re not trying to simply be slightly better than the next guy, what approach should we be taking? What standard should we seek to measure against?

Overall, I’ve been advocating for years, as part of a risk-based approach, that the focus should be on determining negligence (or, protecting against such claims). Unfortunately, evolving a standard of reasonable care takes a lot of time. It’s been suggested in some circles (particularly on The Hill) that the NIST CSF may fill that void (for better or worse). One challenge here, however, is that the courts are charged with determining “what’s reasonable,” and so in many ways we’ll be challenged in evolving this standard (that is, it’ll take a while).

At any rate, I believe there is an opportunity for constructing a framework (or perhaps a rubric would be a better outcome) by which people can start determining whether or not they’ve met a reasonable standard of care. Of course, one might also point out the myriad other standards in place that could serve in a similar capacity. I don’t think the CSF is remotely sufficient in its current incarnation, but that may improve over time.

It is probably worthwhile here to reinforce the point that “bad things will happen” and that it’s not so much a matter of “stop all the bad things,” but rather “manage all the bad things as best as possible” (at least after having exercised a degree of sound basic security hygiene). Anyone who’s familiar with my pre-Gartner writings will recognize the topics of resilience and survivability as key foci for risk management programs (and implicit in my thoughts and comments here).

But, how do you get to that point of a healthy, robust risk management program? Where do you start? How do you prioritize your work?

Here’s the priority stack I’ve been using lately:

  1. Exercise good basic security hygiene
  2. Do the things required of you by an external authority (aka “things that will get you fined/punished”)
  3. Do the things you want to do based on sound risk management decisions

This stack should tell you two key things. First, a reasonable standard has to consider a basic set of security practices applied across the board. It would probably consist of policies, awareness programs, and foundational practices like patch mgmt, VA/SCA, appsec testing (for custom coding projects), basic hardening, basic logging/monitoring/response, etc. Second, from the perspective of considering a negligence claim (bearing in mind that IANAL!), I think looking at high-level practices will be key, rather than delving into specific technical details.

For instance: Did a breach occur because a system wasn’t up to full patch level? If so, is a reasonable patch mgmt program in place? If so, why wasn’t this patch applied? What does the supporting risk assessment show about why this particular patch was not applied?

Lather, rinse, repeat.

Obviously, more could be said… but, hopefully this stub gets you started thinking about how the business may need to protect itself from legal claims in the future, and how an evolved standard for “reasonable care” (as determined in court) may impact security practices and expectations for security performance.

Category: Uncategorized

Join Us! SRMS has an opening!

by Ben Tomhave  |  March 20, 2014  |  Comments Off

We’re hiring for the Security & Risk Management Strategies (SRMS) team within Gartner for Technical Professionals. Full details here.

The official listing covers a LOT of territory, but here are some things to consider:

  • This position is with Gartner for Technical Professionals (GTP), which is distinctly different from the IT Leaders (ITL) team (which produces MarketScopes, Magic Quadrants, etc.). Our focus is on an architectural perspective, and we often approach research from the perspective of addressing technical questions.
  • Positions are generally work-from-home/remote! :)
  • The amount of travel required tends to be fairly low.
  • Doing research is the top priority!
  • The workload is a nice balance between research, writing, speaking, and “other stuff” (like sales support).
  • There are plenty of opportunities for client interactions.
  • The pace of research is very nice, in large part because our documents tend to be longer and more thorough. For example, a typical GTP document will run anywhere from 25 to 40 pages, whereas ITL papers will typically be under 15 pages. Our audiences tend to be different, though security and risk management do tend to have some overlap.
  • SRMS is an awesome team! This is really a fun bunch to work with.
  • The sky’s the limit! No kidding. There is no limit to the available research opportunities. It makes the job a ton of fun!

So… those are my quick thoughts. I joined this team in June 2013 and am loving it. If you’re looking for an opportunity to do technical research, then this might be the opportunity you’ve been waiting for!

Apply here.

If for some reason that link doesn’t work, please visit and search for “IRC26388.”

Feel free to reach out to me if you have questions or just want to chat about the opportunity.

Good luck!

PS: You can read Anton’s thoughts on the position, too.

Category: Uncategorized