Ben Tomhave

A member of the Gartner Blog Network

Ben Tomhave
Research Director
1 year at Gartner
19 years IT Industry

Ben conducts research within the Security and Risk Management Strategies team as part of Gartner for Technical Professionals.

Updated Research on AppSec Testing

by Ben Tomhave  |  February 2, 2015  |  Comments Off

As of January 30th, we have an updated paper out titled “How to Perform Application Security Testing for Web and Mobile Applications” (GTP subscription required). Following is the summary from the document:

“Application security testing remains a critical application security practice for developers, testers and security team members. This document explains how to implement three phases of AST throughout the software life cycle.”

This paper continues the narrative set forth in our appsec guidance framework, "Application Security: Think Big, Start With What Matters," which lays out the overall recommended structure for an appsec program.

We hope that you’ll find this research to be useful and welcome your feedback!

Category: Uncategorized

You Can’t Fix Stupid: Renewed Calls For Cybersecurity Legislation (U.S.)

by Ben Tomhave  |  January 14, 2015  |  Comments Off

(yes, I’m feeling a bit cheeky today;)

As you've undoubtedly heard by now, President Obama has renewed calls for increased cybersecurity legislation, all apparently because Sony Pictures Entertainment (SPE) got hacked? If you've not heard, check out the mainstream press coverage here.

Additionally, the SEC has signaled that it's considering increasing its disclosure requirements. This is perhaps a bit more sensible than the proposed legislation, since stakeholders have a right to transparency into the companies they're supporting in order to better manage their investment/portfolio risk.

The EFF has come out with a rebuttal to the President that pretty much captures most of the security community’s response to the proposal. tl;dr: the proposal is political rubbish, as per usual.

A few quick thoughts… and, note, this represents my personal opinions, not the opinion of my employer, etc, etc, etc….

1) You can't legislate away cybersecurity "risk." And, you certainly can't do so with checklists alone. The simple fact of the matter is that we continue to move through a transitional period in the midst of the digital industrial revolution, and until humans and automation catch up with other technological advances, stuff is going to happen. While we should be holding people/organizations accountable for poor decisions, that does not mean we should be defaulting to a checklist-based approach that will never be complete or adequate.

2) Bringing back broken old ideas doesn't suddenly make them good ideas. CISPA and related cybersecurity legislation insanity were never good ideas, and these proposed changes aren't any better now than they were 3-4 years ago. Some have argued that the proposed changes would make most security research illegal, and that's probably not too far off the mark this time. Because that's worked so well in the past…

3) Making illegal actions more illegal doesn't stop them. There's a certain fallacious logic floating around these days that simply boggles the mind. It started with anti-Second Amendment ("gun control") legislation, and continues now with these cybersecurity rules. Criminals don't obey the laws. If they did, they wouldn't be criminals. The attack on SPE is absolutely a violation of federal law under the CFAA. And, while it's true that the CFAA really needs to be revised (it truly sucks), increasing penalties is in no way a solution, nor is making other security-related activities illegal going to help at all. If the deterrence isn't sufficient, then certainly, make those tweaks. However, at the same time, let's remember that an attacker based outside our jurisdiction is not going to be deterred by any laws inside our jurisdiction. Making changes that punish the security community for doing research is patently unhelpful.

4) The "right" solution is to make people responsible and accountable. I've written about this several times before (see here and here for a couple older examples). The fundamental problems are that businesses still aren't acting responsibly around infosec and IT risk mgmt., nor are they accepting responsibility. Let's consider, for example, just how pwnd SPE has apparently been. How is that even possible in this day and age? They're not a small company. Without knowing all the details, I have to wonder about the fault tolerance there, as well as the monitoring, detection, and response capabilities. There is a certain degree of culpability that enterprises must accept inasmuch as they are ultimately responsible for ensuring that they've met a reasonable standard of due care. And, to a degree, it seems that most businesses experiencing large compromises in 2014 were somewhat resilient, but are they resilient enough? That remains to be seen…

5) We have to push to the next generation of security practices, which means DevOps and security ops automation. As the title of this piece suggests, you can’t fix stupid, which is a glib way of pointing out that humans are fallible (and quite possibly dangerously ignorant). However, more importantly, we need to realize that traditional security practices simply do not scale. If they did, and if checklists were sufficient, then we wouldn’t be having these conversations. Instead, we’re finding that we simply don’t have adequately scalable solutions and resources. The only answer will be automation in the security space in order to at least improve the signal-to-noise ratio (SNR), cleaning up all those high-frequency/low-impact events, instead of having IT and opsec resources in constant firefighting mode, unable to see the forest for the trees.
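
For illustration only, here's a minimal sketch of the kind of triage automation I'm describing, assuming a generic alert feed; the field names and thresholds are hypothetical rather than any particular SIEM's API.

```python
# Hypothetical sketch: auto-triage a generic alert feed so that
# high-frequency/low-impact noise gets summarized instead of paged to a human.
# Field names and thresholds are illustrative, not a real SIEM API.
from collections import Counter

NOISE_THRESHOLD = 50  # same signature more than 50 times in a batch = likely noise

def triage(alerts):
    """Split raw alerts into 'escalate now' and 'roll up into a daily digest'."""
    frequency = Counter(alert["signature"] for alert in alerts)
    escalate, digest = [], []
    for alert in alerts:
        noisy = frequency[alert["signature"]] > NOISE_THRESHOLD
        low_impact = alert.get("impact", "low") == "low"
        (digest if noisy and low_impact else escalate).append(alert)
    return escalate, digest

escalate, digest = triage(
    [{"signature": "port-scan", "impact": "low"}] * 100
    + [{"signature": "admin-login-from-new-country", "impact": "high"}]
)
print(len(escalate), len(digest))  # 1 100
```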


When all is said and done, some rule changes may help move the needle. For example, NIST's current draft of SP 800-171 is particularly interesting inasmuch as it will mandate cybersecurity requirements for private industry via FISMA and the acquisitions process (that is, if you want to do business with the USG, then you'll need to demonstrate a modicum of infosec due diligence). However, there will be limits to changes like this because infosec cannot be solely addressed through checklists. As upcoming research will discuss, a two-pronged approach to security is necessary: one that combines checklist-based basic security hygiene with a risk-based approach to enhance, extend and mature specific practices to best serve the needs and risk tolerance of the business.

Category: Uncategorized

Sonys and Targets and Heartbleeds! Oh My!

by Ben Tomhave  |  January 9, 2015  |  Comments Off

Now that we can soundly close the book on 2014, it's perhaps a good time for a quick look back as we consider our best path forward. 2014 was indeed the year of infosec insanity, based on the sheer number of breaches, the size of the largest of them, the number of "major, earth-shattering" vulnerability disclosures, etcetera etcetera etcetera (if you didn't read that last bit in the voice of the King of Siam, then check it out here).

First, let’s ponder back, ever-so-briefly, on 2014… wow, it kinda sucked, didn’t it? Well, maybe only a little. Personally, I only had 2 cards reissued last year, and it’s because I misplaced one and wore out the other. Well, ok, the worn out one also got compromised, now that I think about it, but this was still an improvement over 2013 when I had 3 cards compromised in 4 wks (card skimming ring on a major highway near where I live targeting all the gas stations).

We certainly saw a heaping pile of major breach disclosures in 2014… Target seems to have been the most impactful one in terms of the infosec trade, in large part because of the impact on executives and board members there. I've heard from dozens of organizations that are "benefiting" from the increased interest in and emphasis on cybersecurity going forward. Are there lessons to learn from that event? Probably, but the real story has yet to be written…

Sadly, the same can be said about the Sony breach… we really don’t know a whole lot, except that they’re apparently still down (“More than six weeks later, the studio’s network is still down – and is expected to remain so for a few weeks, as techs work to rebuild and get it fully back online.” [src]) We aren’t even sure of what the business impact will be from the event (“‘I would say the cost is far less than anything anybody is imagining and certainly shouldn’t be anything that is disruptive to our budget,’ Lynton told Reuters in an interview” [src]).

So, we’re left to wonder, after all the hoopla over Target and Home Depot and Staples and Sony and myriad others… what lessons are we to learn from 2014?

First, remain calm and carry on. For all the major breaches that occurred, we’ve not seen businesses fail (yet). As such, something must be working, even if it’s just an adequate amount of insurance / self-insurance. That said, don’t get cocky! Breaches will happen. We need to be prepared for them. And, more importantly, we cannot rely on good will and insurance to bail us out in the end.

Second, get those basics tackled, and ASAP. Based on the 2014 Verizon DBIR, the vast majority of attacks still reflect a failure to have basic practices in place. If the rumors are to be believed, the Target incident may have involved some weak vendor credentials, inadequate network segmentation, and improperly managed malware detection and monitoring (see "It turns out Target could have easily prevented its massive security breach" and the U.S. Senate Committee on Commerce, Science, and Transportation report "A 'Kill Chain' Analysis of the 2013 Target Data Breach").

Third, it's time to pivot hard toward the cloud services model while optimizing detection and response capabilities. That means pushing into virtualization and various cloud services models, making use of orchestration and configuration management tools that facilitate ops automation in order to more rapidly address issues. It likely also means shifting toward DevOps for similar reasons. And, of course, it means investing heavily in visibility into your environment in order to detect events as quickly as possible. Bad things will happen, so it's imperative we find ways to detect them ASAP and then interdict as best we can.

At the end of the day, the time is right to start investing in better tooling for ops automation, detection and response. All of that must be underpinned by a solid risk management capability, which will facilitate prioritization of projects and environments, as well as pulling impact analysis awareness forward to an operational level to better achieve top-to-bottom alignment. We're critically overdue for an evolutionary step (or leap) forward, as we're simply not able to scale resources adequately to keep pace. Solving those scale issues will be critical here in 2015 (and beyond).

Category: Planning, Risk Management

Facebook and the Derpness of Enabling Their 2FA

by Ben Tomhave  |  December 1, 2014  |  Comments Off

I was awoken around 5am on the Saturday after Thanksgiving by multiple text messages from Facebook instructing me to click a link and enter a code to reset my password. It seems someone decided to try to take over my account. This led me to conclude that now would be a good time to quit putting off enabling 2-factor authentication (2FA) for my account. What should have been a very simple process was complicated (slightly) by a degree of true derpitude: in order to enable 2FA for my account, Facebook first insisted that I switch to a browser configuration (or a different browser) that wasn't set to clear cookies after each session. I received the following error/explainer:

[Screenshot of Facebook's error message explaining that 2FA cannot be enabled while the browser is set to clear cookies after each session]

This message annoyed me greatly, for three reasons.

1) Why was I being asked to reduce the security settings on my browser in order to increase the security on my account?

First and foremost, this is just plain stupid. Clearing my cache is a /good/ thing, which all of you should do on a regular basis. It helps improve your privacy (to a small degree, anyway). It helps keep the web site content you're accessing correct and fresh (we still see some cache issues when sites are updated). More importantly, though, I should never be asked to decrease my security and privacy settings in one area just to improve my security in another area. Which leads me to…

2) Why the erroneous implied correlation between cookies and 2FA?

Clearly, there are two concerns driving this behavior by Facebook: usability and trackability. In the first case, I would imagine they foresaw (or maybe actually saw) an uptick in frustration with logging into the site and having to use a one-time code each time. They may also have viewed it as a usability issue that could be eased by storing a permanent cookie. However, let's be honest… if I'm enabling 2FA, it's because I want that second out-of-band factor there pretty much every single time.

Which leads me to believe that there is, in all likelihood, a more nefarious and profit-driven motive behind not allowing me to enable 2FA without also allowing them to set long-term (or permanent) cookies: advertising and tracking. By setting a permanent cookie, they can help "improve" usability while also getting a long-term tracking device well-embedded within my browser, one which carries over to more than just their own site (read this article for more on the extensive tracking they're doing). Facebook is a commercial business, and publicly traded to boot, so they need to be able to grow their business and show value for stakeholders. I understand /why/ they would want to track, but… they also need to honor "do not track" decisions by users (see EFF's coverage of the topic).

Setting these considerations aside, though, leads me to a more important usability question…

3) Why not warn instead of preventing 2FA until one relents and adopts the weaker settings (at least temporarily)?

It seems rather heavy-handed to outright prevent users from enabling 2FA until they reduce the security settings in their browser. I understand advising people of the potential usability issue ("if you don't change your cache settings, then you'll be prompted for a code every single time you login"). But, why prevent it altogether? To me, that seems to send the wrong message to users. We want them to enable 2FA! As important as Facebook has become in social and professional circles, the thought that extra hoops must be jumped through to improve account security is a bit asinine.

Also, interestingly and as expected, the weaker browser settings are /not/ required in order to use 2FA and have a normal experience. I went through the step of using a weakened browser config in order to jump through their hoop, then immediately cleared my cache and reverted my settings, and had no problem whatsoever logging into my account using a one-time code. As such, it seems clear that this is less about usability and more about profit motive, which is a lousy message to send to customers.


At the end of the day, Facebook is an easy target, but they've seemingly handily won the social media platform competition. However, it concerns me greatly when a major player in the online space does patently silly things like increasing the difficulty of enabling 2FA on accounts. The secure choice should be the easy choice, and that certainly is not the case today for Facebook's 2FA. Which is not to say it's super-difficult to enable; just that it's harder than it needs to be. For less tech-savvy users, I'm going to go out on a limb and guess that encountering this message would be enough to stop them from proceeding any further to get 2FA enabled.

Hopefully Facebook Security will wake up and correct this situation. There's no reason security and usability can't be complementary. There's no reason that the secure choice can't be the easy choice. For most people (IE and Safari users under default configs), this is likely a non-issue. However, that means Facebook is being almost openly antagonistic toward more advanced users, and to what end?

Well, enough ranting on the topic… suffice to say, forewarned is forearmed, and if you’re looking to enable 2FA on your Facebook account, don’t be surprised by this little hiccup.

Category: Common Sense

Updating GTP’s DLP Coverage

by Ben Tomhave  |  November 12, 2014  |  Submit a Comment

It's been a couple years since the last update of our DLP coverage. In the process of updating it this go-round, I'll be taking the reins from Anton Chuvakin and picking up primary coverage of DLP for the SRMS team. In addition to revising the existing documents (Enterprise Content-Aware DLP Solution Comparison and Select Vendor Profiles and Enterprise Content-Aware DLP Architecture and Operational Practices; GTP subscription required), we'll also be spinning off a foundational document that can be referenced when getting started with a project.

Overall, we have some very hard questions for the vendors to answer, such as whether or not DLP is viable and valuable going forward, or if it's instead too expensive and difficult to maintain and operate. We're also seeking out stories from our end-user clients (good or bad!) about DLP projects/implementations. If you have a success or failure story, then we'd love to hear about it (it doesn't need to be attributed by name!). Feel free to leave a comment on this post, email me, or message me on Twitter.

I’m looking forward to this research process and would love to hear about your experiences!

Category: Research, Uncategorized

Recent GTP Security Research

by Ben Tomhave  |  November 12, 2014  |  Comments Off

Before delving back into any philosophical meanderings about infosec or info risk mgmt, I wanted to first highlight some recent research for you all. All of the following require a GTP subscription (go here to contact us if you're interested in getting access).

2015 Planning Guide for Security and Risk Management
02 October 2014 G00264325
Analyst(s): Ramon Krikken

MY MAJOR TAKEAWAYS:
– Scaling security practices is difficult and generally unsustainable.
– Breaches are inevitable, so it’s how well you monitor and respond that matters most.
– The regulatory environment is still volatile, and we expect that to only get worse in the near term.

Implement Information Classification to Enhance Data Governance and Protection
23 October 2014 G00267242
Analyst(s): Heidi L. Wachs

MY MAJOR TAKEAWAYS:
– Information classification is powerful and useful.
– Get a program underway, earning some quick wins to demonstrate value.
– Don’t hit paralysis – classify and move on to remediation!

Approaches for Content and Data Security in Microsoft SharePoint Server
21 October 2014 G00265848
Analyst(s): Ben Tomhave

MY MAJOR TAKEAWAYS:
– SP security is primarily about content protection.
– Understanding SP scale is key in determining the best class of security solution.
– SP has a lot of built-in capabilities!

Interested in more information? We look forward to speaking with you! :)

Category: Uncategorized

Writer’s Block and the Long Summer

by Ben Tomhave  |  November 5, 2014  |  Comments Off

It’s been several months since I last posted here, and for good reason. For one thing, June-October tends to be a blur. It started off with Gartner’s Security & Risk Management Summit over in National Harbor, MD, followed by a couple weeks of vacation, a week of sales support, a couple weeks of catch-up, then Gartner’s Catalyst Conference out in San Diego. A couple weeks later school resumed, which always keeps my household busy, followed by a handful of smaller events and a lot more sales support.

To top things off, this analyst hit the wall and got stuck in a good old-fashioned rut of writer's block. Oh, the horror – as someone expected to produce written content on a regular basis – to enter a period where one's ability to coherently put thoughts into writing simply wasn't functional (some might argue that's still the case;). Thankfully, having finally gotten all caught up from travel, speaking, sales support, etc, etc, etc, this analyst is finally getting back into the swing of things and getting writing underway.

Things will likely continue to be stop-n-go for the next couple months, but rest assured, this blog spot isn’t dead and will be revived in the coming weeks.

Category: Uncategorized

Things That Aren’t Risk Assessments

by Ben Tomhave  |  July 24, 2014  |  2 Comments

In my ongoing battle against the misuse of the term “risk,” I wanted to spend a little time here pontificating on various activities that ARE NOT “risk assessments.” We all too often hear just about every scan or questionnaire described as a “risk assessment,” and yet when you get down to it, they’re not.

As a quick refresher, to assess risk, you need to be looking at no fewer than three things: business impact, threat actor(s), and weaknesses/vulnerability. The FAIR definition talks about risk as being "the probable frequency and probable magnitude of future loss." That "probable frequency" phrase translates to Loss Event Frequency, which is composed of estimates that a threat community or threat actor will move against your org, that they'll have a certain level of capabilities/competency, and that your environment will be resistant to only a certain level of attack (thus representing weakness or vulnerability).
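
To make that decomposition concrete, here's a minimal Python sketch of a FAIR-style calculation; the point estimates are made up, and a real FAIR analysis would use calibrated ranges and Monte Carlo simulation rather than single numbers.

```python
# Minimal FAIR-style sketch with made-up point estimates; a real analysis
# would use calibrated ranges and Monte Carlo simulation, not single numbers.

threat_event_frequency = 4.0   # estimated attack attempts per year
vulnerability = 0.25           # probability an attempt overcomes our resistance

# Loss Event Frequency: how often we expect an attempt to become a loss event
loss_event_frequency = threat_event_frequency * vulnerability   # 1.0 per year

# Probable Loss Magnitude: estimated cost per loss event (business impact)
probable_loss_magnitude = 150_000   # USD

# Risk = probable frequency x probable magnitude of future loss
annualized_loss_exposure = loss_event_frequency * probable_loss_magnitude
print(f"{loss_event_frequency:.1f} loss events/year, "
      f"~${annualized_loss_exposure:,.0f} annualized exposure")
```

The point isn't the arithmetic; it's that both frequency and magnitude have to be estimated before the word "risk" legitimately applies.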

Oftentimes, that “probable magnitude” component is what is most lacking from alleged “risk-based” discussions. And, of course, this is where some of these inaptly described tools and capabilities come into play…

Questionnaires

A questionnaire is just a data gathering tool. You still have to perform analysis on the data gathered, supplying context like business impact, risk tolerance/capacity/appetite, etc, etc, etc. Even better, the types of questions asked may result in this tool being nothing more than an audit or compliance activity that has very little at all to do with “risk.” While I realize that pretty much all the GRC platforms in the world refer to their questionnaires as “risk assessments,” please bear in mind that this is an incorrect characterization of a data gathering tool.

Audits

The purpose of an audit is to measure actual performance against desired performance. Oftentimes, audits end up coming to us in the form of questionnaires. Rarely, if ever, do audits look at business impact. And, one could argue that this is OK because they're really not charged with measuring risk. However, we need to be very careful about how we handle and communicate audit results. If your auditors come back to you and start flinging the word "risk" around in their report, challenge them on it (hard!!!), because dollars-to-donuts they probably didn't do any sort of business impact assessment, nor are they even remotely in the know on the business's risk tolerance, etc.

Vulnerability Scans and AppSec Testing

My favorite whipping boy for "risks that aren't risks" is vulnerability scans. I had a conversation with a client last week where a very large (nearly 100-page) report had been dropped on them that was allegedly a "risk assessment," but in reality was a very poor copy-and-paste of vuln scan data into a template with several pages of preamble on methodology and several pages of generic explanatory notes at the end; there did not appear to have been any manual validation of findings (and the list of findings wasn't deduplicated).

Despite the frequent use of "risk" in these reports, they most often describe "ease of exploit" or "likelihood of exploit." However, even then, their ability to estimate likelihood is pretty much nonsensical. Take, for instance, an internal pentest of a walled-off environment. They find an open port hosting a service/app that may be vulnerable to an attack that could lead to remote root/admin and is easy to exploit. Is that automatically a "high likelihood" finding? Better yet, is it a "high risk" finding? It's hard to say (though likely "no"), but without important context, they shouldn't be saying anything at all about it.

Of course, the big challenge for the scanning vendors has always been how to represent their findings in a way that’s readily prioritized. This is why CVSS scores have been so heavily leveraged over the years. However, even then, CVSS does not generally take into account context, and it absolutely, positively is NOT “risk.” So, again, while potentially useful, without context and business impact, it’s still just a data point.
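
To illustrate the point (and nothing more), here's a minimal sketch of how the same CVSS base score can translate into very different priorities once context is layered on top; the adjustment factors below are invented for demonstration and are not part of CVSS or any other standard.

```python
# Illustrative only: the same CVSS base score, prioritized differently once
# hypothetical context (exposure, asset criticality) is layered on top.
# The weighting factors are invented for demonstration, not a standard.

def contextual_priority(cvss_base: float, internet_facing: bool,
                        asset_criticality: float) -> float:
    """Combine a raw CVSS base score with simple environmental context."""
    exposure = 1.0 if internet_facing else 0.4   # walled-off networks matter
    return round(cvss_base * exposure * asset_criticality, 1)

# Same 9.8 "critical" finding, two very different answers:
print(contextual_priority(9.8, internet_facing=True, asset_criticality=1.0))   # 9.8
print(contextual_priority(9.8, internet_facing=False, asset_criticality=0.3))  # 1.2
```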


I could rant endlessly about some of these things, but I’ll stop here and call it a day. My exhortation to you is to challenge the use of the word “risk” in conversations and reports and demand that different language be used when it’s discovered that “risk” is not actually being described. Proper language is important, especially when our jobs are on the line.

Category: Risk Management

Three Epic “Security” Mindset Failures (“ignorance is bliss”)?

by Ben Tomhave  |  May 6, 2014  |  2 Comments

I don't care if it's a bug or a feature as long as Nagios is happy

I saw this graphic a couple weeks ago while trolling Flickr for CC BY 2.0-licensed images and it got me thinking… there are a number of mindset failures that lead us down the road to badness in infosec.

Consider this an incomplete list…

  • As long as [monitoring/SIEM] is happy, you’re happy.
  • As long as [auditor/checklist] is happy, you’re happy.
  • As long as [appsec testing / vuln scanner] is happy, you’re happy.

I’m sure we could all come up with a few dozen more examples, but for a Tuesday, this is probably enough to start a few rants… :) Part of what triggered this line of thinking for me was the various reports after the retail sector breaches about tens of thousands of SIEM alerts that were presumed to be false positives, and thus ignored. Kind of like trying to find a needle in a haystack.

(Image Source (CCby2.0): Noah Sussman https://www.flickr.com/photos/thefangmonster/6546237719/sizes/o/)

Category: Uncategorized

New Research: Security in a DevOps World

by Ben Tomhave  |  April 30, 2014  |  Comments Off

Hot off the presses, new research from Sean Kenefick and me titled "Security in a DevOps World," which is available to Gartner for Technical Professionals subscribers at www.gartner.com/document/2725217.

Some of the key takeaways from this research include:

  • Automation is key! AppSec programs must find ways to integrate testing and feedback capabilities directly into the build and release pipeline in order to remain useful and relevant.
  • Risk triaging is imperative to help differentiate between apps with lower and higher sensitivity. We recommend leveraging a pace-layered approach to assist with building a risk triage capability.
  • Engineer for resilience. It’s impossible to stop all the bad things. Instead, we need to build fault-tolerant environments that can rapidly detect and correct incidents of all varieties.

We have also found that there continues to be some confusion around DevOps, and much trepidation as to the ever-shifting definition. Within security, this uncertainty translates into a negative perception of DevOps and typically asinine resistance to change. To that end, I wish to challenge five incorrect impressions.

DevOps doesn’t mean…

  • …no quality. QA doesn’t go away, but shifts to being more automated and incremental. This can be a Good Thing ™ as it can translate to faster identification and resolution of issues.
  • …no testing. Patently untrue. There should be testing, and likely lots of testing at different levels. Static analysis should occur on nightly code repository check-ins, and possibly at code check-in time itself. Dynamic analysis should occur as part of standard automated build testing (see the build-gate sketch after this list). For high-risk apps, there should be time built in for further detailed testing as well (like manual fuzzing and pentesting).
  • …no security. Security should be involved at the outset. Ideally, developers will work from a pre-secured “gold image” environment that already integrates host- and network-based measures. Further, as noted above, appsec testing should be heavily integrated and automated where possible. This automation should free appsec professionals to evolve into a true security architect role for consulting with developers and the business.
  • …no responsibility. On the contrary, DevOps implies empowerment and LOTS of shared responsibilities.
  • …no accountability. Perhaps one of the more challenging paradigm shifts is the notion that developers should now be held directly accountable for the effects of their code. This accountability may not be as dire as termination, but it should include paging devs in the middle of the night when their code breaks something in production. The telephone game must end. This doesn’t mean removing ops and security from the loop (quite the opposite), but it changes the role of ops and security (IRM).
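
As a purely illustrative example of the testing and accountability points above, here's a minimal build-gate sketch in Python; "run_sast" and "run_dast" are hypothetical placeholder commands (not real products), and a real pipeline would wire this into the CI system's own gating mechanism.

```python
# Hypothetical build gate: run placeholder static and dynamic scans during the
# automated build and fail the build on high-severity findings.
# "run_sast" and "run_dast" are stand-ins, not actual tools or their CLIs.
import json
import subprocess
import sys

def run_scan(command):
    """Run a scanner that prints JSON findings to stdout; return the findings."""
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def passes_gate(findings, max_high_severity=0):
    """Gate fails when high-severity findings exceed the allowed maximum."""
    high = [f for f in findings if f.get("severity") == "high"]
    return len(high) <= max_high_severity

if __name__ == "__main__":
    static_findings = run_scan(["run_sast", "--repo", "."])                       # on check-in / nightly
    dynamic_findings = run_scan(["run_dast", "--target", "http://staging.local"]) # build-time DAST
    if not (passes_gate(static_findings) and passes_gate(dynamic_findings)):
        sys.exit("Build failed: high-severity security findings present.")
```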

I hope you enjoy this new research!

“Security in a DevOps World” (published April 29, 2014)

Category: Uncategorized