by Ben Tomhave | December 1, 2014 | Comments Off
I was awoken around 5am post-Thanksgiving Saturday by multiple text messages from Facebook instructing me to click a link and enter a code to reset my password. It seems someone had decided to try to take over my account. This led me to conclude that now would be a good time to stop putting off enabling 2-factor authentication (2FA) for my account. What should have been a very simple process was complicated (slightly) by a degree of true derpitude: in order to enable 2FA for my account, Facebook first insisted that I change my browser configuration (or use a different browser) so that cookies were no longer cleared after each session. I received the following error/explainer:
This message annoyed me greatly, for three reasons.
1) Why was I being asked to reduce the security settings on my browser in order to increase the security on my account?
First and foremost, this is just plain stupid. Clearing my cache is a /good/ thing, which all of you should do on a regular basis. It helps improve your privacy (to a small degree, anyway). It helps keep the web site content you’re accessing correct and fresh (we still see cache issues when sites are updated). More important, though, I should never be asked to decrease my security and privacy settings in one area just to improve my security in another. Which leads me to…
2) Why the erroneous implied correlation between cookies and 2FA?
Clearly, there are two concerns driving this behavior by Facebook: usability and trackability. In the first case, I would imagine they foresaw (or maybe actually saw) an uptick in frustration with logging into the site and having to enter a one-time code each time. They may also have viewed it as a usability issue that could be eased by storing a permanent cookie. However, let’s be honest… if I’m enabling 2FA, it’s because I want that second out-of-band factor there pretty much every single time.
Which leads me to believe that there is, in all likelihood, a more nefarious and profit-driven motive behind not allowing me to enable 2FA without also allowing them to set long-term (or permanent) cookies: advertising and tracking. By setting a permanent cookie, they can help “improve” usability while also getting a long-term tracking device well embedded within my browser, one that carries over to more than just their own site (read this article for more on the extensive tracking they’re doing). Facebook is a commercial business, and publicly traded to boot, so they need to be able to grow their business and show value for shareholders. I understand /why/ they would want to track, but… they also need to honor “do not track” decisions by users (see EFF’s coverage of the topic).
Setting these considerations aside, though, leads me to a more important usability question…
3) Why not warn users instead of blocking 2FA until they relent and adopt the weaker settings (at least temporarily)?
It seems rather heavy-handed to outright prevent users from enabling 2FA until they reduce their security settings in their browser. I understand advising people of the potential usability issue (“if you don’t change your cache settings, then you’ll be prompted for a code every single time you login”). But, why prevent it altogether? To me, that seems to send the wrong messages to users. We want them to enable 2FA! As important as Facebook has become in social and professional circles, the thought that extra hoops must be jumped through to improve account security is a bit asinine.
Also, interestingly and as expected, the weaker browser settings are /not/ required in order to use 2FA and have a normal experience. I went through the motions with a weakened browser config in order to jump through their hoop, then immediately cleared my cache and reverted my settings, and had no problem whatsoever logging into my account with a one-time code. As such, it seems clear that this is less about usability and more about profit motive, which is a lousy message to send to customers.
At the end of the day, Facebook is an easy target, but they’ve seemingly handily won the social media platform competition. However, it concerns me greatly when a major player in the online space does patently silly things like increasing the difficulty of enabling 2FA on accounts. The secure choice should be the easy choice, and that certainly is not the case today for Facebook’s 2FA. Which is not to say it’s super-difficult to enable it; just that it’s harder than it needs to be and, for less tech-savvy users, I’m going to go out on a limb and guess that encountering this message would be enough to stop someone from proceeding further to get 2FA enabled.
Hopefully Facebook Security will wake up and correct this situation. There’s no reason security and usability can’t be complementary. There’s no reason that the secure choice can’t be the easy choice. For most people (IE and Safari users under default configs), this is likely a non-issue. However, that means Facebook is being almost openly antagonistic toward more advanced users, and to what end?
Well, enough ranting on the topic… suffice to say, forewarned is forearmed, and if you’re looking to enable 2FA on your Facebook account, don’t be surprised by this little hiccup.
Category: Common Sense Tags:
by Ben Tomhave | November 12, 2014 | Submit a Comment
It’s been a couple years since the last update of our DLP coverage. In the process of updating it this go-round, I’ll be taking the reins from Anton Chuvakin and picking up primary coverage of DLP for the SRMS team. In addition to revising the existing documents (Enterprise Content-Aware DLP Solution Comparison and Select Vendor Profiles and Enterprise Content-Aware DLP Architecture and Operational Practices – GTP subscription required), we’ll also be spinning off a foundational document that can be referenced when getting started with a project.
Overall, we have some very hard questions for the vendors to answer, such as whether or not DLP is viable and valuable going forward, or if it’s instead too expensive and difficult to maintain and operate. We’re also seeking out stories from our end-user clients (good or bad!) about DLP projects/implementations. If you have a success or failure story, then we’d love to hear about it (it doesn’t need to be attributed by name!). Feel free to leave a comment on this post, email me, or message me on Twitter.
I’m looking forward to this research process and would love to hear about your experiences!
Category: Research Uncategorized Tags: 2014, DLP, research
by Ben Tomhave | November 12, 2014 | Comments Off
Before delving back into any philosophical meanderings about infosec or info risk mgmt, I first wanted to highlight some recent research for you all. All of the following require a GTP subscription (go here to contact us if you’re interested in getting access).
2015 Planning Guide for Security and Risk Management
02 October 2014 G00264325
Analyst(s): Ramon Krikken
MY MAJOR TAKEAWAYS:
– Scaling security practices is difficult and generally unsustainable.
– Breaches are inevitable, so it’s how well you monitor and respond that matters most.
– The regulatory environment is still volatile, and we expect that to only get worse in the near term.
Implement Information Classification to Enhance Data Governance and Protection
23 October 2014 G00267242
Analyst(s): Heidi L. Wachs
MY MAJOR TAKEAWAYS:
– Information classification is powerful and useful.
– Get a program underway, earning some quick wins to demonstrate value.
– Don’t hit paralysis – classify and move on to remediation!
Approaches for Content and Data Security in Microsoft SharePoint Server
21 October 2014 G00265848
Analyst(s): Ben Tomhave
MY MAJOR TAKEAWAYS:
– SP security is primarily about content protection.
– Understanding SP scale is key in determining the best class of security solution.
– SP has a lot of built-in capabilities!
Interested in more information? We look forward to speaking with you!
Category: Uncategorized Tags:
by Ben Tomhave | November 5, 2014 | Comments Off
It’s been several months since I last posted here, and for good reason. For one thing, June-October tends to be a blur. It started off with Gartner’s Security & Risk Management Summit over in National Harbor, MD, followed by a couple weeks of vacation, a week of sales support, a couple weeks of catch-up, then Gartner’s Catalyst Conference out in San Diego. A couple weeks later school resumed, which always keeps my household busy, followed by a handful of smaller events and a lot more sales support.
To top things off, this analyst hit the wall and got stuck in a good old-fashioned rut of writer’s block. Oh, the horror – as someone expected to produce written content on a regular basis – to enter a period where one’s ability to coherently put thoughts into writing simply wasn’t functional (some might argue that’s still the case). Thankfully, having finally gotten caught up from travel, speaking, sales support, and so on, this analyst is finally getting back into the swing of things and getting writing underway.
Things will likely continue to be stop-and-go for the next couple of months, but rest assured, this blog isn’t dead and will be revived in the coming weeks.
Category: Uncategorized Tags:
by Ben Tomhave | July 24, 2014 | 2 Comments
In my ongoing battle against the misuse of the term “risk,” I wanted to spend a little time here pontificating on various activities that ARE NOT “risk assessments.” We all too often hear just about every scan or questionnaire described as a “risk assessment,” and yet when you get down to it, they’re not.
As a quick refresher: to assess risk, you need to be looking at no fewer than three things: business impact, threat actor(s), and weaknesses/vulnerabilities. The FAIR definition describes risk as “the probable frequency and probable magnitude of future loss.” That “probable frequency” phrase translates to Loss Event Frequency, which is composed of estimates that a threat community or threat actor will move against your org, that they’ll have a certain level of capability/competency, and that your environment will be resistant to only a certain level of attack (thus representing weakness or vulnerability).
Oftentimes, that “probable magnitude” component is what is most lacking from alleged “risk-based” discussions. And, of course, this is where some of these inaptly described tools and capabilities come into play…
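To make the frequency-times-magnitude decomposition concrete, here’s a toy Monte Carlo sketch of a FAIR-style annualized loss estimate. Every distribution and number below is an invented placeholder for illustration, not FAIR’s actual calibration guidance; the point is only that a risk figure requires both a loss event frequency *and* a loss magnitude.

```python
import random

def simulate_ale(trials=100_000, seed=42):
    """Toy Monte Carlo of a FAIR-style Annualized Loss Exposure (ALE).

    A loss event occurs when a simulated threat actor's capability
    exceeds our simulated resistance strength. All distributions here
    are illustrative placeholders, not calibrated estimates.
    """
    rng = random.Random(seed)
    total_loss = 0.0
    for _ in range(trials):
        attempts = rng.randint(0, 12)            # threat events per year
        events = 0
        for _ in range(attempts):
            capability = rng.random()            # threat capability (0..1)
            resistance = rng.uniform(0.5, 0.9)   # control strength (0..1)
            if capability > resistance:          # attack succeeds
                events += 1
        # probable magnitude: loss per event, triangular around $50k
        total_loss += sum(rng.triangular(10_000, 250_000, 50_000)
                          for _ in range(events))
    return total_loss / trials                   # mean annualized loss

print(f"Simulated ALE: ${simulate_ale():,.0f}")
```

Strip out either half of the simulation (frequency or magnitude) and no meaningful risk number remains, which is exactly the gap in most “risk-based” reports.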
Questionnaires
A questionnaire is just a data-gathering tool. You still have to perform analysis on the data gathered, supplying context like business impact, risk tolerance/capacity/appetite, and so on. Even better, the types of questions asked may result in this tool being nothing more than an audit or compliance activity that has very little at all to do with “risk.” While I realize that pretty much all the GRC platforms in the world refer to their questionnaires as “risk assessments,” please bear in mind that this is an incorrect characterization of a data-gathering tool.
Audits
The purpose of an audit is to measure actual performance against desired performance. Oftentimes, audits come to us in the form of questionnaires. Rarely, if ever, do audits look at business impact. One could argue that this is OK because auditors really aren’t charged with measuring risk. However, we need to be very careful about how we handle and communicate audit results. If your auditors come back to you and start flinging the word “risk” around in their report, challenge them on it (hard!) because, dollars to donuts, they probably didn’t do any sort of business impact assessment, nor are they even remotely in the know on the business’s risk tolerance.
Vulnerability Scans and AppSec Testing
My favorite whipping boy for “risks that aren’t risks” is vulnerability scans. I had a conversation with a client last week about a very large (nearly 100-page) report that had been dropped on them, allegedly a “risk assessment,” but in reality a very poor copy-and-paste of vuln scan data into a template: several pages of preamble on methodology, several pages of generic explanatory notes at the end, and no apparent manual validation of findings (the list of findings also wasn’t deduplicated).
Despite the frequent use of “risk” in these reports, they most often describe “ease of exploit” or “likelihood of exploit.” Even then, their ability to estimate likelihood is pretty much nonsensical. Take, for instance, an internal pentest of a walled-off environment. The testers find an open port hosting a service/app that may be vulnerable to an attack that could lead to remote root/admin and is easy to exploit. Is that automatically a “high likelihood” finding? Better yet, is it a “high risk” finding? It’s hard to say (though likely “no”), but without important context, they shouldn’t be saying anything at all about it.
Of course, the big challenge for the scanning vendors has always been how to represent their findings in a way that’s readily prioritized. This is why CVSS scores have been so heavily leveraged over the years. However, even then, CVSS does not generally take into account context, and it absolutely, positively is NOT “risk.” So, again, while potentially useful, without context and business impact, it’s still just a data point.
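As a sketch of why a bare CVSS base score is just a data point, consider folding in even crude context. The weighting scheme below is entirely invented (it is not CVSS environmental scoring); it only illustrates how two identical base scores diverge once exposure and business impact are considered.

```python
# Illustrative only: the context weights below are invented, not part
# of CVSS. Identical base scores diverge once environment and business
# impact are considered.

def contextual_priority(cvss_base, internet_facing, walled_off,
                        business_impact):
    """Fold environmental context into a raw CVSS base score (0-10)."""
    score = cvss_base
    if walled_off:
        score *= 0.4      # attacker must already be inside the segment
    if internet_facing:
        score *= 1.2      # directly reachable by any threat actor
    score *= business_impact  # 0..1 weight from an impact analysis
    return min(score, 10.0)

# Same 9.8 "critical" base score, very different answers with context:
exposed = contextual_priority(9.8, internet_facing=True,
                              walled_off=False, business_impact=1.0)
isolated = contextual_priority(9.8, internet_facing=False,
                               walled_off=True, business_impact=0.3)
print(f"internet-facing: {exposed:.1f}, walled-off lab: {isolated:.1f}")
```

Even this crude adjustment would have deflated the walled-off pentest finding described above, which is the whole argument for context before prioritization.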
I could rant endlessly about some of these things, but I’ll stop here and call it a day. My exhortation to you is to challenge the use of the word “risk” in conversations and reports and demand that different language be used when it’s discovered that “risk” is not actually being described. Proper language is important, especially when our jobs are on the line.
Category: Risk Management Tags: assessment, GRC, language, management, risk, terminology
by Ben Tomhave | May 6, 2014 | 2 Comments
I saw this graphic a couple weeks ago while trolling Flickr for CCby2.0-permissioned images and it got me thinking… there are a number of mindset failures that lead us down the road to badness in infosec.
Consider this an incomplete list…
- As long as [monitoring/SIEM] is happy, you’re happy.
- As long as [auditor/checklist] is happy, you’re happy.
- As long as [appsec testing / vuln scanner] is happy, you’re happy.
I’m sure we could all come up with a few dozen more examples, but for a Tuesday, this is probably enough to start a few rants… Part of what triggered this line of thinking for me was the various reports after the retail sector breaches about tens of thousands of SIEM alerts that were presumed to be false positives, and thus ignored. Kind of like trying to find a needle in a haystack.
(Image Source (CCby2.0): Noah Sussman https://www.flickr.com/photos/thefangmonster/6546237719/sizes/o/)
Category: Uncategorized Tags:
by Ben Tomhave | April 30, 2014 | Comments Off
Hot off the presses, new research from Sean Kenefick and me titled “Security in a DevOps World,” which is available to Gartner for Tech Professionals subscribers at www.gartner.com/document/2725217.
Some of the key takeaways from this research include:
- Automation is key! AppSec programs must find ways to integrate testing and feedback capabilities directly into the build and release pipeline in order to remain useful and relevant.
- Risk triaging is imperative to help differentiate between apps with lower and higher sensitivity. We recommend leveraging a pace-layered approach to assist with building a risk triage capability.
- Engineer for resilience. It’s impossible to stop all the bad things. Instead, we need to build fault-tolerant environments that can rapidly detect and correct incidents of all varieties.
We have also found that there continues to be some confusion around DevOps, and much trepidation about its ever-shifting definition. Within security, this uncertainty translates into a negative perception of DevOps and, typically, asinine resistance to change. To that end, I wish to challenge five incorrect impressions.
DevOps doesn’t mean…
- …no quality. QA doesn’t go away, but shifts to being more automated and incremental. This can be a Good Thing ™ as it can translate to faster identification and resolution of issues.
- …no testing. Patently untrue. There should be testing, and likely lots of testing at different levels. Static analysis should occur on nightly code repository check-ins, and possibly at code check-in time itself. Dynamic analysis should occur as part of standard automated build testing. For high-risk apps, there should be time built-in for further detailed testing as well (like manual fuzzing and pentesting).
- …no security. Security should be involved at the outset. Ideally, developers will work from a pre-secured “gold image” environment that already integrates host and network based measures. Further, as noted above, appsec testing should be heavily integrated and automated where possible. This automation should free appsec professionals to evolve into a true security architect role for consulting with developers and the business.
- …no responsibility. On the contrary, DevOps implies empowerment and LOTS of shared responsibilities.
- …no accountability. Perhaps one of the more challenging paradigm shifts is the notion that developers will now be held directly responsible for the effects of their code. This accountability may not be as dire as termination, but it should include paging devs in the middle of the night when their code breaks something in production. The telephone game must end. This doesn’t mean removing ops and security from the loop (quite the opposite), but it does change the role of ops and security (IRM).
I hope you enjoy this new research!
“Security in a DevOps World” (published April 29, 2014)
Category: Uncategorized Tags:
by Ben Tomhave | March 27, 2014 | Comments Off
A quick post… I’ll be traveling a bit this Spring and Summer to speak at a number of events. For non-Gartner events, we’re actively looking for GTP sales opportunities, so if you’ve been thinking about getting a subscription to Gartner for Technical Professionals, this could be your chance to meet face-to-face to discuss! For Gartner events, I will be available for 1-on-1s, as well as sales support as needed.
Here’s what I have scheduled through August:
- Infosec World by MIS Training Institute, Orlando, April 7-9
- Rocky Mountain Information Security Conference (RMISC), Denver, May 14-15 (I’ll be in Omaha May 12th and available all day in Denver on May 13th)
- RVAsec, Richmond, VA, June 5-6
- Gartner Security & Risk Management Summit 2014, National Harbor, MD, June 23-26
- Gartner Catalyst Conference, San Diego, August 11-14
Please drop me a note if you’ll be at any of these events and we can arrange to meet-up. Also, if you’re interested in exploring a GTP subscription, please contact us and a sales rep will reach out to help coordinate. We love meeting with clients!
Hope to see you soon!
Category: Uncategorized Tags:
by Ben Tomhave | March 26, 2014 | Comments Off
In follow-up to our paper, “Comparing Methodologies for IT Risk Assessment and Analysis” (GTP subscription required), Erik Heidt and I were given the wonderful opportunity to be guests on the CERT Podcast to discuss the work.
You can listen to the episode, as well as view program notes and a full transcript, at the CERT Podcast page here.
Category: Uncategorized Tags:
by Ben Tomhave | March 20, 2014 | Comments Off
“You don’t have to run faster than the bear to get away. You just have to run faster than the guy next to you.”
The problem with this analogy is that we’re not running from a single bear. It’s more like a drone army of bears, which are able to select multiple targets at once (pun intended). As such, there’s really no way to escape “the bear” because there’s no such thing. And don’t get me started on trying to escape the pandas…
So… if we’re not trying to simply be slightly better than the next guy, what approach should we be taking? What standard should we seek to measure against?
Overall, I’ve been advocating for years, as part of a risk-based approach, that the focus should be on determining negligence (or protecting against such claims). Unfortunately, evolving a standard of reasonable care takes a lot of time. It’s been suggested in some circles (particularly on The Hill) that the NIST CSF may fill that void (for better or worse). One complication, however, is that the courts are charged with determining “what’s reasonable,” and so in many ways evolving this standard will be slow going (that is, it’ll take a while).
At any rate, I believe that there is an opportunity for constructing a framework (or, perhaps rubric would be a better outcome) by which people can start determining whether or not they’ve met a reasonable standard of care. Of course, one might also point out the myriad other standards in place that could serve a similar capacity. I don’t think CSF is remotely sufficient in its current incarnation, but that may improve over time.
It is probably worthwhile here to reinforce the point that “bad things will happen” and that it’s not so much a matter of “stop all the bad things,” but rather “manage all the bad things as best as possible” (at least after having exercised a degree of sound basic security hygiene). Anyone who’s familiar with my pre-Gartner writings will recognize the topics of resilience and survivability as key foci for risk management programs (and implicit in my thoughts and comments here).
But, how do you get to that point of a healthy, robust risk management program? Where do you start? How do you prioritize your work?
Here’s the priority stack I’ve been using lately:
- Exercise good basic security hygiene
- Do the things required of you by an external authority (aka “things that will get you fined/punished”)
- Do the things you want to do based on sound risk management decisions
What this stack should tell you is two key things. First, a reasonable standard has to consider a basic set of security practices applied across the board. It would probably be composed of policies, awareness programs, and foundational practices like patch mgmt, VA/SCA, appsec testing (for custom coding projects), basic hardening, basic logging/monitoring/response, etc. Second, from the perspective of considering a negligence claim (bearing in mind that IANAL!), I think looking at high-level practices will be key, rather than delving into specific technical details.
For instance: Did a breach occur because a system wasn’t up to full patch level? If so, is a reasonable patch mgmt program in place? If so, why wasn’t this patch applied? What does the supporting risk assessment show about why this particular patch was not applied?
Lather, rinse, repeat.
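The question chain above can be sketched as a simple decision walk. The field names are hypothetical, and a real negligence review would rest on evidence and legal judgment, not booleans; this just shows the shape of the inquiry.

```python
# Sketch of the post-breach question chain. Field names are
# hypothetical; a real review rests on evidence, not booleans.

def negligence_review(breach):
    """Walk the high-level patch-management questions in order."""
    if not breach.get("caused_by_missing_patch"):
        return "patch management not implicated"
    if not breach.get("patch_program_in_place"):
        return "no reasonable patch management program: likely negligence"
    if breach.get("risk_assessment_justified_deferral"):
        return "deferral was a documented, risk-based decision"
    return "program exists but this patch slipped through: investigate why"

print(negligence_review({"caused_by_missing_patch": True,
                         "patch_program_in_place": True,
                         "risk_assessment_justified_deferral": True}))
```

Note that the walk stops at the level of practices and documented decisions, never at raw technical detail, which mirrors how a reasonable-care standard would likely be argued.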
Obviously, more could be said… but, hopefully this stub gets you started thinking about how the business may need to protect itself from legal claims in the future, and how an evolved standard for “reasonable care” (as determined in court) may impact security practices and expectations for security performance.
Category: Uncategorized Tags: