Ramon Krikken

A member of the Gartner Blog Network

BG Analyst
2 years at Gartner
15 years IT industry

Ramon Krikken is an analyst in the Gartner IT1 Security and Risk Management Strategies team. He covers software/application security; service-oriented architecture (SOA) security; structured and unstructured data security management, including data masking, redaction and tokenization…

Creating an Appetizing and Healthy Application Security Diet

by Ramon Krikken  |  July 2, 2012  |  Comments Off

In the past month I’ve done both a Security Summit talk and a webinar about application security. The gist of the presentations – at least what I wanted customers to take away – is that we can’t sell application security to developers and architects by perpetuating the train-test-fix cycle of pain. It feels, though, like many organizations, vendors, and others in the industry are doing exactly that: making pain less painful in the hopes that it becomes bearable.

My proposition? Externalization and standardization. Take care of security (as much as possible) FOR the developers, not BY the developers. It’s 2012, and I believe we shouldn’t have to look at vendor stats that show 25%+ XSS and SQLi incidence in the applications they test. At the same time, I believe developers shouldn’t have to worry about those two specific things every time they create or modify code either. And I’m strongly convinced that we can eat our cake and have it too on this one. But it may require some changing of minds – a change that says bolting on is OK, as long as you planned to do so.

It’s a discussion that I think needs to take place more often. Jack Daniel actually just posted about this article that covers my recent Security Summit talk. In his post I see a valid complaint that the article reads as though it favors WAF over ever fixing code, and I agree that it could be misinterpreted as saying you never have to worry about this code stuff again (which I most definitely don’t want anyone to do). But along with that I also see a perpetuation of the notion that WAF is a band-aid, with which I most definitely do not agree.

You see, ideally many elements of security would be taken care of in the application code. It could be that the platform does not expose certain vulnerabilities (e.g., buffer overflows), or that a module or framework is available to help the developer take care of them (e.g., OWASP ESAPI, Log4j/Log4net, or the MS anti-XSS module). But unfortunately this doesn’t always work. Aside from the obvious “it’s off-the-shelf” case, not all environments are (relatively) safe, and many don’t have these frameworks and modules available. And when they are available, security teams are often either not aware of them or not working with development teams to get them in place. Instead they still try to sell application security with long application testing reports … and the pain continues.
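To make the framework point concrete, here’s a minimal sketch of framework-assisted output encoding, using Python’s standard library escaper as a stand-in for what ESAPI or the anti-XSS module provide on their respective platforms:

```python
# A minimal sketch: the encoder, not the developer, neutralizes markup.
# Python's html.escape stands in for ESAPI/anti-XSS-style output encoders.
import html

def render_comment(user_input: str) -> str:
    # Escaping happens on every call, so developers don't have to remember it.
    return "<li>" + html.escape(user_input) + "</li>"

print(render_comment('<script>alert("xss")</script>'))
# <li>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</li>
```

The point is that the escaping lives in one vetted place; the developer calls it (or, better, the templating layer calls it for them) instead of reinventing it per page.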

But there’s another aspect of externalization: sometimes you might not want to build something into the code at all. That is, some other component in the architecture takes care of it. Single sign-on support would be an easy pick here, but even something like anti-CSRF tokens might make a lot of sense to deploy consistently and independently of the application. And I think this is where some adjustment may be required: seeing WAF less like an IDS (a patch) and more like an XML GW (an application architectural component). I believe that doing so – and taking advantage of the non-security benefits of these technologies as well – will make the application security diet a lot more appetizing.
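As a sketch of what “consistently and independently of the application” could look like, here’s a hypothetical WSGI-style middleware that enforces anti-CSRF tokens in front of any application it wraps; the header names and key handling are invented for illustration:

```python
# Hypothetical sketch of externalized anti-CSRF: a middleware issues and checks
# a token so individual applications never handle CSRF themselves.
import hashlib, hmac, os

SECRET = os.urandom(32)  # in practice, a managed key shared across instances

def make_token(session_id: str) -> str:
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

class CSRFMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Only state-changing methods need the token check.
        if environ["REQUEST_METHOD"] in ("POST", "PUT", "DELETE"):
            session = environ.get("HTTP_X_SESSION_ID", "")
            token = environ.get("HTTP_X_CSRF_TOKEN", "")
            if not hmac.compare_digest(token, make_token(session)):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"CSRF check failed"]
        return self.app(environ, start_response)
```

Wrap any application in it and the protection is uniform, whether or not the application team ever thought about CSRF – which is exactly the architectural-component framing.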

The economics of fixing can be a factor, too. Should we filter input in the application with method X, or filter it at the WAF or XML GW with the same method instead? To me this is not about always fixing code or always falling back on WAFs – it’s about knowing where and when you can effectively use and manage each. Just as you can misconfigure a WAF, you can make a mistake in a code fix. Absent industry figures on which is more likely to succeed or fail, you have to figure out where your strong points lie, and test, test, and test some more.
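And for the fix-it-in-code side of that trade-off, the parameterized query is the canonical example – sketched here with Python’s sqlite3 module standing in for whatever data access layer you actually use:

```python
# A parameterized query removes the SQLi class entirely for this statement,
# leaving nothing for a WAF to catch. sqlite3 is a stand-in for any driver.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "alice' OR '1'='1"
# Placeholder binding keeps the input as data, never as SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()
print(rows)  # [] -- the injection string matches no user
```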

So let me be clear: WAFs or any other mediating technology shouldn’t and won’t fix everything. But a balanced and appetizing application security diet – one that developers and architects will be able to live on – can’t be made of custom code (and custom code fixes) alone. I prefer frameworks for much of this, but operational components need to fill in where gaps exist.

Category: Applications Security

Encryption Won’t Always Save You, but it Certainly Will Cost You

by Ramon Krikken  |  June 20, 2012  |  1 Comment

Encryption has been on my mind a lot lately. It certainly has something to do with work in progress for presentations I’m giving at our Catalyst 2012 conference (“Protecting Data in the Public Cloud: Encryption, Obfuscation, or Snake Oil?” and “Scenarios: Encryption, Tokenization, Anonymization, or None of the Above”). But it’s also because I’m seeing an increase in talk about encryption in general. Not just interest, but a real push for encrypting data as much and as often as possible.

Although I can understand the occasional article with lines referring to LinkedIn’s passwords as “lightly encrypted,” I really do not have much sympathy for experts prescribing encryption for most security ailments … especially because it’s often prescribed without fully weighing its side effects (increased risk of data destruction and an empty wallet being two of the more prevalent). And in some cases these side effects are definitely worse than the cure.

A case in point: lately I’ve been getting a lot of customer questions about encrypting data in applications and databases. When I ask why they want to encrypt, the answer is often that someone (e.g., a business partner or auditor) tells them it’s “required by regulation” or “best practice.” Although regulations may mention encryption, very few – if any – can be said to mandate it, especially inside the data center. And whether it is a best practice – even though some data shows fairly significant adoption of database encryption – is certainly up for discussion in my book.

Luckily for us, encrypting data in applications and databases isn’t exactly cheap, and sticker shock usually leads to re-assessment. But even when a team is convinced they should implement such encryption, asking the question “who and what are you trying to protect from / have you performed a threat assessment?” is often met with silence. We end up discussing the limitations of encryption and may well come to the conclusion that the investment (implementation as well as ongoing management costs) might be better spent on different preventative controls (e.g., finer-grained access control) or altogether different types of controls (e.g., activity monitoring). In short:

  • Don’t encrypt data unless you have to or unless it provides appropriate protection for a given use case.

But one thing is almost certain for the foreseeable future: overcoming a perception that encryption is always the primary control choice is difficult at best.

For those who want to evaluate whether encryption is an appropriate control, my recently published “Solution Path: Choosing and Implementing Encryption” [subscription required] discusses how to approach encryption from an architectural / planning perspective, and incorporates the above advice.

Category: Security

LinkedIn Password Hack and the Case of the Misunderstood Crypto Function

by Ramon Krikken  |  June 6, 2012  |  Comments Off

Every time a hashed password store gets compromised, people come out of the woodwork and yell things like “They used SHA-1/MD5/DES? OMG that’s so stupid because SHA-1/MD5/DES is broken!” The LinkedIn password breach is no exception.

It’s true that they’re no longer good general-purpose hash functions … except that for the purpose of password hashing they aren’t the weak spot. The known breaks in MD5 and SHA-1, for example, are collision attacks, which don’t make it any easier to recover a password from its hash. Two things really matter in password hashing: the strength of the input (the password strength) and the time it takes to calculate its hashed version (the work factor).

Here’s a snippet of the advice I give in our “decision point for cryptography” [subscription required]:

In order to create resistance against password attacks given the possibility of weak passwords, the algorithm should use two additional barriers: so-called salts for hashing, which preclude the use of precomputed hash tables, and iteration, which applies the cryptographic function as often as possible within performance and latency requirements in order to slow down the brute force attack.

Note that salting by itself is not enough!
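To make that concrete, here’s a minimal sketch of salt-plus-iteration using Python’s standard library (PBKDF2-HMAC-SHA256). The iteration count is an illustrative placeholder you’d tune to your own performance and latency requirements:

```python
# A minimal sketch of salt + iteration (PBKDF2-HMAC-SHA256) from the Python
# standard library. The iteration count is an illustrative placeholder.
import hashlib, hmac, os

ITERATIONS = 100_000

def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)  # per-password salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both alongside the account

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison
```

Purpose-built schemes such as bcrypt and scrypt package the same two ideas – per-password salts and a tunable work factor – and are a fine choice where available.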

It’s best, of course, to externalize authentication as much as possible. But where passwords need to be stored (and they ultimately need to go somewhere) please do so securely – even if it means having to spend a bit more compute power and endure a bit more latency on authentication.

EDIT: I should add that simply choosing a strong hash like SHA-256 also doesn’t magically make password hashes strong. Although it’s not a bad idea to use strong hash functions, calculating a single hash iteration still takes an amazingly short time.
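A rough, machine-dependent illustration of just how short – timing a single SHA-256 pass against an iterated PBKDF2 derivation:

```python
# Rough timing sketch; absolute numbers vary by machine, the ratio is the point.
import hashlib, timeit

pw, salt = b"correct horse", b"0123456789abcdef"
single = timeit.timeit(lambda: hashlib.sha256(salt + pw).digest(), number=1000) / 1000
iterated = timeit.timeit(
    lambda: hashlib.pbkdf2_hmac("sha256", pw, salt, 100_000), number=10) / 10
print(f"single hash: {single * 1e6:.1f} us, 100k iterations: {iterated * 1e3:.0f} ms")
```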

2nd EDIT: and I should also add that because of practical limits on how many iterations you can use, a truly weak password cannot be protected by hashing. 12345 is still a definite no-no.

Category: Uncategorized

SIEM Future – Would You Like Some Context With That?

by Ramon Krikken  |  May 22, 2012  |  Comments Off

This is a sister post to Anton Chuvakin’s “Our SIEM Futures Paper Publishes!” from yesterday. We collaborated on a “Security Information and Event Management Futures” note [subscription required], in which we discuss how we believe the technology will evolve in response to current and expected trends. Although Anton is now the primary GTP analyst covering SIEM, I still have a strong interest because of its place in the greater monitoring and security data analysis space.

If you look at the table of contents, you will notice the first of our “big 5 trends” relates to context data. As I’ve written in the past, context comes in several forms – some of it can be automatically derived or created by IT systems, while some of it has to be provided by humans (i.e., where humans have to “teach” the system what the meaning of certain context is). State data (e.g., the location of a person as derived from a physical access control system) is context, and events themselves can be context, too. Context is vital, because without it we would end up drowning in false positives and false negatives.
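As a toy illustration of state-as-context – the field names and sources are entirely invented – enriching a login event with badge-location state is what turns a routine event into something worth flagging:

```python
# Hypothetical sketch: state data from a physical access control system used
# as context for an authentication event. All names are invented.
badge_state = {"alice": "on-site", "bob": "off-site"}  # current badge locations

def enrich(event):
    event["badge_location"] = badge_state.get(event["user"], "unknown")
    return event

def is_suspicious(event):
    # A remote VPN login by someone whose badge says on-site deserves a look.
    return event["type"] == "vpn_login" and event["badge_location"] == "on-site"

event = enrich({"type": "vpn_login", "user": "alice", "src": "203.0.113.7"})
print(is_suspicious(event))  # True: badge and VPN disagree about where alice is
```

Neither record is interesting alone; the join is what produces the signal – which is exactly the kind of analysis that gets hard when state data is involved, as the next paragraph describes.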

Right now, most SIEM products treat context (and especially state data) differently from events. They are able to pull in state data as context for events, but performing analysis on the state data itself can be incredibly challenging. Using SIEM for multi-dimensional analytics over events and state turns out to be well-nigh impossible … as some customers-that-must-go-unnamed have explained to me in light of using SIEM for “non-traditional” use cases. This needs to change.

Although I still have reservations about “big data analytics for security” (mostly because it will for the foreseeable future be difficult to separate the wheat from the chaff in vendor claims and solutions), I do believe the need for better data processing and analytics is getting greater. Whether this is general SIEM evolution, a split in the SIEM market, or an entirely new class of technology parallel to top-tier SIEM remains to be seen. And, perhaps more importantly, how well this can be commoditized (i.e., have low requirements for involvement of data analysts and such) is a big question mark for me.

Category: Security

Contrary to Popular Opinion, Encryption IS the Hard Part

by Ramon Krikken  |  May 17, 2012  |  2 Comments

A well-known security meme is that “encryption is easy, it’s key management that’s hard.” But while this may be true for certain encryption use cases, it’s most definitely not true across the board. It’s a convenient meme for vendors, of course, who’ll simply point at a “we use AES” or “we’re FIPS 140-2 validated” statement and call it good. But for the end user this is nothing short of unhelpful.

Understanding cryptography is hard, and validating a system where the core crypto is only one small part of a large, critical whole is even harder. One of the largest problems, in my opinion, is the scope of FIPS 140-2. First off, the lowest level (1) doesn’t mean much in terms of how well the crypto system is implemented. Furthermore, it validates only part of the entire solution. As an example, see the 2010 incident in which FIPS 140-2 level 2 validated USB flash drives were compromised completely.

To get a better handle on crypto, current customers might review the just-updated “Understanding and Evaluating Cryptographic Systems: An Information Security Foundation” [subscription required] for a more complete picture. The evaluation covers algorithms, protocols, and key generation, but also – very importantly – the overall system itself:

Proper design and implementation of cryptography are challenging, even when secure algorithms and protocols are used. Misapplied or incorrect hardware, software and architecture can all reduce or negate cryptographic security.

In the end, the strength of the crypto system is just one piece of the puzzle. A more fundamental problem, and one that needs to be addressed before the crypto system evaluation starts, is that the power of encryption is grossly overestimated. I will address that in a series of future posts.

Category: Uncategorized

Mobile Application Security: the Walled Garden versus the Open Grounds

by Ramon Krikken  |  May 14, 2012  |  Comments Off

In our recent customer-facing research project on mobile application development, security was a smaller but important consideration for many participants.

When I read through a recent “this is what developing for Android looks like” blog post on the effects of Android fragmentation, I got inspired to write a quick piece on the platform. The open playground versus the walled garden approaches of the Android and Apple platforms, respectively, definitely play into how security can and must be designed.

One aspect of open versus closed is the ability to control and change what you have. Avoiding vendor lock-in and heavy-handed vendor or carrier control (which in the consumer space is often related to digital media) can be beneficial to security in the enterprise environment. On the open grounds, those with the time and effort could create their own secure operating system specification, and those with less time can simply pick a few useful security components and add them on. Doing so in a walled garden can be very difficult, if not impossible – and enterprises sure have a few complaints about controlling applications and data on the Apple platform here.

But notwithstanding the desire to maintain consumer choice and easily implement controls, we have to acknowledge that the walled garden can be very helpful for implementing security in B2E environments. If the endpoint is to handle sensitive data, we need a certain amount of control over the hardware and/or the operating system. This allows us to support or add on security features without the user being able to modify or remove them. The open Android model therefore definitely worries some enterprises when it comes to BYOD.

Neither platform is ideal for B2E – particularly for BYOD, where neither Android nor Apple has the best support for enterprise application and data security requirements. In fact, right now both the open and walled models have specific benefits that are appealing. But a variety of security and non-security factors sure do seem to drive organizations toward favoring the walled garden, for better or for worse.

Category: Uncategorized

“Securing Big Data” – the Newest Fad?

by Ramon Krikken  |  May 10, 2012  |  2 Comments

It doesn’t take a clairvoyant – or in this case, a research analyst – to see that “big data” is becoming (if it isn’t already) a major buzzword in security circles. Not only big data as applied to security, but also security for big data. But what does “securing big data” actually mean?

Not too long ago I wrote a post about renaming DAM to DAP, and published a fairly large report about current DAP capabilities [subscription required]. In the report, I note that:

But database security for other types of databases, such as non-relational data stores that are increasingly important in the age of big data and cloud, mostly goes uncovered.

Note that I specifically focus on the platforms used to store and process the data, not the data itself. We have to distinguish those two, just as we distinguish document formats from the contents stored in a document. Yes, platform capabilities are important, but they don’t capture the full breadth of security concerns with – as we define it – data that is high in volume, velocity, and variety. In an environment that is all about really putting data to use, how do you design the right controls?

Or more precisely: what exactly will we need to do about this at the technology level – i.e., which technical controls make sense given the specific exposures and threats to this information (not all of which result from [lack of] capabilities in the platform)? Answering that requires more effort than just throwing “the usual” security solutions at the problem, re-badged with a “big data” label. Technical controls are no substitute for a good understanding of data and its use.

Don’t get me wrong, I do believe several vendors will create very useful solutions – and some will be extensions of traditional products. So although I don’t believe the need for securing big data is a fad, the impending storm of marketing slogans around it (and the ineffective control designs those may lead to) may well make it feel like one.

Much of “securing big data” will need to be handled by understanding the data and its usage patterns – lest we repeat the “grant all” stance used in many RDBMS instances. In other words, know your data to know your controls.

Category: Security

Getting Started with Mobile Application Security

by Ramon Krikken  |  May 7, 2012  |  Comments Off

We’ve just finished parsing 1.5K data points in a customer-facing research project on mobile applications. We spoke mostly with development team members, but also had a few architects and other functions represented (we even had a person from a marketing team in the mix). The data is very rich, and we’ve spent considerable time deriving our insights and conclusions.

The project wasn’t about security in particular, but we did talk to pretty much every customer about the topic. In most cases it was actually the customer who raised the issue, which makes me happy because it means security is certainly on people’s minds. But on the not-so-good side, many participants were still investigating how best to provide security on and for mobile applications. That should be no surprise, though: many moving parts, several different platforms, and a mix of application and data types do not make things easy.

I haven’t quite finalized the advice yet, but I think the combination of security for the development process, the application infrastructure, and the applications themselves will provide us with plenty to write about. But one thing that is apparent to me at this point (and perhaps this is no surprise either) is the following:

  • Pay attention to web services (particularly RESTful services), and how they can be secured!

Yes, the client side is extremely important (and I’ll definitely cover it in future posts). But especially with lots of customers looking at B2E, building and securing services the right way is going to be critical. Some existing technologies may be reusable. The foundations – especially standards – should be re-examined to make sure they’re sound. If you are a Gartner customer looking for advice on this, I’d of course be happy to discuss it on a call even before our notes publish.
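As one sketch of what securing services can involve – the header scheme, key handling, and constants here are invented for illustration – HMAC-signed requests give a RESTful service message-level authentication without relying on the client platform:

```python
# Hypothetical sketch of HMAC-signed REST requests - one pattern for giving a
# B2E web service message-level authentication. All names are invented.
import hashlib, hmac, time

SHARED_KEY = b"per-client-secret"  # provisioned to the client out of band
MAX_SKEW = 300                     # seconds of clock skew tolerated

def sign(method: str, path: str, body: bytes, timestamp: int) -> str:
    msg = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify(method: str, path: str, body: bytes, timestamp: int, signature: str) -> bool:
    if abs(time.time() - timestamp) > MAX_SKEW:  # reject stale/replayed requests
        return False
    return hmac.compare_digest(sign(method, path, body, timestamp), signature)

# The client computes the same signature and sends it in a request header:
ts = int(time.time())
sig = sign("POST", "/api/orders", b'{"sku": 42}', ts)
print(verify("POST", "/api/orders", b'{"sku": 42}', ts, sig))  # True
```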

And for those interested in how we did the research, check out this recent blog post by my colleague Danny Brian. If you like user-centrism you’ll certainly appreciate our process.

Category: Applications Cloud Security

Are You Flying the Airplane, or Running the Airline?

by Ramon Krikken  |  May 3, 2012  |  1 Comment

We’re always trying to get closer to developing more useful security metrics, and examining analogies provides a way to relate these measurements and metrics to things we already know (and that we perceive as being done and measured well). I like good analogies, but I don’t want to be limited by not-so-good ones.

“Flying an airplane” is one such analogy (it is used in various books, articles, discussions, etc.). The idea is that keeping systems up and running is operationally similar to flying an airplane: the gauges and indicators help pilots fly safely. Similarly, SIEM, AV, IDS, and other security controls provide ways for IT to keep an eye on their systems. But I’m concerned the analogy misses some important considerations:

  • Preventing airplanes from crashing due to pilot error or mechanical failure is different from protecting them from intentional acts to crash them. “Oil changes” don’t cover people pouring sugar in the fuel tank – and the same distinction between random failure and intentional attack applies in IT.
  • Preventing airplanes from crashing is not just related to flying – it’s also related to building airplanes correctly, and to maintaining them the right way. Likewise, running IT systems is only a piece of “doing” IT, where the security is built in and then maintained.
  • Preventing airplanes from crashing is also not done in isolation: there are many, many airplanes in the sky at any moment. The complexity of IT systems (which are systems of systems) also does not lend itself to an isolated analysis.
  • But most importantly, preventing airplanes from crashing is a small operational aspect of something larger. Airplanes, after all, do not exist just to fly. They exist to transport people and things from point A to point B. This is just like IT systems not existing just to run, but to support a business (process).

So I would argue that what we’re really trying to do is “run the airline.” What do you think?

Category: Security

Will the VMWare Code Release become a “Many Eyes Principle” Case Study?

by Ramon Krikken  |  April 27, 2012  |  Comments Off

It must be Friday, because it’s definitely FUD-filled! Hide your valuables, because the VMWare ESX code leak is sure to cause IT systems to go dark around the world (and thus your alarm company’s systems too, I’m sure).

OK, so enough with the hyperbole. Let’s be fair: it’s certainly possible that source code availability will allow surfacing some new problems. And they could even be quite severe. But given the circumstances, how much more likely is that to happen with source code than without?

For one, the binary code for ESX is already available for analysis. People find flaws aplenty in COTS without ever laying eyes on the source code. But in addition, the source code is available for analysis to third parties (as evidenced by the code being stolen not from VMWare, but from someone else). Surely some of them have laid eyes (or, more likely, run tools) on the code itself.

So when I see quotes like “If even more code gets published, that act at least partially takes away an advantage that VMware has enjoyed over the open source code hypervisor competition. Both Xen and KVM source code were published for everyone to see,” I sure do wonder if it matters all that much.

But that’s exactly the point – although we might guess, we don’t yet know. And because of how VMWare handles its code with respect to distribution to third parties, I think it could make for some excellent data points around the “many eyes principle.” In other words, with more and differently motivated (or skilled?) eyes on the source code, what would we see pop up?

Perhaps we’ll even find evidence of a government back door, who knows!

Update: when I wrote the post it wasn’t clear to me whether CEIEC had a legitimate copy of the VMWare source. From comments by the hacker it now appears they might not. How this changes the type and number of eyes is something worth digging into.

Category: Cloud Security