by Ramon Krikken | April 25, 2012 | Comments Off
Security at the application layer is getting ever more attention due to the large number of vulnerabilities that keep popping up in off-the-shelf and home-built software (although, in my opinion, it is still not getting enough attention). Aside from expanding security activities in the SDLC, we’re seeing calls for – among other things – application monitoring. But what does “application” mean in these cases?
When I look at various application security efforts, though, it seems security coverage for the application platform (the middleware, if you will) and databases (or data repositories) is hit-or-miss. The same is true for infrastructure-focused security coverage. So whose responsibility is it? What bucket do these fall into? Consider that:
- Systems and network teams generally consider middleware and databases to be part of the application layer
- Application teams generally consider middleware and databases to be part of the infrastructure
And it is not just middleware and databases. Just ask IT teams who should, for example, own and manage a web application firewall. Or ask whether monitoring administrative users in business applications is an element of “privileged user monitoring.” The answers are certainly not always clear-cut.
I don’t have a perfect answer either – splitting the world into the tiniest of buckets isn’t necessarily helpful, but neither are coarse-grained buckets with no agreement on what goes where. Let’s just remember that differences in perspective must be acknowledged and dealt with, or control gaps will eventually form.
Category: Applications Security Tags: application security, Database Security, middleware security
by Ramon Krikken | April 23, 2012 | Comments Off
I’m hoping you can all make it out to San Diego at the end of August this year. We’re planning another great Catalyst conference, featuring not only our Gartner for Technical Professionals analysts and content but also a good number of awesome external speakers!
Unlike in previous years, though, we won’t have dedicated security tracks. Instead, we’re organizing all our talks around three major themes:
- IT as a Broker: Clouds and Services
- Information Everywhere
Not to worry: we have security & risk and identity & privacy content in all three areas. In fact, we have a great deal of security content throughout. A sneak peek at the agenda reveals topics such as:
- Mobile application security architecture
- Mobile platform security and controls
- Security monitoring in the cloud
- Security intelligence and shared intelligence
- Information sprawl research results
- Encryption and data masking (in general, and for cloud)
- Content-aware controls
Many of these topics have multiple talks, so as you can see there’s a lot for the security-minded. For those more interested in identity, Ian Glazer posted more details in an earlier blog post. Also keep an eye on the Catalyst 2012 site for updates.
Category: Uncategorized Tags: big data, cat12, catalyst, cloud, mobility, nexus, security, social media
by Ramon Krikken | April 19, 2012 | Comments Off
We’re always working on updating our software security / application security coverage, and the time has come to spend a few months gathering new information for the application security program guidance document. To make it more than “here’s another general maturity model – do everything it says,” I’m looking for what makes and breaks the program in practice. In particular, I’m looking for anecdotes and data in the area of developer training, which is somewhat of an opaque area for me. To wit, consider if and how the following relates to developer training:
“teach a man how to fish, and he may still end up starving the whole family.”
In other words, what exactly should developers be trained on?
I’ve asked quite a few people for data. Data that shows how training improves software security quality. And I’ve come up empty-handed. I realize it’s hard to measure. Ideally we’d have a controlled study to gather some data, but such studies can be hard to pull off.
I know some of the more mature software security teams / programs do measure this in various ways. If you have some data to share, please do let me know in the comments or via email – I’ll keep it in the strictest confidence when requested, of course. You can reach me at email@example.com
Related: if you’re going to be at the 2012 U.S. Security Summit, stop by my session “The Art of Saying Yes - Selling Application Security To Developers and Architects” on Tuesday (in the Business of IT Security track). We’re also featuring many other Technical Insights sessions by my GTP colleagues in the other tracks.
Category: Security Tags: application security, developer training, security, security summit, security training, software security
by Ramon Krikken | April 17, 2012 | 2 Comments
In my last post, commenter Randall Gamby notes that “of course [tokenization is encryption].” I wholeheartedly agree. But unfortunately the current PCI guidance does not, and cannot, support this notion (and, because of this, people who build and/or implement tokenization cannot do so either without creating a tokenization catch-22). When we look at the PCI SSC statement on the scope of encrypted data, we see the following:
“However, encrypted data may be deemed out of scope if, and only if, it has been validated that the entity that possesses encrypted cardholder data does not have the means to decrypt it.”
This takes us to a problematic decision point: if all tokenization is indeed encryption, having a tokenization system in-house doesn’t reduce scope. If it is not, then encrypted tokens remain fully in scope even though using a unique key per token makes them functionally equivalent to random tokens – a quick sketch below illustrates that equivalence. Neither outcome is sensible (and the proposed re-examination of encrypted tokens to carve out exceptions doesn’t make sense to me either under the scope guidance).
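To make the equivalence concrete, here is a toy Python sketch (illustrative only, not production code; it uses the third-party cryptography package, naive zero-padding, and helper names of my own invention). In both schemes, reversing a token requires access to the vault and nothing else:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Scheme A: "random" tokens - the vault maps token -> PAN directly.
vault_a = {}

def tokenize_random(pan: str) -> str:
    token = os.urandom(16).hex()  # the token carries no information about the PAN
    vault_a[token] = pan
    return token

# Scheme B: encrypt each PAN under its own fresh, single-use AES key.
vault_b = {}

def tokenize_unique_key(pan: str) -> str:
    key = os.urandom(16)  # a new key for every value, never reused
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    token = (enc.update(pan.encode().ljust(16, b"\x00")) + enc.finalize()).hex()
    vault_b[token] = key  # reversal still requires the vault contents
    return token
```

To anyone without the vault, the outputs of the two schemes are indistinguishable from random strings – yet under the current scope guidance the first may take you out of scope while the second cannot.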
I much appreciated it when the PCI SSC loosened its requirements around encryption key rotation. After all, is it really sensible to have to rotate backup tape encryption keys every year? But more change is needed – change to ensure that any reversible data transformation system is examined with the right level of architectural and algorithmic scrutiny (i.e., so we can focus on what matters in practice).
Category: Uncategorized Tags: code book, cryptography, encryption, insanity, PCI, PCI-DSS, politics, tokenization
by Ramon Krikken | April 11, 2012 | 2 Comments
It’s been a while since I blogged about tokenization. My last post on the subject drew some interesting comments – and conflicting ones at that: one commenter argued that equating tokenization and encryption is bad for tokenization, because tokenization is more secure per se. Another, however, commented that it’s in fact bad for encryption, because encryption is based on proven algorithms while tokenization is not. Are these really contradictory views?
Interestingly enough, both commenters pose valid concerns. This is possible because the first view is based on architecture distinctions while the second is based on algorithm distinctions. In the end, though, tokenization is a form of encryption: the system builds a secret code book that contains the mappings between the plaintexts and the ciphertexts (or codes). In essence, it’s a symmetric block cipher in ECB mode.
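A toy sketch of that idea (hypothetical and for illustration only – real products wrap the code book in far more protection):

```python
import os

class CodeBookTokenizer:
    """Toy code-book cipher: the book itself acts as the (very large) key."""

    def __init__(self):
        self.encode_book = {}  # plaintext -> token
        self.decode_book = {}  # token -> plaintext

    def tokenize(self, plaintext: str) -> str:
        # Deterministic per value, just like a block cipher in ECB mode:
        # the same plaintext always maps to the same token.
        if plaintext not in self.encode_book:
            token = os.urandom(8).hex()
            self.encode_book[plaintext] = token
            self.decode_book[token] = plaintext
        return self.encode_book[plaintext]

    def detokenize(self, token: str) -> str:
        return self.decode_book[token]
```

Equal plaintexts always produce equal tokens – the defining (and sometimes problematic) property of ECB mode.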
Looking at history, we actually see many crypto systems based on code books. A famous example is the Navajo code talkers of World War II. A lot of people get hung up on the “but tokenization doesn’t have a key” thing … the problem is, it most definitely does have one, in the form of the code book. Those who disagree with such large things being called keys must by extension conclude that the one-time pad isn’t encryption either – an untenable position if you ask me.
So why do I keep banging this drum? In other words, why is it important to acknowledge this? The reason is simple: the security created by a tokenization system is first and foremost dependent on its architecture. Algorithms are of course extremely important, but the best algorithm in the world simply can’t save a bad protocol, design, or implementation. The rules by which tokenization and encryption have to play are identical.
However, there is a problem in practice: the way encryption and tokenization are defined in the PCI DSS, and how they impact PCI scoping as a result of that definition … but more on that in the next post, which will also address the “based on math” argument brought forward in a three-piece vendor counter-argument to my original post.
Category: Security Tags: code book, cryptography, encryption, PCI, PCI-DSS, tokenization
by Ramon Krikken | April 3, 2012 | 2 Comments
In an NY Times Op-Ed, “How China Steals Our Secrets,” Richard Clarke notes:
“Under Customs authority, the Department of Homeland Security could inspect what enters and exits the United States in cyberspace. Customs already looks online for child pornography crossing our virtual borders. And under the Intelligence Act, the president could issue a finding that would authorize agencies to scan Internet traffic outside the United States and seize sensitive files stolen from within our borders.”
Certainly an interesting proposition, although potentially worrisome in terms of reach. Mr. Clarke does note that:
“Indeed, Mr. Obama could build in protections like appointing an empowered privacy advocate who could stop abuses or any activity that went beyond halting the theft of important files.”
But I’m not sure that it’s quite that simple. Lest we forget, even building the capabilities to support this opens the gates to expanding use. A case in point is what the ACLU terms “constitution-free zones,” which allow border enforcement action as far as 100 miles from an actual border – a construct that has actually been abused to avoid U.S. Fourth Amendment protections against search and seizure.
So does the invocation of customs authority then also create a “constitution-free cyber-zone”? How far would it extend? And, given that Internet traffic can be explicitly routed across the border – even if only for a nanosecond – does such a discussion even matter? Given the powers of the government under customs authority, and the ease with which one could be “forced to cross the border,” I would say a proposal such as this requires very close scrutiny.
Although I believe we do need efforts in the area of protecting U.S. businesses from foreign espionage, I’m leery of proposals that seem to aim at side-stepping existing privacy protections. After all, “think of the economy” can very quickly extend to trampling privacy in the name of some “war on XYZ.”
UPDATE: I should also add that while the government might be able to instantiate mechanisms to decrypt some Internet traffic, the effectiveness of what is, in essence, perimeter inspection could quite easily be diminished through crypto means. This, much like (in my opinion) whole-body scanners, misses the point of current threats and is most effective in situations the technology wasn’t “supposed to be used for.”
Category: Security Tags: 4th amendment, border enforcement, intellectual property, privacy
by Ramon Krikken | March 30, 2012 | 2 Comments
Although we have little information available at the moment about the latest credit card processor breach (source: Krebs on Security), it is a good opportunity to continue the conversation on how the usage patterns of data in a business process change (or not!) the dynamics of security exposure.
Merchants have been able to take advantage of ongoing security advances. Tokenization in particular means they need not worry as much about the existence of credit card data on their systems and networks. But at some point in the business process of taking and clearing a payment, someone needs to work with the actual card data … and that is where the processors (and ultimately the banks – leaving aside for a minute the trickle-down effect on merchants and consumers) are exposed to risk – and a highly concentrated risk at that. Notwithstanding the valiant efforts of all parties involved, encryption and monitoring – as opposed to rolling out more fundamental changes to payment processing – really only get you so far in reducing the exposure.
The point here, of course, is not about payment processors per se. It’s about how large risk aggregations can create these near all-or-nothing situations. These aggregations are in large part due to risk shifts created by business processes that, at least at some point in time, move the majority of the exposure to certain parties, systems, networks, and so on. And even though doing so, rather than fixing the business process (or, in some cases, slowing down change – can you hear me, smart grid?), may be the most economically or politically defensible option, we cannot be surprised at the outcomes.
P.S. I agree with Adam Shostack – way to not do a breach disclosure, folks. Jeez.
[EDIT: Looks like the affected processor posted a statement - too bad the rumor mill got it first]
If the facts warrant more discussion, I will post a follow-up when more details become available. But I just couldn’t let a good “crisis” go to waste.
Category: Security Tags: breach, business process, payment processing, PCI-DSS, risk aggregation, tokenization
by Ramon Krikken | March 2, 2012 | Comments Off
Jeffrey Wheatman and I just published a new note each on database security. For a long time, the main solutions in this space were referred to as Database Activity Monitoring. Monitoring is still an incredibly important aspect, but the products have really grown up in the last couple of years – so we’re renaming the solutions to more accurately reflect their capabilities and customer requirements: enter Database Audit and Protection.
The most interesting thing about DAP is its sheer breadth of capabilities applied specifically to databases: from attack prevention to user auditing to discovery, it covers almost anything and everything database security could encompass. This breadth is also visible in the number of adjacent and overlapping markets and solution types – nine of them – and it even includes the extremely important non-security adjacency to, and overlap with, aspects of Data Management.
Even though I’m often the last to recommend a channel-specific security product, creating a database security architecture that encompasses a multitude of databases in large environments is well beyond what DBMS and other data store products can offer by themselves. And while there are multiple ways to skin this cat (see again the adjacencies), I would strongly encourage customers to do at least a paper-based evaluation of how DAP fits in their overall security, database, and application architecture.
If you have an ITL seat and are looking for a high-level discussion on the renaming, have a look at “Database Activity Monitoring Is Evolving Into Database Audit and Protection” (customer access only).
And if your team has a GTP license, get all the technical and architectural details and advice in “Enhancing Security and Compliance With Database Audit and Protection” (customer access only).
Category: Security Tags: DAM, DAP, Data Management, Security, Database Activity Monitoring, Database Audit and Protection, Database Security
by Ramon Krikken | December 9, 2011 | Comments Off
My colleague Eric Ouellet recently published “Is OASIS KMIP Yet Another Hollow Key Management Standard?” (subscription required). In the note, he raises several important questions around KMIP becoming a widely adopted standard. I share his concerns, and will be touching on this as well in my upcoming note about key management.
Without going into the gory details, let’s just say that vendors’ intentions only get us so far. Ultimately, it’s the customers who, to a great extent, are in the captain’s chair.
My advice (or a plea) to end-user customers: if you have any encryption/crypto products (and if you happen to like the idea of KMIP, of course), could you tell the vendor you’d really appreciate it if they’d KMIP-enable them?
Not because KMIP is the be-all and end-all of key management standards, but simply because having a standard that vendors can actually rally behind – even one with shortcomings – is good for everyone (compare SNMP, X.509, PKIX, etc.). You don’t want to end up going from 10 proprietary key management solutions to 8 KMIP-enabled-but-each-managing-this-one-specific-proprietary-thing ones.
P.S. While you’re looking at KMIP, it also makes sense to consider it as the key management standard for internally developed encryption/crypto applications.
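As a rough illustration of what that could look like, here is a minimal sketch using the open-source PyKMIP client library (PyKMIP itself, the hostname, and the bare-bones connection settings are my assumptions for the example; a real deployment would also configure TLS certificates):

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Hypothetical setup: assumes a KMIP-compliant server at kmip.example.com
# listening on the standard KMIP port (5696).
client = ProxyKmipClient(hostname="kmip.example.com", port=5696)

with client:
    # Create a 256-bit AES key on the server; the server returns its unique ID.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Later, any KMIP-enabled application can fetch the same key by ID.
    key = client.get(key_id)

    # Retire the key through the same standard interface.
    client.destroy(key_id)
```

The point of the standard interface is that the application code never has to care which vendor’s key manager sits behind the wire protocol.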
Category: Cloud Security Tags: cryptography, encryption, fiefdoms, key management, KMIP, OASIS, vendors
by Ramon Krikken | November 2, 2011 | Comments Off
Just published: the Gartner IT1 2012 planning guides [customer access only, although you can get a very good sneak peek into our thinking by looking at some of the tables of contents].
The overall theme, perhaps to no surprise, is summarized as “the macro trends of volatility, multiplicity, versatility, and mobility underlie much of Gartner’s IT1 coverage in 2012.”
Besides the “2012 Planning Guide: Security and Risk Management,” those who work in my areas of coverage will also enjoy “2012 Planning Guide: Application Delivery Strategies,” “2012 Planning Guide: Data Management,” and “2012 Planning Guide: Mobile Strategies.” There are several others around collaboration, data center, identity, and cloud. My suggestion to IT professionals is to read at least the recommendations in each.
It looks like everyone’s gearing up for an interesting 2012 research calendar. And because nothing is set in stone until we start to write, we always welcome input on where to look next.
Category: Uncategorized Tags: