by Mark Diodati | June 6, 2011 | Comments Off
Quest is actively building out its identity management product portfolio. Some notable acquisitions:
- Vintela (Active Directory Bridge – 2005)
- Völcker Informatik AG (provisioning/access governance – late 2010)
- e-DMZ Security (privileged account management – early 2011)
Today, Quest announced the acquisition of Symlabs, a vendor with virtual directory and federation products.
In its early days, virtual directories dramatically decreased the deployment time associated with web access management (WAM) systems. Customers were able to present a single data view to the WAM system without a multi-year meta-directory project on the critical path. Over time, virtual directories became more valuable, particularly for federation identity provider (IdP) deployments.
The latest trend in enhanced directory services is the synchronization (sync) server, which will happily replicate user accounts from the enterprise directory store to SaaS applications. Account replication (and yes, federation) is an essential component for extending existing enterprise identity management to Cloud-based applications. The synchronization server is built into federation systems from CA (previously Arcot), Ping Identity, and VMware (formerly TriCipher). UnboundID has a standalone sync server, and Radiant Logic has integrated the capability into its virtual directory products.
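The core loop of a sync server is easy to sketch. In the toy below, in-memory dictionaries stand in for the enterprise directory and the SaaS application's user store; real products speak LDAP on one side and a vendor provisioning API on the other, so every name here is illustrative rather than any vendor's actual interface:

```python
# Minimal sketch of a sync server's replication pass. In-memory dicts
# stand in for the enterprise directory and the SaaS user store; the
# function and attribute names are illustrative, not a real API.

def sync_accounts(directory, saas_store):
    """Replicate enterprise directory accounts into a SaaS user store."""
    for uid, attrs in directory.items():
        if uid not in saas_store:
            saas_store[uid] = dict(attrs)      # create a new SaaS account
        elif saas_store[uid] != attrs:
            saas_store[uid].update(attrs)      # push attribute changes
    # deactivate SaaS accounts that no longer exist in the directory
    for uid in list(saas_store):
        if uid not in directory:
            saas_store[uid]["active"] = False
    return saas_store

directory = {"asmith": {"mail": "asmith@example.com", "active": True}}
saas = {"bjones": {"mail": "bjones@example.com", "active": True}}
synced = sync_accounts(directory, saas)
print(synced["asmith"]["mail"])    # replicated from the directory
print(synced["bjones"]["active"])  # deprovisioned: no directory entry
```

Deactivating (rather than deleting) orphaned accounts mirrors what most deployments actually do, since SaaS vendors often tie audit history to the account record.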
The purchase of the virtual directory and federation products is a good one and will serve Quest customers well. The federation product handles the "to the Cloud" scenario described above well. To remain competitive in that same important scenario, the virtual directory must pick up sync server capabilities as soon as possible. The two products can then work in harmony to provision users and sign them into SaaS applications.
I’ll be talking about identity and Cloud applications at this year’s Catalyst Conference in late July.
Category: Cloud IAM Uncategorized Tags: Catalyst-NA
by Mark Diodati | June 2, 2011 | Comments Off
The fallout from the March attack on RSA has arrived. Per the news agencies—and the excellent blog post by Bob Cringely—several large defense contractors (Lockheed Martin, L-3, and potentially Northrop Grumman) were attacked using the information stolen in the March attack. The tokens associated with the stolen information should now be considered compromised. Recent events indicate that it’s very likely that the stolen information can be used to mount attacks on other RSA customers, and not just defense contractors.
RSA SecurID customers should demand replacement tokens, and the delivered tokens must be manufactured after implementation of RSA’s post-attack security procedures. Until RSA customers receive the replacement tokens and endure the subsequent pain and suffering of distributing them, customers should follow RSA’s instructions that were received after the initial attack.
While we are talking about the protection of SecurID token information, the attack vector that organizations dismiss at their peril is the on-premises secure storage of the token information. I have seen many seed record CDs (OK, floppies back in the day) on the desks of system administrators or sitting on top of the SecurID server. Also, the token information can be retrieved from the server by a knowledgeable SecurID system administrator.
The reputation of the RSA SecurID OTP technology may be badly tarnished due to this attack. However, the real damage is limited to the token information that was stolen. In other words, tokens created by RSA after the attack should not be vulnerable, assuming that RSA's new precautions are effective. By the way, did you notice that most of RSA's competitors were publicly quiet after the March attack? You can bet that they were shoring up their OTP security. We'll be talking about stronger authentication at the Catalyst Conference.
Category: Uncategorized Tags: Catalyst-NA
by Mark Diodati | May 6, 2011 | 1 Comment
Here at Gartner/Burton Group, we have been closely tracking identity standards—including Service Provisioning Markup Language (SPML)—since 2003. The standard has some serious flaws, which we have articulated in our research documents and blog posts. In the summer of 2010, the participants at the Gartner Catalyst Conference Standards-Based Provisioning Special Interest Group issued a consensus statement that SPML was at a crossroads due to its complexity, lack of conformant implementations, and nearly non-existent support by application vendors. Nothing has changed since last summer; the OASIS Provisioning Services Technical Committee (PSTC) has not taken any steps to remediate these issues.
Several weeks ago, a new provisioning specification was released: Simple Cloud Identity Management (SCIM). It is an important step toward the goal of standards-based provisioning. Representatives from Google, salesforce.com, Ping Identity, VMware, UnboundID, Okta, SailPoint, and other organizations are working on the initiative.
SCIM comes with four important benefits. First, SCIM is simple: it leverages REST and JSON rather than SOAP and XML, focuses on essential CRUD (create, read, update, and delete) operations, and avoids the complexity of the LDAP object class inheritance model. Second, it doesn't place an undue burden on the target application like SPML does (check out our research for the details). Third, SCIM has an extensible user schema (think LDAP's inetOrgPerson), something that was sorely lacking in SPML. Lastly, SCIM comes with support from the major Cloud application vendors (e.g., salesforce.com and Google).
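To illustrate that first benefit, here is a rough sketch of a SCIM-style user resource and its REST/CRUD mapping. The schema URN and the /Users paths approximate the early draft as I read it, so treat them as illustrative rather than normative:

```python
import json

# Sketch of a SCIM-style user resource and the REST verb -> CRUD mapping.
# The schema URN and /Users paths approximate the early SCIM draft and
# are illustrative only.

def scim_user(user_name, given, family):
    """Build a minimal SCIM user resource as a JSON document."""
    return json.dumps({
        "schemas": ["urn:scim:schemas:core:1.0"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
    })

# CRUD operation -> (HTTP verb, resource path)
crud = {
    "create": ("POST",   "/Users"),
    "read":   ("GET",    "/Users/{id}"),
    "update": ("PUT",    "/Users/{id}"),
    "delete": ("DELETE", "/Users/{id}"),
}

doc = json.loads(scim_user("asmith", "Anne", "Smith"))
print(doc["userName"])    # asmith
print(crud["create"][0])  # POST
```

Compare this handful of lines to the SOAP envelopes and XML Schema machinery an SPML request requires; the difference in integration burden is the whole point.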
Some folks in the identity community state that SCIM needs to support the functionality provided by the SPML Capabilities (e.g., Reference, Batch, etc.). Based upon our research, these capabilities are rarely (if ever) used in the wild. The functionality provided by these Capabilities can exist outside SCIM, with the added benefit of not overburdening the target application. Let’s have that debate; please provide a comment to get it going.
Several identerati have advocated rolling SCIM into the PSTC work for the next release of SPML. Until last fall, the OASIS PSTC had been largely dormant for nearly four years. With all apologies to the really smart people who are on the committee, a harmonization effort will take years and delay the release of a viable provisioning standard. What is the point of harmonizing SCIM to a largely unadopted, broken standard?
Others have stated that SCIM is suited only for Cloud applications. I disagree. If SCIM works for cloud applications, then it will work for on-premises applications.
SPML may still live on for specific use cases. For example, some organizations have utilized SPML to connect disparate provisioning systems (despite the fact that none of the major provisioning systems have a conformant SPML service). This is still a valid use case; if it ain’t broke, don’t fix it.
My unsolicited guidance for the folks working on the SCIM specification: be disciplined. Keep the specification as simple as possible. Avoid the "everything but the kitchen sink" philosophy that sank SPML v2. Focus on the end goal of providing a viable provisioning standard; don't bother trying to harmonize SCIM with SPML—few organizations are using SPML today. Implement the standard as quickly as possible in your company's products and services to spur adoption.
Gartner/Burton Group Recommended Reading
Directory Services, Federation, and the Cloud (2010 Assessment Document – subscription required)
Consensus on the Future of Standards-Based Provisioning and SPML (2010 blog)
OASIS or Mirage: Standards-Based Provisioning (2010 Technical Case Study – subscription required)
SPML: Life Support Redux (2010 blog)
SPML Is On Life Support …. (2010 blog)
The Value of SPML Gateways (2009 blog)
New Year’s Resolution: Let’s Talk More about SPML (2009 blog)
The Latticework of Identity Services (2007 blog)
SPML: Gaining Maturity (2006 Technology and Standards Document – subscription required)
Recommended Reading from Wicked Smaaht People
Patrick Harding: Why SCIM over SPML? Why not?
John Fontana: From SPML churn rises new crack at provisioning standard
Nishant Kaushik: SCIMming the Surface of User Provisioning
Dave Kearns: SCIMing the provisioning landscape
Martin Kuppinger: SCIM – will SPML shortcomings be reinvented?
by Mark Diodati | April 1, 2011 | Comments Off
At last measurement, authentication dialogues were 25% of the total number of dialogues in our Identity and Privacy Strategies service. A common dialogue request goes something like this: “We have a one-time password (OTP) authentication solution. We want to evaluate another vendor’s lower cost OTP solution, or a smart card solution for physical and logical access. What are the decision factors?”
If the client wishes to move to smart cards, then we discuss when the next OTP renewal is coming. The renewal is a milestone because it requires a cash outlay and represents a commitment to the OTP solution for several years. The typical answer is that the renewal is coming in six months. A client in this situation will not be able to move to smart cards; smart card deployments are multi-year engagements (particularly if physical access control systems [PACS] are involved).
If the client is in a position to move to another OTP vendor, the question of application support arises. Authentication solutions are useless if they don’t work with the customer’s applications. If the current OTP solution is protecting many different application types, the customer may need to make a decision about moving to another vendor—at the expense of excluding one or more specific applications from protection. If the client is using the OTP solution for RADIUS-based applications and a small list of web platforms, a switch is likely possible.
Given all these complications, few customers elect to switch OTP vendors; inertia wins. First, the customer has invested time and money implementing the current OTP solution. Second, customers typically have "rolling" token renewals, so a multi-vendor solution would be required for a period of time (most likely years).
In IT you CAN replace anything with time and money, but it is best not to underestimate the effort of doing so. Some OTP vendors provide a RADIUS proxy to broker authentication requests back to the original OTP infrastructure, which can help with transitioning to another OTP solution.
Let’s work from the perspective that the customer is concerned that specific OTP devices have been compromised. Are other vendors’ OTP solutions immune from this type of compromise? Is it faster and cheaper for the customer to distribute replacement OTP devices from the first vendor, particularly if the vendor provides free replacement tokens?
To me, the macro-level dialogue we can (and do) have with our customers is about matching authentication methods to resources based upon risk. All authentication types have strengths and weaknesses. Even the gold standard for authentication—the smart card—can be compromised: once the user has authenticated to the smart card, a trivial piece of malware can send data down to the card for private key signing functions, enabling the attacker to authenticate to PKI-enabled applications. OTPs are subject to man-in-the-middle attacks, particularly in consumer phishing scenarios. Consumer authentication solutions have holes in them (e.g., lightweight device IDs that are easily impersonated, risk analytic solutions with high false accept rates). Software PKI solutions are subject to many of the same types of attack as passwords. In the end, the layering of authentication methods and other techniques can improve identity assurance. My blog post from 2007 discusses these concepts in more detail.
Recommended Reading (subscription required):
Authentication System Selection Decision Point
Road Map: Implementing OTP Authentication
Road Map: Implementing Smart Card Authentication (soon to be published)
by Mark Diodati | March 22, 2011 | 2 Comments
While we wait for more information from RSA about the recent attack on its SecurID tokens, I’d like to revisit a potential attack vector that I discussed in my first blog entry on the topic (March 18). The OTP device’s seed and the serial number are present during the manufacturing process. What if the OTP device’s symmetric key (AKA seed) can be derived from the OTP device serial number? Can something private be derived from something public?
Every SecurID OTP device has a serial number. The serial number is plainly visible on the back of the OTP device, the shipping packaging, in electronic form in many places, and (potentially) on shipping documentation. OTP devices that are shipped to customers are sequentially numbered.
It is easy to imagine the disclosure of one OTP serial number, including direct visualization, social engineering, insider knowledge, etc. The attacker would need to know the username associated with the OTP serial number as well as the user's PIN. I discuss the ways this information can be captured in my last blog entry on the topic (March 21). If the attacker can acquire even one customer OTP serial number, he can acquire many more because the devices are sequentially numbered.
If the attack on the SecurID OTP system enables the calculation of the seed based upon the serial number, it presents risk to customer deployments. I am keen to hear more actionable information from RSA on its recent attack. Our clients are asking us for guidance. Without knowing exactly what transpired, we have to be mindful of the worst outcome and advise accordingly.
by Mark Diodati | March 21, 2011 | Comments Off
After writing about the recent SecurID attack on Friday, I began thinking about the utility of the SecurID symmetric keys (AKA "seeds") in the hands of the attacker. Specifically, what would the attacker need in order to leverage these seeds to access protected resources? I must emphasize that RSA has (at this point) not stated that any seeds have been stolen. But the compromise of the seeds is receiving the most scrutiny given the public nature of the SecurID algorithm.
The attacker’s challenge is correlating the stolen seed to a real user. To do this, the attacker must capture a one-time password and the username. Both are provided when the user authenticates to a resource, and both can be captured at least two ways.
First, the OTP and username can be captured over the network in cases where session encryption is not used for the authentication transaction (between user and resource, and between resource and Authentication Manager). Today, it is relatively rare not to use transport security, but it does happen. Second, the username and password could be captured via malware on the user's workstation. My colleague Avivah Litan discusses this type of malware attack on her blog.
Once a single OTP and username are captured, the attacker can correlate the passcode to the stolen seed and recover the user's PIN (please see my prior entry for information about how the PIN and passcode are combined to make the OTP). This correlation would not be a herculean effort by any means: run all the stolen seeds and the time of authentication through a software generator, then compare each calculated passcode to the captured one. Once the attacker finds a match, he has the user's PIN and device seed, and can use both to impersonate the user during future authentication attempts. The attacker would also need to compensate for "clock drift," which means calculating roughly 10 passcodes per seed to match the captured passcode.
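Here is a toy sketch of that correlation step. The real SecurID algorithm is proprietary, so the HMAC-based generator below is purely a stand-in of my own, and the PIN length, drift window, and seed format are all assumptions for illustration:

```python
import hmac, hashlib

# Toy correlation attack. The HMAC-SHA1 generator is a stand-in; the
# real SecurID algorithm is proprietary. Assumes a 4-digit PIN prefix
# and a 6-digit passcode, purely for illustration.

def passcode(seed: bytes, interval: int) -> str:
    """Derive a 6-digit passcode from a seed and a time interval."""
    digest = hmac.new(seed, str(interval).encode(), hashlib.sha1).digest()
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

def correlate(stolen_seeds, captured_otp, interval, drift=5):
    """Match a captured OTP (PIN + passcode) against stolen seeds,
    checking +/- drift clock intervals; returns (seed, PIN) on a hit."""
    for seed in stolen_seeds:
        for t in range(interval - drift, interval + drift + 1):
            code = passcode(seed, t)
            if captured_otp.endswith(code):
                return seed, captured_otp[:-len(code)]  # PIN is the prefix
    return None, None

seeds = [b"seed-%d" % i for i in range(100)]        # the "stolen" seed file
interval = 27_000_000                               # pretend clock interval
captured = "4321" + passcode(seeds[42], interval - 2)  # sniffed OTP
seed, pin = correlate(seeds, captured, interval)
print(pin)  # "4321" -- the attacker now holds both PIN and seed
```

Even at 10 drift intervals per seed, this is a trivial amount of computation; the hard part for the attacker is capturing the OTP and username, not the arithmetic.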
Ironically, the modern version of RSA's SecurID software OTP device may be less susceptible to attack as compared to the hardware OTP device. In general, hardware OTP devices are more secure due to their tamper-resistance capabilities. The software OTP device supports the CT-KIP protocol. The CT-KIP protocol enables the software OTP device and server to negotiate a new seed, which is not mathematically related to the seed that RSA shipped to the customer. The seed in the hardware OTP device cannot be changed and is the same as the seed that RSA ships to the customer with the device.
My assessment is that capturing a username and single OTP is hardly theoretical and has real-world implications for SecurID customers if their seeds have been stolen. But what seeds—if any—were compromised at RSA? This is the magic question, the one we do not yet know the answer to. None, all, or seeds associated with specific organizations? If all of the seeds have been stolen, then the attacker can mount a generalized attack on any SecurID authentication. If seeds associated with a specific organization have been compromised, then the attacker must target users associated with that specific organization. Hopefully, RSA will disclose more about the attack so that its customers can take the appropriate next steps. As an aside, many customers do not adequately secure the seeds upon receipt from RSA, regardless of the recent attack on RSA; this is another vector for an attack on SecurID.
Many thanks to Merritt Maxim (my colleague at CA and RSA; twitter: merrittmaxim, blog: http://community.ca.com/blogs/iam/default.aspx) for reviewing this entry prior to its publishing.
by Mark Diodati | March 18, 2011 | 6 Comments
As I write this, RSA has announced it experienced an attack on its RSA SecurID one-time password (OTP) products. You can see Art Coviello’s letter to RSA’s customers here. The letter is very light on the nature of the attacks and what was stolen. In the interest of full disclosure, I worked at RSA for six years until 2003.
RSA SecurID leverages a symmetric key algorithm to generate one-time passwords. The device stores the symmetric key (in SecurID speak – the “seed”) and passes it through the time-based algorithm. Voila: the passcode appears on the screen. The passcode is concatenated with the user’s static PIN (think ATM card PIN) to build the one-time password.
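The flow above can be sketched in a few lines. To be clear, the actual SecurID algorithm is proprietary; the HMAC-based stand-in below and the 4-digit PIN are assumptions used only to show how seed, time, and PIN combine:

```python
import hmac, hashlib

# Toy time-based OTP in the spirit described above. HMAC-SHA1 is only a
# stand-in for RSA's proprietary algorithm; PIN length is illustrative.

def one_time_password(seed: bytes, interval: int, pin: str) -> str:
    """Seed + time interval -> 6-digit passcode, prefixed with the PIN."""
    digest = hmac.new(seed, str(interval).encode(), hashlib.sha1).digest()
    passcode = str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)
    return pin + passcode          # PIN concatenated with the passcode

otp = one_time_password(b"device-seed", 27_000_000, "4321")
print(otp[:4])   # the static PIN portion
print(len(otp))  # 10: a 4-digit PIN plus a 6-digit passcode
```

Note what this structure implies: anyone holding the seed can compute the passcode for any point in time, which is why the seeds, not the algorithm, are the secret that matters.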
There are two items in the SecurID secret sauce: the algorithm and the seed. From a cryptography 101 perspective, algorithms should be public and keys should be private. Nevertheless, the SecurID algorithm is ostensibly "private." We should not consider the algorithm secret; it is distributed more broadly than you would expect.
So the security of the SecurID system quite rightly rests with the seeds. When RSA ships OTP devices to its customers, it also ships a seed file, which contains all of the symmetric keys associated with the OTP devices in the order. The seed file is sent for both hardware- and software-based devices, and is loaded into the Authentication Manager. As an aside, many SecurID customers don’t protect these seeds adequately.
Let’s conjecture about what the attack might look like. If the algorithm was stolen, RSA may not be happy about it, but I think the hoof prints are already visible outside the barn door. If any customer seed records were stolen (or the ability to create them based upon the device serial number), this is a significant attack that directly compromises customer SecurID deployments.
If the seeds have been stolen, one might argue that the user’s PIN will save the day; not so. First, the PIN is a weak password. Second, not all OTP devices have PINs. Devices that have not been distributed yet don’t have a PIN. Deployed tokens that will be redistributed to new users will have their PIN reset.
Now, we wait and learn more about the attack.
Recommended reading (subscription required):
Authentication Decision Point
Road Map: Replacing Passwords with OTP Authentication
Strong Authentication: Increased Options, but Interoperability and Mobility Challenges Remain
by Mark Diodati | March 4, 2011 | Comments Off
One of the research topics that I am responsible for is UNIX1 security. Very early in my career, I grew to love awk, sed, and the Korn shell. While working out, I listen to Korn, too (That Korn/Korn coincidence never gets old for my sys admin buddies – these pictures are hanging in many enterprise cubicles).
UNIX systems have a root account, and many people typically require access to this account to do their job. We call root a ‘privileged account’ because it is not associated with a single carbon-based life form, and many people use it. The sudo open source utility enables the delegation of root privileges to lesser-privileged users. It’s a way to enable UNIX people to do their job without directly using the root account. The root account “owns” the system and therefore can breach confidential data and cause other mayhem. The sudo utility is good for what it does, but it lacks what UNIX security products provide: practical centralized policy management and auditing, as well as an easier-to-use privilege delegation shell. These are fighting words for some of my buddies (and maybe you): sudo is not up to the task for large-scale UNIX security deployments.
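To give a flavor of that delegation model, here is a hypothetical /etc/sudoers fragment (the group and command names are illustrative); it lets members of a dbadmins group control a single service as root without ever handing out the root password:

```
# Hypothetical /etc/sudoers fragment -- always edit with visudo.
# Members of the dbadmins group may run service-control commands
# for postgresql as root, and nothing else.
%dbadmins  ALL = (root) /usr/sbin/service postgresql *
```

Each such line is a policy decision, which is exactly why per-host sudoers files become unmanageable at scale and centralized policy management starts to pay for itself.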
But sudo is widely deployed and beloved (particularly in smaller deployments). Quest has been working for at least 18 months with Todd Miller (the sudo project maintainer) to extend sudo. The recently released version (1.8) has enhanced modularity. One use case is an external policy server.
It’s a smart move for Quest, and it is good for enterprises that leverage sudo. It opens up sales opportunities for Quest and other UNIX security vendors (e.g., Novell, CA, Centrify, Cyber-Ark, BeyondTrust [previously Symark], and Fox Technologies) to sell into sudo-centric environments. Quest obviously gets “first mover” advantage. Enterprises will acquire practical centralized policy management without changing the user’s experience. When the time is right, the enterprise can leverage the UNIX security product for its other capabilities.
Recommended reading (subscription required):
Providing a Strong Foundation: The Resurgence of UNIX Security Products
Privileged Account Management: Addressing the Seedy Underbelly of Identity
Markets Colliding: UNIX Security, Active Directory Bridge, and Privileged Account Management
PS: Just before posting this blog entry, I came across a great article from Joe Brockmeier that discusses the new sudo functionality in greater detail.
1 When discussing identity management, the Burton Group/Gartner definition of UNIX includes the classic UNIX variants, Linux, and occasionally Mac OS.
by Mark Diodati | January 25, 2011 | Comments Off
“I been here for years”. Admit it, the first thing that pops into your mind when hearing LL Cool J’s magnum opus is the hardware security module (HSM). The HSM is traditionally leveraged for X.509 certificate deployments in high identity assurance use cases. The HSM protects the certificate authority’s (CA) private key in a tamper-resistant hardware device. When issuing smart cards and certificates to users, an HSM is always in the picture. After all, why bother issuing hardware credentials to users when the CA’s private key resides in software?
From an authentication and single sign-on perspective, the rise of the Security Assertion Markup Language (SAML) federation credential was the response to the inherent difficulties of certificate distribution and private key security. Federation raised the PKI abstraction level; instead of issuing certificates to every user and managing a complicated validation process, a single certificate is issued to the identity provider (IdP). The IdP signs the user’s SAML assertion to facilitate the single sign-on (SSO) session. The issuance of fewer certificates is a good thing.
Enterprises are beginning to leverage the Cloud for critical infrastructure applications. Critical applications have higher identity assurance requirements and therefore need enterprise-grade credentials. Federation has become the Rosetta Stone between Cloud applications and on-premises user identities. Yet, few organizations use an HSM to protect the federation IdP’s private key. If the IdP’s private key is compromised, the attacker can issue SAML assertions and grant access to your critical Cloud applications. Are you using an HSM to protect your critical Cloud applications?
by Mark Diodati | January 10, 2011 | 1 Comment
There’s a story that goes along with “Where the Streets Have No Name”, the opening track of U2’s “The Joshua Tree”. The song seamlessly melds a wonderful introduction (which has a 6/4 time signature) into the body of the song, which is in 4/4. The recording process got so onerous that progress was slow. Very slow. Brian Eno—a prolific and influential producer—concluded that the best approach for finishing the song was starting from scratch. The song’s framework rendered its completion nearly impossible. Just as he was about to erase several months of work, an engineer physically restrained him from hitting the button on the tape machine. U2, Eno, and Daniel Lanois (the other producer) muddled through and finished the masterpiece.
The good news was that the track was nearly completed. Too bad our work in identity management is not. Has its framework become so convoluted as to no longer be useful?
The Cloud has changed the enterprise computing model forever. Are we shoehorning Cloud-based identity constructs into antiquated enterprise notions of identity ownership? Have we sliced the identity market so thinly that it has lost coherence and any hope of synergy? Has the market pushed the suite vendors to build integration and common administrative consoles that remain unused and don’t solve real problems?
I am thinking about four goals:
- How do we provide identity attributes to applications when (and only when) they need them?
- How do we enable users to prove their identities while addressing privacy concerns and without needless repetition?
- How do we ensure that users have appropriate access to sensitive information and how do we prove it?
- How do we do these things in an agile, cost-effective manner?
The IdPS team (Bob, Ian, Kevin, Lori, Robin, and I) is planning the 2011 Catalyst agenda and is interested in your perspectives. The agenda will incorporate the thoughts discussed here.
Some relevant research (subscription required):