Mark Diodati

A member of the Gartner Blog Network

Mark Diodati
Research VP
6 years at Gartner
21 years IT industry

Mark Diodati is a Research Vice President with Gartner's IT Professionals research and advisory service. His focus topics include mobility, authentication, cloud identity, federation, directory services, provisioning, identity services, Active Directory interoperability, and Web access management.

Mobile Device Certificate Enrollment: Are You Vulnerable?

by Mark Diodati  |  July 2, 2012  |  1 Comment

Last week, US-CERT published a vulnerability note on the Simple Certificate Enrollment Protocol (SCEP). The vulnerability was reported by Certified Security Solutions, a consulting company with extensive Windows and PKI deployment experience. The company’s summary of the vulnerability is here. This vulnerability—when combined with two additional pieces of information—enables an attacker to impersonate another user when enrolling for an X.509 certificate.

We should care about addressing the SCEP vulnerability because X.509 certificate usage is important stitching in the Bring Your Own Device (BYOD) fabric. SCEP is the de facto standard for certificate enrollment from mobile devices. Many organizations rely upon certificates for mobile access to the internal network, email, SharePoint, virtual desktops, web applications—you name it. An attacker can impersonate an authorized user and gain unauthorized access to these applications.

When examining this vulnerability, it's helpful to trace the origins of SCEP. Before mobile device certificate enrollment was commonplace, enterprises leveraged SCEP as an easy way to install certificates on network devices on the internal network. The enrollment process had fewer actors, and the identity of the device was easily vetted.

In the era of mobility, SCEP is used in a much more complicated ecosystem with untrusted devices at the source of the enrollment request.

The vulnerability appeals more to the insider, because two pieces of information are required to exploit it. The first is the SCEP shared secret, which the certificate requester uses as a credential to authenticate to the certificate authority[1] (CA) during the enrollment process. In the case of iOS (for example), a service distributes a profile containing the shared secret. The service is typically a mobile device management (MDM) solution, but it can also be a simple publishing mechanism[2]. Upon receipt, the mobile device generates the key pair and requests the certificate via SCEP. The mobile device contacts the CA with the SCEP request. The CA authenticates the request via the shared secret and—voilà—the mobile device now has a certificate.[3] The second piece of information is the distinguished name of a user object stored in the enterprise directory. The vulnerability enables the attacker to change the distinguished name in the SCEP request before enrolling for the certificate.

For short-term mitigation of the SCEP vulnerability, organizations should use unique shared secrets for each enrollment request. Many organizations use the same shared secret for all devices, or, worse, fail to use a shared secret at all. Additionally, organizations should leverage an LDAP proxy service and/or a directory synchronization service to limit exposure of the directory, which an attacker could otherwise query for user distinguished names.
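The unique-secret mitigation can be sketched as a small issuance service. This is a hypothetical illustration (the class and method names are mine): each enrollment receives a random, short-lived, single-use secret that is also pinned to the expected distinguished name, so a leaked secret cannot be replayed for another device or another user.

```python
import secrets
import time

class ScepSecretIssuer:
    """Issues one-time SCEP challenge passwords, one per enrollment."""

    TTL_SECONDS = 3600  # secret must be used within an hour

    def __init__(self):
        self._pending = {}  # secret -> (expected_dn, expiry)

    def issue(self, expected_dn):
        secret = secrets.token_urlsafe(24)  # unique per request
        self._pending[secret] = (expected_dn, time.time() + self.TTL_SECONDS)
        return secret

    def redeem(self, secret, requested_dn):
        """Validate and consume a secret; also pins it to one DN."""
        entry = self._pending.pop(secret, None)  # single use
        if entry is None:
            return False
        expected_dn, expiry = entry
        return time.time() < expiry and requested_dn == expected_dn

issuer = ScepSecretIssuer()
s = issuer.issue("CN=alice,O=Example Corp")
print(issuer.redeem(s, "CN=alice,O=Example Corp"))  # True
print(issuer.redeem(s, "CN=alice,O=Example Corp"))  # False: already consumed
```

Pinning the DN at issuance time is the same coupling idea the SCEP Validation Service enforces at the CA.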

Moving forward, organizations need to perform better user proofing prior to certificate issuance. The best approach may be the use of MDM solutions from vendors including AirWatch, Good Technology, Fiberlink, MobileIron, and Zenprise[4]. These products replace (or proxy) the SCEP enrollment process to prevent the switch of the distinguished name. Certified Security Solutions provides an alternative via its SCEP Validation Service (read about how it works here, on page 7), which enforces the coupling of the distinguished name to the SCEP secret via a certificate authority plugin. The SCEP Validation Service can complement an MDM solution because it enforces shared secret-distinguished name integrity for device-based SCEP enrollment.


And while we are on the topic of mobile devices and PKI security, we should talk about the risk of an attacker exporting the private key from the mobile device. The export will enable the attacker to use the certificate independently of the device. Some mobile operating systems preclude this export, but I am more concerned about the ability to retrieve the key pair from a device backup.[5] The NFC secure element (see my blog post) will mitigate export risks, but it increases the complexity of certificate distribution.

I will be talking about the SCEP vulnerability as part of my “Mobility and Identity: Getting It Right” talk at this year’s Catalyst. It will also be discussed in my upcoming research document on IAM capabilities in MDM products.

Suggested Reading

The Evolving Intersection of Mobile Computing and Authentication (subscription required)

Identity Bridges: Uniting Users and Applications Across the Hybrid Cloud (subscription required)

[1] For the purposes of clarity, I am not distinguishing between registration and certificate authority PKI components. For example, Microsoft has implemented SCEP in its Network Device Enrollment Service (NDES), which functions as a registration authority for the certificate authority (Microsoft Certificate Services).

[2] My document on Mobility and Authentication (subscription required) describes this setup in more detail.

[3] This is a high-level summary of the certificate enrollment process.

[4] These happen to be the vendors in the upper-right corner of Gartner's latest Magic Quadrant. No doubt, there are other MDM vendors that can help with certificate issuance, too.

[5] I am unaware of this attack in the wild. Yet.


Category: Authentication, Cloud, IAM, Mobility

RSA SecurID, Crypto, and Satan’s Computer

by Mark Diodati  |  June 27, 2012  |  Comments Off

You may have read about two recent vulnerabilities associated with RSA authentication products. Last month, a researcher specified how to copy a SecurID software token from one computer to another, which can enable an impersonation attack (Ars Technica). This week, researchers described a way to decrypt data encrypted with a SecurID smart card (again, Ars Technica). You can read RSA’s response (via Sam Curry) to the second vulnerability here.

What do these two attacks mean for RSA’s SecurID one-time password (OTP) customers? The answer is likely “not much”, particularly if they are using hardware OTP devices (the predominant form factor).

Software Token Vulnerability

In the first attack, a researcher was able to successfully copy the OTP secret (AKA symmetric key or "seed") from one computer to another. Honestly, after reading the specifics of this attack, my first reaction was … a yawn. For at least a decade, RSA has offered a software equivalent to the hardware OTP token. The company implemented additional controls to make it difficult to copy the secret from one computer to another, thereby raising the bar on an impersonation attack. The controls are better than most (if not all) other implementations in the market. Now, a researcher has found a way to copy the secret. While this may be the first public demonstration of this capability, I am confident that the vulnerability has existed for years.
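To see why copying the seed is equivalent to stealing the token, consider RFC 6238 TOTP as a stand-in (SecurID uses RSA's proprietary algorithm, not TOTP, so this is an analogy): the one-time password is a pure function of the shared secret and the clock, so any machine holding the seed produces exactly the codes the victim's token would.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    counter = int(at // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"copied-from-victim-pc"
now = time.time()
# A second machine holding the same seed computes the identical OTP:
print(totp(seed, now) == totp(seed, now))  # True
```

The seed is just bytes; once it leaves the protected container, no amount of software obfuscation can put the genie back.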

The software OTP device has always been a cost-effective alternative to the hardware OTP, which is a highly tamper-resistant form factor. But remember that software OTP devices function in an unsecure environment. Did anybody ever think that a software OTP device running on a PC is as tamper-resistant as a hardware OTP device? Did people really think that last month's attack was not possible? How can one expect high-grade, hardware device tamper resistance when the cryptographic secret is stored on Satan's Computer?

PKCS #11 Vulnerability

Last week’s attack leverages an older implementation of PKCS #11 middleware. PKCS #11—like Microsoft’s Cryptographic API—provides interoperability between PKI-consuming applications (for example, browsers) and smart cards. Kudos to the researchers, however, as they optimized the cryptographic attack and lowered the “work time” to make the decryption viable. While other vendors’ solutions were mentioned in the research, RSA received the most press as it is the market-leading enterprise strong authentication vendor.

But the PKCS #11 vulnerability has nothing to do with the RSA SecurID OTP system; it only impacts the PKI part of the smart card. More facts:

  • The attack does not yield the user’s private asymmetric key
  • The vulnerability is not present in the current middleware that RSA ships
  • Most Windows applications don't use PKCS #11. They use the Microsoft cryptography APIs (MS-CAPI or CNG), which provide better interoperability. In my 15+ years of experience with smart cards, PKCS #11 never provided true interoperability and frequently required tweaks to support new applications.
  • There are other simpler, quicker attacks that yield the same (or better) results. For example, workstation malware can capture the user’s smart card PIN and decrypt the data faster. This is the modus operandi of the Sykipot attack that I spoke about in January.

What does it mean for smart card customers, regardless of the vendor? These customers should continue to be diligent about malware protection and the deployment of the latest smart card middleware. They should consider using MS-CAPI or CNG instead of PKCS #11 on Windows workstations.

Bottom Line

No authentication mechanism is bulletproof. Even smart cards are vulnerable to attack. If you want OTP authentication that provides high identity assurance, buy a hardware OTP device. Software-based credentials that run on the user's computing device—be that a PC or a mobile phone—should be carefully considered. Software OTP devices can provide moderate identity assurance, but only after you invest a little time thinking about the device that they run on.

Additional Reading

Déjà Vu – The Sykipot Attack on Smart Cards

OTP Systems and Mobile Devices: Don’t Make The Biggest Implementation Mistake

Nothing is Bulletproof

The Evolving Intersection of Mobile Computing and Authentication (subscription required)

Road Map: Replacing Passwords with Smart Card Authentication (subscription required)

Authentication Decision Point (subscription required)

Road Map: Replacing Passwords with OTP Authentication (subscription required)


Category: Authentication, IAM, Mobility, Uncategorized

It’s … Minty

by Mark Diodati  |  May 7, 2012  |  2 Comments

Recently, I had the opportunity to talk with Sharon Epperson (CNBC/Today/NBC News). She was preparing for a Today show segment on the security of Mint.com. I address this topic in my 2011 FFIEC authentication guidance document. Mint.com is Quicken for the cloud era. Like Quicken, it enables the analysis of personal financial data, including banking, loan, investment, and credit card transactions. Users can evaluate the transactions against a budget and calculate their net financial worth. Unlike Quicken, Mint.com is currently “read-only”; it cannot execute transactions on behalf of the user.

Intuit—no strangers to securing personal financial data—has implemented reasonable security measures within the service. There aren't any known security issues with Mint.com, but two security considerations exist—one for the bank and one for the user.

First, banks lose some fraud detection capabilities because the traffic originates from Mint.com—not the user's device. Several of our banking clients have expressed their displeasure because they can't leverage tricks like geolocation or device identification to improve user authentication.

Second, the user's password for Mint.com enables access to many financial services accounts. Therefore, the user must take great care with the password and PC security. The password is easily captured via workstation malware, enabling the fraudster to access the user's financial services accounts.

The good news is that Mint.com (for now, anyway) is “read-only”. If the password is compromised, the risk is limited to disclosure of personal data—not fraudulent transactions. Once Mint.com becomes “read/write”, the risk changes dramatically. Intuit should augment Mint.com's internal fraud detection capabilities and enhance its ability to provide user session details to the banks.

Suggested Reading

The 2011 FFIEC Guidance on Authentication (subscription required)


Category: Authentication, IAM

The Next Revolution In Mobility: Near Field Communication

by Mark Diodati  |  April 20, 2012  |  Comments Off

I want to welcome you to a multi-post discussion about near-field communication (NFC). Over the next few blog posts, I will be talking about:

  • NFC’s moving parts
  • Impending demand from your users
  • NFC’s potential for access to buildings and applications
  • Missing ecosystem components

The next revolution in mobility is coming: near field communication (NFC). The industry focus today is tap-to-pay systems that leverage mobile devices, a la Google Wallet and ISIS. That's all well and good, because it will feed customer demand for NFC-enabled devices. Last year, Gartner estimated that 50% of smartphones will be NFC-capable by 2015[1]. In my opinion, the estimate is conservative. I am more interested in what happens after payments, when NFC-enabled devices reach a deployment tipping point and are used for enterprise access.

The enterprise use of NFC can be distilled into two challenges: the lack of ownership and increased complexity. For starters:

  • The user owns the smartphone.
  • The mobile network operator (MNO) owns the network pipe to the smartphone, which is required for over-the-air (OTA) provisioning of credentials to the secure element (the storage area inside the smartphone).
  • The MNO also owns the keys necessary for writing the credentials, so “sideloading” the credentials (via physical access to the smartphone) will not work.
  • An additional actor in the NFC ecosystem is the Trusted Service Manager (TSM). Its job is to enable OTA provisioning of credentials by acting as an intermediary between you (the enterprise) and the plethora of MNOs who gate access to your users’ smartphones.



My next posts will talk about the specifics of NFC and the missing puzzle piece required for NFC to work. To telegraph the punch line a little, a new service is required to provide secure credential and application distribution. We’re calling this service mobile credential management. It doesn’t exist yet. The service must be able to:

  • Distribute applications to mobile devices
  • Interface with credentialing services (like an on-premises certificate authority)
  • Interact with the MNOs to provision credentials to the secure element. There is a fair amount of technical interoperability work[2] required to make this happen.

I’ll also discuss mobile credential management, authentication, NFC, and authorization at Catalyst 2012—hope to see you there.

Additional Reading

The Evolving Intersection of Mobile Computing and Authentication (research document – subscription required)

How Soon is Now: NFC Smartphones and Physical Access Control Systems (blog)

[1] Subscription required.

[2] If you are a smart card enthusiast, think GlobalPlatform keys and security domains.


Category: Applications, Authentication, Cloud, IAM, Mobility, NFC

OTP Systems And Mobile Devices: Don’t Make The Biggest Implementation Mistake

by Mark Diodati  |  April 12, 2012  |  Comments Off

The topic of the secure distribution of one-time password (OTP) secrets recently surfaced again as part of our ongoing mobility research.

Many organizations make the classic distribution mistake; they couple a weak identity proofing mechanism with the deployment of stronger authentication systems[1]. In our research, I call this an "impedance mismatch". For example, if an organization distributes the OTP secret via email, in many cases a password is all that is needed to procure the OTP secret. Let's pause for a second to think about all of the places where the user typically embeds an Active Directory password: Outlook on the PC and the native smartphone email client. This distribution process significantly diminishes the value of the OTP system.

An automated out-of-band identity proofing mechanism via telephone can provide a user-friendly and cost-effective solution (other identity proofing mechanisms exist, too). I have seen many organizations try to fix the identity proofing problem after deployment because they have little confidence that the OTP secrets are in the hands of authorized users. It can cost as much as ten times the initial deployment.
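The out-of-band proofing flow can be sketched as follows. The names and the delivery callback are hypothetical; assume the organization already operates an automated voice or SMS gateway. The OTP seed is released only after the user echoes back a short-lived code delivered to a phone number already on record.

```python
import secrets
import time

class OutOfBandProofing:
    """Gate OTP-seed release behind a code sent over a second channel."""

    CODE_TTL = 300  # seconds the activation code stays valid

    def __init__(self, deliver_code):
        # deliver_code(phone, code): e.g. an automated voice call or SMS.
        self._deliver = deliver_code
        self._pending = {}  # user -> (code, expiry)

    def start(self, user, phone_on_record):
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[user] = (code, time.time() + self.CODE_TTL)
        self._deliver(phone_on_record, code)

    def confirm(self, user, code):
        """Single-use check; True means the OTP seed may be released."""
        entry = self._pending.pop(user, None)
        if entry is None:
            return False
        expected, expiry = entry
        return time.time() < expiry and secrets.compare_digest(code, expected)

sent = {}
proofing = OutOfBandProofing(lambda phone, code: sent.update({phone: code}))
proofing.start("alice", "+1-555-0100")
print(proofing.confirm("alice", sent["+1-555-0100"]))  # True
```

The value comes from the second channel: malware that captures the user's password cannot also answer the phone registered to that user.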

Another concern looms over the horizon. Now that tablets deliver a viable user computing platform, the value of on-device OTP generation should be re-evaluated. It blurs the multi-factor “what you have and what you know” concept. Tablet malware will be able to capture the user’s OTP PIN, run the OTP device API to generate the OTP, and then replay both bits without user knowledge. This vulnerability may already exist for tablets; it exists for PCs. Value remains for placing the OTP device sur la table(t), because it can help mitigate network attacks. I chose the word “mitigate” carefully; no authentication mechanism is bulletproof—not even hardware-based OTP devices and smart cards.


Additional Reading:

Roadmap: Deploying One-Time Password Devices (subscription required)

[1] Good question! The nuance of dynamic secret generation a la CT-KIP/DSKPP does not fix the problem.


Category: Authentication, IAM, Mobility

Dialoguing about SCIM

by Mark Diodati  |  February 23, 2012  |  1 Comment

Gartner's Identity and Privacy Service (IdPS) has closely tracked provisioning standards since 2003. I published our first research document on Service Provisioning Markup Language (SPML v2) in early 2006. Additionally, I published a realistic assessment of developing an SPML service in early 2010. A few months later, I worked with industry leaders to publish a statement on SPML's viability and the need for a realistic provisioning standard.

I closely track the Simple Cloud Identity Management (SCIM) standard. Gartner was the first research and advisory firm (as far as I know) to publicly support SCIM (May of 2011). At that time, some members of the analyst community were unenthusiastic and publicly recommended staying with SPML. Things have since moved in a more positive direction. In an effort to learn more about the standard, I developed a SCIM consumer prototype, using REST, JSON, and UnboundID's Directory Server running in an EC2 instance. I've developed heterogeneous "user account management" solutions since 1996. In short, I am passionate about the topic of provisioning.
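For readers who haven't seen SCIM on the wire: a consumer like the prototype described above creates a user by POSTing a small JSON document to the service's /Users endpoint. A minimal sketch follows; the endpoint URL and attribute values are illustrative, and the request is built but not actually sent.

```python
import json
import urllib.request

# SCIM 1.0 core user representation: just JSON over REST.
user = {
    "schemas": ["urn:scim:schemas:core:1.0"],
    "userName": "bjensen",
    "name": {"givenName": "Barbara", "familyName": "Jensen"},
    "emails": [{"value": "bjensen@example.com", "primary": True}],
}

def create_user(base_url: str, user: dict) -> urllib.request.Request:
    """Build the POST /Users request a SCIM consumer would send."""
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/Users",
        data=json.dumps(user).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_user("https://scim.example.com/v1", user)
print(req.get_method(), req.full_url)  # POST https://scim.example.com/v1/Users
```

That is the whole consumer-side story for the core CRUD operations, which is exactly why kitchen sinking would be such a shame.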

After seeing members of the SCIM committee propose features and capabilities in addition to the core CRUD operations, I became concerned about "kitchen sinking", something that exists in SPML v2 and something I warned about in my blog last year. Kitchen sinking will over-complicate the SCIM standard and delay its release. I contacted Trey Drake, Architect at UnboundID, with my concerns. Trey is one of the leaders working on SCIM, as well as the editor of the SCIM protocol document. He provided insightful responses; I hope you find them as helpful as I did. Based upon his statements, it appears that SCIM remains on track.

There's also good news today from the IETF: it approved the creation of the SCIM birds-of-a-feather (BoF) session. The BoF is a significant first step, as its acceptance acknowledges the basis for a SCIM standard within the IETF.


Mark: It seems that much thought has gone into the 1.0 specification. People are coding to it. I know a vendor that supports it (UnboundID). Why doesn't the committee explicitly state that any IETF work will be compatible with 1.0? Without that, people will wait, which will hold up adoption of something that is needed in the industry. As a background question, does IETF standardization really buy anything for the industry if people already like the 1.0 specification and the vendors are integrating it (or have roadmaps to do so)? Is the IETF process a distraction?


Trey: We stated the goal of backward compatibility in the strongest way we could. Things can obviously change, but we've built the very strong desire to remain backwards compatible into the proposed charter. In practice, I don't see additional risks of "spec drift" in the IETF as compared to keeping SCIM in a working group.

After all, the major SCIM players will participate in the IETF committee. It is clear that folks like SCIM—there are many implementations, webcasts, and blogs popping up all over the place. We're obviously addressing a pain point. The mailing list membership now exceeds 250 individuals; I'm still surprised at the number of people investing time and effort in the specification. The growing membership and rapid uptake mean that strong governance will be essential.

For the 1.0 specification, the core authors/contributors just did “the right thing” as we were all of similar minds. The specification was completed very quickly. With the expansion in the number of stakeholders, I’m expecting more contention, hence stronger governance will be required.

In short, the IETF will help with governance. History suggests it will take a long time to get the specification through the IETF; the recent experience with OAuth 2 comes to mind. But that didn't stop Salesforce, Google, Facebook, and others from touting and implementing OAuth 2. The goal of the IETF effort is to strengthen governance and broaden participation while staying the course with SCIM 1.0. It will take a long time to morph SCIM into an IETF standard, and in the meantime those implementing 1.0 will benefit.


Mark: Do you feel like the folks on the committee are pushing to explicitly include authorization frameworks in the standard? It feels like a distraction and an opportunity to delay adoption and make the standard more complicated.


Trey: There are some that want to, but most do not. I will continue to evangelize "opaqueness". That said, I'm in favor of vendors or other working groups extending the SCIM model to fit their needs.


Mark: Do people want to augment the core standard to better manage objects besides users and groups?


Trey: Yes. Note that any object schema can be represented in SCIM. That said, I'm not in favor of representing the kitchen sink—though I do believe there is utility in representing other common objects in the specification. Devices may be a candidate (think smartphones and tablets). For example, if I implement an IDP, I may want to provision devices and associate them with a user just as I do groups today.

As an aside, our implementation enables the representation of any LDAP object class as a SCIM object. In effect, it is a generic REST API for the UnboundID Directory and Synchronization Server. I wrote some stuff about the topic years ago (though I implemented it differently). Finally, my goal has been achieved!


Mark: I can see the utility of managing devices, but shouldn't this be tackled via custom schema later, letting folks move on to more pressing standards work?


Trey: Yes, I expect most of this stuff to be done via extension, a la LDAP.


Mark: Why is there a strong desire to bind the SCIM schema to the Lightweight Directory Access Protocol (LDAP) schema? SCIM starts to look like DSML and SPML (which is to say, old school). It seems that LDAP-fluent people can do this for themselves. The hierarchical object class syntax of LDAP doesn't belong in such a clean, simple standard.


Trey: But we won't change SCIM. The idea is to provide guidance to implementers on how to map SCIM to other representations. For example, along with other working group members, I am drafting a binding that describes how one may represent SCIM in LDAP. Others will join in, and that will become a best practice. That's the idea, anyway.
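A binding of the sort Trey describes can be illustrated with a small table-driven translation. The attribute names below follow common inetOrgPerson conventions and are my own guess at the shape of such a mapping, not the working group's actual draft.

```python
# Illustrative SCIM-core-to-LDAP attribute binding (inetOrgPerson-style).
SCIM_TO_LDAP = {
    "userName": "uid",
    "name.givenName": "givenName",
    "name.familyName": "sn",
    "displayName": "displayName",
}

def scim_user_to_ldap(scim: dict, base_dn: str):
    """Flatten a SCIM user into an LDAP entry (DN plus attributes)."""
    attrs = {"objectClass": ["inetOrgPerson"]}
    for scim_path, ldap_attr in SCIM_TO_LDAP.items():
        node = scim
        for part in scim_path.split("."):
            node = node.get(part, {}) if isinstance(node, dict) else {}
        if node and not isinstance(node, dict):
            attrs[ldap_attr] = [node]
    # Multi-valued SCIM emails map onto the LDAP 'mail' attribute.
    attrs["mail"] = [e["value"] for e in scim.get("emails", [])]
    dn = f"uid={scim['userName']},{base_dn}"
    return dn, attrs

dn, attrs = scim_user_to_ldap(
    {"userName": "bjensen",
     "name": {"givenName": "Barbara", "familyName": "Jensen"},
     "emails": [{"value": "bjensen@example.com"}]},
    "ou=people,dc=example,dc=com",
)
print(dn)  # uid=bjensen,ou=people,dc=example,dc=com
```

Note the direction: the table adapts SCIM to LDAP at the edge, so the SCIM schema itself stays flat and simple.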

Additional Reading:

OASIS or Mirage: Standards-Based Provisioning (subscription required)

SCIM and the Future of Standards-Based Provisioning (blog)

Consensus on the Future of Standards-Based Provisioning and SPML (blog)

Atom and LDAP sitting in a tree… (Trey Drake)


Category: Uncategorized

Commentary on Centrify’s new MDM product

by Mark Diodati  |  February 19, 2012  |  Comments Off

Industry analysts discuss emerging concepts and current events with journalists. We are misquoted more than you might think (or we would like). Sometimes the misquote is minor. On occasion, the statement attributed to us differs materially from our original statement; we are inclined to speak out and make a correction.

Misquotes can be the result of the interview process. Frequently, we speak to the journalist for less than five minutes, which may be insufficient for nuanced technical topics. Sometimes, the journalist will forward the article to us so that we can provide corrections before they publish it.

Occasionally, misquotes come from good journalists. I’ve known Rob Westervelt (TechTarget) for several years now and he does good work. He contacted me for comments about the new Centrify mobile device management (MDM) product—Centrify DirectControl for Mobile. You can find the link to his article here. I have some corrections.

I like the administrative model used by Centrify DirectControl for Mobile. It naturally extends Centrify's ability to manage heterogeneous devices via native Active Directory tools. Centrify provides an identity bridge that monitors Active Directory for changes, then feeds those changes to a SaaS-based service that abstracts the complexities of mobile device interaction. Many enterprises use Active Directory tools to manage users and devices (including UNIX, Linux, and Mac OS), and this model enables them to manage mobile devices in the same fashion. This approach may not work for all organizations, but it is a valid one.

I see no significant issues with managing mobile device policies via Windows Group Policy. It’s a BYOD world and most organizations will leverage an MDM product to manage device policies. Centrify happens to use an existing policy framework.

Centrify DirectControl for Mobile lacks some of the features of the established MDM products in the market. It is a first generation product and everyone needs to be realistic about its capabilities. But I like its administration model and its hybrid architecture. I continually research the identity management capabilities of MDM products, so expect a research document from Gartner soon.

Additional Reading:

Of Identities, Clouds, and Bridges

How Soon is Now: NFC Smartphones and Physical Access Control Systems

Physical Identity and Access Management (subscription required)

The Evolving Intersection of Mobile Computing and Authentication (subscription required)

Market Profile: Identity Management as a Service (IDaaS) (subscription required)


Category: Authentication, IAM, Mobility

Déjà Vu – The Sykipot Attack on Smart Cards

by Mark Diodati  |  January 15, 2012  |  2 Comments

Kelly Jackson Higgins at Dark Reading provides an excellent summary of the Sykipot malware variant attack on smart cards. The malware opens the smart card and uses it for private key signing functions. Signing functions are the backbone of public key technology—they enable users to authenticate to mutually authenticated SSL and Microsoft Windows sessions, for example. The initial target—quelle surprise—appears to be the Department of Defense and its vendor community.

The malware leverages a phishing attack and an Adobe Reader vulnerability for installation on the user's workstation. If this sounds familiar, it is because this technique was used as part of the attack on the RSA SecurID system in 2011. The malware includes a keylogger to capture the smart card PIN, which enables it to open the card. The Sykipot attack does not compromise the user's smart card, and it does not steal the credentials stored on the smart card. Rather, it sends data down to the card for cryptographic processing for as long as the smart card is in the reader.

The Sykipot attack should not be surprising. I discussed this attack vector as far back as 2006 with my research document Consumer Authentication and the FFIEC Guidance and my technical position on User Authentication (subscription required). I also discuss the attack vector in my 2007 blog Nothing is Bulletproof. Our clients have seen similar attacks in the wild for at least three years.

There are several important lessons that we can derive from the Sykipot attack. First, no authentication method is bulletproof. Smart card authentication is widely held as the gold standard for commercial user authentication. That’s a perspective I share, by the way. But even smart cards can be compromised, regardless of their resistance to hardware tampering. The layering of additional techniques—including anti-malware software, user activity analysis, and network forensics—is required. Second, like the RSA SecurID attack, the Sykipot attack is another example of an advanced persistent threat. U.S. military secrets are under attack like no other time in history; willful and proactive actions are required to protect them.


Category: Uncategorized

How Soon is Now: NFC Smartphones and Physical Access Control Systems

by Mark Diodati  |  October 31, 2011  |  1 Comment

You may have read about a recent pilot at Arizona State University, where 30+ students used their smartphones augmented with NFC (near field communication) to access facilities at the college. Instead of building access cards, the students used their smartphones.

The pilot has fueled already-intense industry interest in the use of NFC and smartphones. NFC-enabled smartphones are entering the market, with Samsung, BlackBerry, HTC (and others) introducing new models this year. The industry interest in NFC today is primarily focused on "tap to pay" at retail point-of-sale environments. I'll be speaking more about Google Wallet and ISIS (competing payment systems) in a future blog post. The next logical application of NFC after payments will be authentication, including to physical access control systems (PACS).

Based on a review of the available information, the pilot has some interesting technical details.

  • First, the students were issued fully-personalized smartphones for the pilot. The phones already possessed the HID iClass credential stored in the NFC secure element (the smart card embedded in the NFC chipset).
  • The smartphones (primarily BlackBerry, but also iPhone) did not have native NFC capabilities. Instead, ASU leveraged NFC chipsets from DeviceFidelity. The BlackBerry phones used a microSD card with an embedded NFC chipset (including the antenna). The antenna was extended to the exterior of the phone to better facilitate access to the PACS door reader. In the case of the iPhone—which does not have a microSD port—the pilot used an innovative phone case with embedded NFC capability.
  • The users activated an app on the phone prior to approaching the door. The app enabled the use of the iClass credential for 30 seconds. The students initially ran into some usability problems because of the timeout period.
  • The pilot did not require any modification to the Lenel PACS because the smartphones emulated HID iClass cards.

The pilot is an important first step toward using NFC-enabled mobile devices for PACS access, but don't expect this capability at work anytime soon. First, NFC smartphones are a rare breed; Gartner estimates that 50% of smartphones will have NFC capability by 2015, which improves the viability of opening doors with phones. Second, management needs to catch up with the raw technology: consider the integration work, the scalable provisioning of credentials to employee-owned phones, and the binding of those credentials to identity repositories in the enterprise.

I expect the percentage of NFC-enabled phones to increase to a tipping point where their use for broad-scale authentication becomes viable. The mobile device management vendors will pick up the ability to manage and provision enterprise credentials to the secure element in a way that does not compromise the user's payment credentials. Finally, those enterprise credentials will be both proprietary and standards-based, including HID iClass, X.509 certificates, OAuth, SecurID OTP, and OATH-based OTP. The credentials will enable access to both physical and logical systems.

I discuss NFC smartphones in my "Mobility and Authentication" document, which will be published in the coming weeks. Two other documents—Let's Get Logical: The Convergence of Physical Access Control and Identity Systems and Road Map: Replacing Passwords with Smart Card Authentication (subscription required)—discuss PACS systems and contactless authentication.


Category: Uncategorized

Of Identities, Clouds, and Bridges

by Mark Diodati  |  October 20, 2011  |  Comments Off

In response to the large number of client inquiries about identity management and the cloud, Gartner has recently published a research document that discusses identity management as a service (IDaaS)—turnkey identity management services that exist in the cloud.

In the document (Market Profile: Identity Management as a Service (IDaaS) [subscription required]), I discuss over 20 vendors and classify their product capabilities (that is, federation IDP, directory sync, provisioning, strong authentication as a service, federation SP, web access management, identity and access governance, XACML authorization, consumer authentication, and password vault). I also discuss recent IDaaS acquisitions, including Arcot, idOnDemand, Nordic Edge, and TriCipher.

In addition to discussing the market, the document examines three use cases that intersect identity management and cloud computing:

  • To the Cloud. Organizations that want to extend their existing identity management processes to manage users in SaaS or partner applications. This use case is the most prevalent and aligns with larger established companies that have significant on-premises IT infrastructure.
  • In the Cloud. Smaller organizations whose core IT functions are delivered via SaaS applications. These organizations are searching for off-premises, turnkey identity management solutions for users and applications in the cloud. Alternatively, larger organizations with distinct user constituencies might leverage an "in the cloud" solution for a specific user population.
  • From the Cloud. This is perhaps the most forward-looking use case. Some organizations want to leverage off-premises IDaaS for on-premises identities and applications. Many organizations aren't comfortable yet with storing user information in an IDaaS application. Therefore, many of the "from the cloud" vendors offer a hybrid solution that stores user information on-premises.

Speaking of “hybrid”, the document discusses an important emerging IDaaS concept: the identity bridge. As organizations straddle on-premises and off-premises identity management, a single, bi-directional, on-premises component becomes essential. Preferably, this component should be delivered as a virtual appliance. Today, most on-premises IDaaS helper gateways are single-function and unidirectional; they work well for simpler use cases. They won’t be up for the task as the organizations add more identity management functions and distribute those functions more evenly between the on-premises environment and the cloud.
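Today's unidirectional gateways essentially perform snapshot-and-diff synchronization in one direction; a bridge generalizes this to bi-directional flows. A toy one-way sketch (the snapshots stand in for reads from, say, Active Directory and the state already pushed to a SaaS application):

```python
def diff_directory(previous: dict, current: dict):
    """Compute adds/updates/deletes between two directory snapshots."""
    adds = {u: a for u, a in current.items() if u not in previous}
    updates = {u: a for u, a in current.items()
               if u in previous and previous[u] != a}
    deletes = [u for u in previous if u not in current]
    return adds, updates, deletes

# Snapshot of on-premises users (e.g. read from Active Directory)...
before = {"alice": {"dept": "eng"}, "bob": {"dept": "sales"}}
# ...and the next polling cycle's snapshot.
after = {"alice": {"dept": "ops"}, "carol": {"dept": "eng"}}

adds, updates, deletes = diff_directory(before, after)
print(adds, updates, deletes)
# {'carol': {'dept': 'eng'}} {'alice': {'dept': 'ops'}} ['bob']
```

A real bridge would run this in both directions, reconcile conflicts, and translate between directory and SaaS representations, which is exactly why a single multi-function component beats a pile of one-way gateways.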

[Figure: Identity Bridge]


Category: Uncategorized