by Earl Perkins | December 19, 2012 | Comments Off
2012 has been quite a year for identity and access management for our clients and for the IT and business world in general. Interest and inquiry have grown at an unbelievable rate. Our research has been read, discussed, questioned and challenged. Our IAM Summit in Las Vegas had its strongest attendance since the summit’s inception. IAM vendors and service providers have been working with Gartner in record numbers to discuss their product and service roadmaps and futures. New ones have appeared almost monthly. While not necessarily a record year in mergers and acquisitions by IAM solution providers, it was robust. Actions by clients and providers alike point to an inflection point in IAM for 2013– in the way it is planned, produced, purchased and put into production.
Clients using IAM are growing more mature in IAM usage, demanding more of solution providers, and innovating as a result of the changing dynamics in business. Clients selecting IAM tools for the first time are asking harder and more penetrating questions regarding capabilities, pricing, and the nature of relationships with providers. The broader impacts of IT changes in mobile, cloud, social media and information (i.e. the Gartner Nexus of Forces) are being felt as IAM customers struggle to keep up with challenges and choices.
All of that sounds impressive, but what does it really mean for clients in 2013? What does it say about the future of IAM as a practice, a process, or a market?
IAM as a practice has finally gained a degree of credibility within maturing enterprises. Clients recognize the value of knowing who has access to what, who gave it to them, and what they’ve done with it. They leverage such knowledge not only for regulatory compliance purposes, but to enable business decision-makers to “index” decisions with a “who view”– to provide an identity context to decisions involving enterprise resources, supply chains, customer relationships and human resources. IAM as a process is now defined– there is more formalism and structure around employee, customer and partner onboarding, change management and offboarding of identities. There is better sharing of information between IAM systems and security systems that can also use that identity context in delivering their own answers to IT and the business alike, from data loss prevention to security information and event management, from network access control to governance, risk and compliance management. IAM as a market continues to grow at a formidable pace, addressing the increase in the means of delivery (via cloud and social media) as well as in access points (via mobile). Information is the delivery mechanism for identity context, but is also useful in providing a degree of granularity to the IAM experience, whether in authentication, authorization, provisioning or other capabilities.
2013 is going to be an exciting year for IAM and for clients that use it. Validation of all of those painful, pricey efforts to implement a robust identity data and log model will begin to bear fruit. IAM as a service (IDaaS) in the market will continue to grow in market presence, finding its place in realistic implementations that leverage the uniqueness of that delivery and challenge the status quo of enterprise solutions. The rise in mobile needs for IAM as well as the enabling of IAM options via mobile ensures a rich growth opportunity for innovation. Social media requirements as well as its contributions to IAM ensure a unique opportunity to redefine identity itself to be more encompassing than just for the enterprise. The quantity, quality and velocity of information from 2013 IAM systems will be dramatic, and clients will need to leverage new skill sets and new analytics tools to ensure they don’t drown in a sea of IAM information and that information becomes knowledge.
Happy New Year! And buckle your seatbelts. It’s going to be quite a ride.
Category: IAM
by Earl Perkins | August 31, 2012 | 1 Comment
In a previous blog, I touched upon the concerns I had regarding the U.S. efforts at moving toward a consensus on how to secure North America’s critical infrastructure, particularly in the energy and utilities markets. I believe the point of that blog was that many people were beating the warning drums, but fewer were offering up practical advice about how to counter the threats.
I recently read yet another article regarding the U.S. government’s “interference” in answering operational technology (OT) security concerns. The general thrust of the article was that the government was once again going to bumble its way into industries that it did not understand well and create more problems than it would solve by applying regulation in some form. The latest attempts in this arena involved the U.S. Cybersecurity Act of 2012, which did not pass Congress prior to their latest recess. The article went on to underscore the belief that if the government would just ‘stay out of the way’, the private sector would self-regulate in the necessary fashion to ensure a secure critical infrastructure.
I am not here to debate whether that is true or not, though watching events over the last 4 years in the financial services sector leaves me a bit cynical about the ability of individual industries to look out for the welfare of the average citizen. What I DID want to say is enough already with the whining about critical infrastructure– how scary it is, how no one understands it, how government or industry is going to create an apocalyptic scenario if they continue on the current path. Here are some suggestions instead:
1- For the private industries, quit whining and complaining about how no one understands the trouble you’ve seen in security, and start cooperating to reduce the number of different forums giving advice (some of it conflicting). I’m dizzy trying to track the number of studies being released by government and private sector groups alike, some with different terminology for the same things, others with conflicting information (e.g. “The sky is falling!! No it’s not!! Yes it is! No it’s not!!”). Try prioritizing your venues for communication and information dissemination and collectively establish authoritative voices about the nature of the problem, the current state, and what can be done to address the problems. If you want to avoid regulation, be consistent with how you describe the problem to Congress by agreeing upon credible, factual sources rather than fighting it out in the media. You may not like the idea of government regulation, but at least they appear to be TRYING to do something, however misdirected you may feel it is;
2- For the government, quit your bickering over who’s in charge and sort out a strategic hierarchy. Bring some consistency to YOUR studies and reports as well, and come up with a taxonomy of which study is for which purpose and which group or infrastructure. In the case of energy and utilities, decide what the roles of DHS, ODNI, DOE, NRC, FERC, NIST (to name just a few), the White House, and Congress are and be clear about it. I know this isn’t likely to happen until after the election, but perhaps we can set this as an early goal for the next administration. In addition, quit changing the NERC CIP regulations long enough for consultants, integrators, and the companies affected by those regulations to have a stationary target. Most important of all, work with the private sector to ensure that you’re ALL drawing upon valid, credible, scientific sources of information from which to make decisions. Relegate questionable media reports by agencies that don’t have knowledge or awareness of the specific industries affected to their proper place in the decision process;
3- For all involved: we need to continue refining the common language we use about operational technology security and to agree upon the major issues we must address. We need to agree upon the obvious priorities, i.e. the basics that can be done TODAY to take incremental steps to improve security for our critical infrastructure (such as ensuring that basic security policy is in place and APPLIED, and that organizational requirements are identified and established early so training can commence, for example). Most importantly, we need to understand WHO IS IN CHARGE of the particular priorities identified, and what being in charge means from a governance and program perspective.
As my wife often says, it’s time to put your big boy pants on and act your age. It’s possible to sort out major issues related to critical infrastructure protection if there is the grown-up willingness to admit that something must be done and that someone must be able to lead and coordinate the effort. The rest should follow. I know it sounds easier than it really is, but it isn’t going to solve itself by wringing our hands or whining about who’s in charge.
Category: Uncategorized
by Earl Perkins | June 27, 2012 | 1 Comment
I REALLY shouldn’t have to write this piece. There are some things in life that you just learn to do, built upon the ruins of those who came before you. George Santayana once said “those who do not learn from history are condemned to repeat it”. Out of all of the wisdom passed down to us– from history– you would think this would resonate in 2012, particularly in enterprises where information technology plays such a vital role in success.
And yet we continue to read about major companies– even IT companies for heaven’s sake– that make fundamental freshman security mistakes considered standard practice 20 years ago. Is it because these standard, common-sense security steps just aren’t sexy, and therefore aren’t pursued with the same vigor as an exciting CSI-like forensics investigation? Is it because you really don’t have to BUY technology to perform many of the standard practices that have been patiently codified, process by process, industry by industry? Is it because you lack the drive to deliver security awareness, training, and education into the culture of your organization? Or is it because you’ve grown complacent and lack the energy– in other words, have you grown lazy?
IT security as a priority for executives seems to have slipped in surveys taken in enterprises over the last two years, supplanted by issues that focus on data or applications, sometimes infrastructure. It is hard to know how to interpret that slippage, but one would hope it isn’t because of a perception that the problem has been ‘solved’ or that ‘adequate’ measures have been taken to address most risks. I’m sure that many enterprises have made enough progress to feel that way, and remain vigilant without necessarily consuming a major part of the IT budget to do so. But the news from the industry keeps coming, time and time again, of enterprises that have suffered major breaches or system failures due to simple, preventable occurrences. If we combine these simple issues with (a) a growing level of sophistication and persistence of threats; (b) the growing dimensions of security planning and management that are converging with our current IT security (e.g. physical security, industrial control security); (c) the complexity of ensuring privacy in an increasingly consumerized infrastructure; (d) the growth in the number and types of IT service delivery; and (e) the expanding set of regulations that enterprises must comply with in their respective industries– you can see that IT security remains a non-trivial concern.
So what is the lesson here? Let’s apply a radical concept known as common sense to ensure that EVERYTHING that can be done from a process and organizational perspective is done to ensure an effective IT security program is in place and operating at peak efficiency. Do not skimp on security awareness and education– not training, but REAL education that draws upon the lessons we appear to keep relearning as we keep making the same simple errors of procedure and process. Optimizing the environment before you spend anything on technology is a priceless investment, and can show that you can indeed learn from an excess of teachable moments still occurring daily.
Category: Uncategorized
by Earl Perkins | May 7, 2012 | 3 Comments
Do you find yourself sometimes looking at a problem in hindsight and saying to yourself “well, the answer to THAT was obvious”? When you are able to examine trends or history looking back, you can spot patterns where they may not have been obvious previously. I find myself doing that in identity and access governance (IAG) when it comes to the problem of governing access to data, whether unstructured, semi-structured, or structured.
If you look at IAG products today, a rather clear characteristic emerges about them– they are application-centric. The features that address access request administration assume that the requests for access are primarily for applications. The discovery and mining tools are predominantly focused on repositories that serve applications and applications themselves. The analytics tools often deliver reports in terms of applications. This is a good thing, not a bad thing. But it isn’t a complete thing.
Clients also have similar requests for access to data, whether it’s data in Windows file systems, data stored as email or documents, data with well-known formats– but data nevertheless. Sure, there may be an application between the requestor and the data, but it is primarily the data that is the target. The application doesn’t dictate the rules of engagement; the data does. Many of the products today that can or do handle access to data are rarely covered or spoken of in the context of IAM in general and IAG in particular.
Fortunately, that is starting to change.
A number of the IAG vendors are beginning to aggressively partner with data loss prevention (DLP) and security information and event management (SIEM) vendors in pursuit of extending their functionality into the data realm. Some are developing such capabilities organically rather than via partnership. Most are leveraging their identity and access intelligence functionality to collect, correlate, and analyze data to produce the intelligence required to broaden the scope of IAG from just applications to applications and data.
It isn’t a moment too soon. Stand-alone IAG vendors are under ‘attack’ by the IAM portfolio or suite vendors. The suite vendors believe that IAG administration and management features should be absorbed into the traditional user provisioning/de-provisioning products they have been selling for years. Or to view it another way, suite vendors believe that they should absorb the user provisioning features of their established products into the versions of IAG products they have acquired or developed. Whatever the direction, they are seeking to marginalize the smaller, more nimble players by showing that IAG features should join the mainstream side of user administration. This means these standalone players must seek new ways to innovate and expand their feature set– preferably in a logical and customer-driven way. In the case of the marriage of data and application access governance, it is a logical union. The question will be whether they can pull it off at a pace that addresses customer demand with competitive differentiation.
The next time you talk with IAM vendors about identity and access governance, ask them about their plans for data access governance. Make sure their story and what they can deliver matches your expectations for complete IAG.
Category: IAM IT Governance
by Earl Perkins | May 3, 2012 | 4 Comments
I had a recent conversation with a client regarding concerns on the impact of supporting an increasingly mobile worker for security and access to enterprise applications. This isn’t a new concern, but trends and events unfolding at an ever-increasing pace have highlighted the problem and potential complexity of solutions for it. Let’s take a look at a few of them.
1- Improving capabilities of different mobile client devices (e.g. smartphones, tablet PCs) are drawing them inevitably into use as entry points to enterprise applications and data. I remember riding on a train in England going 80 miles an hour responding to email on an HP95LX “palm” device in 1998, so as I said, this isn’t a new problem. But the sophistication of the devices, their flexibility, and their ease of use are pressuring IT shops to provide some form of IAM support for these devices, particularly for certain important customers (read executives). The ‘bring your own device’ (BYOD) phenomenon is also part of this, where more employees and contractors use their own purchased smart client devices (including PCs) to access enterprise applications. All of this just adds more pressure on IAM solutions to broaden their functionality to support such environments;
2- The evolution of applications and services in terms of how they are delivered is also demanding more of IAM in a mobile world. Where the ‘components’ of the application are executed, how they are protected and accessed, and how identity administration changes in such a world as a result are key concerns. A hybrid world of cloud computing applications, enterprise applications, hosted applications with outsourced services– all must be supported with a common look and feel to access, a common system for reporting for compliance, for applying a graduated scale of access based on risk and sensitivity– the list goes on. Classical IAM products are attempting to extend their functionality to include these different client types and scenarios, but it remains a major concern for enterprises with a heavy reliance on mobility;
3- Integration of IAM systems with areas such as mobile device management and mobile application development is in its early stages and represents a positive (and needed) trend. Within enterprises, the asset management team that ensures the issuance of mobile phones, tablet PCs, and the like must talk to the IAM team that does provisioning and deprovisioning of access to make sure there is a convergence of process for these activities– and vice versa. Mobile application developers that seek to incorporate mobile client services into enterprise application environments must understand that requirements for authentication and authorization may be different from those to which they are accustomed, resulting in changes to their methodology and approach to programming for security and access.
I really don’t like to use the phrase “this is in an early stage of evolution” for trends this volatile and dynamic, but it is what it is. This wave will roll over traditional environments like IAM, applications, and infrastructure and leave its mark– hopefully not the way a tsunami leaves its mark. Ignoring mobility in IAM, like ignoring tsunamis, is not an option.
Category: IAM
by Earl Perkins | April 25, 2012 | Comments Off
My colleague Gregg Kreizman and I just completed a market analysis on different facets of the IAM services market. I focused on the IAM consulting and system integration (C&SI) market, Gregg focused on the IAM as a service (or IDaaS) market. During and after our research, we were discussing the next task– a look at the IAM managed and hosted services market, research I intend to deliver in the summer. It was during that discussion that the subject came up: what is the difference between IDaaS and IAM managed/hosted services?
When I first started looking at all of these service types in 2009, I created a simple taxonomy that divided the IAM services market into (1) C&SI; (2) managed/hosted; (3) IDaaS, and (4) another category that was focused on how IAM services were architected. Gregg and I both now see a blurring of the definitions between managed/hosted and IDaaS. Many of the providers that claim IDaaS are actually more like managed/hosted providers in the way the services are delivered, contracted, and maintained. So is there really a difference?
If I look at the Gartner taxonomy for cloud computing applications, it informs me about how I should define IDaaS in contrast to managed/hosted services. There are primarily 3 differences:
(1) There is a high degree of standardization in IDaaS that allows an offering to clearly delineate feature sets, standard practice for implementation and use, and organizational support requirements. An IDaaS will depend upon standard design principles related to multi-tenancy (likely delivered via virtualization architecture and product), scale, and systems support using well-defined metrics and simple SLAs. It will also be targeted at a specific service such as access management or single sign-on– more sophisticated activities around areas such as identity and access governance are not yet mature enough to have established standards and practices applied to achieve the degree of standardization needed;
(2) Speaking of simple, the second key characteristic of IDaaS is in the service ability to deliver very simple contracts, without detail around MIPs or storage or processing, but instead around simple concepts related to metered usage and/or user counts. An IDaaS contract will be much smaller and simpler to understand than a managed/hosted contract;
(3) An IDaaS requires no dedicated network links or sophisticated engineering at the DMZ-level to consume the service, and is based on Internet formats, interfaces, and protocols. There may be an appliance (hardware or software) that links the service via a VPN, but it uses the Internet, not a dedicated network link;
So the key words for IDaaS are simplicity of design and delivery, scalable and metered, with a heavy emphasis on standardization across technology, process, and organization. As I have said, you’re likely to find a blurring between IDaaS and managed/hosted services, and I’m not sure it matters much as long as the client is happy and satisfied with the result. This does indicate a trend to me, that there will be more players in the market, those players will borrow generously from ALL service types (including consulting and system integration), and that alliances will form around all three in different combinations to deliver IDaaS. Even the traditional IAM product vendors themselves will be active in the mix– they can provide the foundational raw material for some services, and facilitate the inevitable creation of the ‘hybrid’ world of the management of mixed cloud and enterprise applications.
So is there REALLY a difference between managed/hosted services and IDaaS? Whether it is evolution or revolution, there probably isn’t as much difference as you think.
Category: Cloud IAM
by Earl Perkins | April 20, 2012 | 4 Comments
Alright, that’s enough!
I cannot pick up a news feed or peruse a blog about operational technology (OT) or industrial control security (e.g. securing the electric power grid, water, transportation, intelligent health care systems, etc.) without reading yet another story about how life as we know it will end any day now once mysterious governments and other dark elements of the Underworld wreak havoc on our comfortable lives. They will hack into nuclear power plants and cause meltdowns, they will control transportation systems and airport control towers and cause wrecks to occur and planes to crash, they will pollute the rivers and shut off the power, they will etc. etc. etc.
As an analyst covering OT security at Gartner and in previous lives as a worker in the electric utility industry, I recognized long ago that (a) there IS a threat that these things can happen; (b) many OT systems (and what my colleague Hung LeHong calls the “Internet of Everything” to denote Internet connected intelligent devices) are vulnerable to these threats; (c) steps must be taken to minimize the risk that these threats will be successful. I’m not trying to minimize the seriousness of this issue or to challenge the level of threat.
What I AM doing is making a plea for the media and my industry colleagues to bring more of a balance in writing between (a) what the nature of the problem IS with (b) what IS being done today to mitigate the risk and what should be done. I know it is more sexy and exciting to talk about doomsday and the destruction of civilization. I’ve read my share of post-apocalypse books and seen the movies. We get the picture. However, it is the less sexy act of PREVENTING apocalypse and how it is being done step by step, inch by inch, that also deserves air time.
I had a manager once when I was young that gave me some valuable advice. One day, as a newly appointed supervisor, I was in his office complaining about something. He held up his hand and said something that I remember to this day: “no more b-m-w! Enough already with the b-m-w! I want the SOLUTIONS. When you have a solution, THEN you can come back in here and b-m-w all you want, just end with the solution.” For those who are scratching their heads, b-m-w in this case meant b******g, moaning, and whining. I never forgot that advice.
So I offer a challenge to the reporting community at large– For every scary story you feel compelled to publish about the end of life through scary OT security stories, have a balanced part of the same story put aside to describe what is being done TODAY to mitigate the risk of threats. I will help you with those use cases, as I’m sure most of the professionals in the OT-centric industries will– if you just ask. Try some solution writing along with the b-m-w.
Category: Uncategorized
by Earl Perkins | April 17, 2012 | 7 Comments
A curious thing has occurred as identity and access management systems are deployed in enterprises world-wide. The nature of the relationship between IT and HR, or human capital management (HCM), has evolved. For many IAM implementations, the relationship between the IAM team and HR hasn’t been a particularly good one. While the HR database can and has served as the authoritative source for IAM systems, negotiating with some HR departments to make that connection and share the necessary data to build the IAM data repository has been, shall we say, difficult. The need to control and maintain the privacy of employee information is viewed as a sacred trust by many in HR, and they don’t like the idea of even an extract of their data used outside of their purview, even if it is for purposes other than HR.
IAM projects need a starting point, and HR data is a logical one. Synchronizing the data in HR that exists about a person and their job activities with the directory for authentication or the entitlement catalog for authorization is a natural design step. However, in making this connection, IAM data stores also become an extension of HR, because they also obtain and store data NOT found in HR systems but data about identities nonetheless. Not only is data for access found in the identity store, but even data about other people not found in many HR systems, such as contractors or partners, for example.
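As an illustrative sketch of that design step (the record fields, merge rules, and store layout here are hypothetical, not any specific product’s schema), seeding an identity store from an authoritative HR feed while keeping access data outside HR’s purview might look like:

```python
# Hypothetical sketch: synchronizing HR person records into an IAM identity store.
# HR remains authoritative for core attributes; the identity store additionally
# holds access-related data (entitlements) that HR systems do not track.

def sync_identities(hr_records, identity_store):
    """Merge HR records into the identity store, preserving IAM-only data."""
    for person in hr_records:
        entry = identity_store.setdefault(person["employee_id"], {})
        # HR-authoritative attributes overwrite whatever IAM currently holds
        entry.update({
            "name": person["name"],
            "department": person["department"],
            "job_title": person["job_title"],
            "status": person["status"],  # e.g. drives deprovisioning on termination
        })
        # Access data lives only in the IAM store; never overwritten by HR
        entry.setdefault("entitlements", [])
    return identity_store

hr_feed = [{"employee_id": "E1001", "name": "A. Smith",
            "department": "Finance", "job_title": "Analyst", "status": "active"}]
store = sync_identities(hr_feed, {})
```

Note that contractors or partners, who are absent from the HR feed, would enter the same store through other connectors– which is exactly how the identity store grows beyond HR’s reach.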
It is at this point that HR and IT find themselves in a bit of a dilemma. HR does not have a mandate to track contractor identities in many industries, but they ARE human resources for the business. Therefore some of the data collected by IAM systems can be and often is of interest to HR. If both of them can overcome the friction of shared data quality and access responsibilities, it can actually be a productive partnership. IAM data repositories can become a miniature ‘surrogate’ to larger HR systems, with data from these repositories used for purposes other than access. My colleague Lori Rowland coined an interesting term for this and related phenomena– “accidental identity management”. An IAM program captures valuable data about identities that may be used by other parties within the business for something completely outside of access.
Even though consumers and/or citizens are also not the purview of HR systems, IAM also provides valuable identity and access information (dare I say ‘intelligence’?) for a broader view of people and their interaction with IT resources. We see today how valuable such information is to consumer technology providers such as Google and Facebook. Imagine an equivalent in use cases for enterprise IAM activity and event information leveraged in business decision-making. The first and most obvious ‘customer’ of such intelligence will be HR, providing a more accurate record of what an employee actually DOES, rather than what we say they do with job titles and ‘roles’. There are of course some significant privacy implications to ‘role activity monitoring’, but I think you get the picture.
In addition to everything else you know and understand about IAM, add to that the potential to be a valuable asset in the pursuit of best-in-class human capital management.
Category: IAM IT Governance
by Earl Perkins | April 16, 2012 | 3 Comments
There are so many ways of looking at IAM, but is looking at IAM differently helping you to DO anything differently? Let me explain.
The market for IAM tools and services coalesces around 3 primary ‘targets’, or environments where managing IAM is a good idea. Those areas are (1) data, (2) systems, and (3) applications.
In the vast majority of cases, data is the ultimate target for users of IT systems. That data may be structured, semi-structured, or unstructured, but at any one moment in time we’re after some data, information, or knowledge to make a decision and/or initiate an action. The IAM and related security industries have products that tend to focus more on data than other dimensions, products such as DLP for example.
When I use the word ‘systems’, I am encompassing IT platforms, operating systems, and networks together. Again, you’ll find a number of products that focus on these systems as the primary target for IAM. Privileged Account Activity Management (PAAM) is a prime example here, as well as some access management solutions.
The applications dimension of IAM is where a lot of action is taking place in the industry today. Mainstream access governance and provisioning solutions have applications as their target focus, and the solutions are architected around how to manage identities for application access.
When an enterprise knows how the IAM products and services markets target these dimensions, it makes planning easier. It reminds me of the old saying “when you have a hammer, everything looks like a nail.” Remember that IAM products first start as an idea in someone’s head, and if that person has a data, system, or application ‘hammer’ dimension, it’s likely the product will reflect that.
It’s important that the planning and design of an effective IAM system NOT be driven by the way the market defines targets or dimensions, but instead by what the enterprise’s true requirements are and what is currently available in the enterprise to sustain an IAM program. It is one reason why Gartner is emphasizing the concept of an “identity data model” to more formally address the first dimension described here, the data dimension of IAM.
The data dimension itself has three targets: (1) the IAM data itself (identifiers, credentials, attributes, entitlements, etc.); (2) the log information of IAM activities and events, to be collected, correlated, analyzed, and used; (3) the target data to be accessed, structured, semi-structured, and unstructured (e.g. database contents, documents, messages, etc.). An understanding of how all of these different kinds of data interact is the first step towards addressing the data dimension of IAM.
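A minimal sketch of those three data targets– with hypothetical class and field names chosen only to mirror the taxonomy above, not any vendor’s schema– might look like:

```python
# Illustrative model of the three data targets in the data dimension of IAM.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IdentityRecord:              # (1) the IAM data itself
    identifier: str
    credentials: List[str] = field(default_factory=list)
    attributes: dict = field(default_factory=dict)
    entitlements: List[str] = field(default_factory=list)

@dataclass
class IAMEvent:                    # (2) log data on IAM activities and events
    identity: str
    action: str                    # e.g. "login", "grant", "revoke"
    target: str
    timestamp: str

@dataclass
class TargetData:                  # (3) the data being accessed
    name: str
    kind: str                      # "structured", "semi-structured", "unstructured"

# How the kinds interact: correlating (1) and (2) to see which targets an
# identity actually touched -- the raw material of identity intelligence.
events = [IAMEvent("E1001", "login", "finance-db", "2012-04-16T09:00:00Z")]
touched = {e.target for e in events if e.identity == "E1001"}
```

The point of the model is the correlation step at the end: each kind of data is of limited use alone, but joined together they tell you who accessed what, with which entitlement, and when.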
Now you have another way of looking at IAM. Hopefully it is one that can help you plan, build, and operate such systems more effectively.
Category: IAM
by Earl Perkins | February 22, 2012 | Comments Off
And now for something really special. My colleague Ant Allan has written a blog on the recent NIST moves to fund alternatives to passwords. Enjoy!
So, NIST intends to provide tens of millions of dollars in funding for people to develop and commercialize something better than legacy passwords, as part of the National Strategy for Trusted Identities in Cyberspace. [http://www.washingtonpost.com/business/capitalbusiness/nist-seeking-to-move-beyond-passwords/2012/02/06/gIQAtjU1NR_story.html]
But surely we already have enough alternatives to passwords? NIST’s spokesman, Jeremy Grant, notes that other technologies, “such as smartcards and tokens that generate [one-time] passwords [OTPs],” are used but haven’t caught on. In the 12 years I’ve been with Gartner – and mostly in the past five or six years – we’ve seen huge growth in the availability and uptake of alternatives that do without dedicated hardware devices: enhanced password methods (simple approaches that let a user scramble a memorized password), several ways of using someone’s mobile phone as a kind of authentication token, and new biometric authentication modes such as typing rhythm. These undoubtedly offer organizations better tradeoffs of total cost of ownership (TCO) and user experience (UX) against authentication strength. We’ve seen clients migrate away from legacy hardware tokens to reduce TCO, improve UX or both, and others move away from legacy passwords as these methods lower the price point of improved security. (They might not be as strong as X.509 smart cards and OTP tokens, but they can be strong enough.) And in many countries’ financial services sectors, regulators identified several years ago that passwords alone are insufficient, so there are already many mature implementations of authentication methods beyond legacy passwords.
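One widely used phone-as-token approach of the kind mentioned above is the time-based one-time password (TOTP) defined in RFC 6238, which derives a short code from a shared secret and the current time. A minimal sketch in Python, using only the standard library (the secret below is the RFC test key, base32-encoded):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of time steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at T=59 seconds, the SHA1 TOTP is 94287082 (8 digits).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

A server and a phone app sharing the same secret compute matching codes within the same 30-second window; real deployments add secret provisioning, a clock-drift window and rate limiting on top of this core.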
And yet… and yet legacy passwords — which both logic and empiricism tell us are critically flawed — remain the most widely used authentication method over a wide range of use cases where they are no longer appropriate. (Legacy passwords can still be appropriate where risks are minimal, of course, although there can still be reasons to seek an alternative: for example, clients tell us that many users struggle to remember passwords they use only every several months or once a year.)
So, will this NIST funding stimulate the evolution of existing technologies, or will we see something wholly new? Possibly something that combines both, or exploits existing technologies in a novel way. Our Burton IT1 colleague Bob Blakley has suggested that recognition technologies, which combine passive biometric technologies with broad aggregations of contextual information about a user, will lead to the demise of (traditional) authentication (see “Maverick Research: The Death of Authentication” [http://www.gartner.com/resId=1818025]). I’m not convinced about this — Avivah Litan, Bob and I will be debating it on stage at the upcoming Gartner IAM Summit in London [http://www.gartner.com/technology/summits/emea/identity-access/] — but I think it’s inevitable that these recognition technologies will increasingly play a part. Indeed, they are already used, albeit from a different angle: Web fraud detection tools already use a variety of contextual information to dynamically assess risk, thereby determining whether the user’s initial authentication (say, by password) provides a sufficient level of trust. Adoption of such techniques is part of a best-practice layered approach to security (see “The Five Layers of Fraud Prevention and Using Them to Beat Malware” [http://www.gartner.com/resId=1646115]). (So, Bob’s research may not be as “maverick” as some might think!) I’d certainly expect NIST’s initiative to attract proposals along these lines.
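To make the idea of contextual risk assessment concrete, here is a toy sketch. The signals, weights and threshold are invented for illustration and are not drawn from any actual Web fraud detection product:

```python
def risk_score(context):
    """Sum weighted risk signals from the session context (all weights invented)."""
    score = 0
    if context.get("new_device"):                 # unrecognized browser/device
        score += 40
    if context.get("geo_velocity_kmh", 0) > 900:  # "impossible travel" between logins
        score += 50
    if context.get("hour", 12) not in range(7, 20):
        score += 10                               # outside usual working hours
    return score

def decide(context, threshold=50):
    """Password alone may suffice at low risk; step up authentication otherwise."""
    return "step_up" if risk_score(context) >= threshold else "allow"

print(decide({"new_device": True, "geo_velocity_kmh": 1000}))  # step_up
print(decide({"hour": 10}))                                    # allow
```

The point is the layering: the same password login is treated as sufficient or insufficient depending on what the surrounding context says about the risk of the session.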
But NIST’s initiative has a broader scope even than this. NIST’s Grant also says that any new authentication methods might, for example, work across multiple business and government bodies. This extends the initiative from the realm of authentication and recognition methods into the realm of identity federation using established standards (SAML, WS-Federation) and emerging protocols (such as OpenID and OAuth), where bodies such as the Kantara Initiative are already working on the supporting governance and legal frameworks. NSTIC will encompass all of this.
I’m not sure this doesn’t muddy the waters. My feeling is that the pilots for authentication methods and interoperability frameworks should be discrete, resulting in services that organizations can plug together according to their needs. If single proposals do address both aspects, I’d hope that the parts could be easily decoupled. Even if the NIST initiative stimulates the development of the ideal authentication (or recognition!) technology, it will be less useful if it’s inseparable from a novel interoperability framework — that will be a barrier to adoption by organizations that have already invested in SAML federation, for example (unless vendors can be unusually prompt in adopting the new technology!). I’d expect the first real-world implementations to be messy hybrids… and those might be rather persistent.
I’d be very interested in hearing people’s thoughts on this!
PS. Thanks to Gregg and Avivah for their suggestions. Any errors that remain are purely my own.