Richard Hunter

A member of the Gartner Blog Network

Richard Hunter
VP Distinguished Analyst
17 years at Gartner
32 years IT industry

Richard Hunter is vice president and Gartner Fellow in Gartner's CIO Research Group – Office of the CIO Research Team, where his recent work has focused on issues of interest to CIOs, including risk and value.

Target CEO’s resignation is a shot across the bow for enterprises (and maybe Cloud)

by Richard Hunter  |  May 5, 2014  |  4 Comments

The Associated Press reported today that Target’s CEO resigned, largely as a result of the damage done to the company in the wake of the massive credit card breach reported five months ago.  You can read the AP’s full story here: http://www.wjla.com/articles/2014/05/target-ceo-fired-following-last-year-s-security-breach-102788.html

The gist of the story is contained in one paragraph:  “He was the face of the public breach. The company struggled to recover from it,” said Cynthia Larose, chair of the privacy and security practice at the law firm Mintz Levin. “It’s a new era for boards to take a proactive role in understanding what the risks are.”

Boy. I’ll say.  The last time I heard of a C-level executive (other than a CIO) who lost his job because of an IT failure, it was January 2005.  The president of Comair resigned three weeks after the company’s crew-scheduling system failed on Christmas Eve 2004, stranding 30,000 passengers for three days during the busiest travel (and family) season of the year, an incident that also cost the company seven percent of its revenue for the year, not to mention a Federal government investigation.

But that was then, and this is now.  In those days an IT-related incident with that level of consequence was one-of-a-kind.  Not any more.  Now IT risk is a societal issue worldwide–not an enterprise issue, or even an industry issue, but a global issue for anyone who uses a computer keyboard, which increasingly is everyone, period.  It’s official: failure to take every reasonable action to ensure the security of computing resources can get a CEO fired.

Survey data from Gartner analysts such as Laura McLellan and Kurt Potter indicate that roughly a third of all enterprise spending on IT happens without input or advice from the IT organization, and that such spending is growing faster than the traditional IT organization’s budget.   The increasingly widespread availability of cloud computing resources makes it easy for anyone with a budget to acquire IT-related services pretty much on demand.  Whether that’s a good thing depends on your point of view.  IT professionals, who are used to vetting IT services for performance issues that include security, availability, and reliability, tend to take the point of view that it’s terrifically risky, and therefore not a very good thing; marketing and sales professionals, who often see IT’s apparent obsession with traditional performance virtues like security as obstructionist, tend to think otherwise.  IT professionals know that these things can go horribly wrong; the new buyers haven’t had enough experience with IT to know that.  The new buyers value speed-to-access above all.  And why not?  What could go wrong, after all?  Doesn’t the stuff work?

The newly apparent answer is that the technology works until it doesn’t, and it might stop working when somebody with ill intent makes it his business to break it.  If and when it fails, a CEO can pay the price.  You can bet that CEOs everywhere sent out a barrage of memos to their Chief Marketing Officers today, asking what they’re doing to protect all the stuff they’re running in the cloud.  The answers are likely to range somewhere between “not much” and “nothing.”   Such answers were probably acceptable last week.  They’re not acceptable now.   Not if the CEO can lose her job over it.

I suspect that a lot of the non-IT professionals who are carefree buyers of IT services, in the cloud or elsewhere, are about to find out that the party is over, and that from this point on their IT purchases will be subject to a new and much higher level of scrutiny.   That doesn’t mean that enterprise IT spending outside the IT budget will stop.  It might not even slow down.  But the days when anyone in the enterprise could throw down a credit card and put a bunch of sensitive data anywhere they liked, without having to demonstrate due diligence in protecting that data, are probably drawing to an imminent close.  Sarbanes-Oxley compliance became an urgent enterprise project when failure to comply meant potential jail time for the CFO and CEO.  Managing IT risk for every IT purchase is about to become an urgent priority for similar reasons: the real costs, and the parties who will pay them, are now blindingly obvious.

Category: cloud, IT risk

Fracking the Human Ecosphere

by Richard Hunter  |  April 7, 2014  |  3 Comments

The center cannot hold.  

W.B. Yeats, “The Second Coming”

For roughly 50 years, IT organizations have been structured according to what my former Gartner colleague Mark McDonald called the “dominant model”: a pattern that was so widely adopted that it was effectively ubiquitous.  In this model, IT organizations were designed above all to provide reliable and predictable execution at scale, and every activity of the IT organization, from exploration of a problem space to delivery and ongoing management of a solution, was painstakingly ordered and executed to that end.

So dominant was this model that for decades it was possible to take a professional working on an IT team in any given enterprise, industry, or geographic location, and change any of those factors without interrupting that professional’s effectiveness for more time than it took him or her to locate a new desk.   It was an extraordinary uniformity for an industry that has come to see itself as a key enabler of change.  But the dominant model wasn’t really about change.   It was about tightly knit processes whose fundamental characteristics were the antithesis of change: stability, accuracy, security, regulated throughput.  Change was tolerated only to the extent that it improved price-to-performance ratios for these specific outcomes.  Change that threatened those outcomes—such as the introduction of personal computers in the 1980s, the increasing ubiquity of personal computing and communications devices, and the purchase of cloud computing services by marketing teams—was and is resisted by many, if not most, IT organizations.

This model is coming to an end.  Of course it is innovation in technology that is the cause.  The dominant model could not survive the explosive spread and diversification of information technology that has occurred worldwide in only a few decades.   The uniformity of culture and technology that preceded the Nexus of Forces—Gartner’s shorthand term for the unprecedented democratization of powerful technologies worldwide that began with the exponential growth of the Internet in the 1990s, and is proceeding now with technologies that include social, mobile, cloud, and big data—was key to the success of the dominant model.  Now that the model has been largely if not entirely eviscerated, it is not at all clear what will follow it.

IT’s Dominant Model is not the only one dissolving

What is even more important is that these same forces are eviscerating the dominant models for societies at the same time.   It may in fact be that the confusion reigning in IT organizations as they seek a new model for purpose and structure is simply a reflection of the confusion that reigns in global societies as they struggle with the impact of the Nexus.

Commentators ranging from Stratfor to Tom Friedman to (probably) your next door neighbor have observed that there is a crisis in global leadership and deep fragmentation in bodies politic.  In my book “World Without Secrets,” published in early 2002 in the wake of the 9/11 attacks, I discussed the rise of the “Network Army,” a form of social and political movement based on shared (if often niche) values and beliefs that transcend geography and nationality, oriented to action, and enabled by instantaneous, ubiquitous communications and access to information.  Our bodies politic are rapidly devolving into network armies that include fundamentalist libertarians in the Iowa Caucuses, right-wing nationalist street gangs in Ukraine, and a myriad of actors in Arab Spring uprisings.  They are everywhere, and wherever they are found they are in no mood for compromise.

In the early days of the Internet many believed that widespread access to information would create a new age of global harmony and prosperity.  How otherwise could Google possibly justify the belief that it is not evil?  But it is increasingly apparent that something very different has happened.  Our barely-born technologies for information manipulation and communication have contributed to, if not created, an enormous rift between social, political, economic, and military elites and the peoples they supposedly serve.  The free flows of information that were supposed to create common good have instead helped to consolidate wealth and power on an unprecedented scale, leaving the disenfranchised everywhere to battle it out among themselves for the remaining scraps with little mediation from their supposed leaders.

21st century IT industrialists frack the human ecosphere, extracting value and leaving society to struggle with mountains of waste that include rapidly burgeoning cyber-crime, just as 19th-century industrialists strip-mined the planetary ecosphere and left the local population to deal with ruined landscapes and poisoned rivers.  Abandoned and openly manipulated by elites, ordinary citizens worldwide react by turning ever more stubbornly inward to values and beliefs that a massive and increasing flood of information from untrusted sources would otherwise call into greater question with each passing moment.  Attitudes substitute for thoughtful consideration; slogans that represent purposeful oversimplifications of complex and often self-contradictory ideals substitute for debate.  Political opponents do not merely disagree; they have no understanding of the premises behind their opponents’ positions, and therefore no ground for common action for the common good.  Winner-take-all street fights replace bargaining and pragmatic change.   In this land ruled by network armies there is no perceived time or space for pragmatic change, only for eager exploitation of temporary commercial, political, or ideological advantage.

Will Innovation End?

This extraordinary age of global discord is fueled by rampant, accelerating innovation that has delivered technology of unprecedented power to multitudes worldwide.  It is possible that it will only end when increasingly powerful elites decide that innovation is over.

Cycles of change have already accelerated far past the point where they can be assimilated without substantial pain by humans and the planet we inhabit.  Yet it is an article of faith among many, including most of my colleagues at Gartner, not to mention widely quoted gurus like Ray Kurzweil, that continuing acceleration of innovation is a given.

I doubt it.  As my first mentor at Gartner, Mike Braude, once said, “When everybody believes something, that’s very good evidence that it’s not true.”   Innovation has flourished in our era because new industrial and political elites have found it to their advantage to make it do so.  Innovation can end when those elites or counter-elites decide that more change is dangerous to their interests.

An end to real science and innovation was a key element of two of the greatest visionary masterpieces of the 20th century, Frank Herbert’s “Dune” and George Orwell’s “1984.” In those books it was political, not economic, elites that rang the bell.  But economic elites may easily substitute.  Currently we have the spectacle of gambling mogul Sheldon Adelson spending millions, perhaps billions eventually, to influence gaming industry players and regulators against Internet gambling.  That’s transparent opposition to industry innovation, and it fundamentally has nothing to do with what’s better or worse for the industry’s customers, for whom the “benefits” of gambling are in any case arguable.  It’s all about who controls the channels, and monopoly never operates to the customer’s benefit.

Will a new dominant model emerge?  Maybe.

Humans crave stability even more than they crave novelty, and ultimately a new—or revived—dominant model is likely to emerge.  The one that’s emerging now at the societal level seems to be along the lines of a society in which vast wealth and consolidated industry power openly rules all by setting and enforcing de facto rules, industry by industry, with the help of a paid-for political establishment.   (I trust that there is no longer room for argument about whether money rules politics, at least in the USA, where the Supreme Court has just equated unfettered spending on elections to free speech for the second time?)

If the same thing happens at the enterprise level, then the future of the IT organization lies in little more than operational support in an otherwise balkanized enterprise.  It’s not a given; it’s a significant possibility in a fractured world where consensus in and outside the enterprise comes down to “just do what we say, and nobody gets hurt.”  Of course, running operational support is no mean thing if innovation disappears or is trivialized; once that’s done, what’s left besides operations?

It’s a cardinal tenet of business writing that an author never raises an issue for which she does not have an, if not the, answer.  I have done so here, and I apologize to the reader for potentially ruining what might otherwise have been a pretty nice day.   I do think it is important to make the point that the dissolution of the dominant model for IT may only be a symptom of a much deeper dissolution of social pacts that is driven by rampant innovation, and that may only stabilize when innovation ends.

If you think otherwise, or if you’d like to comment on any aspect of this post, please do.   These ideas are by no means settled, for me or anyone else, no matter how strongly they’re stated here.

P.S. As an aside, when I re-read the Yeats poem referenced above, written almost 100 years ago, I was astonished to see how modern its message is.  I could have quoted almost any line in it with relevance to this piece.

Category: IT risk, privacy

Privacy is Power

by Richard Hunter  |  November 4, 2013  |  4 Comments

The recent revelations of NSA surveillance of just about everybody have of course provoked a lot of discussion about privacy–what it is, why it matters, who needs it and who doesn’t.  Daniel Solove, writing well before the NSA’s activities came to light, took on the “I’ve Got Nothing to Hide” argument against privacy in this piece.  On the other side of the debate, Mike Rogers, chair of the US House of Representatives Intelligence Committee, proposed the novel theory that your privacy can’t be violated unless you know it’s been violated:

Rogers: I would argue the fact that we haven’t had any complaints come forward with any specificity arguing that their privacy has been violated, clearly indicates, in ten years, clearly indicates that something must be doing right. Somebody must be doing something exactly right.

Vladeck: But who would be complaining?

Rogers: Somebody whose privacy was violated. You can’t have your privacy violated if you don’t know your privacy is violated.

I’m sure that extortionists everywhere would whole-heartedly applaud Rep. Rogers’s novel legal theory–if no one’s complained, there can’t possibly be a crime.   That aside, the fact is that there’s plenty of controversy about what privacy is, why it matters, and what harms result from its violation.   The latter question in particular is vexing for privacy advocates.  As per Solove’s article, many might argue that if I have nothing to hide, I have no reason to fear the loss of my privacy.

I propose a simple definition of privacy that may help clarify the harm involved in the violation of privacy.  To wit: privacy is power.  Privacy is a line over which others may not step.  Privacy is a protected space within which I think, do, and say what I please.  Absent privacy, I’m powerless to prevent the intrusion of the powerful into my life, and that intrusion may take any form that the powerful deem appropriate.

We see this most clearly in police states, whose defining characteristic is the utter absence of privacy.  The absence of privacy in such states in effect makes the individual the property of the state.   For an extraordinary illustration of this principle in art, I highly recommend the film “The Lives of Others,” which describes the relationship between an experienced and extremely capable East German intelligence officer and the writer he is charged to monitor.  I promise that regardless of your political beliefs, you will consider this film to be two hours very well spent.  But even in commercial relationships, which are far less coercive than those between authorities and citizens in police states, the absence of privacy tips the balance of power.

I repeat: privacy is power.  Without it, the individual has none.  Period.

If you’re comfortable with that, then you may indeed have nothing to hide, because anything worth taking has probably already been taken.

Category: privacy

Maturity models are proxies for value, not value itself

by Richard Hunter  |  October 23, 2013  |  2 Comments

Maturity models are all the rage in IT circles.  There are maturity models for nearly everything an IT organization does.  Lots of IT professionals, in practically every IT discipline, talk about improving maturity as if it were the ultimate goal for the organization.  And that’s a problem for the IT professional.

The heart of the problem is that “maturity” is not value.  Value is an outcome, and maturity is not an outcome; it’s something we pursue in order to develop the capabilities that make an outcome possible.   At best, increasing maturity is a leading indicator for value, not the thing itself.  An IT organization that touts its improving “maturity” to an executive team is not talking about value, but about IT activities. Executives are at most mildly interested, and at worst worry that IT isn’t focused on the right things–which of course means initiatives that explicitly deliver more value to the enterprise.

I’m not opposed to increasing maturity.   I’m all in favor of everyone improving performance, ideally by leaps and bounds.  I merely think that in most cases maturity is not a topic for discussion with the executive team.  What IT professionals should discuss is not “maturity”, but the outcomes that increasing maturity produces.

As an example, consider the grand-daddy of all maturity models: Watts Humphrey’s Capability Maturity Model (CMM) for applications development, developed in the 1980s at Carnegie Mellon University.  The point of this model is to rate the ability of an AD organization to consistently execute against project plans.  Humphrey tied the original model to quality; it can just as easily be tied to outcomes that include higher productivity, improved user satisfaction, and any others that can benefit from systematic execution.

And here we encounter our first maturity dilemma: consistency in execution says little or nothing about what outcomes are supposed to be consistently delivered.  Should our AD organization optimize for quality, for adherence to schedule and budget, or for productivity?  The first is what NASA’s space shuttle program optimized for; the second is what the typical large external service provider optimizes for; the third is what the typical AD shop thinks it’s optimizing for.  Which of these outcomes is the desired one for a given IT team?

The answer depends on what the enterprise needs, which can change over time.  Twenty years ago most enterprises wanted high-quality systems, the stuff we now call “systems of record” in Gartner’s applications development pace layering model; now most enterprises want rapid delivery, figuring that an 80% solution now is better than a 100% solution three years from now.  An IT organization whose “maturity” defaults to rigorous methods for creating high quality above all else might find itself losing value in the eyes of peers throughout the enterprise when the goal shifts (as indeed it has for many of our clients).

The solution to the problem is to avoid discussing IT’s performance in terms of maturity, and focus instead on the outcomes that maturity is supposed to produce.  Let’s assume for the sake of argument that desired outcomes for a given enterprise include delivering more projects, in shorter time spans, with less waste for both IT and non-IT personnel.  As opposed to reporting to the executive team on AD “maturity,” the AD team can talk about percentage of projects delivered on time and budget, and increased value delivered as represented by reduced waste and higher yields on resources invested in business imperatives.
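To make the shift concrete, here is a minimal sketch, in Python, of the kind of outcome reporting described above.  The project records and figures are entirely hypothetical, invented for illustration; the point is the shape of the report, not the data.

```python
# Hypothetical AD project records: delivery status and budget vs. actual cost.
projects = [
    {"name": "A", "on_time": True,  "budget": 100, "actual": 95},
    {"name": "B", "on_time": True,  "budget": 200, "actual": 230},
    {"name": "C", "on_time": False, "budget": 150, "actual": 150},
    {"name": "D", "on_time": True,  "budget": 120, "actual": 118},
]

# Outcomes an executive team cares about: delivery performance, not "maturity".
on_time_pct = 100 * sum(p["on_time"] for p in projects) / len(projects)
on_budget_pct = 100 * sum(p["actual"] <= p["budget"] for p in projects) / len(projects)

print(f"{on_time_pct:.0f}% of projects delivered on time, "
      f"{on_budget_pct:.0f}% delivered on or under budget")
```

A report like this speaks in outcomes the executive team can act on; a CMM level, by contrast, requires translation before anyone outside IT can tell whether it matters.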

Any of these outcomes represents real value, as opposed to “maturity”, which is at best a proxy for value–a symbol, not the thing itself, and a symbol that doesn’t resonate with anyone outside IT.  When you want to talk value, talk outcomes, not maturity.  If the outcomes don’t resonate, the increased maturity that makes the outcomes achievable won’t either.

Category: value

IT investment decisions are the best early-warning system for the success of enterprise strategies

by Richard Hunter  |  October 17, 2013  |  7 Comments

I’ve been thinking a lot lately about the “kernel” construct that Richard Rumelt proposes in his book “Good Strategy Bad Strategy.”  According to Rumelt, the irreducible core of a strategy–the “kernel”–consists of a diagnosis of the environment an enterprise operates in, one or more policies that are intended to address the diagnosis, and coherent execution in support of the policies.  IT governance looms large in coherent execution; every change in an enterprise that’s more than trivial eventually comes to IT, because nothing that’s more than trivial can be implemented without supporting technology.

In other words, the IT project proposal process essentially generates a running list of proposed and actual change in the enterprise.  In fact, the IT project portfolio is the only place in the enterprise where all that change is visible at once.  Smart enterprise leaders can use that fact to their advantage by turning the IT investment decision making process into a forecasting system for the success of enterprise strategies. Here’s how:

1)       Ensure that any proposal for change that comes to the IT investment decision making process (a/k/a IT governance) has two explicit characteristics:

  • It spells out outcomes in material, quantified, baselined terms, and
  • It links those outcomes to the goals associated with specific enterprise strategic imperatives, of which there are probably no more than half-a-dozen at any point in time, assuming that the enterprise is genuinely clear on its strategy; great strategies don’t have 15 segments.

Investment portfolios aligned to strategic imperatives are useful for the latter purpose.   The CFO or COO, not the CIO, is the appropriate party to ensure this basic investment discipline; enforcing investment discipline is not the CIO’s job, whether or not IT is involved.

2)      By adding up the outcomes, costs, and timeframes associated with projects within an investment portfolio linked to a strategic imperative, the enterprise can determine how far (hypothetically) the full range of initiatives will move the enterprise towards the goals specified by the imperative, at what cost, and within what timeframes.  That knowledge can be put to good use in deciding how much and where to invest.  Summing of these factors is relatively trivial if the basic information is included in project proposals, and can be done by a project management office, by personnel within IT or a sponsoring business unit, or by the CFO’s office.

3)      When proposed projects are reviewed, approved, resourced, and started, the same approach can be used to determine how far the approved project portfolio (as opposed to the proposed project portfolio) will move the enterprise towards its goals.  Comparing proposed project portfolios to approved portfolios can help to show whether the enterprise is investing enough, too little, or too much in pursuit of its strategies; for example, whether attractive investment opportunities are being left on the table because of arbitrary constraints on “IT spending.”  (Calling an initiative supported by IT an “IT project” is like calling your exercise program “the Nautilus project”; yes, machinery is involved, but that’s not the point of the exercise.)

4)      When approved projects are completed (and after a suitable waiting period to account for the lag in benefits that always follows any change initiative), actual outcomes can be measured, which will help both to refine forecasting going forward and to improve current forecasts.
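The mechanics of the steps above can be sketched in a few lines of code.  The following Python sketch is illustrative only: the data model, field names, and figures are assumptions invented for this example, not a description of any real governance system.

```python
# Sketch of steps 1-3: each proposal carries a quantified outcome linked to a
# strategic imperative; summing them forecasts progress, cost, and timeframe,
# and comparing proposed vs. approved portfolios shows what's left on the table.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    imperative: str              # strategic imperative the project supports
    outcome_contribution: float  # hypothetical forecast progress toward the goal
    cost: float                  # total cost
    months: int                  # delivery timeframe
    approved: bool = False

def portfolio_summary(projects, imperative):
    """Sum outcomes, costs, and timeframes for one imperative's portfolio."""
    linked = [p for p in projects if p.imperative == imperative]
    return {
        "forecast_progress": sum(p.outcome_contribution for p in linked),
        "total_cost": sum(p.cost for p in linked),
        "longest_timeframe_months": max((p.months for p in linked), default=0),
        "project_count": len(linked),
    }

projects = [
    Project("Returns analytics", "product-quality", 3.0, 400_000, 9, approved=True),
    Project("Supplier scorecards", "product-quality", 2.5, 250_000, 6),
    Project("CRM refresh", "customer-retention", 4.0, 900_000, 12, approved=True),
]

proposed = portfolio_summary(projects, "product-quality")
approved_only = portfolio_summary([p for p in projects if p.approved], "product-quality")
gap = proposed["forecast_progress"] - approved_only["forecast_progress"]
print(f"Proposed portfolio forecasts {proposed['forecast_progress']} points of progress; "
      f"approved portfolio forecasts {approved_only['forecast_progress']}; "
      f"{gap} points left on the table.")
```

When completed projects report actual outcomes (step 4), the same records can carry actuals alongside forecasts, so the forecast-to-actual ratio sharpens future estimates.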

Suppose, for example, that we have a strategic imperative to improve product quality in order to increase initial and repeat sales and reduce capital allocated to allowances for returns.    If we can quantify, even hypothetically, the extent to which a given increase in product quality will increase sales and reduce allowances for returns, and we frame our project proposals in those terms, we can then use our IT-supported project portfolio for that strategic imperative to estimate how fast, how much, and at what cost we will be able to achieve these strategic goals with our current project initiatives.

I said earlier that this approach offers a level of insight into the potential for success or failure of strategic imperatives that is available nowhere else.  Even more importantly, this insight can be achieved with only minor adjustments (if any) to an investment decision-making process that is already present in the vast majority of enterprises.  For many, the discipline of specifying initiative outcomes in material, quantified, baselined terms (as opposed to fuzzy measures of success like “improve collaboration”) will be the most difficult to master, but this can be expected to improve over time with management attention and team learning.  In return, the enterprise gets a powerful early-warning mechanism for strategic success, not to mention a better way to think about its investments involving scarce IT and non-IT resources, and (ultimately) higher yields on those investments.

Most enterprises think about IT governance as a mechanism for allocating IT resources to “IT projects,” but that’s way too limited a view of what’s really going on.  I hope I’ve made it clear here that not only is IT governance one of the most important mechanisms for coherent execution of strategy, it’s also the one that offers the best and most comprehensive tools for estimating and validating the outcomes that execution will deliver.

I’ve been talking about these ideas with CFOs, CEOs, and CIOs lately, and the reception has generally been favorable.  I wouldn’t be surprised if Wall Street investment analysts figure this out at some point, and CIOs start getting calls from said analysts about what’s in the project portfolio. 

Think about it, and let me know what you think.

Category: value

Help Tina Nunno name her new book!

by Richard Hunter  |  July 3, 2013  |  1 Comment

My esteemed colleague Tina Nunno has been writing and presenting on the topic of CIO politics for years now, and her book on this topic is well underway.  Tina needs help naming the book, and I’m helping to spread the word.  Follow the link below to help Tina choose between two titles (and get a free gift–a very nifty piece of recent Gartner research).  Thanks!

http://blogs.gartner.com/tina-nunno/welcome-to-my-blog-please-vote-on-the-title-of-my-new-book/

Category: Uncategorized

What Ethics Are Appropriate to a Hacker?

by Richard Hunter  |  July 2, 2013  |  4 Comments

I’ve been thinking lately about Matt Honan, who wrote almost a year ago in Wired about his experiences with a hacker named Phobia who hacked Honan’s Apple account, and soon after used his access to the account to wipe out Honan’s electronic memories–everything Honan had acquired and stored on his Apple devices.  Honan later was contacted by Phobia, and in the course of their correspondence Honan asked why Phobia had done him such grievous harm:

I asked Phobia why he did this to me. His answer wasn’t satisfying. He says he likes to publicize security exploits, so companies will fix them. He says it’s the same reason he told me how it was done. He claims his partner in the attack was the person who wiped my MacBook. Phobia expressed remorse for this, and says he would have stopped it had he known.

“yea i really am a nice guy idk why i do some of the things i do,” he told me via AIM. “idk my goal is to get it out there to other people so eventually every1 can over come hackers”

The ethos expressed in those few words is close to monstrous.  Here’s what it boils down to:

1)       If I hack you, it’s your fault for being unprotected.  (As Clint Eastwood said in “Unforgiven” after being taken to task for shooting an unarmed man, “He shoulda armed himself.”)   Therefore…

2)      I have the right to hack anybody who’s vulnerable to hacking.  And that’s a good thing, because…

3)      I serve a higher purpose when my hacking wrecks the lives of people who are too careless or innocent to protect themselves.  As Phobia wrote to Honan, “my goal is to get it out there to other people so eventually every1 can over come hackers”.

Phobia is confused, to say the least.  You can’t simultaneously not know why you do the things you do, and purport to have a goal that represents the justification for doing what you do.   Beyond that, there’s an obvious hypocrisy in Phobia’s comments.  If you claim to be concerned for the welfare of others—for example, to care about whether they can protect themselves from some kind of harm—you don’t start your relationship with those others by launching a brutal surprise attack.  In a civil (read: moral) society, you don’t attack strangers on the street (or anywhere else) on the pretense that it’s a lesson to the vulnerable to make themselves less vulnerable.

Indeed, civil societies reserve special scorn and punishment for those who attack the most vulnerable.  As one obvious example, pedophiles are not commended for “teaching” little kids to protect themselves from sexual predators.  Responsibility in a civil society lies with the perpetrator, not the victim, not God.

Of course it’s important for all of us to take steps to avoid being victimized in all sorts of ways, but that’s not the point.  My being weak or defenseless does not make my attacker noble or justified, whether or not the attack is successful.  The moral measure of a person in a civil society is how well he or she treats vulnerable people, not how successfully he or she takes advantage of their vulnerability.   (I won’t pretend that there aren’t plenty of people out there who have achieved fame and fortune in modern society by taking advantage of the vulnerable; I’m saying that such people don’t get to claim the moral high ground, which is what’s happening when a hacker says he does what he does to make the world safe from hackers.)

In this sense, hackers like Phobia are not vanguards of a new cyber-civilization, as Phobia seems to fancy himself.   They are the precise moral equivalent of a street thug ambushing an old woman, or a man throwing acid in his ex-lover’s face (a recently popular activity in some parts of the world, and one that is also intended to teach the victim a lesson).  They are representative of the worst tendencies in humanity, not the best, and it’s discouraging that some very bright young people (Phobia is 19) have convinced themselves that victimizing the vulnerable is representative of high ethics.

I doubt that Phobia thinks of himself as religious, but his justifications are eerily similar to the tenets of any number of fundamentalist religions.   Many fundamentalists will tell you that if you’re attacked—by a terrorist, a cyclone, an angry dog, a fellow fundamentalist, or anything else—it’s God’s will (which is another way of saying “it’s your fault”).   If the fundamentalist hears from God (or the little voice inside his head that purports to be God) that you should be attacked, the attack will begin as soon as is practically feasible.  After you’re righteously punished, you will be more likely to heed God’s will; if not, God’s will is still satisfied, it being the will of God that the faithful seek relentlessly to destroy unbelievers.  No fundamentalist ever seems to think that it’s God’s will to be kind to everybody.  Nor does any hacker.  Once you’ve established a higher purpose that supersedes the mere interests of any individual, kindness has nothing to do with it.

Civil societies rely on due process and rule of law, which were developed painstakingly over centuries precisely to protect individuals and societies from arbitrary attacks by the self-empowered, be they bandits or kings.  Before due process and rule of law, life was (as Hobbes put it) nasty, brutish, and short, which is a pretty good description of a typical cyberattack (although modern attacks increasingly tend to be ongoing and persistent rather than episodic).

Do hackers really want to return society to the law of the jungle?  Do they really want every man and woman to decide for themselves what “right” means, in utterly exclusive and non-negotiable terms, and to act without further ado to destroy those who stray from the path of righteousness? Do they really want to add to the world’s sum total of pain and anguish, of which neither will ever be in short supply, with or without the help of hackers?

We live in an era in which authorities of all sorts have been revealed to be both unethical and incompetent, and it’s not surprising that many in this era have anointed themselves as moral authorities with the right to mete out drastic punishment as they see fit.  But that doesn’t make it right.

If you’re a hacker, and you disagree with this argument, write and tell me why.


Category: IT risk

What does KCG’s recent debacle say about governance and technology?

by Richard Hunter  |  August 6, 2012

In the days since Knight Capital Group suffered a “computer glitch” that cost the company $440M in losses, I’ve been discussing with my colleagues how this catastrophe might have been prevented.   

Some of my colleagues have argued that the failure was basically about IT governance–that the IT team at Knight was responsible only for implementing flawed trading algorithms specified to them by their non-IT colleagues.  The argument essentially boils down to this: the fault lies with those who did not understand, and therefore did not adequately specify or test, what would happen when the technology operated according to the rules it was given.  The implication in this argument is that disaster could have been prevented if the people involved had made better decisions in the requirements/specification/design/testing/implementation/etc. of the software. 

My argument is that none of that is sufficient to prevent disaster—including future disasters—and that restricting the technology itself is the only effective way to solve the problem, meaning to reduce the risks to manageable levels.   Indeed, I would argue that the risks of high speed trading systems are intrinsic, ungovernable, and potentially threatening to all participants in the markets.  “Intrinsic” means that the problems these systems supposedly model with logical rules are beyond the ability of logic to solve.  “Ungovernable” means that the risks introduced by these systems can’t be resolved by the tools of governance, largely because of the intrinsic logic problem.  You cannot govern reality away; you operate within the bounds of reality, or reality teaches you to do so, more or less brutally and directly.

Markets by their nature produce unforeseen circumstances.  It is as impractical to expect a piece of technology to respond appropriately—or even predictably–to every unforeseen circumstance as it is to expect a human being to do so.  When technology is empowered to execute massive trades instantly, guided by rules based on various combinations of market circumstances, bad things can be expected to happen as soon as unforeseen circumstances arrive—in fact, at the very moment unforeseen circumstances arrive.
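The failure mode described above can be sketched in a few lines of code. This is a toy illustration, not a model of any real firm's system: the `run_trader` function, the quote streams, and all the numbers are invented for the example. A naive "buy every downtick" rule behaves modestly on the price paths its authors foresaw, but a malfunctioning feed that only ever ticks down triggers the same rule on every single update, building a position far beyond the trader's capital before any human could intervene.

```python
# Toy illustration (hypothetical, not any real trading system):
# a rule-based trader that buys a fixed lot on every downtick.

def run_trader(quotes, lot_size=1_000, cash=10_000_000):
    """Apply a naive 'buy the dip' rule to a stream of quotes."""
    shares = 0
    last = None
    for price in quotes:
        if last is not None and price < last:  # rule: buy every downtick
            shares += lot_size
            cash -= lot_size * price
        last = price
    return shares, cash

# Foreseen case: ordinary oscillating prices -> a modest position.
normal = [100.0, 99.5, 100.2, 99.8, 100.1]
shares, cash = run_trader(normal)          # 2,000 shares, cash barely dented

# Unforeseen case: a malfunctioning feed that only ever ticks down.
# The rule operates exactly as specified -- and buys on every update.
runaway = [100.0 - 0.01 * i for i in range(5_000)]
shares_bad, cash_bad = run_trader(runaway)  # ~5M shares, cash deeply negative
```

The point of the sketch is that the rule is never "wrong" in its own terms; the disaster comes from an input that was outside its authors' model of the market, which no amount of review of the rule itself would have surfaced.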

What is happening now on Wall Street is that in pursuit of competitive weapons, firms are empowering their machines to make bigger and bigger decisions faster and faster—literally, to trade millions or hundreds of millions of shares in small fractions of a second.  It’s an arms race, and the weapons in question are being deployed at many, many firms, each of which has its own views on what constitutes “acceptable risk.”

If we define this as a governance issue, then the solution must be for the firms involved to make smarter decisions about the risks.  The first problem with this approach is, as Harvey Keitel said in “Thelma and Louise,” that brains will only take you so far (to which he added that luck always runs out, something every executive should remember every day).  High speed trading systems create severe risks that are not only unanticipated, but realistically impossible to anticipate in an environment where technology is continuously pushed to the limit.

In short, better governance won’t solve the problem because the people involved in governance are no more able to anticipate all possible failure modes than the people involved in designing and building the systems.  Even if they were, it scarcely needs saying that Wall Street traders in general are heavily incented to take risks, and that they are often able to make others pay the price for risks gone bad–circumstances that do not inspire confidence in the “governor’s” ability to manage risks down.   Finally (for the governance argument), there is no reason to believe that all players will adopt “good” (meaning in this case risk-aware) governance policies–and a single point of high-speed trading failure can potentially impact many players in the markets.  

If you take the point of view that disasters such as this are the result of using technology in a way that it should not be used–to solve a problem that computer logic cannot solve, at least in the current state of the art–then the solution is to prevent the technology from being used in that way, either by banning it outright or by heavily taxing the proceeds of trades that are too short-lived to be called “investments.”  I appreciate that regulating high-speed trading systems out of existence one way or another is a drastic approach.   I believe that the risks–which extend to market participants far removed from the businesses that create these events–justify the means.  There’s no more reason to allow individual trading companies to implement technology that potentially destroys markets than there is to allow private citizens to carry nuclear weapons.  In both cases we could argue that careless or deranged (or whatever pejorative you like) individuals are the real problem.  I agree that this argument is valid to a point; triggers don’t pull themselves, and we’d all be better off if everyone behaved decently.  But positioning “better governance” as the solution to the problem doesn’t work when the consequences of one failure of governance are so severe.  

As the saying goes, one atomic bomb can really ruin your day.  That’s why we’re all glad that atomic bombs are not for sale to anyone who wants one, and why we should really, really question why we need automated trading programs on Wall Street.  

There may be other solutions to this problem, and I’d be delighted to hear from readers about what they think might work.   One thing I’m certain will not work is to continue on the current path, with the potential for bigger and bigger disasters.  (But if you’d like to argue that point, feel free to do so.)


Category: IT risk

Q: Can IT failure bring a company down? A: Yes.

by Richard Hunter  |  August 2, 2012

My colleagues at Gartner and I have recently been discussing the importance of IT risk, often in the context of Cloud adoption, where the discussion is usually about the extent to which risks in the Cloud will slow adoption.  (Our basic take on that question is: some, but not enough to significantly impede the march to Cloud.)  Some of my colleagues are skeptics about the potential impact of cloud failures; being a risk maven, I’m pretty bearish on the topic.  The question my more bullish colleagues often ask is: how bad could it get, anyway?  Has any company ever failed because of an IT risk that came to fruition?

The answer is yes, of course.  CardSystems Solutions lost 95% of its revenues within 3 weeks of the breach it announced in 2005, and was sold shortly thereafter for a fraction of its pre-incident worth.  Comair didn’t fail as a result of the December 2004 incident in which its crew scheduling system went down on Christmas Eve, stranding an estimated 30,000 passengers during the Christmas holidays; however, the company lost 7% of its revenue for the year, a pretty big deal for an airline, and was the subject of a federal investigation, which no one much enjoys.  And the president of the company lost his job; not the CIO, the president.  Most businesses would consider that to be a pretty steep price for IT failure.

The interesting thing isn’t that some companies have failed when their IT failed; the interesting thing is that the risks are almost certainly increasing.  Plenty of executives don’t yet understand that while IT spend only represents 5% or less (on average) of enterprise revenues, the impact of IT on revenues is far higher than that.   To put it another way, many executives don’t yet realize that their businesses don’t run much, if at all, without IT, and when IT is misused or fails, the impacts can be very large indeed.  The recent events involving the Knight Capital Group make it clear how far we’ve come in terms of the importance of IT risks.

According to this NY Times article, Knight Capital Group lost $440 million on Wednesday in a matter of a few minutes when a “computer glitch” resulted in the purchase of a very large pile of stocks on behalf of the company.  (The losses were incurred when the stocks were sold.)  I quote the Times:

“In its statement, Knight Capital said its capital base, the money it uses to conduct its business, had been ‘severely impacted’ by the event and that it was ‘actively pursuing its strategic and financing alternatives.'”

The Times added: “The losses are greater than the company’s revenue in the second quarter of this year, when it brought in $289 million.”  The article goes on to quote Christopher Nagy of KOR trading as saying that this might be “the beginning of the end for Knight.”

So the basic story is that Knight put in a new trading system; the system went haywire; the malfunction produced $440 million in losses in less than 5 minutes; and the company may fail as a result.   Let there be no doubt: in the modern era companies fail because of IT misuse or failure.  Period.  This is not the same as civilization failing, of course.  But it’s pretty serious for the owners and employees (and maybe customers) of Knight Capital Group.

What this means is that it’s more important than ever for IT professionals to make the connection for the rest of the executive team between what IT does and what everybody in the enterprise does with IT–to identify clearly what business outcomes might result from an IT failure.  It’s possible that doing so would not have prevented this incident; I have no idea how many tests would have been necessary to discover and eliminate the “glitch” that cost Knight Capital Group $440 million.  But I wonder whether the executive team at Knight was fully aware of just how bad a “computer glitch” could be–and I know that executives at many other companies are not.


Category: cloud, IT risk

Let me entertain you…

by Richard Hunter  |  July 18, 2012

This is my first blog for Gartner. Anyone who’s read the books I co-wrote with George Westerman for Harvard Business Press, “IT Risk” and “The Real Business of IT,” knows that the issues that interest me most revolve around IT’s value to the enterprise. Since a blog is first and foremost a means of personal expression, I expect to write plenty here about IT value and its reflection in IT risk.

There’s plenty to write about in those terms. IT has become an essential lever for creating value in every enterprise of any size worldwide. As the importance of IT in creating value increases, so do the risks. This morning I was alerted by Paul Proctor, one of my colleagues, to a video of a fireworks show in San Diego on July 4th in which a purported computer glitch fired off 18 minutes’ worth of fireworks in about 30 seconds. (By the way, I found the 30-second version to be truly thrilling, to an extent far surpassing the usual fireworks show. Only the last 30 seconds are really exciting in most fireworks shows anyway.) When people lit fireworks by hand, you didn’t get that kind of outcome.

We are at the beginning of major platform and demographic changes in IT, and the role of IT organizations in the enterprise is changing even as the value they provide goes up and up. It’s an exciting time to be in IT, in fact the best time ever to be an IT professional, and I’m looking forward to writing about it. So stay tuned.


Category: IT risk, value