by Richard Hunter | November 4, 2013 | 2 Comments
The recent revelations of NSA surveillance of just about everybody have of course provoked a lot of discussion about privacy–what it is, why it matters, who needs it and who doesn’t. Daniel Solove, writing well before the NSA’s activities came to light, took on the “I’ve Got Nothing to Hide” argument against privacy in this piece. On the other side of the debate, Mike Rogers, chair of the US House of Representatives Intelligence Committee, proposed the novel theory that your privacy can’t be violated unless you know it’s been violated:
Rogers: I would argue the fact that we haven’t had any complaints come forward with any specificity arguing that their privacy has been violated, clearly indicates, in ten years, clearly indicates that something must be doing right. Somebody must be doing something exactly right.
Vladeck: But who would be complaining?
Rogers: Somebody whose privacy was violated. You can’t have your privacy violated if you don’t know your privacy is violated.
I’m sure that extortionists everywhere would whole-heartedly applaud Rep. Rogers’s novel legal theory: if no one’s complained, there can’t possibly be a crime. That aside, the fact is that there’s plenty of controversy about what privacy is, why it matters, and what harms result from its violation. The latter question in particular is vexing for privacy advocates. As per Solove’s article, many might argue that if I have nothing to hide, I have no reason to fear the loss of my privacy.
I propose a simple definition of privacy that may help clarify the harm involved in the violation of privacy. To wit: privacy is power. Privacy is a line over which others may not step. Privacy is a protected space within which I think, do, and say what I please. Absent privacy, I’m powerless to prevent the intrusion of the powerful into my life, and that intrusion may take any form that the powerful deem appropriate.
We see this most clearly in police states, whose defining characteristic is the utter absence of privacy. The absence of privacy in such states in effect makes the individual the property of the state. For an extraordinary illustration of this principle in art, I highly recommend the film “The Lives of Others,” which describes the relationship between an experienced and extremely capable East German intelligence officer and the writer he is charged to monitor. I promise that regardless of your political beliefs, you will consider this film to be two hours very well spent. But even in commercial relationships, which are far less coercive than those between authorities and citizens in police states, the absence of privacy tips the balance of power.
I repeat: privacy is power. Without it, the individual has none. Period.
If you’re comfortable with that, then you may indeed have nothing to hide, because anything worth taking has probably already been taken.
Category: privacy
by Richard Hunter | October 23, 2013 | 1 Comment
Maturity models are all the rage in IT circles. There are maturity models for nearly everything an IT organization does. Lots of IT professionals, in practically every IT discipline, talk about improving maturity as if it were the ultimate goal for the organization. And that’s a problem for the IT professional.
The heart of the problem is that “maturity” is not value. Value is an outcome, and maturity is not an outcome; it’s something we pursue in order to develop the capabilities that make an outcome possible. At best, increasing maturity is a leading indicator for value, not the thing itself. An IT organization that touts its improving “maturity” to an executive team is not talking about value, but about IT activities. Executives are at most mildly interested, and at worst worry that IT isn’t focused on the right things–which of course means initiatives that explicitly deliver more value to the enterprise.
I’m not opposed to increasing maturity. I’m all in favor of everyone improving performance, ideally by leaps and bounds. I merely think that in most cases maturity is not a topic for discussion with the executive team. What IT professionals should discuss is not “maturity”, but the outcomes that increasing maturity produces.
As an example, consider the grand-daddy of all maturity models: the Watts Humphrey Capability Maturity Model (CMM) for applications development, developed in the 1980s at Carnegie Mellon University. The point of this model is to rate the ability of an AD organization to consistently execute against project plans. Humphrey tied the original model to quality; it can just as easily be tied to outcomes that include higher productivity, improved user satisfaction, and any others that can benefit from systematic execution.
And here we encounter our first maturity dilemma: consistency in execution says little or nothing about what outcomes are supposed to be consistently delivered. Should our AD organization optimize for quality, adherence to schedule and budget, or for productivity? The first is what NASA’s space shuttle program optimized for; the second is what the typical large external service provider optimizes for; the third is what the typical AD shop thinks it’s optimizing for. Which of these outcomes is the desired one for a given IT team?
The answer is dependent on what the enterprise needs, which can change over time. Twenty years ago most enterprises wanted high quality systems, the stuff we now call “systems of record” in Gartner’s applications development pace layering model; now most enterprises want rapid delivery, figuring that an 80% solution now is better than a 100% solution three years from now. An IT organization whose “maturity” defaults to rigorous methods for creating high quality above all else might find itself losing value in the eyes of peers throughout the enterprise when the goal shifts (as indeed it has for many of our clients).
The solution to the problem is to avoid discussing IT’s performance in terms of maturity, and focus instead on the outcomes that maturity is supposed to produce. Let’s assume for the sake of argument that desired outcomes for a given enterprise include delivering more projects, in shorter time spans, with less waste for both IT and non-IT personnel. As opposed to reporting to the executive team on AD “maturity,” the AD team can talk about percentage of projects delivered on time and budget, and increased value delivered as represented by reduced waste and higher yields on resources invested in business imperatives.
Any of these outcomes represents real value, as opposed to “maturity”, which is at best a proxy for value–a symbol, not the thing itself, and a symbol that doesn’t resonate with anyone outside IT. When you want to talk value, talk outcomes, not maturity. If the outcomes don’t resonate, the increased maturity that makes the outcomes achievable won’t either.
Category: value
by Richard Hunter | October 17, 2013 | 7 Comments
I’ve been thinking a lot lately about the “kernel” construct that Richard Rumelt proposes in his book “Good Strategy Bad Strategy.” According to Rumelt, the irreducible core of a strategy–the “kernel”–consists of a diagnosis of the environment an enterprise operates in, one or more policies that are intended to address the diagnosis, and coherent execution in support of the policies. IT governance looms large in coherent execution; every change in an enterprise that’s more than trivial eventually comes to IT, because nothing that’s more than trivial can be implemented without supporting technology.
In other words, the IT project proposal process essentially generates a running list of proposed and actual change in the enterprise. In fact, the IT project portfolio is the only place in the enterprise where all that change is visible at once. Smart enterprise leaders can use that fact to their advantage by turning the IT investment decision making process into a forecasting system for the success of enterprise strategies. Here’s how:
1) Ensure that any proposal for change that comes to the IT investment decision making process (a/k/a IT governance) has two explicit characteristics:
- It spells out outcomes in material, quantified, baselined terms, and
- It links those outcomes to the goals associated with specific enterprise strategic imperatives, of which there are probably no more than half-a-dozen at any point in time, assuming that the enterprise is genuinely clear on its strategy; great strategies don’t have 15 segments.
Investment portfolios aligned to strategic imperatives are useful for the latter purpose. The CFO or COO, not the CIO, is the appropriate party to ensure this basic investment discipline, because it’s not the CIO’s job to enforce investment discipline, whether IT is involved or not.
2) By adding up the outcomes, costs, and timeframes associated with projects within an investment portfolio linked to a strategic imperative, the enterprise can determine how far (hypothetically) the full range of initiatives will move the enterprise towards the goals specified by the imperative, at what cost, and within what timeframes. That knowledge can be put to good use in deciding how much and where to invest. Summing of these factors is relatively trivial if the basic information is included in project proposals, and can be done by a project management office, by personnel within IT or a sponsoring business unit, or by the CFO’s office.
3) When proposed projects are reviewed, approved, resourced, and started, the same approach can be used to determine how far the approved project portfolio (as opposed to the proposed project portfolio) will move the enterprise towards its goals. Comparing proposed project portfolios to approved portfolios can help to show whether the enterprise is investing enough, too little, or too much in pursuit of its strategies; for example, whether attractive investment opportunities are being left on the table because of arbitrary constraints on “IT spending.” (Calling an initiative supported by IT an “IT project” is like calling your exercise program “the Nautilus project”; yes, machinery is involved, but that’s not the point of the exercise.)
4) When approved projects are completed (and after a suitable waiting period to account for the lag in benefits that always follows any change initiative), actual outcomes can be measured, which will help both to refine forecasting going forward and to improve current forecasts.
Suppose, for example, that we have a strategic imperative to improve product quality in order to increase initial and repeat sales and reduce capital allocated to allowances for returns. If we can quantify, even hypothetically, the extent to which a given increase in product quality will increase sales and reduce allowances for returns, and we frame our project proposals in those terms, we can then use our IT-supported project portfolio for that strategic imperative to estimate how fast, how much, and at what cost we will be able to achieve these strategic goals with our current project initiatives.
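The roll-up described in steps 2 and 3 is simple enough to sketch in a few lines. Everything below is hypothetical: the project names, dollar figures, timeframes, and “goal movement” fractions are illustrative assumptions, not real data, and the field names are my own shorthand rather than any standard PPM schema.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    imperative: str       # strategic imperative the outcomes link to
    cost: float           # total investment, $M
    months: int           # expected time to full benefit
    goal_movement: float  # forecast movement toward the imperative's goal (0-1)
    approved: bool = False

# Hypothetical portfolio for the product-quality imperative
portfolio = [
    Project("Supplier defect tracking", "product quality", 2.0, 9, 0.4, approved=True),
    Project("Automated inspection", "product quality", 3.5, 12, 0.3, approved=True),
    Project("Returns analytics", "product quality", 1.0, 6, 0.2),  # proposed only
]

def rollup(projects, imperative, approved_only=False):
    """Sum cost and forecast goal movement, and take the longest timeframe,
    for all projects linked to one strategic imperative."""
    subset = [p for p in projects
              if p.imperative == imperative and (p.approved or not approved_only)]
    return {
        "cost": sum(p.cost for p in subset),
        "months": max((p.months for p in subset), default=0),
        "goal_movement": sum(p.goal_movement for p in subset),
    }

proposed = rollup(portfolio, "product quality")
approved = rollup(portfolio, "product quality", approved_only=True)
# Forecast goal movement left on the table by the approval decisions
gap = proposed["goal_movement"] - approved["goal_movement"]
```

The interesting number is the gap: if the approved portfolio moves the enterprise materially less far toward the imperative’s goal than the proposed one, someone should be able to say why those opportunities were declined.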
I said earlier that this approach offers a level of insight into the potential for success or failure of strategic imperatives that is available nowhere else. Even more importantly, this insight can be achieved with only minor adjustments (if any) to an investment decision-making process that is already present in the vast majority of enterprises. For many, the discipline of specifying initiative outcomes in material, quantified, baselined terms (as opposed to fuzzy measures of success like “improve collaboration”) will be the most difficult to master, but this can be expected to improve over time with management attention and team learning. In return, the enterprise gets a powerful early-warning mechanism for strategic success, not to mention a better way to think about its investments involving scarce IT and non-IT resources, and (ultimately) higher yields on those investments.
Most enterprises think about IT governance as a mechanism for allocating IT resources to “IT projects,” but that’s way too limited a view of what’s really going on. I hope I’ve made it clear here that not only is IT governance one of the most important enterprise mechanisms for coherent execution of strategy, it’s also the one that offers the best and most comprehensive tools for estimating and validating the outcomes that execution will deliver.
I’ve been talking about these ideas with CFOs, CEOs, and CIOs lately, and the reception has generally been favorable. I wouldn’t be surprised if Wall Street investment analysts figure this out at some point, and CIOs start getting calls from said analysts about what’s in the project portfolio.
Think about it, and let me know what you think.
Category: value
by Richard Hunter | July 3, 2013 | 1 Comment
My esteemed colleague Tina Nunno has been writing and presenting on the topic of CIO politics for years now, and her book on this topic is well underway. Tina needs help naming the book, and I’m helping to spread the word. Follow the link below to help Tina choose between two titles (and get a free gift–a very nifty piece of recent Gartner research). Thanks!
Category: Uncategorized
by Richard Hunter | July 2, 2013 | 4 Comments
I’ve been thinking lately about Matt Honan, who wrote almost a year ago in Wired about his experiences with a hacker named Phobia who hacked Honan’s Apple account, and soon after used his access to the account to wipe out Honan’s electronic memories–everything Honan had acquired and stored on his Apple devices. Honan later was contacted by Phobia, and in the course of their correspondence Honan asked why Phobia had done him such grievous harm:
I asked Phobia why he did this to me. His answer wasn’t satisfying. He says he likes to publicize security exploits, so companies will fix them. He says it’s the same reason he told me how it was done. He claims his partner in the attack was the person who wiped my MacBook. Phobia expressed remorse for this, and says he would have stopped it had he known.
“yea i really am a nice guy idk why i do some of the things i do,” he told me via AIM. “idk my goal is to get it out there to other people so eventually every1 can over come hackers”
The ethos expressed in those few words is close to monstrous. Here’s what it boils down to:
1) If I hack you, it’s your fault for being unprotected. (As Clint Eastwood said in “Unforgiven” after being taken to task for shooting an unarmed man, “He shoulda armed himself.”) Therefore…
2) I have the right to hack anybody who’s vulnerable to hacking. And that’s a good thing, because…
3) I serve a higher purpose when my hacking wrecks the lives of people who are too careless or innocent to protect themselves. As Phobia wrote to Honan, “my goal is to get it out there to other people so eventually every1 can over come hackers”.
Phobia is confused, to say the least. You can’t simultaneously not know why you do the things you do, and purport to have a goal that represents the justification for doing what you do. Beyond that, there’s an obvious hypocrisy in Phobia’s comments. If you claim to be concerned for the welfare of others—for example, to care about whether they can protect themselves from some kind of harm—you don’t start your relationship with those others by launching a brutal surprise attack. In a civil (read: moral) society, you don’t attack strangers on the street (or anywhere else) on the pretense that it’s a lesson to the vulnerable to make themselves less vulnerable.
Indeed, civil societies reserve special scorn and punishment for those who attack the most vulnerable. As one obvious example, pedophiles are not commended for “teaching” little kids to protect themselves from sexual predators. Responsibility in a civil society lies with the perpetrator, not the victim, not God.
Of course it’s important for all of us to take steps to avoid being victimized in all sorts of ways, but that’s not the point. My being weak or defenseless does not make my attacker noble or justified, whether or not the attack is successful. The moral measure of a person in a civil society is how well he or she treats vulnerable people, not how successfully he or she takes advantage of their vulnerability. (I won’t pretend that there aren’t plenty of people out there who have achieved fame and fortune in modern society by taking advantage of the vulnerable; I’m saying that such people don’t get to claim the moral high ground, which is what’s happening when a hacker says he does what he does to make the world safe from hackers.)
In this sense, hackers like Phobia are not vanguards of a new cyber-civilization, as Phobia seems to fancy himself. They are the precise moral equivalent of a street thug ambushing an old woman, or a man throwing acid in his ex-lover’s face (a recently popular activity in some parts of the world, and one that is also intended to teach the victim a lesson). They are representative of the worst tendencies in humanity, not the best, and it’s discouraging that some very bright young people (Phobia is 19) have convinced themselves that victimizing the vulnerable is representative of high ethics.
I doubt that Phobia thinks of himself as religious, but his justifications are eerily similar to the tenets of any number of fundamentalist religions. Many fundamentalists will tell you that if you’re attacked—by a terrorist, a cyclone, an angry dog, a fellow fundamentalist, or anything else—it’s God’s will (which is another way of saying “it’s your fault”). If the fundamentalist hears from God (or the little voice inside his head that purports to be God) that you should be attacked, the attack will begin as soon as is practically feasible. After you’re righteously punished, you will be more likely to heed God’s will; if not, God’s will is still satisfied, it being the will of God that the faithful seek relentlessly to destroy unbelievers. No fundamentalist ever seems to think that it’s God’s will to be kind to everybody. Nor does any hacker. Once you’ve established a higher purpose that supersedes the mere interests of any individual, kindness has nothing to do with it.
Civil societies rely on due process and rule of law, which were developed painstakingly over centuries precisely to protect individuals and societies from arbitrary attacks by the self-empowered, be they bandits or kings. Before due process and rule of law, life was (as Hobbes put it) nasty, brutish, and short, which is a pretty good description of a typical cyberattack (although of course modern attacks are recently tending to the ongoing and persistent, as opposed to the episodic).
Do hackers really want to return society to the law of the jungle? Do they really want every man and woman to decide for themselves what “right” means, in utterly exclusive and non-negotiable terms, and to act without further ado to destroy those who stray from the path of righteousness? Do they really want to add to the world’s sum total of pain and anguish, of which neither will ever be in short supply, with or without the help of hackers?
We live in an era in which authorities of all sorts have been revealed to be both unethical and incompetent, and it’s not surprising that many in this era have anointed themselves as moral authorities with the right to mete out drastic punishment as they see fit. But that doesn’t make it right.
If you’re a hacker, and you disagree with this argument, write and tell me why.
Category: IT risk
by Richard Hunter | August 6, 2012
In the days since Knight Capital Group suffered a “computer glitch” that cost the company $440M in losses, I’ve been discussing with my colleagues how this catastrophe might have been prevented.
Some of my colleagues have argued that the failure was basically about IT governance–that the IT team at Knight was responsible only for implementing flawed trading algorithms specified to them by their non-IT colleagues. The argument essentially boils down to this: the fault lies with those who did not understand, and therefore did not adequately specify or test, what would happen when the technology operated according to the rules it was given. The implication in this argument is that disaster could have been prevented if the people involved had made better decisions in the requirements/specification/design/testing/implementation/etc. of the software.
My argument is that none of that is sufficient to prevent disaster—including future disasters—and that focusing on the technology is the only effective approach to solving the problem, meaning to reduce the risks to manageable levels. Indeed, I would argue that the risks of high speed trading systems are intrinsic, ungovernable, and potentially threatening to all participants in the markets. “Intrinsic” means that the problems these systems supposedly model with logical rules are beyond the ability of logic to solve. “Ungovernable” means that the risks introduced by these systems can’t be resolved by the tools of governance, largely because of the intrinsic logic problem. You cannot govern reality away; you operate within the bounds of reality, or reality teaches you to do so, more or less brutally and directly.
Markets by their nature produce unforeseen circumstances. It is as impractical to expect a piece of technology to respond appropriately—or even predictably–to every unforeseen circumstance as it is to expect a human being to do so. When technology is empowered to execute massive trades instantly, guided by rules based on various combinations of market circumstances, bad things can be expected to happen as soon as unforeseen circumstances arrive—in fact, at the very moment unforeseen circumstances arrive.
What is happening now on Wall Street is that in pursuit of competitive weapons, firms are empowering their machines to make bigger and bigger decisions faster and faster—literally, to trade millions or hundreds of millions of shares in small fractions of a second. It’s an arms race, and the weapons in question are being deployed at many, many firms, each of which has its own views on what constitutes “acceptable risk.”
If we define this as a governance issue, then the solution must be for the firms involved to make smarter decisions about the risks. The first problem with this approach is, as Harvey Keitel said in “Thelma and Louise,” that brains will only take you so far (to which he added that luck always runs out, something every executive should remember every day). High speed trading systems create severe risks that are not only unanticipated, but which realistically can never be anticipated in an environment where technology is continuously pushed to the limit.
In short, better governance won’t solve the problem because the people involved in governance are no more able to anticipate all possible failure modes than the people involved in designing and building the systems. Even if they were, it scarcely needs saying that Wall Street traders in general are heavily incented to take risks, and that they are often able to make others pay the price for risks gone bad–circumstances that do not inspire confidence in the “governor’s” ability to manage risks down. Finally (for the governance argument), there is no reason to believe that all players will adopt “good” (meaning in this case risk-aware) governance policies–and a single point of high-speed trading failure can potentially impact many players in the markets.
If you take the point of view that disasters such as this are the result of using technology in a way that it should not be used–to solve a problem that computer logic cannot solve, at least in the current state of the art–then the solution is to prevent the technology from being used in that way, either by banning it outright or by heavily taxing the proceeds of trades that are too short-lived to be called “investments.” I appreciate that regulating high-speed trading systems out of existence one way or another is a drastic approach. I believe that the risks–which extend to market participants far removed from the businesses that create these events–justify the means. There’s no more reason to allow individual trading companies to implement technology that potentially destroys markets than there is to allow private citizens to carry nuclear weapons. In both cases we could argue that careless or deranged (or whatever pejorative you like) individuals are the real problem. I agree that this argument is valid to a point; triggers don’t pull themselves, and we’d all be better off if everyone behaved decently. But positioning “better governance” as the solution to the problem doesn’t work when the consequences of one failure of governance are so severe.
As the saying goes, one atomic bomb can really ruin your day. That’s why we’re all glad that atomic bombs are not for sale to anyone who wants one, and why we should really, really question why we need automated trading programs on Wall Street.
There may be other solutions to this problem, and I’d be delighted to hear from readers about what they think might work. One thing I’m certain will not work is to continue on the current path, with the potential for bigger and bigger disasters. (But if you’d like to argue that point, feel free to do so.)
Category: IT risk
by Richard Hunter | August 2, 2012 | 1 Comment
My colleagues at Gartner and I have recently been discussing the importance of IT risk, often in the context of Cloud adoption, where the discussion is usually about the extent to which risks in the Cloud will slow adoption. (Our basic take on that question is some, but not enough to significantly impede the march to Cloud.) Some of my colleagues are skeptics about the potential impact of cloud failures; being a risk maven, I’m pretty bearish on the topic. The question my more bullish colleagues often ask is: how bad could it get, anyway? Has any company ever failed because of an IT risk come to fruition?
The answer is yes, of course. CardSystems Solutions lost 95% of its revenues within 3 weeks of the breach it announced in 2005, and was sold shortly thereafter for a fraction of its pre-incident worth. Comair didn’t fail as a result of the December 2004 incident in which its crew scheduling system went down on Christmas Eve, stranding an estimated 30,000 passengers during the Christmas holidays; however, the company lost 7% of its revenue for the year, a pretty big deal for an airline, and was the subject of a DOT investigation, which no one much enjoys. And the president of the company lost his job; not the CIO, the president. Most businesses would consider that to be a pretty steep price for IT failure.
The interesting thing isn’t that some companies have failed when their IT failed; the interesting thing is that the risks are almost certainly increasing. Plenty of executives don’t yet understand that while IT spend only represents 5% or less (on average) of enterprise revenues, the impact of IT on revenues is far higher than that. To put it another way, many executives don’t yet realize that their businesses don’t run much, if at all, without IT, and when IT is misused or fails, the impacts can be very large indeed. The recent events involving the Knight Capital Group make it clear how far we’ve come in terms of the importance of IT risks.
According to this NY Times article, Knight Capital Group lost $440 million on Wednesday in a matter of a few minutes when a “computer glitch” resulted in the purchase of a very large pile of stocks on behalf of the company. (The losses were incurred when the stocks were sold.) I quote the Times:
“In its statement, Knight Capital said its capital base, the money it uses to conduct its business, had been ‘severely impacted’ by the event and that it was ‘actively pursuing its strategic and financing alternatives.’”
The Times added: “The losses are greater than the company’s revenue in the second quarter of this year, when it brought in $289 million.” The article goes on to quote Christopher Nagy of KOR trading as saying that this might be “the beginning of the end for Knight.”
So the basic story is that Knight put in a new trading system; the system went haywire; the malfunction produced $440 million in losses in less than 5 minutes; and the company may fail as a result. Let there be no doubt: in the modern era companies fail because of IT misuse or failure. Period. This is not the same as civilization failing, of course. But it’s pretty serious for the owners and employees (and maybe customers) of Knight Capital Group.
What this means is that it’s more important than ever for IT professionals to make the connection for the rest of the executive team between what IT does and what everybody in the enterprise does with IT–to identify clearly what business outcomes might result from an IT failure. It’s possible that doing so would not have prevented this incident; I have no idea how many tests would have been necessary to discover and eliminate the “glitch” that cost Knight Capital Group $440 million. But I wonder whether the executive team at Knight was fully aware of just how bad a “computer glitch” could be–and I know that executives at many other companies are not.
Category: cloud IT risk
by Richard Hunter | July 18, 2012
This is my first blog for Gartner. Anyone who’s read the books I co-wrote with George Westerman on Harvard Business Press, “IT Risk” and “The Real Business of IT,” knows that the issues that interest me most revolve around IT’s value to the enterprise. Since a blog is first and foremost a means of personal expression, I expect to write plenty here about IT value and its reflection in IT risk.
There’s plenty to write about in those terms. IT has become an essential lever for creating value in every enterprise of any size worldwide. As the importance of IT in creating value increases, so do the risks. This morning I was alerted by Paul Proctor, one of my colleagues, to a video of a fireworks show in San Diego on July 4th in which a purported computer glitch fired off 18 minutes worth of fireworks in about 30 seconds. (By the way, I found the 30-second version to be truly thrilling, to an extent far surpassing the usual fireworks show. Only the last 30 seconds are really exciting in most fireworks shows anyway.) When people lit fireworks by hand, you didn’t get that kind of outcome.
We are at the beginning of major platform and demographic changes in IT, and the role of IT organizations in the enterprise is changing even as the value they provide goes up and up. It’s an exciting time to be in IT, in fact the best time ever to be an IT professional, and I’m looking forward to writing about it. So stay tuned.
Category: IT risk value Tags: risk, value
by Richard Hunter | July 18, 2012 | 3 Comments
I hear a lot lately about cloud as a means to “transformation,” and when I hear that word my personal value-meter kicks in, especially since so many people use the word “transformation” in ways that conflict with its definition in IT portfolio management (from whence the run-grow-transform model emerged in the early 2000s). In this post, I’m going to lay out what “run”, “grow”, and “transform” mean in this context, and how they relate to value.
By definition, to “transform” means to enter new markets with new value propositions for new customer segments. To “grow” means to enhance business performance in established markets serving established customer segments with established value propositions; to “run” means to carry out essential enterprise activities that do not connect directly to a particular customer segment (or, to put it another way, to a particular revenue stream).
When Apple entered the iTunes business, or IBM went full-tilt into the services business, those were transformations. When Apple brings out the iPhone 5, or IBM brings out a new mainframe, that’s growth, because the market, the customer segment(s), and the value proposition are well-established. When Apple or IBM add capacity to data centers that support a wide range of enterprise activities, that’s running the business.
Lots of enterprises and vendors use the word “transformation” to mean a big change, but extent of change isn’t what defines a transformation. It’s not a “transformation” when a supply chain’s costs are dramatically reduced. That’s “growth” in the run-grow-transform model unless a new market is being addressed with a new value proposition. So for example, if Wal-Mart goes into the logistics business, taking advantage of its supply chain capabilities to offer supply chain services to other businesses, that’s transformation (new market, new value prop, new customer segment); if Wal-Mart restructures its supply chain to deliver lower cost, even drastically lower cost, that’s growth (lower cost of doing business/higher margins/more capability in established markets). The value of both growth and transformation investment is ultimately expressed in terms of ROI, which is feasible because we can connect the investment in change to a paying customer (and so have actual returns for the investment).
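The definitions above reduce to a simple decision rule. Here’s a sketch; the function name and boolean flags are my own illustrative shorthand, not an established taxonomy.

```python
def classify(new_market: bool, new_value_prop: bool, new_segment: bool,
             customer_facing: bool = True) -> str:
    """Classify an investment per the run-grow-transform definitions above."""
    if not customer_facing:
        # No particular customer segment or revenue stream: run the business
        return "run"
    if new_market and new_value_prop and new_segment:
        return "transform"
    # Established market, segment, and value proposition: enhance performance
    return "grow"

# The Wal-Mart examples from the text
logistics_services = classify(True, True, True)        # -> "transform"
cheaper_supply_chain = classify(False, False, False)   # -> "grow"
data_center_capacity = classify(False, False, False, customer_facing=False)  # -> "run"
```

Note what the rule excludes: the size of the change appears nowhere. A billion-dollar supply chain overhaul still classifies as growth if the market, segment, and value proposition are established.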
The value of run-the-business stuff is expressed in terms of price-for-performance, not ROI, in particular because there is no revenue stream (returns) to which run-the-business services can be connected. In that sense, email in the cloud, which some enterprises think of as “transformational,” is anything but—it’s simply about achieving competitive price-for-performance for an essential enterprise service that can’t be tied to a particular revenue stream, which is a classic run-the-business value proposition.
As my colleague Matt Cain has pointed out in his research, the price-for-performance ratio for cloud-based email may not really be all that superior, depending on how “performance” is defined. Low prices for cloud services currently may reflect low levels of provider investment in security/availability/reliability/upgradeability/etc. Enterprises pay for that lower performance over time, by supplying mechanisms to fill the gaps in the performance that their own IT team used to provide. Internal IT organizations long ago priced the costs of high performance into their services, and cloud providers sooner or later will do the same, meaning that at some point cloud pricing for run-the-business services will approach internal provider unit costs.
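A minimal sketch of that point, with entirely made-up numbers: the cloud-versus-internal comparison is only meaningful once the enterprise’s own gap-filling spend is added to the provider’s sticker price.

```python
def effective_cost(base_price: float, gap_fill: float) -> float:
    """Cost per mailbox per month once the enterprise pays to fill
    performance gaps (security, availability, upgradeability, etc.)."""
    return base_price + gap_fill

# Hypothetical monthly costs per mailbox -- illustrative assumptions only
cloud = effective_cost(base_price=4.00, gap_fill=2.50)     # low sticker price, enterprise fills the gaps
internal = effective_cost(base_price=7.00, gap_fill=0.00)  # performance already priced in
advantage = internal - cloud  # shrinks as providers price performance in
```

On these numbers the cloud option still wins, but by far less than the sticker prices suggest, and the advantage disappears as providers invest in (and charge for) comparable performance.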
Until that happens, remember to focus first on performance in any negotiation on services, because performance drives price–and if performance requirements can’t be met, price is completely irrelevant.
Category: cloud value Tags: cloud, grow, run, transform, value