by Richard Hunter | January 28, 2015 | Submit a Comment
Privacy is not a commercial commodity. It’s a fundamental human right. A fundamental human right by definition can’t be bought or sold.
These comments are inspired by a post written a few months ago by Chris Messina, an ex-Google employee, about privacy and Google+, a post I've obviously been thinking about for a while. In that post Messina refers to privacy as a "4-letter word" and offers a few novel ideas about how privacy might be implemented as a commercial commodity in the 21st century.
Here’s a relevant sample from Messina’s post:
This word, privacy — it's a problem.
It’s one of those words that puts a stop to useful conversations and prevents us from actually engaging with what’s going on in our digital lives. It obscures and glosses over.
Maintaining your privacy doesn’t strictly mean keeping people from having data or information about you. Certainly not preventing yourself from having access to data about yourself. Privacy is about the ability to be left alone, or about not being watched, if you don’t want to be. Which is fine. Turn on Do Not Disturb. There — you’ve got a bit of your privacy back. But that has nothing to do with the huge amounts of data you’re still producing and is being tracked.
So, given that expectations of privacy are changing (or being changed), I challenge you: what if you want to be watched? What if you were offered an outsize amount of value in exchange for allowing someone else to watch you? What would you do? Who would you want to watch over you? Who would you want to look after you and your best interests? Who would you trust? Do you feel like you have reasonable choices in today’s marketplace?
Privacy is about the ability to be left alone, or about not being watched, if you don’t want to be: these words echo Warren and Brandeis’s influential paper on privacy from the late 19th century, in which privacy was defined as the right to be left alone. The word ability instead of right is an interesting substitution, implying the real modern issue in privacy, which is control: the individual’s ability to control others’ access to him or herself (or to information about the individual, which in many ways amounts to the same thing). In other words, privacy is a kind of power: the power of the individual to resist intrusion, and hence external control.
Rights are intrinsic, not purchased. Individuals can lose abilities for all sorts of reasons, but rights don’t go away. To define privacy as a right means that it is not and cannot be in any sense a commercial transaction. Imagine some other fundamental right—say, freedom of speech or worship, or protection against self-incrimination—as the commercial commodity described in the quotes above, and ask yourself whether those paragraphs make sense in those terms.
Is it possible that the idea of a human right as a commercial entity is the natural consequence of a philosophy that seems widespread lately in Silicon Valley: the idea that human beings are essentially a resource to be exploited? In the 19th and 20th centuries industrialists exploited natural resources; one result was rampant destruction of the natural environment (which is ongoing, of course, in many places to this day). The natural resource that’s being exploited by techno-industrialists in the 21st century is humanity and all its works. The exploitation may be immediate (as in the case of certain “sharing economy” companies that treat their opportunistically acquired workers as disposable parts) or secondary, as in the case of Google and Facebook (who use personal data to sell advertising, as opposed to extracting value from the individual customer directly), or YouTube (which offers a range of intellectual property, sourced from legitimate owners and not, to the same end).
To say that privacy “is one of those words that puts a stop to useful conversations and prevents us from actually engaging with what’s going on in our digital lives. It obscures and glosses over” is simply to disparage privacy as a legitimate human interest—a right—just as more-traditional industrialists have argued for decades that it’s not their concern if acid rain from coal-fired plants in Ohio damages New York’s population and environment. Traditional industrialists created toxic chemical waste; information-age industrial waste includes identity theft, omnipresent surveillance by governmental agencies, and the increasing inability of content creators in many fields to make a living from their content. As with traditional industrial waste, this waste is left to individuals and societies to deal with. We took your privacy? We polluted your lake? Who cares? Grow up. Stuff happens. Not my problem. I’m only responsible to my bottom line.
Yes, conflicting interests impede progress (though “progress towards what?” is the unanswered question). Some of the competing interests in this case are complex and difficult to reconcile. That doesn’t make those interests illegitimate or irrelevant. The rights of individuals are always relevant, not least because they can’t be bought, sold, or bartered away, ever.
The proposed remedy to that conflict—What if you were offered an outsize amount of value in exchange for allowing someone else to watch you?… Who would you trust? Do you feel like you have reasonable choices in today’s marketplace?—is simply astounding, and not in a good way. The ideas that a) the only choice available is who will exploit you, and b) this is a marketplace issue as opposed to a fundamental human rights issue are both spurious. In particular, the idea that privacy can only be exercised in terms of marketplace choices—and will, by implication, be protected by marketplace providers from violation by other actors, such as governments and criminals—is astoundingly naïve, the kind of thing you’d expect from someone who never heard of the 20th century or read “1984.” Further, the idea that any—any—marketplace provider will manage personal information first and foremost on behalf of the person(s) the data represents is utterly contrary to actual business practice, not to mention history.
Finally, the idea of privacy as a marketplace choice ignores something utterly fundamental to any ethical commercial transaction: the awareness of both parties to the transaction of the real value exchanged. In the 1620s the Dutch “bought” Manhattan Island from the local Native Americans for about $24 worth of baubles and beads (or so the legend goes). Assuming for the moment that the legend is true, was this transaction ethical? I don’t think so. The Native Americans involved didn’t have a concept of land as something that any individual could own; they were literally unaware of what they were selling, and of course the Dutch did little to enlighten them. (“After we give you these beads, you all have to stay off the island forever unless we say you can come here, okay? That’s what this deal means” might have been an appropriate caveat.)
The typical citizen now has about as much awareness of the ultimate value of their privacy (and the information that said privacy is supposed to protect) as those Native Americans had of the concept of “owning” land. If you don’t know what you’re selling, how can you make an informed choice to sell? The mismatch in this case between the respective powers of the buyers and sellers is too extreme to make the transaction ethical; it is inherently exploitative.
It remains to be seen how privacy for individuals will be made viable in the 21st century, if indeed it can. Like I said above, it’s a complex problem, and one that technologists have barely addressed. Starting from the point of view that privacy is a transaction, as opposed to a fundamental right, is not a viable approach.
I expect plenty of disagreement from plenty of places on this point of view, and I’m delighted to hear it. I’d be even more delighted to hear a proposal for how privacy-as-control can effectively be implemented in the 21st century, but I’m not expecting that, at least not right away, because no simple solution (short of a return to the 19th century, which is of course no simple thing either) is likely to work.
Category: privacy Tags: human rights, privacy, silicon valley
by Richard Hunter | January 26, 2015 | 3 Comments
In an announcement titled “Submission of an application for approval of extension of deadline to file the quarterly securities report for the third quarter of the fiscal year ending March 31, 2015” (see the full statement here), Sony Corporation notes that:
“… today that it has filed with the Financial Services Agency of Japan (the “FSA”) an application for approval of the extension of the deadline for Sony to file its quarterly securities report for the third quarter of the fiscal year ending March 31, 2015, pursuant to paragraph 1 of Article 17-15-2 of the Cabinet Office Ordinance on Disclosure of Corporate Information, etc.”
The announcement goes on to detail the reasons for the delay, all of which are the result of the cyberattack on Sony Pictures Entertainment that took place in late 2014:
“In November 2014, Sony Pictures Entertainment Inc. (“SPE”), a consolidated subsidiary of Sony that is reported as the Pictures business segment, identified a cyberattack on SPE’s network and IT infrastructure. As a result of the cyberattack, which has been now recognized as a highly sophisticated and damaging cyberattack, a serious disruption of SPE’s network systems occurred, including the destruction of network hardware and the compromise of a large amount of data on these systems. In response to this cyberattack, SPE shut down its entire network. Since that time, SPE has worked aggressively to restore these systems. However, most of SPE’s financial and accounting applications and many other critical information technology applications will not be functional until early February 2015 due to the amount of destruction and disruption that occurred… However, even with the anticipated restoration of these applications in early February 2015, SPE will not have sufficient time to close its financial statements in time for submission of the quarterly securities report in the middle of February 2015.”
So Sony’s “critical information technology applications will not be functional until early February 2015”—three months after the onset of the attack. Nevertheless, when we read to the end of the filing, we see this:
“While Sony continues to evaluate the impact of the cyberattack on its financial results, it currently believes that such impact is not material.”
I was frankly amazed to see that statement. On November 18, 2014, according to Variety, “Sony CEO Kazuo Hirai forecast that revenue at Sony Pictures Entertainment would be in the $10 billion to $11 billion range by 2017-18 fiscal year. That compares with $8.1 billion expected for the current full-year fiscal period ending March 2015.”
So we are in effect being asked to believe that an 8-billion-dollar-plus corporation with over 5,000 employees in the USA can run for three months with its information systems turned off without producing a material impact on its business. I suppose that this might be true. If so, I don’t see why Sony is bothering to restore its systems. If you can live without the systems for three months and produce no material impacts on your financials, I’d suggest that those systems have little or no value, and you might as well turn them off forever.
This logic is in apparent conflict with the very idea of “critical information technology applications”. If an application (or anything else) is “critical,” doesn’t that mean that things go substantially wrong when it’s absent? But perhaps the turmoil is largely invisible to senior managers. Deal-making over the telephone is a far cry from much of the activity that takes place in a large corporation. I wouldn’t be surprised if the deal-making is still underway. On the other hand, I’d be very surprised if business operations at Sony were unimpeded by the loss of “critical information systems,” and to the extent that those business operations generate revenue, one can imagine that there is an impact.
I guess we’ll find out. If Sony decides to eliminate (nearly) all spending for information systems going forward, we’ll know that they learned in this incident that their information systems really create little or no value—and everyone in IT will need to pay close attention, because that discovery will upend 50 years of increasing dependence on information systems in every industry worldwide. Never mind Sony—information security teams everywhere will be politely put out to pasture, because there’s no point spending millions of dollars annually to protect the enterprise against a trivial harm. “Don’t worry, be happy” will be the new mantra for the few IT professionals required to run the few surviving enterprise information systems—you know, the ones that support executive tablets and handhelds, where the real work of the enterprise is done.
Or it may turn out that Sony indeed suffered material damage from the incident, even if the damage won’t be fully exposed until months have passed. It’s easier for me to believe that, because I truly believe that IT creates value. (You don’t write a book titled “The Real Business of IT: How CIOs Create and Communicate Value” unless you either believe it, or want to write the world’s shortest book.)
One thing I know for sure is that in a publicly traded corporation, eventually, the truth of this matter will out. I look forward to that with great anticipation. Does IT create value, or not? That’s the question. The Sony case will sooner or later provide an answer, or at least an object lesson.
Category: IT risk privacy value Tags: IT risk, IT value, Sony hack
by Richard Hunter | May 5, 2014 | 4 Comments
The Associated Press reported today that Target’s CEO resigned, largely as a result of the damage done to the company in the wake of the massive credit card breach reported five months ago. You can read the AP’s full story here: http://www.wjla.com/articles/2014/05/target-ceo-fired-following-last-year-s-security-breach-102788.html
The gist of the story is contained in one paragraph: “He was the face of the public breach. The company struggled to recover from it,” said Cynthia Larose, chair of the privacy and security practice at the law firm Mintz Levin. “It’s a new era for boards to take a proactive role in understanding what the risks are.”
Boy. I’ll say. The last time I heard of a C-level executive (other than a CIO) who lost his job because of an IT failure was January 2005. The President of Comair resigned three weeks after the company’s crew scheduling system failed on Christmas Eve 2004, stranding 30,000 passengers for three days during the busiest travel (and family) season of the year, an incident that also cost the company seven percent of its revenue for the year, not to mention a Federal government investigation.
But that was then, and this is now. In those days an IT-related incident with that level of consequence was one-of-a-kind. Not any more. Now IT risk is a societal issue worldwide–not an enterprise issue, or even an industry issue, but a global issue for anyone who uses a computer keyboard, which increasingly is everyone, period. It’s official: failure to take every reasonable action to ensure the security of computing resources can get a CEO fired.
Survey data from Gartner analysts such as Laura McLellan and Kurt Potter indicate that somewhere in the neighborhood of a third of all enterprise spending on IT occurs without input or advice from the IT organization, and that this spending is increasing faster than the traditional IT organization’s budget. The increasingly widespread availability of cloud computing resources makes it easy for anyone with a budget to acquire IT-related services pretty much on demand. Whether that’s a good thing depends on your point of view. IT professionals, who are used to vetting IT services for performance issues that include security, availability, and reliability, tend to take the point of view that it’s terrifically risky, and therefore not a very good thing; marketing and sales professionals, who often see IT’s apparent obsession with traditional performance virtues like security as obstructionist, tend to think otherwise. IT professionals know that these things can go horribly wrong; the new buyers haven’t had enough experience with IT to know that. The new buyers value speed-to-access above all. And why not? What could go wrong, after all? Doesn’t the stuff work?
The newly apparent answer is that the technology works until it doesn’t, and it might stop working when somebody with ill intent makes it his business to break it. If and when it fails, a CEO can pay the price. You can bet that CEOs everywhere sent out a barrage of memos to their Chief Marketing Officers today asking about what they’re doing to protect all the stuff they’re running in the cloud. The answers are likely to range somewhere between “not much” and “nothing.” Such answers were probably acceptable last week. They’re not acceptable now. Not if the CEO can lose her job over it.
I suspect that a lot of the non-IT professionals who are carefree buyers of IT services, in the cloud or elsewhere, are about to find out that the party is over, and from this point on their IT purchases are going to be subject to a new and much higher level of scrutiny. That doesn’t mean that enterprise IT spending outside the IT budget will stop. It might not even slow down. But the days when anyone in the enterprise could throw down a credit card and put a bunch of sensitive data anywhere they liked without having to demonstrate due diligence in protecting that data are probably drawing to an imminent close. Sarbanes-Oxley compliance became an urgent enterprise project when failure to comply meant potential jail time for the CFO and CEO. Managing IT risk for every IT purchase is about to become an urgent priority for similar reasons: the real costs, and the parties who will pay them, are now blindingly obvious.
Category: cloud IT risk Tags:
by Richard Hunter | April 7, 2014 | 3 Comments
The center cannot hold.
W.B. Yeats, “The Second Coming”
For roughly 50 years, IT organizations have been structured according to what my former Gartner colleague Mark McDonald called the “dominant model”: a pattern that was so widely adopted that it was effectively ubiquitous. In this model, IT organizations were designed above all to provide reliable and predictable execution at scale, and every activity of the IT organization, from exploration of a problem space to delivery and ongoing management of a solution, was painstakingly ordered and executed to that end.
So dominant was this model that for decades it was possible to take a professional working in an IT team in any given enterprise, industry, or geographic location, and change any of those factors without interrupting that professional’s effectiveness for more time than it took to locate a new desk. It was an extraordinary uniformity for an industry that has come to see itself as a key enabler of change. But the dominant model wasn’t really about change. It was about tightly knit processes whose fundamental characteristics were the antithesis of change: stability, accuracy, security, regulated throughput. Change was tolerated only to the extent that it produced improvement in price-to-performance ratios for these specific outcomes. Change that threatened those outcomes—such as the introduction of personal computers in the 1980s, the increasing ubiquity of personal computing and communications devices, and the purchase of cloud computing services by marketing teams—was and is resisted by many, if not most, IT organizations.
This model is coming to an end. Of course it is innovation in technology that is the cause. The dominant model could not survive the explosive spread and diversification of information technology that has occurred worldwide in only a few decades. The uniformity of culture and technology that preceded the Nexus of Forces—Gartner’s shorthand term for the unprecedented democratization of powerful technologies worldwide that began with the exponential growth of the Internet in the 1990s, and is proceeding now with technologies that include social, mobile, cloud, and big data—was key to the success of the dominant model. It is not at all clear what will follow the dominant model now that it has been largely if not entirely eviscerated.
IT’s Dominant Model is not the only one dissolving
What is even more important is that these same forces are eviscerating the dominant models for societies at the same time. It may in fact be that the confusion reigning in IT organizations as they seek a new model for purpose and structure is simply a reflection of the confusion that reigns in global societies as they struggle with the impact of the Nexus.
Commentators ranging from Stratfor to Tom Friedman to (probably) your next door neighbor have observed that there is a crisis in global leadership and deep fragmentation in bodies politic. In my book “World Without Secrets,” published in early 2002 in the wake of the 9/11 attacks, I discussed the rise of the “Network Army,” a form of social and political movement based on shared (if often niche) values and beliefs that transcend geography and nationality, oriented to action, and enabled by instantaneous, ubiquitous communications and access to information. Our bodies politic are rapidly devolving into network armies that include fundamentalist libertarians in the Iowa caucuses, right-wing nationalist street gangs in Ukraine, and a myriad of actors in the Arab Spring uprisings. They are everywhere, and wherever they are found they are in no mood for compromise.
In the early days of the Internet many believed that widespread access to information would create a new age of global harmony and prosperity. How otherwise could Google possibly justify the belief that it is not evil? But it is increasingly apparent that something very different has happened. Our barely-born technologies for information manipulation and communication have contributed to, if not created, an enormous rift between social, political, economic, and military elites and the peoples they supposedly serve. The free flows of information that were supposed to create common good have instead helped to consolidate wealth and power on an unprecedented scale, leaving the disenfranchised everywhere to battle it out among themselves for the remaining scraps with little mediation from their supposed leaders.
21st century IT industrialists frack the human ecosphere, extracting value and leaving society to struggle with mountains of waste that include rapidly burgeoning cyber-crime, just as 19th-century industrialists strip-mined the planetary ecosphere and left the local population to deal with ruined landscapes and poisoned rivers. Abandoned and openly manipulated by elites, ordinary citizens worldwide react by turning ever more stubbornly inward to values and beliefs that a massive and increasing flood of information from untrusted sources would otherwise call into greater question with each passing moment. Attitudes substitute for thoughtful consideration; slogans that represent purposeful oversimplifications of complex and often self-contradictory ideals substitute for debate. Political opponents do not merely disagree; they have no understanding of the premises behind their opponents’ positions, and therefore no ground for common action for the common good. Winner-take-all street fights replace bargaining and pragmatic change. In this land ruled by network armies there is no perceived time or space for pragmatic change, only for eager exploitation of temporary commercial, political, or ideological advantage.
Will Innovation End?
This extraordinary age of global discord is fueled by rampant, accelerating innovation that has delivered technology of unprecedented power to multitudes worldwide. It is possible that it will only end when increasingly powerful elites decide that innovation is over.
Cycles of change have already accelerated far past the point where they can be assimilated without substantial pain by humans and the planet we inhabit. Yet it is an article of faith among many, including most of my colleagues at Gartner, not to mention widely quoted gurus like Ray Kurzweil, that continuing acceleration of innovation is a given.
I doubt it. As my first mentor at Gartner, Mike Braude, once said, “When everybody believes something, that’s very good evidence that it’s not true.” Innovation has flourished in our era because new industrial and political elites have found it to their advantage to make it do so. Innovation can end when those elites or counter-elites decide that more change is dangerous to their interests.
An end to real science and innovation was a key element of two of the greatest visionary masterpieces of the 20th century, Frank Herbert’s “Dune” and George Orwell’s “1984.” In those books it was political, not economic, elites that rang the bell. But economic elites may easily substitute. Currently we have the spectacle of gambling mogul Sheldon Adelson spending millions, perhaps billions eventually, to influence gaming industry players and regulators against Internet gambling. That’s transparent opposition to industry innovation, and it fundamentally has nothing to do with what’s better or worse for the industry’s customers, for whom the “benefits” of gambling are in any case arguable. It’s all about who controls the channels, and monopoly never operates to the customer’s benefit.
Will a new dominant model emerge? Maybe.
Humans crave stability even more than they crave novelty, and ultimately a new—or revived—dominant model is likely to emerge. The one that’s emerging now at the societal level seems to be a society in which vast wealth and consolidated industry power openly rule all by setting and enforcing de facto rules, industry by industry, with the help of a paid-for political establishment. (I trust that there is no longer room for argument about whether money rules politics, at least in the USA, where the Supreme Court has just equated unfettered spending on elections to free speech for the second time.)
If the same thing happens at the enterprise level, then the future of the IT organization lies in little more than operational support in an otherwise balkanized enterprise. It’s not a given; it’s a significant possibility in a fractured world where consensus in and outside the enterprise comes down to “just do what we say, and nobody gets hurt.” Of course, running operational support is no mean thing if innovation disappears or is trivialized; once that’s done, what’s left besides operations?
It’s a cardinal tenet of business writing that an author never raises an issue for which she does not have an, if not the, answer. I have done so here, and I apologize to the reader for potentially ruining what might otherwise have been a pretty nice day. I do think it is important to make the point that the dissolution of the dominant model for IT may only be a symptom of a much deeper dissolution of social pacts that is driven by rampant innovation, and that may only stabilize when innovation ends.
If you think otherwise, or if you’d like to comment on any aspect of this post, please do. These ideas are by no means settled, for me or anyone else, no matter how strongly they’re stated here.
P.S. As an aside, when I re-read the Yeats poem referenced above, written almost 100 years ago, I was astonished to see how modern its message is. I could have quoted almost any line in it with relevance to this piece.
Category: IT risk privacy Tags:
by Richard Hunter | November 4, 2013 | 4 Comments
The recent revelations of NSA surveillance of just about everybody have of course provoked a lot of discussion about privacy—what it is, why it matters, who needs it and who doesn’t. Daniel Solove, writing well before the NSA’s activities came to light, took on the “I’ve Got Nothing to Hide” argument against privacy in this piece. On the other side of the debate, Mike Rogers, chair of the US House of Representatives Intelligence Committee, proposed the novel theory that your privacy can’t be violated unless you know it’s been violated:
Rogers: I would argue the fact that we haven’t had any complaints come forward with any specificity arguing that their privacy has been violated, clearly indicates, in ten years, clearly indicates that something must be doing right. Somebody must be doing something exactly right.
Vladeck: But who would be complaining?
Rogers: Somebody whose privacy was violated. You can’t have your privacy violated if you don’t know your privacy is violated.
I’m sure that extortionists everywhere would applaud Rep. Rogers’s novel legal theory—if no one’s complained, there can’t possibly be a crime—whole-heartedly. That aside, the fact is that there’s plenty of controversy about what privacy is, why it matters, and what harms result from its violation. The latter question in particular is vexing for privacy advocates. As per Solove’s article, many might argue that if I have nothing to hide, I have no reason to fear the loss of my privacy.
I propose a simple definition of privacy that may help clarify the harm involved in the violation of privacy. To wit: privacy is power. Privacy is a line over which others may not step. Privacy is a protected space within which I think, do, and say what I please. Absent privacy, I’m powerless to prevent the intrusion of the powerful into my life, and that intrusion may take any form that the powerful deem appropriate.
We see this most clearly in police states, whose defining characteristic is the utter absence of privacy. The absence of privacy in such states in effect makes the individual the property of the state. For an extraordinary illustration of this principle in art, I highly recommend the film “The Lives of Others,” which describes the relationship between an experienced and extremely capable East German intelligence officer and the writer he is charged to monitor. I promise that regardless of your political beliefs, you will consider this film to be two hours very well spent. But even in commercial relationships, which are far less coercive than those between authorities and citizens in police states, the absence of privacy tips the balance of power.
I repeat: privacy is power. Without it, the individual has none. Period.
If you’re comfortable with that, then you may indeed have nothing to hide, because anything worth taking has probably already been taken.
Category: privacy Tags:
by Richard Hunter | October 23, 2013 | 3 Comments
Maturity models are all the rage in IT circles. There are maturity models for nearly everything an IT organization does. Lots of IT professionals, in practically every IT discipline, talk about improving maturity as if it were the ultimate goal for the organization. And that’s a problem for the IT professional.
The heart of the problem is that “maturity” is not value. Value is an outcome, and maturity is not an outcome; it’s something we pursue in order to develop the capabilities that make an outcome possible. At best, increasing maturity is a leading indicator for value, not the thing itself. An IT organization that touts its improving “maturity” to an executive team is not talking about value, but about IT activities. Executives are at most mildly interested, and at worst worry that IT isn’t focused on the right things—which of course means initiatives that explicitly deliver more value to the enterprise.
I’m not opposed to increasing maturity. I’m all in favor of everyone improving performance, ideally by leaps and bounds. I merely think that in most cases maturity is not a topic for discussion with the executive team. What IT professionals should discuss is not “maturity”, but the outcomes that increasing maturity produces.
As an example, consider the grand-daddy of all maturity models: the Watts Humphrey Capability Maturity Model (CMM) for applications development, developed in the 1980s at Carnegie Mellon University. The point of this model is to rate the ability of an AD organization to consistently execute against project plans. Humphrey tied the original model to quality; it can just as easily be tied to outcomes that include higher productivity, improved user satisfaction, and any others that can benefit from systematic execution.
And here we encounter our first maturity dilemma: consistency in execution says little or nothing about what outcomes are supposed to be consistently delivered. Should our AD organization optimize for quality, for adherence to schedule and budget, or for productivity? The first is what NASA's space shuttle program optimized for; the second is what the typical large external service provider optimizes for; the third is what the typical AD shop thinks it's optimizing for. Which of these outcomes is the desired one for a given IT team?
The answer is dependent on what the enterprise needs, which can change over time. Twenty years ago most enterprises wanted high quality systems, the stuff we now call "systems of record" in Gartner's applications development pace layering model; now most enterprises want rapid delivery, figuring that an 80% solution now is better than a 100% solution three years from now. An IT organization whose "maturity" defaults to rigorous methods for creating high quality above all else might find itself losing value in the eyes of peers throughout the enterprise when the goal shifts (as indeed it has for many of our clients).
The solution to the problem is to avoid discussing IT’s performance in terms of maturity, and focus instead on the outcomes that maturity is supposed to produce. Let’s assume for the sake of argument that desired outcomes for a given enterprise include delivering more projects, in shorter time spans, with less waste for both IT and non-IT personnel. As opposed to reporting to the executive team on AD “maturity,” the AD team can talk about percentage of projects delivered on time and budget, and increased value delivered as represented by reduced waste and higher yields on resources invested in business imperatives.
Any of these outcomes represents real value, as opposed to “maturity”, which is at best a proxy for value–a symbol, not the thing itself, and a symbol that doesn’t resonate with anyone outside IT. When you want to talk value, talk outcomes, not maturity. If the outcomes don’t resonate, the increased maturity that makes the outcomes achievable won’t either.
by Richard Hunter | October 17, 2013 | 7 Comments
I’ve been thinking a lot lately about the “kernel” construct that Richard Rumelt proposes in his book “Good Strategy/Bad Strategy.” According to Rumelt, the irreducible core of a strategy–the “kernel”–consists of a diagnosis of the environment an enterprise operates in, one or more policies that are intended to address the diagnosis, and coherent execution in support of the policies. IT governance looms large in coherent execution; every change in an enterprise that’s more than trivial eventually comes to IT, because nothing that’s more than trivial can be implemented without supporting technology.
In other words, the IT project proposal process essentially generates a running list of proposed and actual change in the enterprise. In fact, the IT project portfolio is the only place in the enterprise where all that change is visible at once. Smart enterprise leaders can use that fact to their advantage by turning the IT investment decision making process into a forecasting system for the success of enterprise strategies. Here’s how:
1) Ensure that any proposal for change that comes to the IT investment decision making process (a/k/a IT governance) has two explicit characteristics:
- It spells out outcomes in material, quantified, baselined terms, and
- It links those outcomes to the goals associated with specific enterprise strategic imperatives, of which there are probably no more than half-a-dozen at any point in time, assuming that the enterprise is genuinely clear on its strategy; great strategies don’t have 15 segments.
Investment portfolios aligned to strategic imperatives are useful for the latter purpose. The CFO or COO, not the CIO, is the appropriate party to ensure this basic investment discipline, because investment discipline is an enterprise financial responsibility rather than an IT one, whether IT is involved or not.
2) By adding up the outcomes, costs, and timeframes associated with projects within an investment portfolio linked to a strategic imperative, the enterprise can determine how far (hypothetically) the full range of initiatives will move the enterprise towards the goals specified by the imperative, at what cost, and within what timeframes. That knowledge can be put to good use in deciding how much and where to invest. Summing these factors is relatively trivial if the basic information is included in project proposals, and can be done by a project management office, by personnel within IT or a sponsoring business unit, or by the CFO’s office.
3) When proposed projects are reviewed, approved, resourced, and started, the same approach can be used to determine how far the approved project portfolio (as opposed to the proposed project portfolio) will move the enterprise towards its goals. Comparing proposed project portfolios to approved portfolios can help to show whether the enterprise is investing enough, too little, or too much in pursuit of its strategies; for example, whether attractive investment opportunities are being left on the table because of arbitrary constraints on “IT spending.” (Calling an initiative supported by IT an “IT project” is like calling your exercise program “the Nautilus project”; yes, machinery is involved, but that’s not the point of the exercise.)
4) When approved projects are completed (and after a suitable waiting period to account for the lag in benefits that always follows any change initiative), actual outcomes can be measured, which will help both to refine forecasting going forward and to improve current forecasts.
Suppose, for example, that we have a strategic imperative to improve product quality in order to increase initial and repeat sales and reduce capital allocated to allowances for returns. If we can quantify, even hypothetically, the extent to which a given increase in product quality will increase sales and reduce allowances for returns, and we frame our project proposals in those terms, we can then use our IT-supported project portfolio for that strategic imperative to estimate how fast, how much, and at what cost we will be able to achieve these strategic goals with our current project initiatives.
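The mechanics of steps 2 and 3 are simple enough to sketch in a few lines of code. The sketch below is illustrative only: the project names, the single "product-quality" imperative, and the idea of scoring each project's outcome as points of contribution toward a quantified goal are all hypothetical assumptions, not anything from a real governance system. It shows how summing proposed versus approved portfolios yields the comparison described above.

```python
from dataclasses import dataclass

@dataclass
class Project:
    """A project proposal with the two characteristics required by step 1:
    a quantified outcome, and a link to a strategic imperative."""
    name: str
    imperative: str   # the strategic imperative this project supports
    cost: float       # estimated cost in dollars
    outcome: float    # quantified contribution toward the imperative's goal
    approved: bool    # has the project cleared governance review?

def portfolio_summary(projects, goal_by_imperative):
    """Sum cost and forecast outcome per imperative, separately for the
    proposed portfolio (all projects) and the approved portfolio."""
    summary = {}
    for p in projects:
        s = summary.setdefault(p.imperative, {
            "proposed_cost": 0.0, "proposed_outcome": 0.0,
            "approved_cost": 0.0, "approved_outcome": 0.0,
        })
        s["proposed_cost"] += p.cost
        s["proposed_outcome"] += p.outcome
        if p.approved:
            s["approved_cost"] += p.cost
            s["approved_outcome"] += p.outcome
    # Express each portfolio's forecast as a percentage of the imperative's goal
    for imperative, s in summary.items():
        goal = goal_by_imperative[imperative]
        s["proposed_pct_of_goal"] = 100.0 * s["proposed_outcome"] / goal
        s["approved_pct_of_goal"] = 100.0 * s["approved_outcome"] / goal
    return summary

# Hypothetical portfolio for the product-quality imperative in the example
# above; outcomes are points of quality-score improvement, goal is 5 points.
projects = [
    Project("Defect analytics",   "product-quality", 400_000, 2.0, True),
    Project("Supplier QA portal", "product-quality", 250_000, 1.5, False),
    Project("Returns automation", "product-quality", 150_000, 0.5, True),
]
summary = portfolio_summary(projects, {"product-quality": 5.0})
```

In this toy portfolio the proposed projects would (hypothetically) deliver 80% of the quality goal, but the approved subset delivers only 50%: exactly the gap, described in step 3, that might signal attractive opportunities being left on the table by constraints on "IT spending."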
I said earlier that this approach offers a level of insight into the potential for success or failure of strategic imperatives that is available nowhere else. Even more importantly, this insight can be achieved with only minor adjustments (if any) to an investment decision-making process that is already present in the vast majority of enterprises. For many, the discipline of specifying initiative outcomes in material, quantified, baselined terms (as opposed to fuzzy measures of success like “improve collaboration”) will be the most difficult to master, but this can be expected to improve over time with management attention and team learning. In return, the enterprise gets a powerful early-warning mechanism for strategic success, not to mention a better way to think about its investments involving scarce IT and non-IT resources, and (ultimately) higher yields on those investments.
Most enterprises think about IT governance as a mechanism for allocating IT resources to “IT projects,” but that’s way too limited a view of what’s really going on. I hope I’ve made it clear here that not only is IT governance one of the most important mechanisms for coherent execution of strategy, it’s also the one that offers the best and most comprehensive tools for estimating and validating the outcomes that execution will deliver.
I’ve been talking about these ideas with CFOs, CEOs, and CIOs lately, and the reception has generally been favorable. I wouldn’t be surprised if Wall Street investment analysts figure this out at some point, and CIOs start getting calls from said analysts about what’s in the project portfolio.
Think about it, and let me know what you think.
by Richard Hunter | July 3, 2013 | 1 Comment
My esteemed colleague Tina Nunno has been writing and presenting on the topic of CIO politics for years now, and her book on this topic is well underway. Tina needs help naming the book, and I’m helping to spread the word. Follow the link below to help Tina choose between two titles (and get a free gift–a very nifty piece of recent Gartner research). Thanks!
by Richard Hunter | July 2, 2013 | 4 Comments
I’ve been thinking lately about Matt Honan, who wrote almost a year ago in Wired about his experiences with a hacker named Phobia, who hacked Honan’s Apple account and soon after used his access to the account to wipe out Honan’s electronic memories–everything Honan had acquired and stored on his Apple devices. Honan later was contacted by Phobia, and in the course of their correspondence Honan asked why Phobia had done him such grievous harm:
I asked Phobia why he did this to me. His answer wasn’t satisfying. He says he likes to publicize security exploits, so companies will fix them. He says it’s the same reason he told me how it was done. He claims his partner in the attack was the person who wiped my MacBook. Phobia expressed remorse for this, and says he would have stopped it had he known.
“yea i really am a nice guy idk why i do some of the things i do,” he told me via AIM. “idk my goal is to get it out there to other people so eventually every1 can over come hackers”
The ethos expressed in those few words is close to monstrous. Here’s what it boils down to:
1) If I hack you, it’s your fault for being unprotected. (As Clint Eastwood said in “Unforgiven” after being taken to task for shooting an unarmed man, “He shoulda armed himself.”) Therefore…
2) I have the right to hack anybody who’s vulnerable to hacking. And that’s a good thing, because…
3) I serve a higher purpose when my hacking wrecks the lives of people who are too careless or innocent to protect themselves. As Phobia wrote to Honan, “my goal is to get it out there to other people so eventually every1 can over come hackers”.
Phobia is confused, to say the least. You can’t simultaneously not know why you do the things you do, and purport to have a goal that represents the justification for doing what you do. Beyond that, there’s an obvious hypocrisy in Phobia’s comments. If you claim to be concerned for the welfare of others—for example, to care about whether they can protect themselves from some kind of harm—you don’t start your relationship with those others by launching a brutal surprise attack. In a civil (read: moral) society, you don’t attack strangers on the street (or anywhere else) on the pretense that it’s a lesson to the vulnerable to make themselves less vulnerable.
Indeed, civil societies reserve special scorn and punishment for those who attack the most vulnerable. As one obvious example, pedophiles are not commended for “teaching” little kids to protect themselves from sexual predators. Responsibility in a civil society lies with the perpetrator, not the victim, not God.
Of course it’s important for all of us to take steps to avoid being victimized in all sorts of ways, but that’s not the point. My being weak or defenseless does not make my attacker noble or justified, whether or not the attack is successful. The moral measure of a person in a civil society is how well he or she treats vulnerable people, not how successfully he or she takes advantage of their vulnerability. (I won’t pretend that there aren’t plenty of people out there who have achieved fame and fortune in modern society by taking advantage of the vulnerable; I’m saying that such people don’t get to claim the moral high ground, which is what’s happening when a hacker says he does what he does to make the world safe from hackers.)
In this sense, hackers like Phobia are not vanguards of a new cyber-civilization, as Phobia seems to fancy himself. They are the precise moral equivalent of a street thug ambushing an old woman, or a man throwing acid in his ex-lover’s face (a recently popular activity in some parts of the world, and one that is also intended to teach the victim a lesson). They are representative of the worst tendencies in humanity, not the best, and it’s discouraging that some very bright young people (Phobia is 19) have convinced themselves that victimizing the vulnerable is representative of high ethics.
I doubt that Phobia thinks of himself as religious, but his justifications are eerily similar to the tenets of any number of fundamentalist religions. Many fundamentalists will tell you that if you’re attacked—by a terrorist, a cyclone, an angry dog, a fellow fundamentalist, or anything else—it’s God’s will (which is another way of saying “it’s your fault”). If the fundamentalist hears from God (or the little voice inside his head that purports to be God) that you should be attacked, the attack will begin as soon as is practically feasible. After you’re righteously punished, you will be more likely to heed God’s will; if not, God’s will is still satisfied, it being the will of God that the faithful seek relentlessly to destroy unbelievers. No fundamentalist ever seems to think that it’s God’s will to be kind to everybody. Nor does any hacker. Once you’ve established a higher purpose that supersedes the mere interests of any individual, kindness has nothing to do with it.
Civil societies rely on due process and rule of law, which were developed painstakingly over centuries precisely to protect individuals and societies from arbitrary attacks by the self-empowered, be they bandits or kings. Before due process and rule of law, life was (as Hobbes put it) nasty, brutish, and short, which is a pretty good description of a typical cyberattack (although modern attacks increasingly tend toward the ongoing and persistent, as opposed to the episodic).
Do hackers really want to return society to the law of the jungle? Do they really want every man and woman to decide for themselves what “right” means, in utterly exclusive and non-negotiable terms, and to act without further ado to destroy those who stray from the path of righteousness? Do they really want to add to the world’s sum total of pain and anguish, of which neither will ever be in short supply, with or without the help of hackers?
We live in an era in which authorities of all sorts have been revealed to be both unethical and incompetent, and it’s not surprising that many in this era have anointed themselves as moral authorities with the right to mete out drastic punishment as they see fit. But that doesn’t make it right.
If you’re a hacker, and you disagree with this argument, write and tell me why.
by Richard Hunter | August 6, 2012 | Submit a Comment
In the days since Knight Capital Group suffered a “computer glitch” that cost the company $440M in losses, I’ve been discussing with my colleagues how this catastrophe might have been prevented.
Some of my colleagues have argued that the failure was basically about IT governance–that the IT team at Knight was responsible only for implementing flawed trading algorithms specified to them by their non-IT colleagues. The argument essentially boils down to this: the fault lies with those who did not understand, and therefore did not adequately specify or test, what would happen when the technology operated according to the rules it was given. The implication in this argument is that disaster could have been prevented if the people involved had made better decisions in the requirements/specification/design/testing/implementation/etc. of the software.
My argument is that none of that is sufficient to prevent disaster—including future disasters—and that focusing on the technology is the only effective approach to solving the problem, meaning to reduce the risks to manageable levels. Indeed, I would argue that the risks of high speed trading systems are intrinsic, ungovernable, and potentially threatening to all participants in the markets. “Intrinsic” means that the problems these systems supposedly model with logical rules are beyond the ability of logic to solve. “Ungovernable” means that the risks introduced by these systems can’t be resolved by the tools of governance, largely because of the intrinsic logic problem. You cannot govern reality away; you operate within the bounds of reality, or reality teaches you to do so, more or less brutally and directly.
Markets by their nature produce unforeseen circumstances. It is as impractical to expect a piece of technology to respond appropriately—or even predictably–to every unforeseen circumstance as it is to expect a human being to do so. When technology is empowered to execute massive trades instantly, guided by rules based on various combinations of market circumstances, bad things can be expected to happen as soon as unforeseen circumstances arrive—in fact, at the very moment unforeseen circumstances arrive.
What is happening now on Wall Street is that in pursuit of competitive weapons, firms are empowering their machines to make bigger and bigger decisions faster and faster—literally, to trade millions or hundreds of millions of shares in small fractions of a second. It’s an arms race, and the weapons in question are being deployed at many, many firms, each of which has its own views on what constitutes “acceptable risk.”
If we define this as a governance issue, then the solution must be for the firms involved to make smarter decisions about the risks. The first problem with this approach is, as Harvey Keitel said in “Thelma and Louise,” that brains will only take you so far (to which he added that luck always runs out, something every executive should remember every day). High speed trading systems create severe risks that are not only unanticipated, but which realistically can never be anticipated in an environment where technology is continuously pushed to the limit.
In short, better governance won’t solve the problem because the people involved in governance are no more able to anticipate all possible failure modes than the people involved in designing and building the systems. Even if they were, it scarcely needs saying that Wall Street traders in general are heavily incented to take risks, and that they are often able to make others pay the price for risks gone bad–circumstances that do not inspire confidence in the “governor’s” ability to manage risks down. Finally (for the governance argument), there is no reason to believe that all players will adopt “good” (meaning in this case risk-aware) governance policies–and a single point of high-speed trading failure can potentially impact many players in the markets.
If you take the point of view that disasters such as this are the result of using technology in a way that it should not be used–to solve a problem that computer logic cannot solve, at least in the current state of the art–then the solution is to prevent the technology from being used in that way, either by banning it outright or by heavily taxing the proceeds of trades that are too short-lived to be called “investments.” I appreciate that regulating high-speed trading systems out of existence one way or another is a drastic approach. I believe that the risks–which extend to market participants far removed from the businesses that create these events–justify the means. There’s no more reason to allow individual trading companies to implement technology that potentially destroys markets than there is to allow private citizens to carry nuclear weapons. In both cases we could argue that careless or deranged (or whatever pejorative you like) individuals are the real problem. I agree that this argument is valid to a point; triggers don’t pull themselves, and we’d all be better off if everyone behaved decently. But positioning “better governance” as the solution to the problem doesn’t work when the consequences of one failure of governance are so severe.
As the saying goes, one atomic bomb can really ruin your day. That’s why we’re all glad that atomic bombs are not for sale to anyone who wants one, and why we should really, really question why we need automated trading programs on Wall Street.
There may be other solutions to this problem, and I’d be delighted to hear from readers about what they think might work. One thing I’m certain will not work is to continue on the current path, with the potential for bigger and bigger disasters. (But if you’d like to argue that point, feel free to do so.)