
What does KCG’s recent debacle say about governance and technology?

By Richard Hunter | August 06, 2012 | 0 Comments


In the days since Knight Capital Group suffered a “computer glitch” that cost the company $440M, I’ve been discussing with my colleagues how this catastrophe might have been prevented.

Some of my colleagues have argued that the failure was basically about IT governance: that the IT team at Knight was responsible only for implementing flawed trading algorithms specified by their non-IT colleagues.  The argument boils down to this: the fault lies with those who did not understand, and therefore did not adequately specify or test, what would happen when the technology operated according to the rules it was given.  The implication is that disaster could have been prevented if the people involved had made better decisions in the requirements, specification, design, testing, and implementation of the software.

My argument is that none of that is sufficient to prevent disaster, including future disasters, and that focusing on the technology itself is the only effective approach to the problem, meaning the only way to reduce the risks to manageable levels.  Indeed, I would argue that the risks of high-speed trading systems are intrinsic, ungovernable, and potentially threatening to all participants in the markets.  “Intrinsic” means that the problems these systems supposedly model with logical rules are beyond the ability of logic to solve.  “Ungovernable” means that the risks introduced by these systems cannot be resolved by the tools of governance, largely because of that intrinsic logic problem.  You cannot govern reality away; you operate within the bounds of reality, or reality teaches you to do so, more or less brutally and directly.

Markets by their nature produce unforeseen circumstances.  It is as impractical to expect a piece of technology to respond appropriately, or even predictably, to every unforeseen circumstance as it is to expect a human being to do so.  When technology is empowered to execute massive trades instantly, guided by rules based on various combinations of market circumstances, bad things can be expected to happen at the very moment an unforeseen circumstance arrives.
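To make that failure mode concrete, here is a deliberately simplified sketch of a rule-based trading engine.  This is not Knight’s actual system; every name, rule, and threshold below is invented for illustration.  The point is that each rule can be correct for the conditions its designers anticipated, while the damage comes from the condition nobody wrote a rule for.

```python
# Hypothetical rule-based trading engine, for illustration only.
# Each rule maps an anticipated market condition to an action.

def classify(price_move_pct: float) -> str:
    """Bucket a price move into the conditions the designers foresaw."""
    if -2.0 <= price_move_pct <= 2.0:
        return "normal"
    if 2.0 < price_move_pct <= 10.0:
        return "rally"
    if -10.0 <= price_move_pct < -2.0:
        return "selloff"
    # Nobody specified what a move beyond +/-10% means, so the
    # engine falls through to a catch-all it was never tested on.
    return "unforeseen"

RULES = {
    "normal": "hold",
    "rally": "sell 1_000",
    "selloff": "buy 1_000",
}

def decide(price_move_pct: float) -> str:
    """Pick an action; the default was chosen for convenience,
    not because anyone reasoned about unforeseen conditions."""
    return RULES.get(classify(price_move_pct), "buy 1_000_000")

print(decide(1.0))    # normal day: "hold", exactly as specified
print(decide(35.0))   # unmodeled event: the dangerous default fires
```

Every anticipated input produces the specified behavior; the one input outside the designers’ model produces a massive, instantaneous order.  No amount of review of the three written rules would have revealed the problem, because the problem is precisely what was never written down.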

What is happening now on Wall Street is that in pursuit of competitive weapons, firms are empowering their machines to make bigger and bigger decisions faster and faster, literally to trade millions or hundreds of millions of shares in small fractions of a second.  It’s an arms race, and the weapons in question are being deployed at many, many firms, each of which has its own views on what constitutes “acceptable risk.”

If we define this as a governance issue, then the solution must be for the firms involved to make smarter decisions about the risks.  The first problem with this approach is, as Harvey Keitel’s character said in “Thelma and Louise,” that brains will only take you so far (to which he added that luck always runs out, something every executive should remember every day).  High-speed trading systems create severe risks that are not only unanticipated but that realistically can never be anticipated in an environment where technology is continuously pushed to the limit.

In short, better governance won’t solve the problem, because the people involved in governance are no more able to anticipate all possible failure modes than the people who design and build the systems.  Even if they were, it scarcely needs saying that Wall Street traders in general are heavily incentivized to take risks, and that they are often able to make others pay the price for risks gone bad, circumstances that do not inspire confidence in the “governor’s” ability to manage risks down.  Finally (as far as the governance argument goes), there is no reason to believe that all players will adopt “good” (meaning, in this case, risk-aware) governance policies, and a single point of high-speed trading failure can potentially affect many players in the markets.

If you take the point of view that disasters such as this one result from using technology in a way it should not be used, to solve a problem that computer logic cannot solve, at least in the current state of the art, then the solution is to prevent the technology from being used in that way, either by banning it outright or by heavily taxing the proceeds of trades that are too short-lived to be called “investments.”  I appreciate that regulating high-speed trading systems out of existence, one way or another, is a drastic approach.  I believe that the risks, which extend to market participants far removed from the businesses that create these events, justify the means.  There is no more reason to allow individual trading companies to implement technology that can potentially destroy markets than there is to allow private citizens to carry nuclear weapons.  In both cases one could argue that careless or deranged (or whatever pejorative you like) individuals are the real problem.  That argument is valid up to a point; triggers don’t pull themselves, and we’d all be better off if everyone behaved decently.  But positioning “better governance” as the solution doesn’t work when the consequences of a single failure of governance are so severe.
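The taxation option is easy to sketch.  The rates and thresholds below are entirely invented, purely to show the mechanism: a tax keyed to holding period can make sub-second round trips unprofitable while leaving anything that resembles an investment essentially untouched.

```python
# Illustrative only: a holding-period tax schedule with invented
# rates, showing how short-lived trades could be made unprofitable.

def tax_rate(holding_seconds: float) -> float:
    """Fraction of gross proceeds taxed, by how long the position was held."""
    if holding_seconds < 1.0:        # sub-second: pure speed arbitrage
        return 0.90
    if holding_seconds < 60.0:       # under a minute
        return 0.50
    if holding_seconds < 86_400.0:   # intraday
        return 0.10
    return 0.0                       # held a day or longer: untaxed here

def net_proceeds(gross: float, holding_seconds: float) -> float:
    """Proceeds remaining after the holding-period tax."""
    return gross * (1.0 - tax_rate(holding_seconds))

print(net_proceeds(1_000.0, 0.05))        # 50 ms round trip: 90% taxed away
print(net_proceeds(1_000.0, 7 * 86_400))  # one-week hold: untaxed
```

Whether the right top rate is 90% or something else is a policy question; the design point is simply that the tax is a function of holding time, so it targets the speed itself rather than any particular trading strategy.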

As the saying goes, one atomic bomb can really ruin your day.  That’s why we’re all glad that atomic bombs are not for sale to anyone who wants one, and why we should really, really question why we need automated trading programs on Wall Street.  

There may be other solutions to this problem, and I’d be delighted to hear from readers about what they think might work.   One thing I’m certain will not work is to continue on the current path, with the potential for bigger and bigger disasters.  (But if you’d like to argue that point, feel free to do so.)
