If you wanted to sabotage a trading system, you might set out to design suicide mechanisms that look very much like today’s automated trading mechanisms. Blaming Knight Capital’s screwed pooch on a ‘software bug’ is a simplistic and flawed starting point for understanding the bigger risk picture.
Automated mechanisms within trading systems act as positive feedback loops, latching onto tiny bits of information and leveraging them into buys or sells. Any significant success with a trading algorithm inevitably invites attention, which invariably leads to reverse engineering and an escalating rate of copying until the opportunity is over-exploited (fished out) and disappears. Arbitrage does provide liquidity to a market, and those who engage in it claim, with some justification, that their efforts ensure that all buys and sells are based on maximum access to market information.
Under normal circumstances, opportunity algorithms tend to work towards their own obsolescence, but if events unfold too rapidly for the graceful dismantling of a particularly popular algorithm, the trading floor turns into the Indian electrical grid. Automated market runs happen when a perfect storm forms: a sufficient number of identical trading algorithms that have not yet obsoleted each other all suddenly kick into extraordinary levels of play because of some significant new piece of information.
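The market-run dynamic can be sketched as a toy simulation: many copies of the same momentum-chasing algorithm turn a small shock into a runaway move, while a few copies let it damp out. All numbers and names here are invented for illustration, not a model of any real market.

```python
def simulate(copies, shock=1.0, steps=20, sensitivity=0.002):
    """Toy positive-feedback loop: each algorithm copy buys in
    proportion to the last price change, and their combined buying
    produces the next price change."""
    price, change = 100.0, shock
    for _ in range(steps):
        demand = copies * sensitivity * change  # every copy chases the same move
        price += demand
        change = demand
    return price

print(round(simulate(copies=10), 2))    # few copies: the shock fizzles out
print(round(simulate(copies=1000), 2))  # many copies: the move compounds every step
```

The interesting feature is the threshold: once `copies * sensitivity` exceeds 1, each step amplifies the last, which is the point at which identical algorithms stop merely exploiting a signal and start manufacturing one.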
Temporarily avoiding politically loaded words such as ‘regulation’, let’s try to understand objectively what can prevent feedback-based systems from spiraling into feedback-based overload. If you set up a microphone, an amplifier, and a speaker, and the mike can pick up sound emanating from the speaker, the natural tendency is a feedback loop, causing a loud and painful destabilization of the amplifier. All modern public address systems have mechanisms (now digital, originally variable capacitors) that avoid unwanted oscillation by dampening feedback. AWS splits its cloud into regions that prevent failures in one region from cascading into another. Lawnmower engines have a simple mechanical governor that feeds engine-speed information back into the fuel system. Splitting the US power grid into multiple systems ensures that no single point of failure can simultaneously take down 48 states. As explained in a recent MSDN blog, “Windows Azure’s network infrastructure uses a safety valve mechanism to protect against potential cascading networking failures by limiting the scope of connections that can be accepted by our datacenter network hardware devices.”
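The lawnmower governor above is the simplest of these stabilizers, and its logic fits in a few lines: measure the output, compare it to a target, and trim the input against the error. This is a minimal sketch of proportional negative feedback; the constants and function names are illustrative, not taken from any real control system.

```python
def governed_engine(target_rpm, steps=200, gain=0.05):
    """Simulate an engine whose fuel feed is continuously corrected
    by a proportional governor (negative feedback)."""
    rpm = 0.0
    fuel = 0.5  # fraction of full throttle
    for _ in range(steps):
        # Engine speed drifts toward a value proportional to fuel supplied.
        rpm += 0.5 * (fuel * 6000 - rpm)
        # Negative feedback: measure the error, trim fuel against it.
        error = target_rpm - rpm
        fuel += gain * (error / 6000)
        fuel = max(0.0, min(1.0, fuel))  # physical throttle limits
    return rpm

print(round(governed_engine(3000)))
```

The sign of the correction is the whole story: the governor pushes *against* deviation, so the system settles at the target, whereas the trading loops described earlier push *with* deviation and amplify it.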
ALL COMPLEX, AND MANY SIMPLE, SYSTEMS NEED GOVERNORS AND ANTI-FEEDBACK MECHANISMS TO MAINTAIN STABILITY. Given this basic fact of engineering design, it is a wonder of today’s economy that trading systems work as well as they do, when the participants in the system have a financial incentive to implement anti-anti-feedback mechanisms.
When the participants in a trading system refuse to self-govern, and game theory suggests that this will inevitably be the case, then the only remaining possibility is for the exchange to force a governing mechanism onto the market, or for a government to do so. This isn’t about finding and fixing ‘bugs’.
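A forced mechanism of this kind can be as blunt as a circuit breaker: halt trading when price moves beyond a band around a reference price. The sketch below is a toy illustration of the idea; the class name, thresholds, and latch-until-reset behavior are invented for this example, not drawn from any exchange’s actual rules.

```python
class CircuitBreaker:
    """Toy exchange-imposed governor: trips when price strays too far
    from a reference, and stays tripped until explicitly reset."""

    def __init__(self, reference_price, limit_pct=10.0):
        self.reference = reference_price
        self.limit_pct = limit_pct
        self.halted = False

    def check(self, price):
        """Return True if trading may continue at this price."""
        move_pct = abs(price - self.reference) / self.reference * 100
        if move_pct >= self.limit_pct:
            self.halted = True  # latch: one breach halts the market
        return not self.halted

breaker = CircuitBreaker(reference_price=100.0)
print(breaker.check(104.0))  # 4% move: within the band, trading continues
print(breaker.check(89.0))   # 11% move: breaker trips, market halts
print(breaker.check(100.0))  # stays halted even at a sane price
```

The crucial property is that the breaker sits outside the feedback loop it polices: no participant’s algorithm can profit by disabling it, which is exactly why it must be imposed rather than volunteered.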