
An open infrastructure could curb high-frequency trading disasters

By Aaron Williamson | August 10, 2012

In yesterday’s New York Times, Ellen Ullman criticized the SEC’s suggestion that mandated software testing could prevent automated-trading catastrophes like the one that shook the market and nearly bankrupted Knight Capital at the beginning of this month. More testing won’t work, according to Ullman, for a few reasons. First, computer systems are too complex to ever “fully test,” because they comprise multiple software and hardware subsystems, some proprietary, others (like routers) containing “inaccessible” embedded code. Second, all code contains bugs, and because bugs can be caused by interactions between modules and even by attempts to fix other bugs, no code will ever be completely bug-free. And finally, it is too difficult to distinguish insignificant changes to the software from those that really require testing.

Ullman’s critique of a testing-centric solution has some merit, although few professional developers test individual “coding changes” in isolation; they test entire systems whenever changes are introduced. But her proposed alternative is heavyweight and difficult to square with the needs of the industry. She proposes making brokers liable for losses caused by trading errors in order to induce them to write “artificial intelligence programs that recognize unusual patterns” and shut down runaway trading algorithms. Her model is the regulation of credit card companies: since they’re liable for most fraudulent charges, they’ve created software to track purchases and put holds on accounts showing suspicious activity. In Ullman’s scheme, the SEC would also create its own electronic sentries as a backstop.

I doubt Ullman’s assumption that the rogue-trading-program problem can be fought with A.I. as successfully as credit card fraud. The speed of automated trading (these systems can evaluate and execute thousands of trades per second) makes automated troubleshooting much trickier. Even very fast A.I. would likely take more than a few seconds to spot bad behavior reliably, and that is time enough for several thousand erroneous trades to be made. False positives would also be far more expensive, since for every second the program was halted, thousands of legitimate trades would not be made. In the credit card context, where fraud happens at human speed, it makes sense to have humans double-check the computer’s determinations, but that human-backup process would not port easily to algorithmic trading.
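To put rough numbers on that intuition, here is a minimal back-of-envelope sketch in Python; the throughput, detection delay, and review downtime below are illustrative assumptions, not measurements of any real trading system.

```python
# Back-of-envelope estimate of the cost of an A.I. "watcher."
# All figures are illustrative assumptions, not market data.

TRADES_PER_SECOND = 2_000     # assumed throughput of a high-frequency system
DETECTION_DELAY_S = 3.0       # assumed time for the watcher to flag bad behavior
FALSE_ALARM_HALT_S = 30.0     # assumed downtime while humans review a false positive

# Erroneous trades that slip through before a genuine problem is detected.
erroneous_trades = TRADES_PER_SECOND * DETECTION_DELAY_S

# Legitimate trades lost every time the watcher halts trading by mistake.
missed_trades = TRADES_PER_SECOND * FALSE_ALARM_HALT_S

print(f"Erroneous trades executed before detection: {erroneous_trades:,.0f}")
print(f"Legitimate trades lost per false alarm:     {missed_trades:,.0f}")
```

Even with generous assumptions, both numbers land in the thousands, which is why a human-review backstop that works at credit-card speed breaks down here.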

Algorithmic traders could reduce their error rate far less expensively (and more realistically) by collaborating on a common infrastructure for executing trades, built from free and open source software that everyone could review, test, and improve. Trading firms are understandably tight-lipped about the algorithms that actually choose which trades to make; those are the source of their competitive advantage over other firms. But most of the pieces of a high-frequency trading system are not so secret, including the real-time operating system, the high-volume message queuing, and the software that actually executes the selected trades. By opening, standardizing, and collaborating on these ancillary but complex components, trading firms could reduce errors and improve reliability without exposing their trading strategies. A common infrastructure used and collaboratively produced by several firms would be better tested than the balkanized systems in use now, and less prone to the interaction effects that Ullman finds prevalent in complex systems. Open code would also enable the SEC to audit the system directly, without the complexity and expense of A.I. “watchers.”
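To make the proposed division of labor concrete, here is a minimal sketch, assuming a hypothetical boundary between a firm’s closed strategy code and a shared, openly audited execution layer; none of the class or method names below correspond to any real system.

```python
# Hypothetical sketch: proprietary strategy logic plugs into a shared,
# openly developed execution layer through a narrow interface.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Order:
    symbol: str
    quantity: int      # positive to buy, negative to sell
    limit_price: float


class Strategy(ABC):
    """Proprietary layer: each firm keeps its own implementation closed."""

    @abstractmethod
    def decide(self, market_data: dict) -> list[Order]:
        """Return the orders the firm's secret algorithm wants to place."""


class OpenExecutionEngine:
    """Shared layer: validation, risk limits, and routing code that every
    participating firm (and the SEC) could review, test, and improve."""

    def __init__(self, max_order_size: int = 10_000):
        self.max_order_size = max_order_size  # example of a commonly audited safety limit

    def execute(self, orders: list[Order]) -> None:
        for order in orders:
            if abs(order.quantity) > self.max_order_size:
                raise ValueError(f"Order exceeds audited size limit: {order}")
            self._route(order)

    def _route(self, order: Order) -> None:
        # A real engine would hand the order to an exchange gateway;
        # this sketch just records the action.
        print(f"Routing {order.quantity:+d} {order.symbol} @ {order.limit_price}")
```

The point of the split is that only the Strategy subclass would stay proprietary; the engine’s safety checks and routing code would be common, standardized, and open to audit.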

Doubtless many firms believe their competitive advantage derives partly from proprietary kernel modifications or optimizations to their messaging systems. But the Knight Capital failure, the 2010 Flash Crash, and similar episodes have made it clear that trading firms, as well as their investors and the market as a whole, pay a heavy price for that secrecy. The price will only increase if the SEC adopts expensive regulations to deter future failures. If firms can work together now, they may find that, by turning their infrastructure into shared, standard components, they can not only keep regulators at bay but also free their own resources to concentrate on the trading algorithms that are the true value-add of their business.

Please email any comments on this entry to press@softwarefreedom.org.
