The Knight Capital computer glitch that cost the firm $440 million is the latest in a series of software failures that have caused catastrophic errors in the financial markets. In March, electronic exchange BATS had to cancel its own IPO when a previously undetected bug in its new IPO auction software surfaced at the worst possible time, despite months of internal testing. In May, Nasdaq’s IPO software encountered a major error during the early stages of the hugely anticipated Facebook IPO. Again, like BATS, Nasdaq had performed thousands of hours of testing, replicating “a hundred scenarios” aimed at anticipating problems, Nasdaq Chief Executive Robert Greifeld said.
And now this. Though Knight declined to comment, more and more market watchers believe the glitch occurred in a piece of code that Knight unveiled Wednesday morning to prepare for the New York Stock Exchange’s Retail Liquidity Program, which was also launched Wednesday morning. NYSE says it made the program’s technical specifications available for 240 days before the launch, and its testing environment was open for six months. It was up to the trading firms, such as Knight, to adjust their software. “Clearly, some wrote good software, and some wrote bad,” says Manoj Narang, founder and CEO of high-frequency trading firm Tradeworx.
Today’s stock market is essentially the domain of automated trading programs. For every one thing that goes wrong, there are millions of things that go right every second. But the recent incidents in which software that supposedly has been rigorously tested crashes and burns raise an alarm. One question is whether the testing that firms do on new pieces of code and software is robust enough.
“They’re absolutely not being adequately tested,” says Eric Hunsader, CEO of market data service Nanex. “That’s the problem for sure.” Considering how complex the markets are, it’s almost impossible to anticipate everything that can go wrong. No simulation will capture all the variables of live trading.
High-frequency trading has no industrywide best practices that govern the internal testing of developmental systems and software. It might be a good idea for the industry to come up with protocols modeled on those of the International Organization for Standardization, which has developed some 19,000 standards codifying best practices across almost every area of technology and business. Except for high-frequency trading.
Rick Cooper and Ben Van Vliet, two professors at the Illinois Institute of Technology, are proposing a plan for the development of industry standards. The two have written a paper laying out basic principles for monitoring the behavior of a high-frequency trading algorithm in real time and shutting it down if unusual glitches occur. Published this spring in the Journal of Trading, the paper suggests ways the industry could police itself. It argues for outside monitoring systems that act like a circuit breaker: if trading volume and profits and losses aren’t adhering to a pattern of expected behavior, the “circuit breaker” can shut down the algorithm. “That way you don’t screw up the marketplace,” says Cooper. Or, in the case of Knight, blow yourself up.
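To make the idea concrete, here is a minimal sketch of the kind of circuit breaker the paper describes: an outside monitor that tracks an algorithm’s running volume and profit and loss, and trips when either leaves an expected band. The class names, thresholds, and interface below are illustrative assumptions, not Cooper and Van Vliet’s actual model.

```python
# Hypothetical sketch of an algo "circuit breaker": an external monitor
# accumulates trading volume and P&L per fill, and halts the algorithm
# when either exceeds the expected-behavior limits. All names and
# thresholds here are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class ExpectedBehavior:
    max_abs_pnl: float  # largest tolerable gain or loss (dollars)
    max_volume: int     # largest tolerable cumulative share volume


class CircuitBreaker:
    def __init__(self, limits: ExpectedBehavior):
        self.limits = limits
        self.pnl = 0.0
        self.volume = 0
        self.tripped = False

    def record_fill(self, shares: int, pnl_change: float) -> bool:
        """Record one execution; return True if trading may continue."""
        if self.tripped:
            return False
        self.volume += abs(shares)
        self.pnl += pnl_change
        if (abs(self.pnl) > self.limits.max_abs_pnl
                or self.volume > self.limits.max_volume):
            self.tripped = True  # shut the algorithm down
        return not self.tripped


# Usage: normal fills pass; a runaway loss trips the breaker.
breaker = CircuitBreaker(ExpectedBehavior(max_abs_pnl=50_000, max_volume=100_000))
assert breaker.record_fill(shares=10_000, pnl_change=-2_000)       # within bounds
assert not breaker.record_fill(shares=5_000, pnl_change=-60_000)   # loss limit hit
```

The point of putting the check outside the trading algorithm itself is that the monitor keeps working even when the algorithm’s own logic is the thing that has gone wrong, much as the exchange’s alert, not Knight’s own systems, flagged the problem on Wednesday.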
The most shocking part of Knight’s glitch isn’t that it happened, but that it took the firm so long to respond. According to a person familiar with the situation, who would speak only on condition of anonymity, it was the New York Stock Exchange that alerted Knight to the problem early Wednesday morning. It took Knight 30 to 45 minutes to fix things. “Any time you’re starting a new system, you have to be aware of the heightened possibility of something going wrong,” says Narang. “You have to watch it like a hawk.”