Posted by The Progress Guys
Editor’s Note: The following post was written by Theo Hildyard, Solutions Architect, Market Surveillance at Progress Software, and was originally published on TABB Forum.
Since the May 6, 2010, flash crash, the issue of out-of-control machines trading on global stock markets has made headlines over and over. Just last week a US market maker, Knight Capital, nearly blew itself up with a rogue algorithm. The calls for regulation are growing louder, and regulators worldwide are struggling to bring order to an automated marketplace that is increasingly chaotic.
In India, the Securities and Exchange Board is considering imposing speed limits on high-frequency trading. The Hong Kong Securities and Futures Commission's CEO is keen to regulate HFT and proposes that algorithms be tested annually. The Australian Securities and Investments Commission (ASIC) wants automated trading systems tested. In Europe, the European Securities and Markets Authority (ESMA) is preparing to crack down on every aspect of automated trading, from algorithms to CDS to short selling. And in the US, the Securities and Exchange Commission is tightening rules on automated trading systems and programs, with Knight Capital having added to the urgency.
Machines trade anywhere from 25% (Australia) to 70% (US) of the volume on stock exchanges. The opportunity to make money depends upon the speed of your trading systems along with the intelligence of your algos. Algorithmic innovation is critical for high-frequency trading firms looking for an edge. Research by the Aite Group suggests that the average lifespan of a trading algorithm can be as short as three months. With such a small window of opportunity, trading firms must design, test and deploy new algos on an almost continual basis. No wonder there are problems.
When we allow machines to make the decisions for us, it is imperative that we design them to be fail-safe. Testing in a non-production environment should be mandatory, and back-testing should be exhaustive. Poor-quality due diligence and quality assurance are producing catastrophic consequences. It is our responsibility to ensure that our machines, or robots if you will, do no harm.
I am reminded of author Isaac Asimov's First Law of Robotics: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The 'harm' in the First Law is happening to the marketplace. The flash crash wiped a trillion dollars off of US-listed firms' market capitalization. Knight Capital's rogue algo wiped $440m off its balance sheet and forced it to look for backers in order to survive.
If algorithms and trading systems were programmed with Asimov-style parameters, there would be far fewer glitches. But even if you are the most conscientious firm out there, you cannot ensure that your counterparties have also programmed and tested their systems and algos thoroughly. And it remains your responsibility, to your customers, staff and shareholders, to ensure that those counterparties do not do any harm to your bottom line or reputation.
Catastrophe can only be avoided by adding an extra layer of control to the trading process: a layer that monitors counterparties for rogue algos or fat-fingered trades. That way you have both belt and braces - control over internal trading systems and awareness of external ones. Yes, there will be a tiny bit more latency. But isn't a small latency hop better than bankruptcy?
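To make the idea concrete, here is a minimal sketch of what such a pre-trade control layer might look like. All class names, thresholds and limit types here are illustrative assumptions, not any vendor's actual product: the gate applies a size and notional cap (fat-finger guard), a price collar around a trusted reference price, and a message-rate throttle (a runaway algo typically shows up as a burst of orders).

```python
import time
from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str      # "buy" or "sell"
    qty: int
    price: float

class PreTradeRiskGate:
    """Hypothetical pre-trade control layer: rejects orders that breach
    simple limits before they ever reach the market. Thresholds are
    illustrative defaults, not recommended production values."""

    def __init__(self, max_qty=10_000, max_notional=1_000_000.0,
                 price_band_pct=0.05, max_orders_per_sec=100):
        self.max_qty = max_qty
        self.max_notional = max_notional
        self.price_band_pct = price_band_pct
        self.max_orders_per_sec = max_orders_per_sec
        self._recent = deque()  # timestamps of recently accepted orders

    def check(self, order, reference_price, now=None):
        """Return (accepted, reason). reference_price is the last trusted
        market price for the symbol, e.g. last trade or mid-quote."""
        now = time.monotonic() if now is None else now

        # Fat-finger guard: cap absolute order size and notional value.
        if order.qty <= 0 or order.qty > self.max_qty:
            return False, "quantity outside allowed range"
        if order.qty * order.price > self.max_notional:
            return False, "notional value exceeds limit"

        # Price collar: reject orders priced far from the reference.
        band = self.price_band_pct * reference_price
        if abs(order.price - reference_price) > band:
            return False, "price outside collar"

        # Throttle: drop timestamps older than one second, then check rate.
        while self._recent and now - self._recent[0] > 1.0:
            self._recent.popleft()
        if len(self._recent) >= self.max_orders_per_sec:
            return False, "order rate limit exceeded"

        self._recent.append(now)
        return True, "ok"
```

The same checks can face inward (your own algos) or outward (sponsored-access clients and counterparties); the latency cost is a handful of comparisons per order, which is the "small latency hop" traded against bankruptcy.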