Fraud

Monday, June 13, 2011

Tickets Please: Technology to Keep You on the Train

Posted by Dan Hubscher

The ticket to preventing and deterring rogue trading could well be technology.  Although most financial services firms have some form of surveillance and monitoring technology in place, it isn't good enough to keep them from getting kicked off the regulation train.

Financial services firms risk running afoul of new regulations because their technology is no longer the "right" technology. The burning question now is: what will the "right" technology be in an unpredictable future?

Detecting, preventing and deterring market abuse can only be effective when it permeates financial services activities from pre-trade to settlement. The number of different places that trading activity occurs is constantly increasing. Trading can be done at the office, or via cell phone. Or a trader can begin to work a deal at the office, go for lunch and finish it via instant messaging with his broker.

Surveillance is necessary in order to provide transparency in trading activity, whether it is via formal trading platforms, instant messenger, e-mail, Twitter or other social media sites, or even old-fashioned phone conversations. Compliance officers need full visibility in order to spot and prevent abusive trading activity - and that vision has to encompass it all: every message, every trade, every conversation, every Tweet has to be captured and stored in a database for on-the-spot or future scrutiny.

The technology of yesterday cannot cope with the audit trail of today. Those audit trails also need to be built in real time, not just reconstructed from history - which means current methods that rely on historical analysis of already-stale data just won't do. Analysis has to be done both in real time and historically in order to make sense. It has to span asset classes including cash equities, interest rates, swaps, commodities and OTC derivatives - cleared or not. Silos can no longer exist in monitoring; trading today is truly democratic, crossing borders, asset classes and currencies.

New market abuses seemingly proliferate by the day. Some are really the old ones - only done faster (like front running), but there are fresh ones too. Just last week the SEC suspended trading in 17 OTC microcap stocks because of doubts over the publicly available information on the companies.  Here, investigators from different offices and working groups used “a coordinated, proactive approach to detecting and deterring fraud.”

Packaged applications cannot handle new rules or monitor new types of market abuses. Add flash crashes, mini flash crashes, cross-asset crashes (we call these "splash crashes") to the mix and a picture starts to reveal itself. In this picture there are Chief Compliance and Technology Officers handing the regulatory conductors their tickets to prove that they have the right technology, and then getting kicked off the train because they have the wrong tickets.

Flexible, extensible surveillance and monitoring technology is the top-up fare needed to stay on the train. If you can see every move your traders make today, you can take control. If you can see every move your traders make down the line, you will stay in control.  A real-time platform that can handle the massive, increasing volumes of transactions and events in today's electronic marketplaces, and handle the rules of tomorrow’s, is imperative to staying on top of rapidly changing regulations. 

-Dan

 

Friday, February 18, 2011

Fixing the Fat Fingered Faux Pas Epidemic

Posted by John Bates

The age of electronic execution has brought with it a niggling problem - fat fingered trading. A fast-paced, stressful trading environment creates an ideal incubator for hatching mistakes. They are often simple mistakes and involve pressing the wrong key on the keyboard, possibly at the wrong time and/or even for the wrong thing.

Anecdotal evidence would have us believe that fat fingered trading is rife. Real-life fingers pushing the wrong buttons include incidents as recent as January when human error caused the Canadian dollar to slump in Asian trading hours. The U.S. dollar shot up from around C$0.99 to over C$1.0030 against the Canadian dollar before immediately dropping back down, Reuters reported. The spike in the rate had little long-term impact on the market as a whole. These sorts of mini-flash crashes happen on a regular basis in many instruments. 
Higher profile fat fingers include:

·         In September 2006 a Bank of America trader’s keyboard was set up to execute an order when a rugby ball landed on it and executed the $50 million trade ahead of schedule. 
·         In June 2005 a Mizuho Securities trader sold 610,000 shares at 1 yen instead of 1 share at 610,000 yen, at a loss of approximately $225 million.
·         In October 2002 a Bear Stearns trader caused a 100-point drop in the Dow Jones Index after entering a 4 billion share sell order rather than 4 million. 
·         In May 2001 a Lehman Brothers dealer in London wiped £30 billion off the FTSE when he inadvertently keyed in £300 million for a trade instead of £3 million, causing a 120-point drop in the FTSE 100.
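Most of these blunders share a signature: an order wildly out of line with normal size or price. A minimal pre-trade sanity check - sketched below in Python, with hypothetical function names and limits - illustrates the kind of guard that could have flagged them (the Mizuho order, for instance, trips both the size and the price checks):

```python
def validate_order(quantity, price, reference_price, avg_daily_volume,
                   max_notional=50_000_000, max_adv_fraction=0.01,
                   max_price_deviation=0.10):
    """Pre-trade 'fat finger' sanity checks. Returns a list of violations;
    an empty list means the order may proceed. All limits are illustrative."""
    violations = []
    # total value of the order should stay under a notional cap
    if quantity * price > max_notional:
        violations.append("notional exceeds limit")
    # order size should be a small fraction of average daily volume
    if quantity > max_adv_fraction * avg_daily_volume:
        violations.append("quantity exceeds fraction of average daily volume")
    # price should be close to a trusted reference price
    if abs(price - reference_price) / reference_price > max_price_deviation:
        violations.append("price deviates too far from reference")
    return violations
```

A Mizuho-style order - 610,000 shares at 1 yen against a 610,000 yen reference - would fail on both order size and price deviation before ever reaching the exchange.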

Human error is part of being human. But what about algorithmic error? Algos, created by humans, can also have fat finger days. Last year the New York Stock Exchange fined Credit Suisse Securities $150,000 for failing to control an algorithm that went haywire in 2007, flooding the exchange's trading system with hundreds of thousands of erroneous orders.

Honest mistakes are one thing, but there are also an increasing number of incidents of rogue traders, fraud and greed-gone-wrong. One famous rogue was the 2009 so-called drunken trader - a broker at PVM Futures who clocked up a $10 million loss from trading while intoxicated. A much larger and more grandiose deception occurred in 2008 when Jérôme Kerviel was discovered to have hidden losses valued at approximately €4.9 billion at Société Générale. And a commodities trader at MF Global lost $141.5 million in 2007 on a big short position in wheat futures because his management had turned off the trading limit controls. They claimed that the controls “slowed things down” - a classic greed-gone-wrong story. 

Looking at this compendium of fat finger or algorithmic errors and fraudulent or rogue trading, I think we have been lucky so far that their impact has not been more serious. The scary thing is that things could have gone a lot more wrong. The flash crash illustrated how quickly things can move and how inter-related the markets are. In 2010 we saw incidents in equities, futures, FX and oil markets. A cross-asset “splash crash” that cascades across multiple markets is theoretically possible – whether it’s accidental or even premeditated. 

Fears of algorithmic terrorism, where a well-funded criminal or terrorist organization could find a way to cause a major market crisis, are not unfounded. This type of scenario could cause chaos for civilization and profit for the bad guys and must constitute a matter of national security.

So what can be done? Better real-time monitoring and market surveillance, real-time risk and internal policing – by trading firms, trading venues and regulators. The markets should be free – but protected. Real-time visibility as to what is going on and real-time response to make course corrections when needed is crucial. Discovering or predicting problems and then being able to take immediate corrective action may help to save the world. 
 

Monday, February 07, 2011

The Trouble with Algorithms: Wild Children or Reckless Parents?

Posted by Dan Hubscher

Algorithms and high frequency trading have been blamed for everything from the credit crisis to the May 6th flash crash and high speed market abuse, and have attracted unwanted interest from regulators on both sides of the pond. But questions remain whether these tools are really computer models gone wild or whether they are the spoiled children of reckless parents - regulation.

According to Dictionary.com, the definition of reckless is to be utterly unconcerned about the consequences of an action. One could argue that the Regulation National Market System was designed without regard to some of the consequences down the line. Blaming the wild children, algorithms, is to ignore that the parents - RegNMS - were somewhat reckless in designing the system.

In a blog on the TABB Forum on January 24th,  Steve Wunsch of Wunsch Auction Associates explained that the system was working the way it had been designed.

"What really went wrong in the stock market on May 6? Prices aside, all of the plumbing was working fine. Not only were there no fat fingers, rogue algos, manipulators or terrorists at work, there were no significant breakdowns of order routing systems or data systems or any other elements of the stock trading infrastructure," wrote Wunsch.

Meanwhile, the National Commission on the Causes of the Financial and Economic Crisis in the United States released its report (Jan. 27th) and HFT was not mentioned at all. Nor were algorithms, as such, but 'computer models' were vindicated. The report said: "The crisis was the result of human action and inaction, not of Mother Nature or computer models gone haywire."

And it criticized regulators for not doing their jobs: “Widespread failures in financial regulation and supervision proved devastating to the stability of the nation’s financial markets.”

The result of the credit crisis and market meltdown in Sept. 2008 was the Dodd-Frank Act, which attempts to prevent another Sept. 2008.  But the flash crash insinuated itself into the picture, pointing out that no one had baked that possibility into the market reforms.  And, ironically, the market reforms set the stage for more flash crashes.

At the Tabb Forum Derivatives Reform Event a couple of weeks ago, many commented that Dodd-Frank puts in place a market structure that injects the equities and futures model - fragmentation, price transparency, streaming quotes - into other asset classes. This theoretically invites algorithmic and high frequency trading, and with them the threat of more flash crashes. At the event, Peter Fisher of BlackRock said that what keeps him up at night is a flash crash in the interest rate market, citing the market structure argument, and pointed out specifically that this possibility was not envisioned in Dodd-Frank. 

With more and more asset classes becoming tradable electronically, partly thanks to mandated swap execution facilities (SEFs), the possibility of truly wild or rogue algos and market abuse grows ever more likely. And, as we pointed out last week, the very real possibility of a flash crash splashing across asset classes - we call it a "Splash Crash" - rears its ugly head.

Although the evidence against algos gone wild is thus far mostly anecdotal, the belief that they can and will go wrong permeates the industry. Market abuse such as insider trading and manipulation is undoubtedly more prevalent. Fat finger errors are easier to prove, and are a fact of life in a high speed, high stress electronic marketplace.

Stay Calm and Remain Vigilant

The antonym of recklessness is vigilance. The regulatory parents must be more vigilant when it comes to their arguably brighter and naughtier children - algorithms and HFT. With algorithms and HFT comes the possibility of mistakes and abuse. Many more firms outside of the equities world are embracing HFT, and their inexperience can cause market disruptions. A flash crash in oil or other commodities - or even foreign exchange - is not to be scoffed at. In fact, many commodities markets are much less liquid and homogeneous than equities, and can be even more vulnerable to mistakes or manipulation.

There are a number of best practices that can help mitigate the risk of algos going wild:

  • Diligent backtesting – using historic data and realistic simulation to ensure many possible scenarios have been accounted for. A backtesting process needs to be streamlined of course – as short time to market of new algos is key.  
  • Real-time risk monitoring - building a real-time “risk firewall” into your algo environment. Just like a network firewall stops anomalous network packets reaching your computer, so a risk firewall should stop anomalous trades getting to trading venues.
  • Real-time market surveillance. Even if trades do not breach risk parameters, they may breach compliance rules, regulations or may be perceived by a regulator as market manipulation.
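The "risk firewall" in the second bullet can be thought of as a filter sitting between the algo engine and the venue, rejecting anomalous orders before they leave the firm. A simplified sketch in Python - the class name, limits and interface are hypothetical, not any vendor's actual API:

```python
import time
from collections import defaultdict, deque

class RiskFirewall:
    """Sits between trading algos and the venue: blocks any order that
    would breach a net position limit or exceed a per-second order-rate
    cap. Limits here are illustrative."""

    def __init__(self, max_position=100_000, max_orders_per_sec=100):
        self.max_position = max_position
        self.max_orders_per_sec = max_orders_per_sec
        self.positions = defaultdict(int)   # symbol -> net position
        self.recent = deque()               # timestamps of accepted orders

    def allow(self, symbol, signed_qty, now=None):
        """Return True if the order may pass; False if it is blocked."""
        now = time.time() if now is None else now
        # rate check: drop timestamps older than one second
        while self.recent and now - self.recent[0] > 1.0:
            self.recent.popleft()
        if len(self.recent) >= self.max_orders_per_sec:
            return False
        # position check: would this order breach the net position limit?
        if abs(self.positions[symbol] + signed_qty) > self.max_position:
            return False
        self.recent.append(now)
        self.positions[symbol] += signed_qty
        return True
```

Just as a network firewall drops anomalous packets, every order is checked against the firm's limits in real time; anything that would breach them never reaches the trading venue.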

An algorithm is a tool in a trader's toolkit, not a naughty wild child. If the regulator parents are vigilant, and algos are subject to practical controls and monitored constantly for performance and for errors, market participants can sense and respond to market patterns before the aberrations or errors have a chance to move prices.

-Dan

Thursday, November 04, 2010

A postcard to Jeremy Grant

Posted by Giles Nelson

Jeremy Grant, editor of FT Trading Room at the Financial Times, recently asked for explanations "on a postcard" about why speed is a force for good in financial markets, or put another way, to explain what the benefits are of high frequency trading. I've just come back from Mexico where I was addressing the Association of Mexican Brokers and during my visit I thought I'd write that postcard. So here it is:

 

Dear Jeremy

I saw your request for postcards recently, and as I'm travelling I thought I'd drop you one. There's not a lot I like doing more than explaining the benefits of so-called "high frequency trading".

I would suggest that you think of high frequency trading, or HFT, as being just the latest stage in the evolution of electronic trading. And this, as you know, has evolved very rapidly over the last decade because of cheaper and faster computers and networks. It's led to many innovations and benefits: electronic crossing networks, algorithmic trading, online retail trading, smaller order sizes, increased overall trading volume, more price transparency, greater trader productivity, more accessible liquidity, tighter spreads between buy and sell prices, lower broker commissions, and competition between exchanges and hence smaller exchange fees - none of these would have happened without electronic trading. Nor could MiFID have happened; it simply wouldn't have been financially viable for the many alternative European equity-trading venues to launch without cheap access to networks and computers. Without these we would still have greedy, monopolistic exchanges with high transaction prices.

HFT is just the latest step in a technology driven evolution. You can't just look at it in isolation.

"Ah", you exclaim, "but high frequency trading is a step too far. Trades happening far faster than the blink of an eye. Surely that can't be right?"

So what if trades happen quickly? Things "going too fast" is a common concern. In 19th century Britain, people were worried about trains going faster than 30mph. They thought that passengers would suffocate or that as the train reached a corner it would simply come off the rails! And to those that say trading happens too quickly, at what speed should it occur? If not micro or milliseconds, should it be a second, a minute, an hour? Who's going to decide? Any choice is entirely arbitrary anyway; time is infinitely divisible.

There are plenty of things that happen too fast for humans to comprehend - human nerve impulses travel at more than 100m per second, yet we function successfully. Why? Because we have the monitoring systems in place that ensure the information from the nerves is processed correctly. Put a finger on a hot coal and it will be retracted immediately - quicker than we can consciously think. And if a 200mph train goes through a red light then warning bells will ring and the train will be automatically stopped.

And so to the main point. Trading speed, per se, is not the problem. But, yes, problems there are. Markets, particularly in Europe and the US, are now very complex: fast moving, multi-exchange, with different but closely interlinked asset classes. It is this complexity we find difficult to understand. Speed is only one facet of it. We imagine that an armageddon incident could occur because we know that the markets are not being monitored properly. Regulators freely admit this - Mary Schapiro recently said that the SEC was up to two decades behind in its use of technology to monitor markets. And because we know that the people in charge don't know what's going on, we get scared.

It doesn't have to be like this. The same technological advances that led to the evolution of HFT can be used to ensure that the markets work safely, by ensuring that limits are not exceeded, that an algorithm "going crazy" can't bring down an exchange, that a drunken trader can't move the oil price and that traders are dissuaded from intentionally trying to abuse the markets.

Doing things faster is a human instinct. Faster, higher, stronger. The jet engine, the TGV, the motorway. Would we really go back to a world without these?

Thursday, October 21, 2010

Breaking the Machines

Posted by John Bates

(This piece originally appeared on TabbForum - linked here)

 

The furore over high frequency trading and rogue algorithms is turning these important trading tools into fodder for the mainstream media. As part of the Commodity Futures Trading Commission's Technology Advisory Committee (TAC), I had high hopes that my colleagues and I would be an instrumental part of solving some of the issues that very publicly dog our markets, including mechanisms for detecting or even preventing another flash crash, stopping algorithms from going out of control and curbing rogue traders. I believe the vanguard of such market improvements is not draconian restrictions on trading algorithms but rather regulator-led best practices and market policing. If the right measures are implemented and suitably publicized, they would address the market’s nervousness and have algorithms smelling fragrant once more.

 

I was thus a little disheartened by the second TAC meeting last week (Technology: Achieving the Statutory Goals and Regulatory Objectives of the Dodd-Frank Act). Given the public fear that algorithms and high frequency trading are evil, I was concerned when one commissioner even went so far as to ask the question in his opening remarks as to whether algorithms should be banned completely. If this ever did happen in the US, heaven help our economy. I would equate such an action to the Luddites – a group in 19th Century Britain that broke machines to protest against the industrial revolution. Algorithms are not evil; there are many positive aspects of algorithms and HFT. They minimize the market impact of large trades, lower the cost of execution, make more open and efficient markets, allow trading venues to evolve faster, encourage entrepreneurship and increase trader productivity, among many other things. Banning what is essentially the new industrial revolution, and now an integral part of electronic trading, could take us beyond a double dip recession and back into the dark ages.

 

A few key points came out in the flash crash report that really need to be emphasized. A key one is that there is a difference between algorithmic execution strategies and high frequency trading strategies. The former are manually set up and designed to break up a large trade, typically executed by a broker on behalf of a buy-side customer. The latter are much more automated, continuously looking for trading opportunities to act on, and typically operated by a prop shop or hedge fund. The latter sounds scarier – but it was actually the former – or one particularly extreme instance of the former – that got the lion’s share of the blame in the joint SEC-CFTC report. HFT was pretty much exonerated. It was really human error in the way the execution algo that traded the E-mini was set up that was at fault. In fact, one of the TAC participants made the point that many of the HFT algos had smarter monitoring built in – which made them pull back from the market when it started to go haywire. Yes, that withdrew liquidity – but the HFT algos behaved sensibly given the circumstances.

 

Commissioner Scott O’Malia asked the question whether a rogue algorithm is the same as a rogue trader. Great question! An algorithm does not “decide” to go rogue, unlike a human rogue trader who is more deliberate. Usually a rogue algo is a mistake – such as Infinium’s algo that went wrong and fired thousands of orders into the market in February (http://www.reuters.com/article/idUSTRE67O2QQ20100825) or CME’s test algo that fired phantom orders into the market in September (http://www.ft.com/cms/s/0/706c45dc-c00a-11df-9628-00144feab49a.html). But rogue algos can threaten the well-being of a marketplace just as a rogue trader can. Rather than banning or restricting HFT and algos it would be much more productive to look at how they and the market can benefit from effective controls.

 

Some suggestions that I made on the TAC as to how we might provide more confidence around algo trading are as follows:

 

Firstly, market participants should be mandated to do better back-testing and algo monitoring to help prevent rogue algorithms and scoundrel traders from entering the market. Testing the execution algo that went wrong on May 6th under realistic market conditions might have prevented it going live. More intelligent monitoring might have made it pull out of the market before it did deep damage. Real-time monitoring can detect and respond immediately to dangerous market conditions, “fat finger” or algo errors and trading risk exposures being exceeded.  As illustrated by the HFT algos that stepped out of the market on May 6th – some firms have better monitoring technology than others! The CFTC and SEC could provide best practices guidance and maybe even recommend data sets, simulators and pre-production processes to help with this.

 

Secondly, exchanges should continue to enhance their monitoring and surveillance systems. Clearly we’ve not perfected this yet, given that a rogue algo within the CME managed to fire phantom orders into the market as recently as September. Also, to ensure a consistent response to market crises, all trading venues in a particular asset class should have consistent circuit breakers that operate under the same circumstances. This would avoid some of the problems discussed in the flash crash report.
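Consistent circuit breakers across venues could follow a simple shared rule: halt trading in an instrument when its price moves more than a set percentage within a rolling time window. A toy sketch of that logic - the class name and thresholds are illustrative, not any exchange's actual parameters:

```python
from collections import deque

class CircuitBreaker:
    """Signals a halt when the latest price moves more than pct_threshold
    away from the oldest price still inside a rolling window."""

    def __init__(self, pct_threshold=0.10, window_sec=300.0):
        self.pct_threshold = pct_threshold
        self.window_sec = window_sec
        self.history = deque()   # (timestamp, price) pairs

    def on_trade(self, ts, price):
        self.history.append((ts, price))
        # discard prices that have aged out of the window
        while ts - self.history[0][0] > self.window_sec:
            self.history.popleft()
        ref_price = self.history[0][1]
        if abs(price - ref_price) / ref_price > self.pct_threshold:
            return "HALT"
        return "OK"
```

If every venue trading the same asset class ran the same rule with the same parameters, a halt on one venue would coincide with halts on the others, closing the gap the flash crash report highlighted.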

 

And the CFTC (as well as the SEC) needs to be "mission control", monitoring across all markets and providing an early warning system. If firms believe they can be watched in real time, they will be much more careful. Unfortunately, the CFTC’s Chairman suggests that there is no budget for such technology and that the agency will have to rely solely on controls by the exchanges and trading venues. This is unacceptable. The importance of trading to our economy means that ensuring confidence in our markets, while allowing the world’s most advanced forms of algo trading - with the necessary safety measures to prevent meltdowns - is a matter of national security! The regulators are the US Marshals to HFT's Wild West. The CFTC should go to Congress and make the case for a bigger budget. And they should strike while the iron is hot.

 

The flash crash may have been a mixed blessing, having pointed out many market structure issues that the regulators should be striving to correct or control. Until that day American stock markets were the envy of the world, the model for modern trading -- fast, stable, efficient and for the most part transparent (http://tinyurl.com/29bpr4r). That perception has changed and the rest of the world is aiming to avoid, not mimic, our model. It is critical that the US take the necessary steps to remain the shining example of capital markets. The technology is there, it simply needs to be used. Most importantly we must not allow negative publicity to lead us into Luddite-style regulation and break the machines that are fuelling this new industrial revolution.

 

Monday, October 11, 2010

When Does a Rogue Become a Scoundrel?

Posted by John Bates

Now that the Dodd-Frank Wall Street Reform Act is signed into law, there lies a mountain of work ahead for regulators. Making sense of the 2,000+ page document and turning it into viable recommendations and regulations will be an arduous process.

 

The Commodity Futures Trading Commission's second Technology Advisory Committee meeting titled “Technology: Achieving the Statutory Goals and Regulatory Objectives of the Dodd-Frank Act,” will be held on October 12, 2010 at 1:00 p.m., in Washington, D.C. (http://tinyurl.com/2vfdp4n). At the meeting, my committee colleagues and I will discuss some of these goals and objectives. Specifically, as a result of the SEC & CFTC's report on the May 6th flash crash, CFTC Commissioner Scott O'Malia has said that he wants to take a look at whether algorithms that cause disruption in markets - rogue algorithms - should be treated as if they were rogue traders.

 

Commissioner O’Malia said in the announcement of the October 12 meeting: “While I do not believe that the flash crash was the direct result of reckless misconduct in the futures market, I question what the CFTC could have done if the opposite were true. When does high frequency or algorithmic trading cross the line into being disruptive to our markets? And, along those same lines, who is responsible when technology goes awry? Do we treat rogue algorithms like rogue traders?"

 

This is an interesting topic. When does an algorithm 'go bad'? Is it the algorithm's fault? Of course not, an algorithm does not decide to go rogue. It is down to human error - either in the programming or the execution thereof. In the case of the flash crash a mutual fund chose a 'dumb' execution algorithm preset with inappropriate parameters to execute a large futures sell order in a market that was - by all accounts - ready to plummet. This circumstance illustrates how rogue algorithms can evolve as an unintended consequence of circumstance and/or human misjudgment.

 

When a trader goes rogue it is more deliberate. It can be because he is losing money and hiding it - as in the case of Jerome Kerviel at SocGen, or maybe he had too much to drink at lunchtime and was feeling invincible - like Steve Perkins at PVM. The former lost the bank over $6bn, the latter lost his brokerage $10m. These were very human errors, effectively the work of scoundrels.

 

What rogue traders and rogue algorithms have in common is that both can, in many circumstances, be detected early - or even prevented - through the use of better technology. Comprehensive pre-trade analysis, including backtesting algorithms under a wide range of circumstances, could have prevented the 'dumb' May 6th algo from having its way with the market. Thorough real-time risk management and monitoring could have spotted Kerviel's limit-busting trading patterns and his hiding the trades. Pre-trade risk controls would have kicked the PVM trader out of the system before he got in too deep.

 

It is no longer acceptable to blame rogues and scoundrels for market anomalies or for banks, brokers and buyside firms losing money. The technology is there, it simply needs to be used.

 

Wednesday, September 15, 2010

I Would’ve Got Away With It Too - If It Wasn’t for You Pesky Kids….

Posted by John Bates

Strange things are happening in the markets! As you know if you’ve been following this blog, quite a few scares, horrors and apparitions have been sighted in equities, futures, FX and oil markets over the last few months. But fortunately the Mystery Machine has rolled into town and Scooby, Shaggy and the gang are investigating…

 

The latest spooky mystery is the “phantom orders” that appeared to scare participants on the CME (read more about it here). For six minutes on Tuesday, unexplained orders – possibly caused by a “fat finger” error or an “algo gone wild” – prompted the CME to invoke Rule 587, which gives it powers “including without limitation, closing the market, deleting bids and offers, and/or suspending new bids and offers”. So although they weren’t sure what was going on, the CME was at least able to respond.

 

Another report this week shows how those involved in alleged market abuse are starting to be unmasked. Last week I wrote a blog posting called “Algos Can Get Stuffed” which was also featured on the Tabb Forum. In it I talked about the possibility of firing orders into the market with the intention of misleading the market or slowing the market down – and even how some link this phenomenon with the flash crash. This week a trading firm called Trillium was fined by FINRA for using illegal trading practices (read more about it here).  Trillium was fined $1m for sending orders aimed at deceiving investors. Nine traders entered buy and sell orders in patterns that aimed to manipulate the prices of instruments. And they did this 46,000 times! This “layering” enabled Trillium to trade at a profit once they’d manipulated the price.

 

These two incidents show that awareness of the problems we’ve been writing about on this blog has increased radically. Trading venues are more aware that algos gone wild and fat fingers can cause market panics and manipulate prices. Regulators are more aware that high frequency trading can be used as a weapon for market manipulation.

 

But we can’t rest on our laurels. Maybe we got lucky this time. As market data volumes continue to increase we need to have more advanced capabilities to detect problems and abuse as it’s happening, and recommend actions to take in response. Let’s ensure the Mystery Machine is fitted with the latest real-time surveillance equipment. Let’s enable the gang to unmask more villains at the haunted “high frequency manor” inspiring the legendary outburst of “I would’ve got away with it if it wasn’t for you pesky kids….”

Wednesday, September 08, 2010

Algos can get stuffed!!

Posted by John Bates

As regulators continue to look into the causes of the May 6th flash crash, some high frequency trading approaches are coming under scrutiny. In particular the concept of “quote stuffing”, where algorithms send so many orders into the order book that the market cannot possibly respond, has come under fire from market participants and the press.

 

The SEC, having reportedly decided that quote stuffing probably did not have a major role in the flash crash, is now taking aim at the practice to see if it puts some investors at a disadvantage by distorting stock prices (http://tinyurl.com/264kr3o). The CFTC is looking at data from the market data firm Nanex and mulling how to address quote stuffing in futures markets (http://tinyurl.com/3a7w7sv).

 

Meanwhile, concerning incidents continue to happen in the market. As recently as last week there was an incident that prompted Christopher Steiner at Forbes to write a story called “Did we dodge another flash crash on September 1st?” The story describes how on September 1st at 10am quote volumes ballooned – as they did on May 6th. In fact quotes reached 275,000 per second, compared with 200,000 on May 6th. Unlike the flash crash, though, there wasn’t a dramatic fall in prices. However, the bids and offers did cross for a time – leading to high frequency traders taking advantage of arbitrage opportunities. This data was again exposed by the market data firm Nanex – and left the market wondering if quote stuffing by high frequency traders was behind the spikes.
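The crossed bids and offers described in the Forbes story are easy to detect mechanically: a book is crossed whenever the best bid exceeds the best ask, and locked when they are equal. A trivial illustrative check:

```python
def book_state(best_bid, best_ask):
    """Classify a top-of-book pair. Crossed or locked books signal data
    glitches or fleeting arbitrage opportunities worth investigating."""
    if best_bid > best_ask:
        return "crossed"
    if best_bid == best_ask:
        return "locked"
    return "normal"
```

A surveillance system running this check against every consolidated quote update would have flagged the September 1st anomaly as it happened, rather than days later from archived data.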

 

The world is waking up to the fact that high frequency and algorithmic trading have quietly become part of the market fabric, and the world does not seem to be too happy about it. HFT and algorithms are being "demonized", as one FT article put it, and I agree. I also think the hype is overblown.

 

Once trading became automated, trading strategies naturally morphed to take advantage of the available technology and higher speeds. High frequency statistical arbitrage techniques can also mean more order cancellations, some of which may - wittingly or unwittingly - fall into the quote stuffing category. Those involved in intentional quote stuffing as a strategy need to be held to account. But to demonize all strategies, or to call for banning them, is a step backward. What is needed is a framework by which to police them - and to prevent them from going wrong. The technology to do this is already available. For example, an area I’ve had a lot of experience in is the use of complex event processing to provide a platform for high frequency, multi-venue market surveillance. Such a system can monitor, in real-time, how many quotes per second there are on each ticker symbol, the ratio of quotes to trades, when large spikes are emerging, and many other useful analytics and patterns.
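As a rough sketch of the kind of pattern such a surveillance system watches for - the class, thresholds and alert wording below are purely illustrative, not any particular CEP product's API - here is a sliding-window monitor for per-symbol quote rates and quote-to-trade ratios:

```python
from collections import defaultdict, deque

class QuoteSurveillance:
    """Tracks per-symbol quote rates and quote-to-trade ratios
    over a sliding time window (thresholds are illustrative)."""

    def __init__(self, window_secs=1.0, max_quotes_per_window=200_000,
                 max_quote_trade_ratio=50.0):
        self.window = window_secs
        self.max_quotes = max_quotes_per_window
        self.max_ratio = max_quote_trade_ratio
        self.quotes = defaultdict(deque)   # symbol -> timestamps of recent quotes
        self.trades = defaultdict(deque)   # symbol -> timestamps of recent trades

    def _expire(self, dq, now):
        # Drop events that have fallen out of the sliding window.
        while dq and now - dq[0] > self.window:
            dq.popleft()

    def on_quote(self, symbol, ts):
        dq = self.quotes[symbol]
        dq.append(ts)
        self._expire(dq, ts)
        return self.check(symbol, ts)

    def on_trade(self, symbol, ts):
        dq = self.trades[symbol]
        dq.append(ts)
        self._expire(dq, ts)

    def check(self, symbol, now):
        """Return a list of alert strings for this symbol at time `now`."""
        alerts = []
        q, t = self.quotes[symbol], self.trades[symbol]
        self._expire(q, now)
        self._expire(t, now)
        if len(q) > self.max_quotes:
            alerts.append(f"{symbol}: quote rate spike ({len(q)} per window)")
        trades = max(len(t), 1)  # avoid division by zero when no trades printed
        if len(q) / trades > self.max_ratio:
            alerts.append(f"{symbol}: quote/trade ratio {len(q) / trades:.0f}")
        return alerts
```

A production CEP engine would evaluate these windows continuously across every venue and symbol at once; the sketch just shows the shape of the analytics described above.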

 

But regulation of high speed trading practices has fallen short to date. Regulators have not had the funds, the technology, the power or the expertise to follow and control high speed trading. However, it is good to see that progress is now being made. CFTC commissioner Bart Chilton wrote last week, in an article entitled “Rein in the cyber cowboys”: “There may be some cyber cowboys out there and they could be giving respectable traders a bad name”. His colleague, CFTC commissioner Scott O'Malia, told Reuters last week that if traders are flooding the market with orders with the intention of slowing others down, the regulator would consider addressing quote stuffing under new rules in the financial regulation bill that deal with disruptive trading practices.

 

It is possible that quote stuffing is causing more problems than just slowing down the natural flow of trades. Trading behavior patterns suggest that these quotes are a distraction to other traders. There are evident patterns in which the quote "stuffer" consistently traded first - possibly by distracting others. And the disruption can cause the bid and offer to cross - providing a nice arbitrage opportunity for those who are not distracted! All of this needs to be looked into further. I fully support the CFTC's and SEC's efforts to get to the bottom of not just the flash crash, but HFT and algorithmic trading practices in general. These are now integral to the equities and futures markets, and increasingly so in FX, fixed income and energy. What we need is better policing of the markets to protect the honest ranchers from the cyber cowboys.

Tuesday, August 31, 2010

Taming the Wild Algos

Posted by John Bates

"And now," cried Max, "let the wild rumpus start!"

— Maurice Sendak: Where the Wild Things Are

 

It’s not just equities and futures markets where strange stuff happens! An “algo gone wild” was spotted in the oil market (it actually happened earlier this year) and intrepid Reuters journalists got to the bottom of it.

 

High frequency trading firm Infinium Capital Management is at the center of a six-month probe by CME Group (and reportedly the CFTC) into why its trading program malfunctioned and racked up a million-dollar loss in about a second, just before markets closed on Feb. 3. The algorithm, which was brand new, went live just 4 minutes before the end of trading. It fired 2,000-3,000 orders per second before being shut off. The oil price surged $1, then slid $5 over the course of the next two days. Read the full story here:

http://www.reuters.com/article/idUSTRE67O2QQ20100825

 

I know Infinium's CEO, Chuck Whitman, from the CFTC Technology Advisory Committee - he's a good guy and very knowledgeable. I believe him when he says his wild algos had no malicious intent - the algos were simply broken and should never have been put live.

 

With algorithms and HFT comes the possibility of mistakes. Many more firms outside of the equities world are embracing HFT, and their inexperience can cause market disruptions such as the Feb 3rd CME incident. A flash crash in oil or other commodities - or even foreign exchange - is not to be scoffed at. In fact, many commodities markets are much less liquid and homogeneous than equities, and can be even more vulnerable to mistakes or manipulation. In the case of Infinium, the algo caused trading volume to spike nearly eightfold in less than a minute. It was a classic case of an algo running wild until it faltered and 'choked'. This is not how HFT strategies are supposed to work.

 

There are a number of best practices that can be used to mitigate the risk of algos going wild:

 

The first best practice is diligent backtesting - using historical data and realistic simulation to ensure that as many scenarios as possible have been accounted for. What does the algo do in a bull market? In a bear market? At the open and the close? When unexpected spikes occur, during a flash crash, or when non-farm payrolls or other economic news is released? Of course there's always the possibility of a "black swan" scenario - but then there's always the possibility of an earthquake in London, and I bet the buildings aren't built to withstand one. It's a matter of covering the likely possibilities as well as you can. The backtesting process also needs to be streamlined, as short time to market for new algos is key.
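A minimal illustration of the scenario-sweep idea - the toy momentum algo, the price series and the scenario names are all invented for the example; a real backtest would replay tick-level historical data through the actual strategy:

```python
def toy_algo(prices):
    """Toy momentum strategy: buy 1 unit on an up-tick, sell 1 unit
    (if long) on a down-tick. Returns final P&L marked at the last price."""
    position, cash = 0, 0.0
    for prev, curr in zip(prices, prices[1:]):
        if curr > prev:
            position += 1
            cash -= curr
        elif curr < prev and position > 0:
            position -= 1
            cash += curr
    return cash + position * prices[-1]

# Each scenario is a deliberately different market regime the algo must survive.
scenarios = {
    "bull":        [100, 101, 102, 103, 104],
    "bear":        [104, 103, 102, 101, 100],
    "flash_crash": [100, 101, 80, 99, 100],
    "flat":        [100, 100, 100, 100, 100],
}

results = {name: toy_algo(prices) for name, prices in scenarios.items()}
worst = min(results, key=results.get)  # the regime this algo handles worst
```

Running every candidate algo across a battery of regimes like this - before it ever touches a live market - is the point; the worst-case scenario tells you where the algo needs hardening or a kill condition.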

 

A second best practice is building a real-time risk firewall into your algo environment. Just as a network firewall stops anomalous packets from reaching your computer, a risk firewall should stop anomalous trades from reaching trading venues. These anomalous trades might be human- or computer-generated: "fat finger" errors, breaches of risk exposure limits (for a trader, a desk or an institution), or algos gone wild (e.g. entering spurious loops and spitting out anomalous orders). Real-time risk monitoring is a second level of protection for the problems you don't catch in backtesting.
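A sketch of what such a risk firewall's gatekeeping might look like - the class, field names and thresholds here are hypothetical, chosen only to show the three checks just described:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str      # "buy" or "sell"
    qty: int
    price: float

class RiskFirewall:
    """Sits between the order source and the venue; every order must pass
    all checks before it is released. Limits are illustrative."""

    def __init__(self, max_order_qty=10_000, max_notional_exposure=5_000_000.0,
                 max_orders_per_sec=1_000):
        self.max_order_qty = max_order_qty
        self.max_exposure = max_notional_exposure
        self.max_rate = max_orders_per_sec
        self.exposure = 0.0
        self.recent = []  # timestamps of orders released in the last second

    def allow(self, order, now):
        """Return (True, "") if the order may pass, else (False, reason)."""
        # Fat-finger check: an absurdly large single order.
        if order.qty > self.max_order_qty:
            return False, "fat finger: qty too large"
        # Exposure check: projected notional against the desk limit.
        notional = order.qty * order.price
        if self.exposure + notional > self.max_exposure:
            return False, "exposure limit breached"
        # Runaway-algo check: too many orders released in the last second.
        self.recent = [t for t in self.recent if now - t < 1.0]
        if len(self.recent) >= self.max_rate:
            return False, "order rate cap hit (possible algo loop)"
        self.recent.append(now)
        self.exposure += notional
        return True, ""
```

In Infinium's case, an order-rate cap alone would likely have tripped within the first second; that is exactly the kind of error this layer exists to absorb.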

 

A third best practice is to use real-time market surveillance in your algo environment. Even if trades do not breach risk parameters, they may breach compliance rules or regulations, or may be perceived by a regulator as market manipulation (by accident if not by design). Detecting these patterns as they happen enables good internal policing by trading firms, rather than investigation or prosecution by regulators.

 

An algorithm is a tool in a trader's toolkit, and it needs to be taken care of as such. If it is well-oiled and the trader or quant or risk manager monitors its progress then the algo will do its job quickly and nicely. If the trader/quant/risk manager doesn’t properly prepare the algo or ignores the algo and lets it get rusty, so to speak, it could lose its edge and run amok. Algorithms must be monitored constantly for performance and for errors, and sometimes tweaked on-the-fly to ensure best results. A good algorithmic trading platform will enable trading firms to do just that.

 

Trading firms are not the only ones who need to be on guard for possible algos gone wild. In the case of Infinium, the regulators and the exchange were also slow on the uptake. This shows that everyone needs to be proactive in using the correct tools to monitor algorithmic trading. Sensing and responding to market patterns before the aberrations or errors have a chance to move prices is the right thing to do - in all asset classes. Be like Max and tame the wild things!

Thursday, August 26, 2010

The Risks of Being Caught Naked

Posted by John Bates

According to the FT, the Financial Industry Regulatory Authority (FINRA) is undertaking a “sweep” of broker-dealers that offer direct market access to high-frequency traders, to find out whether they have pre-trade risk management controls in place for their algorithmic customers. (Read the full story here: http://tinyurl.com/28rg287). Here at Progress Software we have been advocating the use of pre-trade risk management tools for some time. The prospect of high frequency trading (HFT) firms sending orders directly to exchanges/ECNs under a broker's credentials (naked access), without adequate supervision, has always concerned us. Brokerage firms simply give their clients an exchange ID number, which the clients then enter into their algorithms so that they can trade directly with the exchange.

Regulators are right to be concerned. Unfiltered access to trading destinations can cause trading errors or worse. Anything can happen with naked access: fat-fingered errors, breached trading limits, even fraud - all of which can cost brokers, traders and regulators big money.

Although the SEC proposed banning naked access in January, and the Committee of European Securities Regulators (CESR) is likely to follow in its footsteps, there has been considerable pushback from brokers and trading firms. They say that adding a pre-trade risk management step would introduce a "latency hop" into the trading process, making them uncompetitive. A layer of risk management often adds a third party into the mix and, consequently, a delay.

But if everyone is required to add real-time, pre-trade risk management, then the competitive disadvantage is moot. The benefits of being able to proactively monitor trades before they hit an exchange or ECN far outweigh any microscopic latency hop in any event. Besides, the naked access playing field is hardly level as it stands. There are trading systems out there that claim to push a trade through in 16 microseconds from outside an exchange's firewall, while exchange and ECN systems range anywhere from 250 to 700 microseconds (according to measurements by latency management firm Corvil).

Latency is clearly an important issue and complex event processing offers a solution. Brokers can achieve ultra-low-latency pre-trade risk management without compromising speed of access.  An option is a low latency "risk firewall" utilizing complex event processing as its core, which can be benchmarked in the low microseconds.  With a real-time risk solution in place, a message can enter through an order management system, be run through the risk hurdles and checks, and leave for the exchange a few microseconds later.
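To make the "in-process, no third party" idea concrete - the function below is a toy, not a benchmark of any real product, and actual figures depend entirely on the implementation, language and hardware:

```python
import time

def risk_checks(qty, price, max_qty=10_000, max_notional=1_000_000.0):
    """Two in-process pre-trade checks: order size and notional value.
    No network hop, no external service - just a few comparisons."""
    return qty <= max_qty and qty * price <= max_notional

# Rough timing of one pass through the checks. Interpreter overhead
# dominates here; a compiled CEP engine running the same logic would
# be faster still, which is why sub-10-microsecond hops are plausible.
start = time.perf_counter_ns()
ok = risk_checks(500, 75.0)
elapsed_ns = time.perf_counter_ns() - start
```

The point of the sketch is structural: when the checks live in the same process as the order path, the "latency hop" is the cost of a few comparisons, not a round trip to a third party.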

It is the ideal answer to a tricky question: how do I manage risk before my trade hits the exchange without compromising speed? The benefits are clear - a latency hop of a few microseconds not only saves money by catching costly errors before they reach the market, it also prepares your firm for the regulatory future.