Current Affairs

Tuesday, August 31, 2010

Taming the Wild Algos

Posted by John Bates

"And now," cried Max, "let the wild rumpus start!"

— Maurice Sendak: Where the Wild Things Are

 

It’s not just equities and futures markets where strange stuff happens! An “algo gone wild” was spotted in the oil market (it actually happened earlier this year) and intrepid Reuters journalists got to the bottom of it.

 

High frequency trading firm Infinium Capital Management is at the center of a six-month probe by CME Group (and reportedly the CFTC) into why its brand new trading program malfunctioned and racked up a million-dollar loss in about a second, just before markets closed on Feb. 3. The algorithm went live four minutes before the end of trading and fired off 2,000-3,000 orders per second before being shut off. The oil price surged $1, then slid $5 over the course of the next two days. Read the full story here:

http://www.reuters.com/article/idUSTRE67O2QQ20100825

 

I know Infinium's CEO, Chuck Whitman, from the CFTC Technology Advisory Committee – he's a good guy and very knowledgeable. I believe him when he says his wild algos had no malicious intent – the algos were just broken and shouldn't have been put live.

 

With algorithms and HFT comes the possibility of mistakes. Many more firms outside the equities world are embracing HFT, and their inexperience can cause market disruptions such as the Feb 3rd CME incident. A flash crash in oil or other commodities - or even foreign exchange - is not to be scoffed at. In fact, many commodities markets are much less liquid and homogeneous than equities, and can be even more vulnerable to mistakes or manipulation. In the case of Infinium, the algo caused trading volume to spike nearly eightfold in less than a minute. It was a classic case of an algo running wild until it faltered and 'choked'. This is not how HFT strategies are supposed to work.

 

There are a number of best practices that can be used to mitigate the risk of algos going wild:

 

The first best practice is diligent backtesting – using historical data and realistic simulation to ensure many possible scenarios have been accounted for. What does the algo do in a bull market, a bear market, at the open, at the close, when unexpected spikes occur, during a flash crash, when non-farm payrolls or other economic news is released, and so on? Of course there's always the possibility of a "black swan" scenario – then again, there's always the possibility of an earthquake in London, and I bet the buildings aren't built to withstand one – it's a matter of covering the likely possibilities as best you can. The backtesting process also needs to be streamlined, because a short time to market for new algos is key.
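
To make the idea concrete, here is a minimal sketch of what a scenario-driven backtest harness might look like. The strategy interface, the scenario files and the runaway-order threshold are all illustrative assumptions of mine, not any particular platform's API:

```python
# Hypothetical backtest harness: replay recorded ticks through a strategy
# under several market scenarios and flag runaway behaviour.
from dataclasses import dataclass

@dataclass
class Tick:
    timestamp: float   # seconds since epoch
    price: float
    volume: int

def run_backtest(strategy, ticks, max_orders_per_sec=100):
    """Feed historical ticks to the strategy and flag runaway order rates."""
    orders_this_second, current_second = 0, None
    for tick in ticks:
        second = int(tick.timestamp)
        if second != current_second:
            current_second, orders_this_second = second, 0
        orders = strategy.on_tick(tick)            # assumed: strategy returns a list of orders
        orders_this_second += len(orders)
        if orders_this_second > max_orders_per_sec:
            raise RuntimeError(f"Runaway algo: {orders_this_second} orders in one second")
    return strategy.pnl()                          # assumed reporting hook

# Scenarios are just different historical (or simulated) tick series, e.g.:
# scenarios = {"bull": load_ticks("bull.csv"), "flash_crash": load_ticks("crash.csv")}
# for name, ticks in scenarios.items():
#     print(name, run_backtest(MyStrategy(), ticks))
```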

 

A second best practice is building a real-time risk firewall into your algo environment. Just as a network firewall stops anomalous network packets from reaching your computer, a risk firewall should stop anomalous trades from reaching trading venues. These anomalous trades might be human or computer generated – "fat finger" errors, breaches of risk exposure limits (for a trader, a desk or an institution), or even algos gone wild (e.g. entering spurious loops and spitting out anomalous orders). Real-time risk monitoring is a second level of protection for the problems you don't catch in backtesting.
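
As a hedged illustration of the kinds of checks such a firewall might apply (the limits and order fields below are made-up examples, not any product's interface):

```python
# Illustrative pre-trade "risk firewall": every order must pass these checks
# before it is allowed to reach a trading venue.
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    symbol: str
    side: str      # "buy" or "sell"
    quantity: int
    price: float

class RiskFirewall:
    def __init__(self, max_order_qty=10_000, max_notional=1_000_000,
                 max_trader_exposure=5_000_000):
        self.max_order_qty = max_order_qty
        self.max_notional = max_notional
        self.max_trader_exposure = max_trader_exposure
        self.exposure = {}                              # running exposure per trader

    def check(self, order: Order) -> bool:
        notional = order.quantity * order.price
        if order.quantity > self.max_order_qty:         # possible fat finger
            return False
        if notional > self.max_notional:                # single-order limit
            return False
        new_exposure = self.exposure.get(order.trader, 0.0) + notional
        if new_exposure > self.max_trader_exposure:     # trader exposure breached
            return False
        self.exposure[order.trader] = new_exposure
        return True                                     # safe to forward to the venue
```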

 

A third best practice is to use real-time market surveillance in your algo environment. Even if trades do not breach risk parameters, they may breach compliance rules or regulations, or be perceived by a regulator as market manipulation (by accident if not by design). Detecting these patterns as they happen enables good internal policing by trading firms, rather than investigation or prosecution by regulators.
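
As one hedged example (my own sketch, not how any surveillance product works), a streaming check for a simple abuse pattern – a potential wash trade where the same participant ends up on both sides – might look like this:

```python
# Sketch: flag potential wash trades -- the same participant buying and
# selling the same symbol at (nearly) the same time and price.
from collections import deque

WINDOW_SECS = 2.0

def surveil(executions):
    """executions: iterable of dicts with keys
       time, symbol, participant, side, price, quantity (assumed layout)."""
    recent = deque()                                 # executions inside the window
    for ex in executions:
        # drop executions that have aged out of the window
        while recent and ex["time"] - recent[0]["time"] > WINDOW_SECS:
            recent.popleft()
        for prior in recent:
            if (prior["participant"] == ex["participant"]
                    and prior["symbol"] == ex["symbol"]
                    and prior["side"] != ex["side"]
                    and abs(prior["price"] - ex["price"]) < 1e-9):
                yield ("possible wash trade", prior, ex)   # alert for compliance review
        recent.append(ex)
```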

 

An algorithm is a tool in a trader's toolkit, and it needs to be taken care of as such. If it is well-oiled and the trader, quant or risk manager monitors its progress, then the algo will do its job quickly and cleanly. If the trader, quant or risk manager doesn't properly prepare the algo, or ignores it and lets it get rusty, so to speak, it can lose its edge and run amok. Algorithms must be monitored constantly for performance and for errors, and sometimes tweaked on the fly to ensure the best results. A good algorithmic trading platform will enable trading firms to do just that.
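
A toy example of that kind of ongoing health check: track an algo's error count and order-to-fill ratio and pause it when either drifts out of bounds. The thresholds and the pause() hook are hypothetical stand-ins, not real platform calls:

```python
# Toy algo health monitor: watch errors and fill ratio, pause when out of bounds.
class AlgoMonitor:
    def __init__(self, algo, max_errors=5, min_fill_ratio=0.2):
        self.algo = algo
        self.max_errors = max_errors
        self.min_fill_ratio = min_fill_ratio
        self.errors = 0
        self.orders_sent = 0
        self.orders_filled = 0

    def on_order(self):  self.orders_sent += 1
    def on_fill(self):   self.orders_filled += 1
    def on_error(self):  self.errors += 1

    def health_check(self):
        fill_ratio = (self.orders_filled / self.orders_sent) if self.orders_sent else 1.0
        if self.errors > self.max_errors or fill_ratio < self.min_fill_ratio:
            self.algo.pause()          # hypothetical control hook on the algo
            return "paused"
        return "ok"
```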

 

Trading firms are not the only ones who need to be on guard for possible algos gone wild. In the case of Infinium, the regulators and the exchange were also slow on the uptake. This shows that everyone needs to be proactive in using the correct tools to monitor algorithmic trading. Sensing and responding to market patterns before the aberrations or errors have a chance to move prices is the right thing to do - in all asset classes. Be like Max and tame the wild things!

Thursday, August 26, 2010

The Risks of Being Caught Naked

Posted by John Bates

According to the FT, the Financial Industry Regulatory Authority (FINRA) is undertaking a "sweep" of broker-dealers that offer direct market access to high-frequency traders to find out if they have pre-trade risk management controls in place for their algorithmic customers. (Read the full story here: http://tinyurl.com/28rg287). Here at Progress Software we have been advocating the use of pre-trade risk management tools for some time. The prospect of high frequency trading (HFT) firms sending orders directly to exchanges/ECNs under a broker's ID (naked access) without adequate supervision has always concerned us. Brokerage firms simply give their clients an exchange ID number, which the clients then enter into their algorithms so that they can trade directly with the exchange.

Regulators are right to be concerned. Unfiltered access to trading destinations can end up causing trading errors or worse. Anything can happen with naked access: fat-fingered errors, breached trading limits, even fraud – all of which can cost brokers, traders and regulators big money.

Although the SEC proposed banning naked access in January, and the Committee of European Securities Regulators (CESR) is likely to follow in its footsteps, there has been considerable pushback from brokers and trading firms. They argue that adding a pre-trade risk management step would introduce a "latency hop" into the trading process and make them uncompetitive. A layer of risk management often adds a third party into the mix and, consequently, a delay.

But if everyone is required to add real-time, pre-trade risk management, then the competitive disadvantage is moot. The benefits of being able to proactively monitor trades before they hit an exchange or ECN far outweigh any microscopic latency hops in any event. The naked access playing field is hardly level anyway: there are trading systems out there that claim to push a trade through in 16 microseconds from outside an exchange's firewall, while exchange and ECN systems range anywhere from 250 to 700 microseconds (according to measurements by latency management firm Corvil).

Latency is clearly an important issue, and complex event processing offers a solution. Brokers can achieve ultra-low-latency pre-trade risk management without compromising speed of access. One option is a low-latency "risk firewall" with complex event processing at its core, which can be benchmarked in the low microseconds. With a real-time risk solution in place, a message can enter through an order management system, be run through the risk hurdles and checks, and leave for the exchange a few microseconds later.
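
As a rough way to sanity-check that kind of claim for your own risk checks, you could time a check function in isolation. This sketch assumes a check() like the firewall example in the earlier post and only measures in-process overhead; a production CEP-based risk firewall would be benchmarked end to end, so treat this as illustrative only:

```python
# Crude latency benchmark for an in-process pre-trade check (illustrative only).
import time

def benchmark(check, order, iterations=100_000):
    start = time.perf_counter()
    for _ in range(iterations):
        check(order)
    elapsed = time.perf_counter() - start
    return (elapsed / iterations) * 1e6        # average microseconds per check

# Example, assuming a RiskFirewall/Order like the earlier sketch:
# fw = RiskFirewall(); order = Order("t1", "CLZ0", "buy", 10, 75.0)
# print(f"{benchmark(fw.check, order):.2f} us per pre-trade check")
```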

It is the ideal solution to a tricky question - how do I manage risk before my trade hits the exchange without compromising speed? The benefits are clear - a latency hop of a few microseconds not only saves money, it can also prepare your firm for the regulatory future.

Tuesday, July 27, 2010

Smart - but is it Smart Enough?

Posted by John Bates

Nasdaq liked the idea of real-time market surveillance so much that it bought one of the companies that specialize in it.

 

Nasdaq OMX announced this week that it will buy Smarts Group, an Australia-based market surveillance business that helps exchanges monitor compliance with trading rules.  You can read the full story here: http://online.wsj.com/article/BT-CO-20100727-712964.html.

 

The market moves a lot faster than it used to thanks to algorithmic trading. What has not kept pace is the monitoring of high speed trading. Smarts is one commercial approach that aims to enable such monitoring. However, there is a big problem with Smarts - the time it takes to develop a new surveillance scenario. I have spoken to a number of venues around the world, including the Australian Stock Exchange, who have told me they are totally dependent on Smarts to add new rules when they need them – and it takes six months to a year, if they're lucky. In fast-moving markets we need to evolve in hours, not years!
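
To illustrate the turnaround-time point (this is my own sketch, not a description of how Smarts or any other product is built), a surveillance rule can be expressed as a small parameterized definition, so that adding a new scenario becomes a configuration exercise rather than a development project:

```python
# Sketch: parameter-driven surveillance rules. A new scenario is a new entry
# in RULES rather than months of bespoke development. Rule names, fields and
# thresholds here are hypothetical examples.
RULES = [
    {"name": "large order", "field": "quantity",       "op": ">", "threshold": 50_000},
    {"name": "price spike", "field": "price_move_pct", "op": ">", "threshold": 5.0},
]

OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b}

def evaluate(order, rules=RULES):
    """Return the names of all rules the order violates."""
    alerts = []
    for rule in rules:
        value = order.get(rule["field"])
        if value is not None and OPS[rule["op"]](value, rule["threshold"]):
            alerts.append(rule["name"])
    return alerts

# evaluate({"quantity": 60_000, "price_move_pct": 1.2})  ->  ["large order"]
```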

 

Despite the shortcomings of Smarts, the Nasdaq acquisition is an indicator of the importance of real-time surveillance in a post-flash-crash world. Maybe the May 6th flash crash has a silver lining, if the lessons learned lead exchanges to make better use of surveillance and monitoring. In the aftermath of the crash, exchanges scrambled to recover trading data and do some forensic investigation into the causes. This proved extremely difficult, probably because of inadequate analysis capabilities for pinpointing what had happened.

 

Exchanges, ECNs, brokers, traders and regulators all must take an intelligent approach to monitoring and surveillance in order to prevent rogue trades and fat fingers. Transparency is the key. Regulators in the US and Europe are concerned about the lack of transparency in markets where high frequency algorithmic trading takes place, as well as in dark pools.

 

We ran a survey at SIFMA this year where we asked 125 attendees about high frequency trading and market surveillance. A staggering 83 percent said that increased transparency is needed to effectively deal with market abuse and irregular market activity, such as the flash crash. However, only 53 percent of firms surveyed currently have real-time monitoring systems in place.

 

Nasdaq says that Smarts will be used to expand broker surveillance solutions, which I take to mean monitoring scenarios such as sponsored access. This would be a smart move (forgive the pun). With naked access, high frequency traders can plug straight into an exchange through their broker – and it's critical that pre-trade risk and surveillance are in place to prevent a crisis in which wild algos could cause havoc.

 

The detection of abusive patterns or fat-fingered mistakes must happen in real time, ideally before they have a chance to move the market. This approach should be taken on board not just by the regulators, but by the industry as a whole. Only then can the industry stay one step ahead of the market abuse and trading errors that cause a meltdown (or melt-up).

 

As many market participants have pointed out, technology can't solve all of the problems, but it can provide much more market transparency. To restore confidence in capital markets, organizations involved in trading need a much more accurate, real-time view of what's going on. That way, issues can be prevented, or at least identified much more quickly.

 

While I applaud Nasdaq's initiative and dedication to improving market surveillance by buying Smarts, I must point out that you don't have to go quite that far to get the same results. Progress provides market-leading real-time monitoring, surveillance and pre-trade risk – powered by Complex Event Processing – enabling complex real-time monitoring of the fastest moving markets. Unlike Smarts, Progress gives business users the ability to customize and create new scenarios rapidly (in hours rather than Smarts' months). And you don't have to buy and integrate our company to get access to it!

Wednesday, July 21, 2010

Defending Against the Algo Pirates

Posted by John Bates

It was an honor to sit on the CFTC Technology Advisory Committee (TAC) last week. I was very impressed with the presentations and discussion, chaired ably by Commissioner Scott O’Malia. I was also impressed by the other Commissioners and by my fellow committee members. This week the CFTC has been discussing new rules to handle disaster recovery and has also received further coverage on one topic discussed at the TAC – that of pirate algos that attack algos going about their normal trading business, aiming to manipulate the market.

 

Further coverage can be seen in this article “CFTC posits new disaster recovery rules as regulators probe 'algo price pirates'”

 

http://www.finextra.com/news/fullstory.aspx?newsitemid=21610

 

The CFTC has a sensible proposal on the table to require exchanges and clearing houses to have effective disaster recovery plans in order to quickly recover from any market-wide disruption. After 9/11 it became clear that many NYC-based financial services firms were not prepared for a disaster of that magnitude, and subsequently took disaster recovery (or business continuity as it came to be known) very seriously. Now it is time for those virtual businesses - exchanges and ECNs - to do the same.

 

Operational risk is a very real issue in today's fast moving markets, where anything can go wrong. Being able to recover and quickly start trading again - across all exchanges and destinations - is paramount. The May 6th 'flash crash' gave us a glimpse of what can happen when something goes wrong at one exchange and the rules across other exchanges are not harmonized.

 

The flash crash was a man-made event exacerbated by machines. Algorithms are programmed to do as they are told, and if one destination is not responding they will hunt down and ping, scrape and trade on whatever others they can find. Sometimes this can have unfortunate consequences for the market as a whole. This is why there must be consistency across trading venues in how they respond to crises.

 

At the CFTC's Technology Advisory Committee meeting last week, there were several interesting observations about high frequency trading and algos. We heard new analysis of the flash crash from trade database developer Nanex LLC. The Nanex report suggested that predatory practices such as "quote stuffing", where algos try to prevent other high-frequency traders from executing their strategies, may have contributed to the crash. Commissioner Chilton of the CFTC (who I had the pleasure of sitting next to at the TAC last week), the TAC and the SEC are taking these claims very seriously. Commissioner Chilton expressed his concern that there are algorithms out there hunting down and interfering with other algorithms, calling them 'algo price pirates' and suggesting they may trigger a new enforcement regime. Now, I believe that firms and their algos are going to monitor the market with the goal of figuring out how your algos work and devising a strategy to capitalize – that's just the natural order of capitalism. However, that's different from using algo terrorism to bully the market into behaving a particular way. That's something we need to watch for and prevent, because it causes damage.
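
One illustrative way to flag quote stuffing (an assumption of mine, not Nanex's methodology) is to look for bursts where a participant's quote rate is enormous but almost none of those quotes result in trades:

```python
# Sketch: flag possible quote stuffing -- a participant pumping out a burst of
# quotes in a short window with almost none of them resulting in trades.
from collections import defaultdict

def detect_quote_stuffing(events, window=1.0, min_quotes=1000, max_trade_ratio=0.01):
    """events: iterable of (time, participant, kind) where kind is 'quote' or 'trade'.
       Yields (participant, window_start) for suspicious bursts. Thresholds are illustrative."""
    buckets = defaultdict(lambda: {"quotes": 0, "trades": 0})
    for t, participant, kind in events:
        key = (participant, int(t / window))           # per-participant, per-window bucket
        if kind == "quote":
            buckets[key]["quotes"] += 1
        elif kind == "trade":
            buckets[key]["trades"] += 1
    for (participant, bucket), counts in buckets.items():
        if counts["quotes"] >= min_quotes:
            ratio = counts["trades"] / counts["quotes"]
            if ratio <= max_trade_ratio:               # many quotes, almost no trades
                yield participant, bucket * window
```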

 

If such 'pirates' are to be policed and caught, the regulators will have to sail with the pirates in shark-infested high frequency waters. Surveillance and monitoring are critical, as is the need for speed. The speed at which algorithms can pump quotes into a destination is daunting, so the policemen will also need to work at ultra high velocity. I was a little concerned when Commissioner Chilton said at the TAC meeting: "Just because you can go fast it doesn't mean you should." I know where he’s coming from but would modify the statement to say that in HFT it is critical to go fast to be competitive – but you need the proper best practices, training and safety precautions. High frequency trading, if properly monitored, need not be scary or evil. It can contribute to liquidity and market efficiency, and provide alpha generation. To truly address the HFT issue, real time market surveillance technology must be adopted to monitor and detect patterns that indicate potential market abuse such as insider trading or market manipulation. Or pirate algorithms trying to board your strategy ship and take off with the gold doubloons. 

Thursday, June 03, 2010

Optimism in the world of financial services regulation

Posted by Giles Nelson

It seems that we’re finally making some progress towards getting the financial markets to function more safely.

After the “flash-crash” of 6 May, US equity market operators have agreed to bring in coordinated circuit-breakers to avoid a repeat of this extreme event. There is widespread agreement on this. Industry leaders from brokers and exchanges yesterday made supportive statements as part of submissions to the SEC.

Regulators are going public with their use of real-time monitoring technology. Alexander Justham, director of markets at the Financial Services Authority, the UK regulator, told the Financial Times that the use of complex event processing technology will give the FSA “a more proactive machine-on-machine approach” to market surveillance (the FSA is a Progress customer). Other regulators are at least admitting they have a lot of work to do. Mary Schapiro, the SEC chair, believes that the technology used for monitoring markets is “as much as two decades behind the technology currently used by those we regulate”. Scott O’Malia, a commissioner at the Commodity Futures Trading Commission, admitted that the CFTC continues to receive account data by fax, which then has to be entered manually.

The use of real-time pre-trade risk technology is likely to become much more widespread. “Naked” access, where customers of brokers submit orders directly to the market without any pre-trade checks, is likely to be banned. This is an important change, as late last year Aite Group, an analyst firm, estimated that naked access accounted for 38% of the average daily volume in US stocks. The SEC is also proposing that regulation of sponsored access is shored up – currently it has evidence that brokers rely upon oral assurances that the customer itself has pre-trade risk technology deployed. The mandated use of pre-trade risk technology will level the playing field and prevent a race to the bottom. Personally, I’ve heard of several instances of buy-side customers insisting to brokers that pre-trade risk controls be turned off, because they perceive that such controls add latency and therefore will adversely affect the success of their trading.

The idea of real-time market surveillance, particularly in complex, fragmented markets such as those in the US and Europe, is gaining credence. The SEC has proposed bringing in a “consolidated audit trail” which would enable all orders in US equity markets to be tracked in real time. As John Bates said in his previous blog post, it’s likely that the US taxpayer will not be happy paying the $4B that the publicly funded SEC estimates such a system would need to get up and running. Perhaps the US could look at the way the UK’s FSA is funded. The FSA reports to government but is paid for by the firms it regulates.

As I mentioned in my last blog, our polling in April at Tradetech, a European equities trading event, suggests that market participants are ready for better market monitoring. 75% of respondents to our survey believed that creating more transparency with real-time market monitoring was preferable to the introduction of restrictive new rules.

CESR, the Committee of European Securities Regulators, is currently consulting on issues such as algorithmic trading and high frequency trading. It will be interesting to see the results of their deliberations in the coming months.

I’m so pleased the argument has moved on. This time last year saw a protracted period of vilifying “high frequency trading” and “algo trading”. Now there is recognition of the benefits, as well as the challenges, that high frequency trading has brought to equity markets, and regulators seem to understand that to prevent both disastrous errors and deliberate market manipulation it is better for them to get on board with new technology than to try to turn the clock back to mediaeval times.

New approaches are sorely needed. Yesterday saw the conclusion of another investigation into market manipulation when the FSA handed out a $150,000 fine and a five-year ban to a commodity futures broker.

Tuesday, April 20, 2010

Predictions for increased transparency in Capital Markets

Posted by Giles Nelson

  It is my view that one of the most significant causes of the global financial crisis was a lack of transparency in financial markets.  Put simply, that means no one, not regulators or market participants, knew what the size of certain derivatives markets (like credit default swaps) was, who held what positions, or what the consequences of holding positions could be.  If financial reform brings nothing else, it should at least hold banks accountable for the business they conduct, and that means full disclosure and constant monitoring by responsible regulators.  

This action would help provide the basis for preventing future crises. No matter how inventive financial products may become, if regulators have complete and detailed information about financial markets and banks’ activities there, better assessments of risk can be made. This means that, if necessary, banks’ activities can be reined in through higher capital requirements or similar measures. Simply limiting banks’ ability to conduct certain business is a blunt instrument that does not resolve the lack of transparency and will likely hamper economic growth.

Market transparency exhibits itself in many forms. Particularly relevant is that related to electronic trading. I therefore predict that regulators will require banks to implement stronger pre-trade risk mechanisms. Regulators such as the FSA and SEC will ultimately bring in new rules to mitigate, for example, the risk of algorithms ‘going mad’. This is exemplified by Credit Suisse, which was fined $150,000 by the NYSE earlier this year for “failing to adequately supervise development, deployment and operation of proprietary algorithms.”

Furthermore, volumes traded via high frequency trading will increase, although at a much slower pace than last year, and at the same time the emotive debates about high frequency trading creating a two-tier system and an unfair market will die down.

In addition, with regard to mid-market MiFID monitoring, greater responsibility for compliance will be extended from exchanges to the banks themselves. Banks and brokers will soon be mandated to implement more trade monitoring and surveillance technology. There will also be no leeway on dark pools; they will simply have to change and be mandated to show they have adequate surveillance processes and technology in place. They will also have to expose more pricing information to the market and regulators.

This year will see a definite shift to an increasingly transparent – and therefore improved – working environment within capital markets. The ongoing development of market surveillance technologies and changes in attitudes to compliance will drive this forward, creating a more open and fairer marketplace for all.

Monday, March 08, 2010

Rumblings in the Cloud

Posted by Louis Lovas

Cloud computing... it's on everyone's mind these days. Personally, I think it's a term that has attained such aggrandized acclaim that vendors, analysts, bloggers and anyone with marketing muscle have pulled and stretched its definition to such an extent that it could mean just about anything hosted. Cloud Computing Journal polled twenty-one experts to define Cloud Computing. Just the fact that they had to ask the question of twenty-one experts is rather telling in itself. Well, I read what the experts had to say.

So, armed with my newly minted (yet fully stretched, though not of my own making) Cloud definition, I happened upon this commentary about CEP in the Cloud, or the lack thereof. There's a great quote in the article: "I don’t care where a message is coming from and I don’t care where it’s going”. Correctly indicated, this in a sense defines a key aspect of CEP: event-based applications should be indifferent to a message's (or event's, once messages are transformed into events) origin and destination, beyond a logical or virtual name. However, unlike the author, Colin Clark, I do believe the current crop of vendor products, most notably Progress Apama, maintain this separation of the physical from the virtual.

The rationale behind the lack of CEP-based applications in the Cloud (ok, there's that word again) is found in other factors. To explain my reasoning I'll start by dividing CEP-based applications into two categories. Of course there are many ways to categorize CEP-based applications, but for the sake of this discussion, I'll use these two:

CEP-based Application Categories
  1. Those that do things
  2. Those that observe other applications doing things
Not sure I could make a simpler, more layman-like description, but needless to say it warrants further explanation (or definition, in keeping with our theme).

CEP-based applications that do things
This category is best explained by example. Typical of event processing applications that do things are those in Capital Markets, such as algorithmic trading, pricing and market making. These applications perform some business function, often critical in nature, in their own right. Save for connectivity to data sources and destinations, they are the key ingredient, or the only ingredient, of a business process. In the algo world, CEP systems tap into the firehose of data, and the data rates in these markets (equities, futures and options, etc.) are increasing at a dizzying pace. CEP-based trading systems are focused on achieving the lowest latency possible. Investment banks, hedge funds and others in the arms race demand the very best in hardware and software platforms to shave microseconds off each trade. Anything that gets in the (latency) way is quickly shed.

In other verticals, an up-and-coming use of CEP is location-based services. This is one that leverages smart mobile devices (i.e. "don't care where the message is going") to provide promotions and offers.
    • Algo Trading, Pricing, Market Aggregation
    • Location Based Services (providing promotional offers and alerts)
CEP-based applications that observe other applications doing things
Conversely, event-based applications that observe other applications doing things are classified as providing visibility, or greater insight, into some existing business function. These event-based applications overlay business processes in order to improve their effectiveness. As is often the case, critical business applications provide little visibility, or the information is siloed. There is a need to provide broader operational semantics across a heterogeneous mix of business applications and processes. Here are a few typical examples of event-based visibility applications observing other business systems.
    • Telco Revenue Assurance
    • Click Stream Analysis
    • Fraud Detection
    • Surveillance
Of course, the demarcation line between these two classifications is not clear cut. Providing greater visibility is just a starting point; monitoring for opportunities to take action is just as important, such as kicking off a fraud watch if a suspected wash trade occurs (so in a sense these applications are doing things too).

Wherefore art thou, oh CEP
When considering the Cloud, an important point to consider is dependency. Specifically, the underlying applications and business processes must already exist in the Cloud for (observing) CEP to overlay them. I would offer that enterprises have not yet migrated their key business processes to the Cloud on a widespread scale. Why not? What are the barriers? Security, regulatory compliance, DR, investment costs and limited skill sets are just a few of the challenges mentioned in this ITProPortal article. I suspect these barriers are far reaching, keeping the pace of Cloud deployment in check to the point where it is not yet strategic for many.
 
One of the key things that makes the Cloud a reality is virtualization; it has clearly revolutionized PaaS and the Cloud. But virtualization comes at a cost: there is a latency penalty for the convenience, and however small, for some use cases that cost is too great.

Make no mistake, I am certain the Cloud, with all its twenty-one definitions, is the future of computing. It's an imperative that will knock down the barriers and change the face of the Enterprise, and when it reaches critical mass, CEP will be there.

Once again thanks for reading, you can follow me at twitter, here.
Louie




Monday, February 22, 2010

Peas and Carrots

Posted by Louis Lovas

In the words of the auspicious Forrest Gump, some things go together like peas and carrots. Truer words were never spoken. Some things just do go together well, sometimes by design, often by accident. I don't think anyone actually planned milk and cookies or popcorn at the movies, but nonetheless these things are made for each other. When it comes to technology, the same harmonious relationships exist.

In the recent Aite Group report on high performance databases (HPDBs), the market for specialized databases is surveyed along with a handful of vendors in this space. This is a cottage industry where the big database vendors don't play. It's hard to imagine, in this day and age when database technology is so standardized and mature and a multitude of choices abound, from commercial products to open source, that any other database technology and a gang of vendors would have a chance. Yet it is happening, and it's thriving.

I believe it has to do with a synergistic relationship with event processing. If CEP is the "peas" then HPDBs are the "carrots". These two technologies share two fundamental precepts:

  •  A focus on Extreme Performance
  •  Temporal Awareness

I. Extreme Performance, Speeds and Feeds
These HPDBs, often referred to as tick databases, are found in the same playground as event processing technologies. In the Capital Markets industry they connect to the same market sources and consume the same data feeds. Both technologies are designed to leverage modern multi-core hardware to consume the ever-increasing firehose of data. By the same token, once that data is stored on disk, database query performance is equally important. The massive amount of data collected is only as good as the database's ability to query it efficiently, which creates another (historical) firehose of data for which an event processing engine is the consummate consumer.

II. Temporal Awareness, when is the data
Time is a basic principle in event processing technology; applications typically have as their premise the analysis of data in motion within a window of time. The HPDBs' design center is to store and query time series data. Some of the database vendors even raise time to a higher-level business function: they understand the notion of a business calendar, knowing business hours, the business week, holidays, trading hours, etc. Imagine the simplicity of a query where you ask for 'business hours, Mon-Fri, for the month of February' and the database itself knows that the third Monday was Presidents Day and skips over it, preventing analytic calculations from skewing erroneously.
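
A small sketch of the kind of calendar-aware filter described above, with an assumed holiday list and assumed trading hours rather than a real exchange calendar:

```python
# Sketch: filter a time series down to business hours, skipping holidays,
# so analytics aren't skewed by closed-market days such as Presidents Day.
from datetime import date, time

HOLIDAYS = {date(2010, 2, 15)}            # Presidents Day 2010 (illustrative list)
OPEN, CLOSE = time(9, 30), time(16, 0)    # assumed trading hours

def business_hours_only(rows):
    """rows: iterable of (datetime, value). Keep Mon-Fri, in-hours, non-holiday rows."""
    for ts, value in rows:
        if ts.weekday() >= 5:                    # Saturday/Sunday
            continue
        if ts.date() in HOLIDAYS:                # exchange holiday
            continue
        if not (OPEN <= ts.time() <= CLOSE):     # outside trading hours
            continue
        yield ts, value

# e.g. february = [(ts, px) for ts, px in ticks if ts.month == 2]
# clean = list(business_hours_only(february))
```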

Leveraging the Synergy
These two fundamental shared principles provide the basis for a unique set of business cases that can only be realized by leveraging event processing platforms and high performance databases together:

  • Back testing algorithms across massive volumes of historical data compressing time
What if you could test new trading algorithms against the last six months or one to two years of historical market data, but run that test in a matter of minutes? What if you could be assured that the temporal conditions of the strategies (e.g. timed limit orders) behaved correctly and deterministically, with the movement of time kept in complete synchronicity with the historical data? These are just a few of the characteristics that define the harmony between event processing and high performance (tick) databases; see the sketch after this list.
  • Blending live and historical data in real-time
Querying historical data in flight to obtain volume curves, moving averages, the latest VWAP and other analytic calculations is possible with these high performance databases. Leading-edge trading algorithms are blending historical context with the live market, and even news. The winners will be those that can build these complex algos while maintaining ultra-low latency.
  • Pre-Trade Risk Management
Managing positions, order limits and exposure is necessary; doing it in real time to manage market risk is a mandate. In addition to market data, these high performance databases can store pre- and post-trade activity to complement event-based trading systems and become the basis for trade reporting systems.
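
A minimal sketch of the deterministic replay idea from the first bullet above: the clock is driven by the historical timestamps themselves, so timers (for example, a limit order that expires after 30 seconds) fire at the same simulated moment on every run, however fast the replay executes. The interfaces here are illustrative assumptions, not Apama's API:

```python
# Sketch: event-time replay. Time advances only with the historical data,
# so timed behaviour is deterministic and synchronized with the tick stream.
import heapq
import itertools

def replay(ticks, strategy):
    """ticks: iterable of (timestamp, tick) in ascending time order."""
    timers, seq = [], itertools.count()            # (due_time, seq, callback) heap
    def schedule(now, delay, callback):
        heapq.heappush(timers, (now + delay, next(seq), callback))
    for now, tick in ticks:
        # fire every timer whose due time has been reached in *simulated* time
        while timers and timers[0][0] <= now:
            _, _, callback = heapq.heappop(timers)
            callback(now)
        # the strategy can call schedule(now, delay, cb) to set its own timers
        strategy.on_tick(now, tick, schedule)
    return strategy
```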

In the trading lifecycle, event processing and high performance databases are partner technologies, harmoniously bound together to form a union where the whole is greater than the sum of the parts. They are the peas and carrots that together create a host of real-world use cases that would not be possible with either technology on its own.

My colleague Dan Hubsher and I are doing a three-part webinar series entitled "Concept to Profit". The focus is on event processing in the trade lifecycle, but we include cases that touch upon high performance databases. You can still register for part 2, Building Trading Strategies in the Apama WorkBench, where I will focus on the tools for strategy development aimed at the IT developer.

Once again thanks for reading, you can follow me at twitter, here.
Louie

Tuesday, November 10, 2009

Event Processing in Location-based Services

Posted by David Olson

Business is event-driven. No. Wait. Life is event-driven, and if it wasn’t, we’d be walking into walls and every sentence would start with “Oops.” Life would be a string of missed opportunities. We’ve done a masterful job of using technology to transform our business processes into software, but one tenet that’s been missing is that business should imitate life. Sense and respond is what’s been missing.

We recently announced that match2blue (http://web.progress.com/inthenews/match2blue-stands-ou-11092009.html) will be using the event processing capabilities of Apama to provide location-based services in social networking. Sense and respond is crucial for their ability to enable like-minded people to connect in real-time. Traditional data processing technology and its normal rhythm of “capture, store, analyze” can’t, well, keep up. And in a world where latency leads to missed opportunities, match2blue is proving that through the right technology business can imitate life.

Responding to business events as they happen is what will define your competitive advantage.

Business is event-driven, indeed.

Thursday, November 05, 2009

In defence of high frequency trading

Posted by Giles Nelson

The high frequency trading (HFT) debate seems to have entered a new and worrying phase in the UK. On Tuesday this week in an interview with the BBC, Lord Myners, the UK’s financial services minister, warned that high frequency trading had “gone too far” and that share ownership had “now lost its supporting function for the provision of capital to business”. (You can find the original interview here and reports of it in the Financial Times and The Independent yesterday).

 

 Mary Schapiro, head of the SEC, signalled at the end of October that a number of electronic trading areas were going to be looked into – naked access (where a broker sponsors a firm to have direct electronic access to an exchange), dark pools and high frequency trading.

 

It does seem now that on both sides of the Atlantic, governments and regulators are steeling themselves to act and softening the markets up to be able to accept the fact that electronic trading might have some limits.

 

The concern is that governments and regulators are going to come down too hard on electronic trading and the benefits that it gives investors will be damaged.

 

It all started with the flash order issue in the US a few months ago. Commentators were linking together various different, although related, issues in an inappropriate way. Flash orders were sometimes viewed as synonymous with HFT, and both were sometimes reported as forms of market abuse. All three topics are quite different. In my opinion, there are legitimate questions over the use of flash orders, and a proposal to ban them is now being considered.

 

Dark pools, where large blocks of stock are traded off exchange to minimise market impact, have been the next target. There are, again, legitimate issues. Dark pools, by their very nature, do not have good price transparency. Regulators have become concerned with their use because more and more trading is going through dark pools. Some estimates put this at between 10% and 30% in Europe and the US. This uncertainty about the exact proportion is part of the problem itself: no one really knows what share of trading dark pools are taking. If a significant proportion of the market has no price transparency then this undermines the notion of a fair market for all. Regulators are looking at this, and it's likely that they will force dark pool operators to disclose far more information about what is being traded than they do currently. The SEC is considering limiting the proportion of a stock that can be traded through dark pools to a small percentage.

 

These legitimate issues however risk skewing the whole HFT debate to one where people will conclude that “HFT is bad”.

 

What people are now describing as HFT – the very fast and frequent, computer-assisted trading of, usually, equities – is an evolution of something that has been happening in the market place for at least the last 10 years. In this time electronic trading has proliferated, not just in equities but also in all asset classes such as derivatives, bonds and foreign exchange. Far more venues for trading have been created. There are now many places where a company’s stock can be traded, both in the US and Europe. This has brought competition and choice. Prices have been lowered, improving access for retail investors. Spreads have narrowed. Arbitrage opportunities are harder to find, which means that market information is disseminating faster which, in turn, means that price transparency has improved. Because there is more trading going on, there is more liquidity available, which also means keener prices.

 

A key part of the HFT trend has been the use of algorithmic trading (the most prevalent use of complex event processing technology). Algo trading models fall broadly into one of two camps: alpha seeking, where market prices are examined to find a trading opportunity that will make money, and execution, where orders are usually split up into smaller parts and then traded automatically in the market in an intelligent way to find good prices and to ensure those prices are not overly influenced by the trades being made themselves. For each type of model it can be very useful to react very quickly to market information, either to take advantage of a price discrepancy or to quickly pick up liquidity at a good price. Algorithmic trading is enormously beneficial for those who use it, and its use is not limited to specialist hedge funds. Most algorithmic trading uses execution models that find liquidity and good prices, help minimise market impact and, lastly, significantly increase a trader’s productivity. Instead of wasting time executing several simple orders in the market over the course of many minutes or hours, the trader can simply ask a machine to do it. The trader can then spend time either covering more of the market (useful in straitened economic times) or actually delivering real value to a client.
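
As a hedged example of the "execution" camp, a bare-bones time-sliced (TWAP-style) order splitter might look like the sketch below; real execution algorithms layer price limits, randomization and anti-gaming logic on top of this, so treat it as an illustration of the splitting idea only:

```python
# Bare-bones TWAP-style slicer: split a parent order into roughly equal child
# orders spread evenly over an execution window.
def twap_slices(total_qty, start, end, num_slices):
    """Return a list of (send_time, child_qty) pairs covering [start, end)."""
    interval = (end - start) / num_slices
    base, remainder = divmod(total_qty, num_slices)
    slices = []
    for i in range(num_slices):
        qty = base + (1 if i < remainder else 0)     # spread the remainder evenly
        slices.append((start + i * interval, qty))
    return slices

# e.g. buy 10,000 shares between t=0s and t=600s in 20 child orders:
# for send_time, qty in twap_slices(10_000, 0.0, 600.0, 20):
#     print(send_time, qty)
```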

 

Algorithmic trading and HFT have brought very significant benefits. It is these benefits that must not be threatened.

 

Trading has always involved cunning and guile, whether human or computer based. Competition has always existed over who has the best traders and trading systems. Organisations investing in ultra low-latency infrastructure to ensure orders arrive at an exchange in microseconds (not nanoseconds, as sometimes claimed, by the way – light travels 30cm in 1 nanosecond, which isn’t far enough to be very useful) are part of this competitive world. Competition leads to innovation, and it is this innovation that has brought so many of the benefits described above. Computer-based models can sometimes be used abusively. There are many forms of market abuse that regulators and exchange operators look for. Some exchanges and regulators have been investing in real-time surveillance technology (Progress counts Turquoise and the UK Financial Services Authority as customers using Apama) to ensure that they can spot abusive patterns of behaviour quickly.

 

We can’t start slowing trading down. We can’t go backwards and put the electronic trading genie back in the bottle. We don’t want to lose all the benefits that have come. Rather, regulators and exchanges should concentrate on ensuring maximum transparency in how markets operate and ensure that those attempting to maliciously abuse the markets are dissuaded or caught.