Energy Trading

Monday, February 07, 2011

The Trouble with Algorithms: Wild Children or Reckless Parents?

Posted by Dan Hubscher

Algorithms and high frequency trading have been blamed for everything from the credit crisis to the May 6th flash crash and high speed market abuse, and have attracted unwanted interest from regulators on both sides of the pond. But the question remains: are these tools really computer models gone wild, or the spoiled children of a reckless parent - regulation?

According to Dictionary.com, to be reckless is to be utterly unconcerned about the consequences of an action. One could argue that Regulation National Market System (Reg NMS) was designed without regard to some of the consequences down the line. Blaming the wild children - the algorithms - ignores the fact that the parent, Reg NMS, was somewhat reckless in designing the system.

In a blog post on the TABB Forum on January 24th, Steve Wunsch of Wunsch Auction Associates explained that the system was working the way it had been designed.

"What really went wrong in the stock market on May 6? Prices aside, all of the plumbing was working fine. Not only were there no fat fingers, rogue algos, manipulators or terrorists at work, there were no significant breakdowns of order routing systems or data systems or any other elements of the stock trading infrastructure," wrote Wunsch.

Meanwhile, the National Commission on the Causes of the Financial and Economic Crisis in the United States released its report (Jan. 27th) and HFT was not mentioned at all. Nor were algorithms, as such, but 'computer models' were vindicated. The report said: "The crisis was the result of human action and inaction, not of Mother Nature or computer models gone haywire."

And it criticized regulators for not doing their jobs: “Widespread failures in financial regulation and supervision proved devastating to the stability of the nation’s financial markets.”

The result of the credit crisis and market meltdown of September 2008 was the Dodd-Frank Act, which attempts to prevent another September 2008. But the flash crash insinuated itself into the picture, exposing the fact that no one had baked that possibility into the market reforms. And, ironically, the reforms set the stage for more flash crashes.

At the Tabb Forum Derivatives Reform Event a couple of weeks ago, many commented that Dodd-Frank puts in place a market structure that injects the equities and futures market model - fragmentation, price transparency, streaming quotes - into other asset classes. This theoretically invites algorithmic and high frequency trading, and with them the threat of more flash crashes. At the event, Peter Fisher of BlackRock said that what keeps him up at night is a flash crash in the interest rate market, citing the market structure argument, but he specifically pointed out that this possibility was not envisioned in Dodd-Frank.

With more and more asset classes becoming tradable electronically, partly thanks to mandated swap execution facilities (SEFs), the possibility of truly wild or rogue algos and market abuse becomes increasingly inevitable. And, as we pointed out last week, the very real possibility of a flash crash splashing across asset classes - we call it a "Splash Crash" - rears its ugly head.

Although the evidence against algos gone wild is thus far mostly anecdotal, the belief that they can and will go wrong permeates the industry. Market abuse such as insider trading and manipulation is undoubtedly more prevalent, but hard to prove. Fat finger errors are easier to prove, and are a fact of life in a high speed, high stress electronic marketplace.

Stay Calm and Remain Vigilant

The antonym of recklessness is vigilance. The regulatory parents must be more vigilant when it comes to their arguably brighter and naughtier children - algorithms and HFT. With algorithms and HFT comes the possibility of mistakes and abuse. Many more firms outside of the equities world are embracing HFT, and their inexperience can cause market disruptions. A flash crash in oil or other commodities - or even foreign exchange - is not to be scoffed at. In fact, many commodities markets are much less liquid and homogenous than equities, and can be even more vulnerable to mistakes or manipulation.

There are a number of best practices that can be used to mitigate the risk of algos going wild:

  • Diligent backtesting – using historic data and realistic simulation to ensure many possible scenarios have been accounted for. A backtesting process needs to be streamlined, of course, as short time to market for new algos is key.
  • Real-time risk monitoring – building a real-time “risk firewall” into your algo environment. Just like a network firewall stops anomalous network packets reaching your computer, so a risk firewall should stop anomalous trades getting to trading venues.
  • Real-time market surveillance – even if trades do not breach risk parameters, they may breach compliance rules or regulations, or may be perceived by a regulator as market manipulation.
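
As a minimal sketch of the second point, a pre-trade check can block anomalous orders before they reach a venue. The `Order` shape and thresholds below are invented for illustration; a real risk firewall would be calibrated per instrument, trader and desk:

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str        # "buy" or "sell"
    quantity: int
    price: float

# Illustrative limits only - real firms calibrate these per desk/instrument
MAX_ORDER_QTY = 10_000       # block obvious fat-finger sizes
MAX_PRICE_DEVIATION = 0.05   # reject prices more than 5% from last trade

def risk_firewall(order: Order, last_trade_price: float) -> bool:
    """Return True if the order may pass to the venue, False if blocked."""
    if order.quantity <= 0 or order.quantity > MAX_ORDER_QTY:
        return False  # malformed or fat-finger size
    deviation = abs(order.price - last_trade_price) / last_trade_price
    if deviation > MAX_PRICE_DEVIATION:
        return False  # price far from the market: likely an error
    return True
```

A check like this sits between the algo engine and the venue gateway, so a looping algo or a mistyped quantity is stopped before it can move prices.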

An algorithm is a tool in a trader's toolkit, not a naughty wild child. If the regulator parents are vigilant, and algos are subject to practical controls and monitored constantly for performance and for errors, market participants can sense and respond to market patterns before the aberrations or errors have a chance to move prices.

-Dan

Monday, October 11, 2010

When Does a Rogue Become a Scoundrel?

Posted by John Bates

Now that the Dodd-Frank Wall Street Reform Act is signed into law, there lies a mountain of work ahead for regulators. Making sense of the 2,000+ page document and turning it into viable recommendations and regulations will be an arduous process.

 

The Commodity Futures Trading Commission's second Technology Advisory Committee meeting, titled “Technology: Achieving the Statutory Goals and Regulatory Objectives of the Dodd-Frank Act,” will be held on October 12, 2010 at 1:00 p.m. in Washington, D.C. (https://tinyurl.com/2vfdp4n). At the meeting, my committee colleagues and I will discuss some of these goals and objectives. Specifically, as a result of the SEC & CFTC's report on the May 6th flash crash, CFTC Commissioner Scott O'Malia has said that he wants to look at whether algorithms that cause disruption in markets - rogue algorithms - should be treated as if they were rogue traders.

 

Commissioner O’Malia said in the announcement of the October 12 meeting: “While I do not believe that the flash crash was the direct result of reckless misconduct in the futures market, I question what the CFTC could have done if the opposite were true. When does high frequency or algorithmic trading cross the line into being disruptive to our markets? And, along those same lines, who is responsible when technology goes awry? Do we treat rogue algorithms like rogue traders?"

 

This is an interesting topic. When does an algorithm 'go bad'? Is it the algorithm's fault? Of course not - an algorithm does not decide to go rogue. It comes down to human error - either in the programming or in the execution thereof. In the case of the flash crash, a mutual fund chose a 'dumb' execution algorithm, preset with inappropriate parameters, to execute a large futures sell order in a market that was - by all accounts - ready to plummet. This circumstance illustrates how rogue algorithms can emerge as an unintended consequence of circumstance and/or human misjudgment.

 

When a trader goes rogue it is more deliberate. It can be because he is losing money and hiding it - as in the case of Jerome Kerviel at SocGen, or maybe he had too much to drink at lunchtime and was feeling invincible - like Steve Perkins at PVM. The former lost the bank over $6bn, the latter lost his brokerage $10m. These were very human errors, effectively the work of scoundrels.

 

What rogue traders and rogue algorithms have in common is that both can, in many circumstances, be detected early - or even prevented - through the use of better technology. Comprehensive pre-trade analysis, including backtesting algorithms under a wide range of circumstances, could have prevented the 'dumb' May 6th algo from having its way with the market. Thorough real-time risk management and monitoring could have spotted Kerviel's limit-busting trading patterns and his hiding the trades. Pre-trade risk controls would have kicked the PVM trader out of the system before he got in too deep.

 

It is no longer acceptable to blame rogues and scoundrels for market anomalies or for banks, brokers and buyside firms losing money. The technology is there, it simply needs to be used.

 

Wednesday, September 15, 2010

I Would’ve Got Away With It Too - If It Wasn’t for You Pesky Kids….

Posted by John Bates

Strange things are happening in the markets! As you know if you’ve been following this blog, quite a few scares, horrors and apparitions have been sighted in equities, futures, FX and oil markets over the last few months. But fortunately the Mystery Machine has rolled into town and Scooby, Shaggy and the gang are investigating…

 

The latest spooky mystery is the “phantom orders” that appeared to scare participants on the CME (read more about it here). For six minutes on Tuesday, unexplained orders - which could have been caused by a “fat finger” error or an “algo gone wild” - prompted CME to invoke Rule 587, which gives it powers “including without limitation, closing the market, deleting bids and offers, and/or suspending new bids and offers”. So although they weren’t sure what was going on, the CME was at least able to respond.

 

Another report this week shows how those involved in alleged market abuse are starting to be unmasked. Last week I wrote a blog posting called “Algos Can Get Stuffed” which was also featured on the Tabb Forum. In it I talked about the possibility of firing orders into the market with the intention of misleading the market or slowing the market down – and even how some link this phenomenon with the flash crash. This week a trading firm called Trillium was fined by FINRA for using illegal trading practices (read more about it here).  Trillium was fined $1m for sending orders aimed at deceiving investors. Nine traders entered buy and sell orders in patterns that aimed to manipulate the prices of instruments. And they did this 46,000 times! This “layering” enabled Trillium to trade at a profit once they’d manipulated the price.
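
The layering pattern described above can, in principle, be spotted by surveillance software with a simple sliding-window heuristic: a trader rests many orders on one side of the book, then trades on the opposite side. The event format and thresholds below are hypothetical, a toy version of what a real surveillance engine would do:

```python
from collections import deque

# Crude layering heuristic, invented for illustration: flag a trader who
# rests many orders on one side of the book and then executes a trade on
# the opposite side within a short window.
LAYER_THRESHOLD = 5   # suspicious count of one-sided resting orders
WINDOW_SECS = 2.0

def detect_layering(events):
    """events: iterable of (timestamp, trader, action, side) tuples,
    where action is 'order' or 'trade' and side is 'buy' or 'sell'.
    Returns the set of traders flagged for review."""
    resting = {}          # trader -> deque of (timestamp, side)
    flagged = set()
    for ts, trader, action, side in events:
        book = resting.setdefault(trader, deque())
        while book and ts - book[0][0] > WINDOW_SECS:
            book.popleft()               # expire stale resting orders
        if action == "order":
            book.append((ts, side))
        elif action == "trade":
            opposite = "sell" if side == "buy" else "buy"
            if sum(1 for _, s in book if s == opposite) >= LAYER_THRESHOLD:
                flagged.add(trader)
    return flagged
```

A production system would of course track cancellations, order sizes and price levels too; the point is that the pattern is mechanically detectable in real time.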

 

These two incidents show that awareness of the problems we’ve been writing about on this blog has increased radically. Trading venues are more aware that algos gone wild and fat fingers can cause market panics and manipulate prices. Regulators are more aware that high frequency trading can be used as a weapon for market manipulation.

 

But we can’t rest on our laurels. Maybe we got lucky this time. As market data volumes continue to increase we need to have more advanced capabilities to detect problems and abuse as it’s happening, and recommend actions to take in response. Let’s ensure the Mystery Machine is fitted with the latest real-time surveillance equipment. Let’s enable the gang to unmask more villains at the haunted “high frequency manor” inspiring the legendary outburst of “I would’ve got away with it if it wasn’t for you pesky kids….”

Wednesday, September 08, 2010

Algos can get stuffed!!

Posted by John Bates

As regulators continue to look into the causes of the May 6th flash crash, some high frequency trading approaches are coming under scrutiny. In particular the concept of “quote stuffing”, where algorithms send so many orders into the order book that the market cannot possibly respond, has come under fire from market participants and the press.

 

The SEC, having reportedly decided that quote stuffing probably did not have a major role in the flash crash, is now taking aim at the practice to see if it puts some investors at a disadvantage by distorting stock prices (https://tinyurl.com/264kr3o). The CFTC is looking at data from database developer Nanex and mulling how to address quote stuffing in futures markets (https://tinyurl.com/3a7w7sv).

 

Meanwhile, concerning incidents continue to happen in the market. Just last week there was an incident that caused Christopher Steiner at Forbes to write a story called “Did we dodge another flash crash on September 1st?” The story describes how on September 1st at 10am quote volumes ballooned - as they did on May 6th. In fact quotes reached 275,000 per second, compared with 200,000 on May 6th. Unlike the flash crash, though, there wasn’t a dramatic fall in prices. However, the bids and offers did cross for a time – leading to high frequency traders taking advantage of arbitrage opportunities. This data was exposed again by the firm Nanex – leaving the market wondering whether quote stuffing by high frequency traders was behind the spikes.

 

The world is waking up to the fact that high frequency and algorithmic trading have quietly become part of the market fabric, and the world does not seem to be too happy about it. HFT and algorithms are being "demonized", as one FT article put it, and I agree. I also think the hype is overblown.

 

Once trading became automated, trading strategies naturally morphed to take advantage of the available technology and higher speeds. High frequency statistical arbitrage techniques can also mean more order cancellations, some of which may - wittingly or unwittingly - fall into the quote stuffing category. Those involved in intentional quote stuffing as a strategy need to be taken to task. But to demonize all strategies or call for banning them is a step backward. What is needed is a framework by which to police them - and to prevent them from going wrong. The technology to do this is already available. For example, an area I’ve had a lot of experience in is the use of complex event processing to provide a platform for high frequency, multi-venue market surveillance. With such a system, quotes can be monitored to determine how many quotes per second there are on each ticker symbol, the ratio of quotes to trades, and when large spikes are emerging - along with many other real-time analytics and patterns that are useful to track.
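
A toy version of that kind of monitoring - quotes per second and quote-to-trade ratio per symbol over a sliding window - might look like the sketch below. The interfaces and window size are illustrative, not any vendor's actual API:

```python
from collections import defaultdict, deque

WINDOW_SECS = 1.0   # illustrative sliding window

class QuoteMonitor:
    """Tracks per-symbol quote rates and quote-to-trade ratios."""
    def __init__(self):
        self.quotes = defaultdict(deque)   # symbol -> quote timestamps
        self.trades = defaultdict(int)     # symbol -> trade count

    def on_quote(self, symbol, ts):
        q = self.quotes[symbol]
        q.append(ts)
        while q and ts - q[0] > WINDOW_SECS:
            q.popleft()                    # expire quotes outside window

    def on_trade(self, symbol):
        self.trades[symbol] += 1

    def quotes_per_second(self, symbol):
        return len(self.quotes[symbol]) / WINDOW_SECS

    def quote_to_trade_ratio(self, symbol):
        return len(self.quotes[symbol]) / max(self.trades[symbol], 1)
```

A surveillance engine would raise an alert when either metric spikes far above a symbol's normal baseline - the kind of signature quote stuffing leaves behind.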

 

But regulation of high speed trading practices has fallen short to date. Regulators have not had the funds, the technology, the power or the expertise to follow and control high speed trading. However, it is good to see that progress is now being made. CFTC commissioner Bart Chilton wrote last week in an article entitled Rein in the cyber cowboys: “There may be some cyber cowboys out there and they could be giving respectable traders a bad name”. His colleague CFTC commissioner Scott O'Malia told Reuters last week that, if traders are flooding the market with orders with the intention of slowing others down, the regulator would consider addressing quote stuffing under new rules in the financial regulation bill that deal with disruptive trading practices.

 

It is possible that quote stuffing is causing more problems than just slowing down the natural flow of trades. Trading behavior patterns suggest that these quotes are a distraction to other traders. There are evident patterns where the quote "stuffer" consistently traded first - possibly by distracting others. And the disruption can cause the bid and offer to cross – providing a nice arbitrage opportunity for those who are not distracted! All of this needs to be looked into further. I fully support the CFTC and SEC's efforts to get to the bottom of not just the flash crash, but HFT and algorithmic trading practices in general. These are now integral to the equities and futures markets, and increasingly to FX, fixed income and energy. What we need is better policing of the markets to protect the honest ranchers from the cyber cowboys.

Tuesday, August 31, 2010

Taming the Wild Algos

Posted by John Bates

"And now," cried Max, "let the wild rumpus start!"

— Maurice Sendak: Where the Wild Things Are

 

It’s not just equities and futures markets where strange stuff happens! An “algo gone wild” was spotted in the oil market (it actually happened earlier this year) and intrepid Reuters journalists got to the bottom of it.

 

High frequency trading firm Infinium Capital Management is at the center of a six-month probe by CME Group (and reportedly the CFTC) into why its brand new trading program malfunctioned and racked up a million-dollar loss in about a second, just before markets closed on Feb. 3. The algorithm went live four minutes before the end of trading and fired in 2,000-3,000 orders per second before being shut off. The oil price surged $1, then slid $5 over the course of the next two days. Read the full story here:

https://www.reuters.com/article/idUSTRE67O2QQ20100825

 

I know Infinium's CEO, Chuck Whitman, from the CFTC Technology Advisory Committee – he’s a good guy and very knowledgeable. I believe him when he says his wild algos had no malicious intent – the algos were just broken and shouldn’t have been put live.

 

With algorithms and HFT comes the possibility of mistakes. Many more firms outside of the equities world are embracing HFT and their inexperience can cause market disruptions such as the Feb 3rd CME issue. A flash crash in oil or other commodities - or even foreign exchange - is not to be scoffed at. In fact, many commodities markets are much less liquid and homogenous than equities, and can be even more vulnerable to mistakes or manipulation. In the case of Infinium, the algo caused a spike in trading volumes by nearly eight times in less than a minute. It was a classic case of the algo running wild until it faltered and 'choked'. This is not how HFT strategies are supposed to work.

 

There are a number of best practices that can be used to mitigate the risk of algos going wild:

 

The first best practice is diligent backtesting – using historic data and realistic simulation to ensure many possible scenarios have been accounted for. What does the algo do in a bull market, a bear market, at the open, at the close, when unexpected spikes occur, during a flash crash, when non-farm payrolls or other economic news is released, and so on? Of course there’s always the possibility of a “black swan” scenario – but then there’s always the possibility of an earthquake in London, and I bet the buildings aren’t built to withstand one. It’s a matter of covering likely possibilities as best you can. A backtesting process needs to be streamlined, of course, as short time to market for new algos is key.
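
As a sketch of the replay idea, a bare-bones backtest loop might feed recorded ticks through a strategy and count limit breaches along the way. The strategy, tick format and limits here are hypothetical - real platforms replay full order-book depth, news events and venue outages, not just last-trade prices:

```python
def backtest(algo, ticks, max_position=1000):
    """Replay ticks [(timestamp, price), ...] through an algo.
    Returns (mark-to-market pnl, count of position-limit breaches)."""
    position, cash, breaches = 0, 0.0, 0
    for ts, price in ticks:
        qty = algo.on_tick(ts, price)     # signed order quantity
        position += qty
        cash -= qty * price
        if abs(position) > max_position:  # catch breaches in simulation,
            breaches += 1                 # not in live trading
    final_price = ticks[-1][1]
    return cash + position * final_price, breaches

class BuyTheDip:
    """Hypothetical toy strategy: buy one lot whenever the price drops."""
    def __init__(self):
        self.last = None
    def on_tick(self, ts, price):
        qty = 1 if self.last is not None and price < self.last else 0
        self.last = price
        return qty
```

Running the same loop over bull, bear, open, close and crash scenarios is exactly the scenario coverage argued for above.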

 

A second best practice is building a real-time risk firewall into your algo environment. Just like a network firewall stops anomalous network packets reaching your computer, so the risk firewall should stop anomalous trades getting to trading venues. These anomalous trades might be human or computer generated – such as “fat finger” errors, risk exposures (for a trader, a desk or an institution) being breached, or even algos gone wild (e.g. entering spurious loops and spitting out anomalous orders). Real-time risk monitoring is a second level protection for those problems you don’t catch in backtesting.

 

A third best practice is to use real-time market surveillance in your algo environment. Even if trades do not breach risk parameters, they may breach compliance rules, regulations or may be perceived by a regulator as market manipulation (by accident if not design). Detecting these patterns as they happen enables good internal policing by trading firms, rather than investigation or prosecution by regulators.

 

An algorithm is a tool in a trader's toolkit, and it needs to be taken care of as such. If it is well-oiled and the trader or quant or risk manager monitors its progress then the algo will do its job quickly and nicely. If the trader/quant/risk manager doesn’t properly prepare the algo or ignores the algo and lets it get rusty, so to speak, it could lose its edge and run amok. Algorithms must be monitored constantly for performance and for errors, and sometimes tweaked on-the-fly to ensure best results. A good algorithmic trading platform will enable trading firms to do just that.

 

Trading firms are not the only ones who need to be on guard for possible algos gone wild. In the case of Infinium, the regulators and the exchange were also slow on the uptake. This shows that everyone needs to be proactive in using the correct tools to monitor algorithmic trading. Sensing and responding to market patterns before the aberrations or errors have a chance to move prices is the right thing to do - in all asset classes. Be like Max and tame the wild things!

Wednesday, July 21, 2010

Defending Against the Algo Pirates

Posted by John Bates

It was an honor to sit on the CFTC Technology Advisory Committee (TAC) last week. I was very impressed with the presentations and discussion, chaired ably by Commissioner Scott O’Malia. I was also impressed by the other Commissioners and with my fellow committee members. This week the CFTC has been discussing new rules to handle disaster recovery, and one topic discussed at the TAC has received further coverage – that of pirate algos attacking algos going about their normal trading business in an attempt to manipulate the market.

 

Further coverage can be seen in this article “CFTC posits new disaster recovery rules as regulators probe 'algo price pirates'”

 

https://www.finextra.com/news/fullstory.aspx?newsitemid=21610

 

The CFTC has a sensible proposal on the table to require exchanges and clearing houses to have effective disaster recovery plans in order to quickly recover from any market-wide disruption. After 9/11 it became clear that many NYC-based financial services firms were not prepared for a disaster of that magnitude, and subsequently took disaster recovery (or business continuity as it came to be known) very seriously. Now it is time for those virtual businesses - exchanges and ECNs - to do the same.

 

Operational risk is a very real issue in today's fast moving markets, where anything can go wrong. Being able to recover and quickly start trading again - across all exchanges and destinations - is paramount. The May 6th 'flash crash' gave us a glimpse of what can happen when something goes wrong at one exchange and the rules across other exchanges are not harmonious.

 

The flash crash was a man-made event exacerbated by machines. Algorithms are programmed to do as they are told, and if one destination is not responding they will hunt down and ping, scrape and trade on whatever others they can find. Sometimes this can have unfortunate consequences for the market as a whole. This is why there must be consistency across trading venues in how they respond to crises.

 

At the CFTC's Technology Advisory Committee meeting last week, there were several interesting observations about high frequency trading and algos. We heard new analysis of the flash crash from trade database developer Nanex LLC. The Nanex report suggested that predatory practices such as "quote stuffing", where algos try to prevent other high-frequency traders from executing their strategies, may have contributed to the crash. Commissioner Chilton of the CFTC (who I had the pleasure of sitting next to at the TAC last week), the TAC and the SEC are taking these claims very seriously. Commissioner Chilton expressed his concern that there are algorithms out there hunting down and interfering with other algorithms, calling them 'algo price pirates' and suggesting they may trigger a new enforcement regime. Now, I believe that firms and their algos are going to monitor the market with the goal of figuring out how your algos work and devising a strategy to capitalize – that’s just the natural order of capitalism. However, that’s different from using algo terrorism to bully the market into behaving a particular way. That’s something we need to watch for and prevent, for it causes damage.

 

If such 'pirates' are to be policed and caught, the regulators will have to sail with the pirates in shark-infested high frequency waters. Surveillance and monitoring are critical, as is the need for speed. The speed at which algorithms can pump quotes into a destination is daunting, so the policemen will also need to work at ultra high velocity. I was a little concerned when Commissioner Chilton said at the TAC meeting: "Just because you can go fast it doesn't mean you should." I know where he’s coming from but would modify the statement to say that in HFT it is critical to go fast to be competitive – but you need the proper best practices, training and safety precautions. High frequency trading, if properly monitored, need not be scary or evil. It can contribute to liquidity and market efficiency, and provide alpha generation. To truly address the HFT issue, real time market surveillance technology must be adopted to monitor and detect patterns that indicate potential market abuse such as insider trading or market manipulation. Or pirate algorithms trying to board your strategy ship and take off with the gold doubloons. 

Wednesday, June 30, 2010

What do you do with the drunken trader?

Posted by John Bates

The news that Steven Perkins, (former) oil futures broker in the London office of PVM Oil Futures, has been fined 72,000 pounds ($108,400) by the FSA and banned from working in the industry is no surprise, see article here:

 

https://www.telegraph.co.uk/finance/newsbysector/energy/oilandgas/7862246/How-a-broker-spent-520m-in-a-drunken-stupor-and-moved-the-global-oil-price.html.

 

It could have been worse, given that the broker, after a few days of heavy drinking, took on a 7.0 million barrel long position in crude oil in the middle of the night. The fine seems minuscule given that the incident cost PVM somewhere in the vicinity of $10 million after unwinding the $500+ million position.

 

The surprising thing about this incident is that it happened at all. Perkins was a broker, not a trader. He acted on behalf of traders, placing orders on the Intercontinental Exchange among other places. That he could go into the trading system and sneak through 7.0 million barrels without a customer on the other side is unbelievable.

 

Heavy drinking is practically a job requirement in the oil industry, my sources tell me, so this kind of thing could be a real issue going forward. As algorithmic trading takes hold in the energy markets, trading may approach the ultra high speeds seen in equities markets.  This is a recipe for super high speed disaster, unless there are proper controls in place - especially if there were a way for the broker or trader in question to enrich himself in the process.

 

One powerful way to prevent this kind of accident or fraud is through the use of stringent pre-trade risk controls. The benefits of being able to proactively monitor trades include catching "fat fingered" errors, preventing trading limits from being breached, and even warning brokers and regulators of potential fraud - failures on any of these fronts cost brokers, traders and regulators money, and PVM is a case in point.

 

Ultra-low-latency pre-trade risk management can be achieved by brokers without compromising speed of access.  One solution is a low latency "risk firewall" utilizing complex event processing as its core, which can be benchmarked in the low microseconds.  Errors can be caught in real-time, before they can reach the exchange. Heaving that drunken trader right overboard, and his trades into the bin.
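
A crude sketch of such a broker-side control follows, assuming a simple model in which every order must map to a known customer account and stay within that account's position limit - the account names and limits are invented for illustration:

```python
# Illustrative broker-side pre-trade control. In the PVM-style scenario,
# an order with no backing customer, or one breaching a customer's limit,
# is rejected before it ever reaches the exchange.

ACCOUNT_LIMITS = {"CUST-001": 500_000}   # max barrels per customer account
positions = {"CUST-001": 0}              # current net position, in barrels

def pre_trade_check(account, barrels):
    """Accept the order (and book the position) only if it is backed by a
    known customer and stays inside that customer's limit."""
    if account not in ACCOUNT_LIMITS:
        return False                     # no customer on the other side
    if abs(positions[account] + barrels) > ACCOUNT_LIMITS[account]:
        return False                     # would breach the position limit
    positions[account] += barrels
    return True
```

The check itself is trivial; the engineering challenge, as noted above, is running it in the low microseconds so that speed of access is not compromised.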

 

Monday, January 11, 2010

10 Reasons Why Progress Chose Savvion

Posted by John Bates

Today Progress announced the acquisition of Savvion https://web.progress.com/inthenews/progress-software-co-01112010.html

The reason Progress chose to enter the BPM market is clear. Businesses are increasingly turning to BPM to implement and improve their business processes. Why? Firstly, because no other solution can help enterprises achieve real-time visibility, agility, efficiency and business empowerment the way BPM does. Secondly, because BPM enables this with a low Total Cost of Ownership (TCO) and ease of use.

But why did Progress choose Savvion? Here are 10 reasons to start off with…

  1. Savvion is a trailblazer and industry leader – Savvion is a pioneer in BPM but is also still at the cutting edge. We wanted the best BPM thinkers at Progress. 
  2. Savvion has been proven to work at the enterprise level – some BPM systems only work at the departmental level, but Savvion works at either the departmental or the enterprise level.
  3. Savvion offers System-centric and Human-centric BPM – Savvion can orchestrate processes but can also involve human users in workflow.
  4. Savvion is event-enabled – so business processes can respond to events. Progress has a lot of momentum behind event-driven business systems through our Actional and Apama solutions – and Savvion will work seamlessly in event-driven business solutions.
  5. Savvion offers vertical industry solutions – Analogous to Progress’ Solution Accelerators, Savvion offers out-of-the-box vertical solutions in industries including Financial Services and Telecommunications.
  6. Savvion offers an integrated Business Rules Management System – Expressing logic in terms of rules can often be very important. Savvion has developed a rules engine, integrated with its BPM system, enabling decision-oriented BPM – modifying the process flow based on rule conditions. This is a powerful capability.
  7. Savvion offers an integrated Analytics Engine – Business Intelligence has proved its worth, but it is a “rear view mirror” technology – analyzing facts that have already happened. Savvion’s analytics engine enables continuous analytics to augment business processes and human users with advanced real-time analytics, enabling better decision-making.
  8. Savvion offers an integrated Document Management System (DMS) – Savvion’s integrated DMS enables rich document handling and empowers document-centric BPM.
  9. Savvion BPM suite is highly scalable, high performance and highly available – At Progress we pride ourselves on the strength of our underlying technology. We want to offer our customers a complete solution that embodies scalability, performance and availability. Thus selecting a BPM vendor in-keeping with this philosophy was key – and Savvion is just such a vendor.
  10. Savvion is a great cultural fit with Progress – An often-overlooked point is that cultural fit is key to acquisition and integration success. The Savvion team pride themselves on being innovative, customer-focused and fun - just like the Progress team. We’re looking forward to working together. 

Tuesday, December 22, 2009

My Baby Has Grown Up

Posted by John Bates

I was proud to recently be appointed CTO and head of Corporate Development here at Progress Software https://web.progress.com/en/inthenews/progress-software-ap-12102009.html. But I don’t want anyone to take that as an indication that I won’t still be involved with event processing – au contraire. Event processing (whether you call it CEP or BEP) is now a critical part of enterprise software systems – I couldn’t avoid it if I tried!!

But taking a broader role does give me cause to reflect upon the last few years and look back at the growth of event processing and the Progress Apama business. Here are some observations:

  • It’s incredibly rare to have the pioneer in a space also be the leader when the space matures. I’m really proud that Progress Apama achieved that. Our former CEO Joe Alsop has a saying that “you don’t want to be a pioneer; they’re the ones with the arrows in their backs!” Usually he’s right on that one – but in the case of Progress Apama, the first is still the best! Independent analysts, including Forrester and IDC, all agree on it. Our customers agree on it too.
  • It’s tough at the top! I had no idea that when you are the leader in a space, many other firms’ technology and marketing strategies are based completely around you. I have met ex-employees of major software companies that have told me that there are Apama screenshots posted on the walls of their ex firms’ development centers – the goal being to try to replicate them or even improve on them. Other firms’ marketing has often been based on trying to criticize Apama and say why they are better – so their company name gets picked up by search engines when people search for Apama.
  • Event processing has matured and evolved. Yes it is certainly used to power the world’s trading systems. But it’s also used to intelligently track and respond to millions of moving objects, like trucks, ships, planes, packages and people. It’s used to detect fraud in casinos and insider trading. It’s used to detect revenue leakage in telecommunications and continually respond to opportunities and threats in supply chain, logistics, power generation and manufacturing. It enables firms to optimize their businesses for what’s happening now and is about to happen – instead of running solely in the rear view mirror.
  • Despite all the new application areas, Capital Markets remains a very important area for event processing. Critical trading operations in London, New York and around the world are architected on event processing platforms. The world’s economy is continually becoming more real-time, needs to support rapid change and now needs to support real-time views of risk and compliance. We recognize the importance of Capital Markets. My congratulations to Richard Bentley, who takes on the mantle of General Manager of Capital Markets to carry on Progress Apama’s industry-leading work in this space. With his deep knowledge and experience of both Apama and Capital Markets, Richard is uniquely placed to carry on the solutions-oriented focus that has been the foundation of Progress Apama’s success.
  • Even in a terrible economy, the value of event processing has been proven – to manage costs, prevent revenue leakage and increase revenue.  Progress announced our fourth quarter results today https://web.progress.com/en/inthenews/progress-software-an-12222009.html which saw a double digit increase for Apama and triple digit for Actional. Apama and Actional are used, increasingly together, to gain visibility of business processes without modifying applications, to turn business process activity into events and to respond to opportunities and threats represented by event patterns – enabling the dynamic optimization of business performance.
  • But one thing I do believe: that soon there will be no such thing as a pure-play CEP vendor. CEP is part of something bigger. We’ve achieved the first mission, which is to raise the profile of event processing as a new technique that can solve hitherto unsolvable problems. Now the follow on mission is to ensure event processing finds its way into every solution and business empowerment platform. It is one of a set of key technologies that together will change the world.

I wish everyone Happy Holidays and a successful and profitable 2010!