
April 2008

Saturday, April 26, 2008

Judgement Day

Posted by Richard Bentley

Earlier this week I was told by a client (a proprietary trading shop) that they were switching brokers - the bank to which they submit their (cash equities) orders for execution across a range of European and US Equity markets. Now this was intriguing to me - as I happen to know that their new broker implements their client-facing Direct Strategy Access (DSA) offering using Apama. So Apama will be generating orders and sending them to ... Apama. (Via FIX protocol as it happens)

Brought a smile ... but then they told me the rest. They were planning to go via this broker to a range of markets to run some new cross-market arb strategies - apparently their new broker provides access to more European markets with very low latency. One of their primary markets will be the new Turquoise exchange, which will launch in September. At this point you might see where this is going; at Turquoise their orders will be subject to surveillance by the Turquoise Market Abuse Detection system, built on top of ... Apama.

I am smiling no longer. Didn't Skynet start like this?

Wednesday, April 23, 2008

On (Complex) Event Processing

Posted by Richard Bentley

I'm sure this has been said before, but I had cause to think again today about what is actually "complex" about CEP. I've never been particularly comfortable with the term CEP (of course, my personal comfort is not required ...) as it suggests that CEP is in some way "hard". Well, those of us in the business of building CEP technologies know that the challenge of processing tens of thousands of events per second with sub-millisecond latency, against thousands of event patterns with temporal and logical constraints, introduces a shed-load of complexity under the hood - but that's the point really - it is (or should be) all under the hood; we don't call databases "tricky databases" just because they have some fancy background indexing and schema evolution capabilities inside them.

Anyway, that ship has sailed - and I digress. The thing that got me thinking about this (again) was the recent press release regarding our launch - with our partner Detica - of a Solution Accelerator for Market Surveillance. That press release gives some examples of the kinds of surveillance strategies - "front running", "washing" etc. - that the Solution Accelerator provides (along with a handy definition of what this jargon actually means). The point is that the event processing logic required for these types of applications is not in any way "complex" - e.g. "detect a spike in trading volume or a rapid price move within 10 minutes prior to a news release regarding a particular instrument" - despite the machinations under the hood that might be required to do this in real-time for all trades and news articles for all listed instruments on an exchange.
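To make that concrete, here is a minimal sketch in Python (deliberately not Apama EPL) of the volume-spike-before-news pattern described above. The class, event shapes and threshold are illustrative assumptions of mine, not part of the Solution Accelerator:

```python
from collections import defaultdict, deque

WINDOW_SECS = 600  # "within 10 minutes prior to a news release"

class SpikeBeforeNewsDetector:
    """Flags instruments whose traded volume in the 10 minutes before a
    news release is a large multiple of the historical pre-news volume."""

    def __init__(self, spike_factor=3.0):
        self.spike_factor = spike_factor
        self.trades = defaultdict(deque)               # symbol -> (ts, volume)
        self.baseline = defaultdict(lambda: (0.0, 0))  # symbol -> (total, windows seen)

    def on_trade(self, symbol, ts, volume):
        q = self.trades[symbol]
        q.append((ts, volume))
        # evict trades that have fallen out of the 10-minute window
        while q and ts - q[0][0] > WINDOW_SECS:
            q.popleft()

    def _window_volume(self, symbol, ts):
        return sum(v for t, v in self.trades[symbol] if ts - t <= WINDOW_SECS)

    def on_news(self, symbol, ts):
        vol = self._window_volume(symbol, ts)
        total, n = self.baseline[symbol]
        avg = total / n if n else None
        self.baseline[symbol] = (total + vol, n + 1)
        # alert if pre-news volume is a spike versus the running average
        return avg is not None and avg > 0 and vol > self.spike_factor * avg
```

The real system has to run logic like this continuously, per instrument, across every trade and news article on the exchange - which is exactly the complexity that belongs under the hood rather than in the strategy definition.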

No, what is key in supporting these types of applications or strategies is the *lack* of complexity - and the provision of tools allowing strategy builders - here, operational teams at the exchange - to quickly express the business logic, generate a Dashboard to visualise the alerts it generates, and get all this into production before it becomes irrelevant. As markets and traders become ever more sophisticated it is the ease with which strategies can be modified and new strategies deployed that determines whether CEP is the right technology or not, not how clever it might be under the hood.

The term CEP seems to be here to stay, and I'm certainly not volunteering to be the flag carrier for a terminology battle ("Event Processing"? Anyone?). But let's be clear that neither the use case nor the hoops that need to be jumped through to deploy it need be complex for "CEP" to be an effective technology solution.

The key to effective CEP technology is to keep as much of the complexity as possible away from the people who have to use it.

Tuesday, April 22, 2008

Asia Report: Fighting White Collar Crime

Posted by John Bates

Hello from Hong Kong. As always it is fascinating to see how CEP is evolving in Asia. One trend I am observing is the huge interest in Hong Kong in rogue traders and white collar crime – and how CEP can be used to detect and prevent this – before it moves the market. Obviously the original rogue trader, Nick Leeson, is well known here. But there has been a great deal of interest in more recent goings-on, at firms such as SocGen. Amazingly, until a couple of years ago, insider trading was not illegal in Hong Kong! Now we have a highly volatile market, with a lot of uncertainty, huge event volumes and a real problem of seeking out and preventing rogue trading activities, as well as managing risk exposure proactively.

Of course CEP provides a compelling approach. In market surveillance, the ability to monitor, analyze and act on complex patterns that indicate potential market abuse or dangerous risk exposure can allow a regulator, trading venue or bank to act instantly. Banks want the reassurance that they are policing their own systems. Regulators need to protect the public. The media and public here find this fascinating.

On the topic of a different kind of white collar crime – consider using CEP to detect abuse in the gaming industry. The gambling phenomenon that has propelled Macau to overtake Las Vegas as the world’s biggest gambling hub is also an exciting opportunity for CEP. We have customers using CEP to monitor and detect various forms of potential abuse in casinos. Events that are analyzed to find these patterns include gamblers and dealers signing on at tables, wins and losses, cards being dealt, etc. It is possible to detect potential illegal activities ranging from dealer-gambler collusion to card counting.
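As an illustrative sketch only - the class name and thresholds below are my own inventions, not any customer's actual rules - a crude proxy for dealer-gambler collusion can be expressed as a win-rate check over the stream of hand events:

```python
from collections import defaultdict

class CollusionMonitor:
    """Flags (dealer, gambler) pairs whose win rate over a minimum number
    of hands is improbably high - one simple proxy for collusion."""

    def __init__(self, min_hands=20, max_win_rate=0.6):
        self.min_hands = min_hands
        self.max_win_rate = max_win_rate
        self.hands = defaultdict(lambda: [0, 0])  # (dealer, gambler) -> [wins, total]

    def on_hand(self, dealer, gambler, gambler_won):
        rec = self.hands[(dealer, gambler)]
        rec[1] += 1
        if gambler_won:
            rec[0] += 1
        wins, total = rec
        # only alert once enough hands have been seen at this pairing
        return total >= self.min_hands and wins / total > self.max_win_rate
```

A production system would of course correlate many more event types (sign-ons, card sequences, bet sizes), but the pattern logic itself stays this simple.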

As a final thought - having met with some of our customers that operate both in Hong Kong and mainland China, it is clear that China is a massive market opportunity for CEP. Exciting times ahead for CEP in Asia.

Saturday, April 19, 2008

CEP down under

Posted by John Bates

I’m sitting here at Melbourne Airport in Australia on my way to Hong Kong. I’ve been delayed by a typhoon – probably a good reason to delay. After a very successful week in Sydney and Melbourne visiting customers, I thought I’d report that the CEP market is hotting up down under! As you would expect financial services is an early adopter and Apama has had several customers in Australia in this space for a few years now. But the demand is increasing. This is driven by factors such as increasing competitive pressures in the trading space and the impending fragmentation of the Australian market. Just like in Europe and North America, it is likely that several new trading venues will join the Australian Stock Exchange in offering liquidity in Australia. My diagram shows some of these in the form of Chi-X, AXE and Liquidnet.

Complex Event Processing offers a powerful way of monitoring, aggregating and analyzing the liquidity across all of these markets, as well as making real-time routing decisions. This of course can work in parallel with traders and algorithms. In fact it is becoming very interesting to see trading decision algorithms routing messages to execution algorithms, routing messages to liquidity tracking algorithms, routing trades to the market, where they are checked by market surveillance algorithms -- all of it implemented in CEP. I am biased of course, but what other technology can offer the seamless federation of such systems? Events provide a powerful and low latency mechanism for such interoperation. Each component can be built independently of the others - and yet they work together seamlessly. But I am getting off topic!

Over the last few years Australia has mainly been interested in equities algorithms, but now the interest in FX, futures, bonds and commodities is growing. While I was in Sydney, I was pleased to deliver the keynote address at the Trading Technology conference and met many interesting sellside and buyside participants with a variety of trading interests. It was fascinating to see how the market is developing.

And it is not just financial services where CEP is being applied down under. I also met with organizations in a number of other spaces including travel, transportation and location-based services. I hope to report more on these in the near future.

And now I look forward to finding out what is happening in Hong Kong and Asia beyond. Hopefully I can avoid the typhoon!



Thursday, April 17, 2008

RAD tools for CEP: the good, the bad and ...

Posted by Louis Lovas

On my drive home from work one evening I was doing my usual auto-pilot routine, listening to the news on the radio, when I heard an interesting technology-related story. It was a discussion of how technology jargon seems to slowly creep into our everyday language. Most of the story slipped by me, but there was one phrase that stuck with me. Just imagine 10 or 15 years ago if you had said to someone "I Googled his Blog on my Blackberry" - what sort of reaction would you get? A blank stare? Deer in the headlights? Made me chuckle to myself.

Over the past decade (or two) a plethora of new technology has become part of our everyday lives. Over that same period, the software business has witnessed incredible innovations, the maturing of long-standing technologies and new spins on tried and true ideas.

Rapid Application Development, or RAD as it's been known, has been around for many years and has been applied to the application development process in all sorts of ways. These generally graphical tools have been layered over traditional development languages and platforms, all with the lofty goal of shortening the application development cycle. RAD tools have been given a number of different names like Visual Programming, Application Designers, Scenario Modelers, etc. RAD tools for building commercial database applications have been around in various forms for over 20 years, and they seem to resurface each time a new application development technology emerges. CEP is no exception; most if not all CEP vendors have a RAD tool as part of their product offering. RAD tools offer big promises, but do they deliver?

RAD tools are not a panacea, but nor are they simply demo-building toys. Keeping the right perspective is important to getting the most out of them. RAD tools come in all shapes and sizes. For CEP the visual programming paradigm seems to have hit home. Given a designer's canvas, one can visually wire together abstract elements or components to form an application. These components, called smartblocks or operators, are dropped onto a canvas, and the programmer wires the outputs of one to the inputs of another. CEP applications typically start with connections to one or more raw streams of data. These raw stream operators are then wired to other operators to perform a number of tasks – such as deriving an event stream carrying a set of temporally sensitive statistics. Some CEP RAD designers also allow the programmer to add rules and execute actions on these streams of data in a workflow-like manner, again in that visual programming style.
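The wiring model behind that canvas can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual designer: each operator applies a function to an incoming event and pushes the result downstream, with a None result filtering the event out:

```python
class Operator:
    """A node in a dataflow graph: receives events, emits derived events
    to downstream operators - the model behind the drag-and-wire canvas."""

    def __init__(self, fn):
        self.fn = fn          # event -> derived event (or None to filter it out)
        self.outputs = []     # downstream operators

    def wire_to(self, downstream):
        self.outputs.append(downstream)
        return downstream     # return downstream so wiring calls can chain

    def push(self, event):
        result = self.fn(event)
        if result is not None:
            for op in self.outputs:
                op.push(result)

# wiring: raw stream -> filter -> running statistic -> sink
sink = []
stats = {"count": 0, "sum": 0.0}

def running_avg(price):
    stats["count"] += 1
    stats["sum"] += price
    return stats["sum"] / stats["count"]

source = Operator(lambda e: e)                                   # raw price stream
big = source.wire_to(Operator(lambda p: p if p > 10 else None))  # filter small trades
avg = big.wire_to(Operator(running_avg))                         # derived statistic
avg.wire_to(Operator(sink.append))                               # terminal sink

for price in [5, 20, 30]:
    source.push(price)
# sink now holds the running averages of the prices that passed the filter
```

The visual designer hides exactly this plumbing; the developer only sees boxes and wires.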

RAD tools are often on the front lines when vendors show their wares to prospects, and in the hands of an expert they're a thing of beauty. Whipping up a prototype application is always a big win and generally provides proof points for the technology. By their very nature they promote a certain style of programming. Unfortunately that style, while great for quickly churning out functioning applications, is somewhat of an opposing force to good application design methodology. As I mentioned, the value (or point) of using RAD tooling is to shorten the development cycle. Customers' initial reactions to this are always positive. Software development, as we are all painfully aware, is a labor-intensive task. Anything that shortens the time and therefore reduces costs is clearly perceived to be a bonus. But RAD tools do not always promote best practices. Please don't read this the wrong way; I'm not saying RAD tools inherently create badly designed apps. It's more the mindset and environment in which they're used. I'm sure we've all been there: first we build a prototype of something, next thing you know it's pre-production, and then it's deployed live (ahhh!). If you're lucky, all's well that ends well. But applications need constant care and feeding. Even if the application's platform provides many of the 'ilities (i.e. scalability, reliability, high-availability, etc.), apps still need a good dose of proper design to ensure they can stand the test of time (i.e. long-term maintenance).

RAD tools' visual programming metaphor typically bears little resemblance to traditional development languages, and therefore the techniques used to develop and debug tend to be equally unconventional. Because they make it so quick to try out ideas, RAD tools promote a trial-and-error style of code development (i.e. try something; if it doesn't work, try something else, and so on and so forth ...). I like to think of this as stimulus-response, or behavioral, code development. The developer simply observes the behavior of the RAD-generated application based on how the various abstract components are assembled. If it does not behave as expected, it's generally quick and easy to make a change and try again. The very short path from coding to testing promotes this style of development.

There are a number of other aspects of RAD that present challenges to application development. The visual programming metaphor is restricted in its ability to fully express semantic logic; there are always portions of applications too complex to express visually. Being able to seamlessly integrate with or connect to components written in standard languages helps to avoid working against the RAD tool instead of leveraging its power. RAD tools are often code generators, meaning the assemblage of abstract components is run through a parser that spits out source code in a traditional language or, in the case of CEP, the vendor's EPL. Debugging RAD-generated applications can therefore present a challenge. It would be ideal to be able to debug from the same high-level operator viewpoint used to create them, instead of trying to decipher machine-generated code.

If RAD were really a bad idea, I doubt it would have survived as a staple of software development for two decades. That short development cycle is a major benefit in fast-moving business climates. Competitive pressures, especially in tight-margin environments like Capital Markets, have strained IT organizations to the limit. Tools that quickly push new and updated applications to the forefront can make a huge difference to the bottom line. But in the end, employing good design methodologies is a key part of the application development cycle. Rapid Application Development tools are just one means to breathe life into that design.

Monday, April 14, 2008

CEP: Taking SOA into the "Front Office"

Posted by Matt Rothera

SOA has become mainstream in today’s IT environments, with the promise of a more flexible and agile IT infrastructure. One area, however, that seems to have been left behind in this movement is the role of SOA in “front office” environments. We typically think of the “front office” in terms of Capital Markets trading applications, but for the purposes of this discussion I would like to expand the definition to include any environment that deals with real-time feeds or real-world, physical objects. What is the prospective role for SOA in providing infrastructure support for operations that are outside the traditional IT strongholds? For example:

  1. In the manufacturing industry, this includes the plant operational infrastructure, such as the assembly line or associated control systems. It could also be proximity to the physical location of inventory, perhaps tracked with SCADA or RFID technology.

  2. In the telecommunications industry, this includes the physical services that manage telephone calls. For traditional telecommunications providers, these would typically be central offices. For newer service providers, it means being close to the origination and termination of different types of communication (IM, Voice over IP, cable, e-mail).

  3. In the energy industry, this means getting close to the actual source of power generation and distribution, such as energy grids, gas pipelines, etc.

  4. In the gaming industry, this could be close to the actual gaming floor (slots, tables, etc.).

  5. In the defense industry, this could be close to the actual battlefield (personnel, equipment).

Due to the potentially large number of objects and the real-time information that is often generated, these types of environments have typically not been suitable for SOA-based infrastructures, or they have required custom solutions to deal with the extreme real-time nature of their behavior. However, we see increasing interest in using complex event processing infrastructure as a “bridge” to a SOA-based infrastructure that can help mediate between the two environments. This can take place in a variety of ways:

  1. Real-Time BAM – There are new opportunities to track business activity at a much finer-grained level, and correlate that information with reference data or long-running business processes that typically reside in the back-office. This new breed of applications helps provide deeper visibility than ever before, and allows business to respond more quickly to changing business conditions at the “source” of business activity.

  2. Event Filtering – As the “front office” environments become more intelligent with sensors, the volume and scale of events will dramatically increase. A CEP infrastructure will be able to take in these large and complex streams of information, and create meaningful business events which can be forwarded to the back-office through the SOA infrastructure. 

  3. Complex Control Operations -  As the “front office” environments become more intelligent, there is an opportunity to control these environments in response to changing business conditions. As events occur within the front office, the CEP infrastructure can take real-time, complex actions to control the devices, potentially within milliseconds of the actual event condition. CEP can also kick-off longer running workflows using BPM technologies, traditionally found in back-office environments. 
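The event filtering role in point 2 can be sketched as follows - an illustrative Python toy of my own, with a callback standing in for the SOA message bus - where only a sustained anomaly in a high-volume sensor stream is promoted to a business event:

```python
from collections import defaultdict

class EventFilterBridge:
    """Condenses a high-volume sensor stream into occasional business
    events suitable for forwarding over a SOA message bus."""

    def __init__(self, threshold, forward):
        self.threshold = threshold      # readings above this are anomalous
        self.counts = defaultdict(int)  # sensor -> consecutive anomalous readings
        self.forward = forward          # callback standing in for the bus

    def on_reading(self, sensor, value):
        if value > self.threshold:
            self.counts[sensor] += 1
            # only the third consecutive anomaly becomes a business event;
            # later anomalies in the same run are suppressed
            if self.counts[sensor] == 3:
                self.forward({"sensor": sensor, "event": "SUSTAINED_ANOMALY"})
        else:
            self.counts[sensor] = 0
```

Thousands of raw readings per second stay on the CEP side of the bridge; only the rare, meaningful business event crosses into the back-office infrastructure.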

The CEP infrastructure can become a valuable piece of an overall SOA infrastructure by bridging the physical world (the “front office”) with the back-office, allowing organizations to create a more agile infrastructure that spans the entire enterprise. CEP can help traditional organizations build new, innovative applications that reside in front-office environments using proven, real-time infrastructure as the foundation.