
September 2007

Friday, September 28, 2007

Unus per utilitas ut fulsi perspicuus ero laurifer

Posted by John Trigg

The Gartner CEP show in Orlando last week demonstrated a number of interesting things about CEP platforms:

1. The story of core CEP is common across all vendors.

2. Aside from how CEP is expressed within a platform, true differentiation starts to emerge in the tooling and in the user constituencies that CEP serves.

3. The future state described by Dr Luckham is one in which CEP logic becomes available in reusable libraries/repositories that can be folded into applications.


The premise of Apama has always been that you must embrace the full spectrum of users (IT, business analysts, end users, and senior management) to gain true productivity and application acceleration with CEP development.  Perhaps that is true for any application development platform.  But it has been a core tenet of the Apama platform since day one: a platform that incorporates both a core event language aimed at programmers and a metaphor for expressing and implementing event logic for the non-programmer who owns the core IP of the process.  Being able to express this information graphically, making it accessible and understandable, and then to take action completes the user stack we aim for with the Apama platform.


We have blogged about this before here and here, but what caught my interest this time is the pitching of pure CEP programming approaches as supposedly open to the core business user.  To construct a complete CEP application within a comprehensive platform, you should be able to exercise the skills and knowledge of different users in a collaborative environment.  Iterative, componentized development of interfaces, business logic, presentation, and action draws on the minds of many, and in CEP all four elements are core to rapid, real-time execution and adaptiveness.


The point that Dr Luckham makes about the accessibility and reuse of CEP components in application construction as a future state is one that can be realized now.  Smartblocks within Apama allow for the encapsulation of reusable logic that can subsequently be incorporated into other CEP processing.  For instance, logic representing common trading algorithms, known air traffic congestion patterns, network intrusion patterns, supply chain metrics, or internally developed analytics can all be expressed as CEP patterns.  Creating any of these as an Apama Smartblock, and organizing them into meaningful catalogs for analysts to interrogate and select from, speeds application development and eliminates replication of core CEP logic within a larger implementation.
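To make the idea concrete, here is a minimal, hypothetical sketch in Python (not Apama EPL; the names and structure are illustrative only, not the Smartblock API) of a pattern catalog in which each reusable detection pattern is packaged as a self-describing block that analysts can browse and compose:

```python
# Hypothetical sketch of a reusable-pattern catalog, in the spirit of
# Smartblocks. All names here are illustrative, not Apama's API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Event = Dict[str, Any]

@dataclass
class PatternBlock:
    """A self-describing, reusable piece of event-processing logic."""
    name: str
    description: str
    match: Callable[[List[Event]], bool]  # detection logic over an event window

@dataclass
class PatternCatalog:
    """Organizes blocks so analysts can browse, select, and reuse them."""
    blocks: Dict[str, PatternBlock] = field(default_factory=dict)

    def register(self, block: PatternBlock) -> None:
        self.blocks[block.name] = block

    def find(self, keyword: str) -> List[PatternBlock]:
        return [b for b in self.blocks.values()
                if keyword.lower() in b.description.lower()]

catalog = PatternCatalog()
catalog.register(PatternBlock(
    name="vwap_cross",
    description="Trading: price crosses above its volume-weighted average",
    match=lambda w: (len(w) >= 2
                     and w[-2]["price"] <= w[-2]["vwap"]
                     and w[-1]["price"] > w[-1]["vwap"]),
))
print([b.name for b in catalog.find("trading")])  # -> ['vwap_cross']
```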


The mainstream adoption of CEP won't be based solely on it being cool technology (it is, but then again so was this). Being able to accelerate and redefine application construction in a real-time world will further its adoption.  And maybe so will a solid, well-thought-out set of standards.  But that is for another time …

Tuesday, September 25, 2007

Thank You Gartner - Event Processing Conference #1 In the Books

Posted by John Trigg

Between September 19th and 21st, Gartner held its first conference on Complex Event Processing and Business Activity Monitoring in Orlando, Florida.  Some 200 people (by my estimation) came together to meet others interested in these technologies, as well as to see and hear presentations from a range of Gartner analysts, CEP vendors, educators and thought leaders, and, most importantly, users of CEP.  The conference was bookended by impressive presentations from Roy Schulte and Bill Gassman on Day One, setting out the current state of the CEP and BAM market, and from Dr David Luckham, who closed the conference with a thoughtful and insightful look at the future of CEP.


We’ll post entries about different aspects of the conference over the coming days and weeks.  But for now it is important to stress how timely this conference was, and how its attendees have shown that the principles of CEP are beginning to take hold in a wide array of industries and solutions.  Across the three conference tracks organized by Gartner (Event Processing, Business Activity Monitoring, and Event Processing in Banking and Capital Markets) and the vendor-sponsored sessions, we heard descriptions of CEP applied in scenarios ranging from algo trading to clickstream analysis to content distribution to manufacturing, and many more.


Architectural presentations were also prevalent, with many sound ideas put forward on the relationships within the ever-evolving alphabet soup of CEP, BAM, SOA, EDA, BPM, OLTP, ESB and, I am sure, many others.  Bringing together an audience such as this to discuss both practical implementations and more theoretical research lets insight flow around the CEP community, and helps us understand the ramifications when CEP is seen as more than just event feeds and event processing speeds.  For true application infrastructures to be built on the principles and technologies of CEP, a wide understanding of how we can evolve the relationships between these disciplines will be key.  And that understanding will come from the continued holding of conferences such as this one (we are already looking forward to next year in New York) and from the interplay between the many disciplines, vendors and consumers of these technologies.


Dr Luckham posited that CEP will become a ubiquitous infrastructure technology in some 30 years.  For that to be true - indeed for it to happen sooner - we all have a lot of work to put in … but you can be sure that it will be worth it.


Monday, September 17, 2007

To be or not to be (deterministic) ...

Posted by Richard Bentley

A fundamental requirement for an effective CEP engine is performance. Performance can be measured in many ways - the most common of which is throughput, or events per second (eps). Marketing literature from CEP vendors abounds with increasingly impressive eps numbers, but the lack of accepted industry benchmarks for CEP makes it difficult to understand how these numbers translate to real-world use cases - in terms of whether you can obtain these figures whilst doing non-trivial work with the events once you've got them ...

The twin of throughput is latency: how quickly can you process an event? In Algorithmic Trading, for example, latency might be measured as the elapsed time from a market data event coming into the CEP engine to an order being sent out, triggered by that event. With the increasing deployment of Algo Trading engines in Capital Markets, fractions of a millisecond matter; the first to detect the opportunity and get the order to market wins the prize. Ultra-low latency may not make for good headlines of the "bazillion events per second" variety, but it is often of more relevance to the success or failure of a CEP application.
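As a rough illustration of the two measures, here is a hedged Python sketch (the handler and all names are stand-ins, not any vendor's API) that drives a trivial event handler and reports both throughput in eps and per-event tick-to-order latency:

```python
# Illustrative benchmark: throughput (eps) and per-event latency for a
# stand-in CEP handler. The handler below is deliberately trivial.
import time

def handle_market_data(event: dict) -> dict:
    """Stand-in for CEP logic: turn a market data event into an order."""
    return {"side": "BUY", "qty": 100, "symbol": event["symbol"]}

def benchmark(events: list) -> None:
    latencies = []
    start = time.perf_counter()
    for event in events:
        t0 = time.perf_counter()
        handle_market_data(event)                    # tick in ...
        latencies.append(time.perf_counter() - t0)   # ... order out
    elapsed = time.perf_counter() - start
    print(f"throughput: {len(events) / elapsed:,.0f} eps")
    print(f"worst-case latency: {max(latencies) * 1e6:.1f} microseconds")

benchmark([{"symbol": "XYZ", "price": 10.0 + i * 0.01} for i in range(100_000)])
```

A trivial handler will always post flattering numbers; the point of the sketch is that both figures should come from the same run, doing the same work.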

In our experience over the last eight years developing a CEP engine, we have seen ceiling latency requirements come down hugely; the benchmark in Algo Trading, for example, is now single-digit milliseconds, including any adapter transformation to normalise messages from Market Data Feeds and Order Management Systems. At the same time we have seen the complexity of trading algorithms increasing. Testing of algorithms is becoming more and more important, but testing can never cover all the corner cases that can occur when an algorithm is released into the white water of real market data; cf. the turmoil in the markets caused by the events of the last few weeks …

So when an algo does not seem to be behaving as it should, how can we diagnose what’s going on? We could get the algo to generate detailed logs, but that is hardly going to help us meet the latency requirements described above, nor give us an easy way to recreate the situation in a controlled environment. We want our CEP engine to give us ultra-low latency, but we also need full disclosure to aid later analysis and optimisation.

The Apama CEP Engine solves this with a hybrid approach: rather than generating extensive logging in the application, the Apama platform can capture every event input to the algo in an integrated event store, with minuscule additional overhead. This “replay log” can then be played back in a test environment, at different playback speeds, with application logging turned way up to investigate exactly what an algorithm did and why it did it. It can also be used to validate a fix once the algo has been tweaked.
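A minimal sketch of the capture-and-replay mechanism, assuming a simple JSON-lines event store (this is not Apama's actual replay API, just an illustration of the technique):

```python
# Hypothetical capture-and-replay: record every input event with its
# arrival timestamp, then feed it back at an accelerated rate.
import json
import time

def capture(event: dict, log_path: str = "replay.log") -> None:
    """Append the event plus its arrival time to the event store."""
    with open(log_path, "a") as log:
        log.write(json.dumps({"ts": time.time(), "event": event}) + "\n")

def replay(log_path: str, handler, speedup: float = 100.0) -> None:
    """Feed recorded events to a handler, compressing the original gaps."""
    with open(log_path) as log:
        records = [json.loads(line) for line in log]
    for prev, cur in zip([None] + records, records):
        if prev is not None:
            time.sleep((cur["ts"] - prev["ts"]) / speedup)
        handler(cur["event"])
```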

The playback approach relies on a fundamental property of the CEP engine, namely determinism: the engine will always produce the same results, in the same order, when presented with the same stream of events. This needs to extend to temporal logic, allowing behaviour to be accurately reproduced even with data played back 1000 times faster than it occurred in reality.
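To see why determinism must extend to time itself, consider this hypothetical sketch in which the engine's notion of "now" is a logical clock advanced only by event timestamps, never the wall clock, so timers fire identically whether events arrive live or from an accelerated replay:

```python
# Hypothetical deterministic engine: logical time is driven entirely by
# event timestamps, so replay speed cannot change the results.
class DeterministicEngine:
    def __init__(self):
        self.now = 0.0    # logical time, advanced only by incoming events
        self.timers = []  # (due_time, callback) pairs, kept sorted

    def schedule(self, delay: float, callback) -> None:
        """E.g. 'act if no fill arrives within 5 logical seconds'."""
        self.timers.append((self.now + delay, callback))
        self.timers.sort(key=lambda t: t[0])

    def on_event(self, timestamp: float, event: dict, handler) -> None:
        # Fire any timers that logically elapsed before this event arrived.
        while self.timers and self.timers[0][0] <= timestamp:
            due, callback = self.timers.pop(0)
            self.now = due
            callback()
        self.now = timestamp
        handler(event)
```

Because nothing in the engine consults the wall clock, the same recorded event stream yields the same timer firings and the same outputs at any playback speed.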

With this kind of determinism we can have our cake and eat it: we gain the ultra-low latencies required of a CEP engine whilst enabling full diagnostics for application refinement.