Standards

Monday, January 11, 2010

10 Reasons Why Progress Chose Savvion

Posted by John Bates

Today Progress announced the acquisition of Savvion: http://web.progress.com/inthenews/progress-software-co-01112010.html

The reason that Progress chose to enter the BPM market is clear. Businesses are increasingly turning to BPM to implement and improve their business processes. Why? Firstly, because no other solution helps enterprises achieve real-time visibility, agility, efficiency and business empowerment the way BPM does. Secondly, because BPM delivers all of this with low Total Cost of Ownership (TCO) and ease of use.

But why did Progress choose Savvion? Here are 10 reasons to start off with…

  1. Savvion is a trailblazer and industry leader – Savvion is a pioneer in BPM but is also still at the cutting edge. We wanted the best BPM thinkers at Progress. 
  2. Savvion has been proven to work at the enterprise level – Some BPM systems only work at the departmental level, but Savvion works at either the departmental or the enterprise level.
  3. Savvion offers System-centric and Human-centric BPM – Savvion can orchestrate processes but can also involve human users in workflow.
  4. Savvion is event-enabled – so business processes can respond to events. Progress has a lot of momentum behind event-driven business systems through our Actional and Apama solutions – and Savvion will work seamlessly in event-driven business solutions.
  5. Savvion offers vertical industry solutions – Analogous to Progress’ Solution Accelerators, Savvion offers out-of-the-box vertical solutions in industries including Financial Services and Telecommunications.
  6. Savvion offers an integrated Business Rules Management System – Expressing logic in terms of rules can often be very important. Savvion has developed a rules engine, integrated with its BPM system, enabling decision-oriented BPM – modifying the process flow based on rule conditions. This is a powerful capability.
  7. Savvion offers an integrated Analytics Engine – Business Intelligence has proved its worth, but it is a “rear view mirror” technology – analyzing facts that have already happened. Savvion’s analytics engine augments business processes and human users with continuous, advanced real-time analytics, enabling better decision-making.
  8. Savvion offers an integrated Document Management System (DMS) – Savvion’s integrated DMS enables rich document handling and empowers document-centric BPM.
  9. The Savvion BPM suite is highly scalable, high-performance and highly available – At Progress we pride ourselves on the strength of our underlying technology. We want to offer our customers a complete solution that embodies scalability, performance and availability. Thus selecting a BPM vendor in keeping with this philosophy was key – and Savvion is just such a vendor.
  10. Savvion is a great cultural fit with Progress – An often-overlooked point is that cultural fit is key to acquisition and integration success. The Savvion team pride themselves on being innovative, customer-focused and fun - just like the Progress team. We’re looking forward to working together. 

Friday, October 16, 2009

Apama 4.2 release - Cruising in the fast lane

Posted by Louis Lovas

Apama 4.2 release - Cruising in the fast lane
The Apama engineering team has done it once again. True to our record of releasing significant new features in the Apama product every 6 months, the v4.2 release is hot off the presses with major new functionality. The Apama roadmap is driven by a keen sense of our customers' requirements, the competitive landscape and an opportunistic zeal. The engineering team is a dedicated R&D organization driven to excellence and quality, committed to delivering value to our customers. A consistent comment we've heard from analysts and customers alike concerns the maturity of the Apama product.

The current v4.2 release, the third in the v4.x family, adds significant enhancements along three recurring themes - Performance, Productivity and Integration. This consistent thematic model is one we've held to for a number of years. Below I've touched upon the highlights of the current release along these themes:


  • Performance
High Performance Parallelism for Developers.  The Apama Event Processing Language (EPL) provides a set of features uniquely suited to building scalable event-driven applications.  The language natively offers capabilities for event handling, correlating event streams, pattern matching and defining temporal logic. Equally important, the language provides a flexible means to process events in parallel.  For this we provide a context model and a new high-performance scheduler. Contexts can be thought of as silos of execution in which CEP applications run in parallel. The scheduler's role is to manage the runtime execution in an intelligent, high-performance way and to leverage the underlying operating system threading model. It’s via the context architecture that the Apama Correlator squeezes the most out of operating system threads to achieve maximum use of multi-core processors for massive vertical scalability. For IT developers, this is an effective and efficient means to build high-performance, low-latency CEP applications without the pitfalls of thread-based programming, such as deadlocks and race conditions.
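
To make this concrete, here is a minimal sketch – not code from the release itself – of how a monitor might spawn work into private contexts, using the same spawn ... to and context(name, false) syntax that appears in the consumer/producer example elsewhere on this blog; the monitor and action names are illustrative only.

monitor ParallelWorkers {
    action onload {
        integer i := 0;
        while (i < 4) {
            // context(name, false) creates a private context; each spawned
            // instance runs in parallel under the correlator's scheduler
            spawn work(i) to context("worker", false);
            i := i + 1;
        }
    }

    action work(integer id) {
        // each spawned instance carries its own copy of monitor state
        log "worker " + id.toString() + " started";
    }
}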

High Performance Parallelism for Business Analysts.  Not to be left out of the race, we've also ensured the scalable parallelism provided in the Apama CEP engine is available through our graphical modeling tool, the Event Modeler. We've had this graphical modeling capability since the very first release of Apama. This tool, designed for analysts, quantitative researchers and of course developers, allows you to design and build complete CEP applications in a graphical model.  Parallelism is as easy as an automatic transmission: simply select P for parallel.

  • Productivity

Real men do use Debuggers (and Profilers too). The Apama Studio now sports major new functionality for development: a source-level debugger and a production profiler. Building applications for an event-driven world presents new programming challenges, and having state-of-the-art development tools for this paradigm is a mandate. The Apama EPL is the right language for building event-driven applications - now we have a source-level debugger designed for this event paradigm. Available in the Eclipse-based Apama Studio, it provides breakpoints to suspend applications at specific points, inspection of program variables and single-stepping. It works in concert with our parallelism as well. Profiling is a means to examine deployed Apama applications to identify possible bottlenecks in CPU usage.

Jamming with Java. We've enhanced our support for Java for building CEP applications. The Apama Studio includes a complete set of wizards for creating monitors, listeners and events to improve the development process when building Java-based CEP applications in Apama.

  • Integration

The (relational) world plays the event game. While we have provided connectivity to relational databases for many years, we've significantly redesigned the architecture of how we do it with the new Apama Database Connector (ADBC). The ADBC provides a universal interface to any database and includes standard connectors for ODBC and JDBC.  Through the ADBC, Apama applications can store and retrieve data in standard database formats using general database queries, effectively turning these relational engines into timeseries databases. The data can be used for application enrichment and playback purposes. To manage playback, the Apama Studio includes a new Data Player that enables back-testing and event playback from a range of data sources via the ADBC. One can replay event data, and time itself, at varying speeds. The CEP application under test behaves in a temporally consistent way even as data is replayed at lightning speed.

Cruising at memory speed with MemoryStore. The MemoryStore is a massively scalable in-memory caching facility with built-in navigation, persistence and visualization functionality.  It allows CEP applications, which typically scan, correlate and discard data very quickly, to retain selected portions in memory for later access at extreme speed. This could be for managing a financial order book, payments or other data elements that the application needs to access quickly on a user's request. Furthermore, if required, the in-memory image can be persisted to a relational database for recovery or other retrieval purposes, and lastly the MemoryStore allows selected portions of the in-memory cache to be automatically mapped to dashboards.

Well, those are the highlights. There are also about a dozen other features within each of these three themes, too numerous to mention here.

We are committed to improving the Apama product by listening to our many customers, paying close attention to the ever-changing competitive landscape and researching new opportunities.

Again, thanks for reading. You can also follow me on Twitter, here.
Louie



Thursday, October 08, 2009

If You Build It They Will Come, An Apama Algo Webinar

Posted by Louis Lovas

If You Build It They Will Come
My colleague Dan Hubscher and I just finished the first of a two-part webinar entitled "Build Quickly, Run Fast". In this webinar we explained and demonstrated Apama as an algo platform for high-frequency trading and order-execution algorithms.

As I've blogged in the recent past, it is an arms race in high-frequency trading.  The need to build quickly is a demanding requirement for keeping ahead in that race. Being armed with the right tools is paramount. Rapid development and customization of strategies using graphical modeling tools provides the leverage necessary to keep pace with fast-moving markets.

To that point, in this webinar I demonstrated a couple of algo examples. The first was a complete strategy that incorporates an alpha element with multiple order-execution options. In designing and building strategies, trading-signal detection is just the first part of the problem. This typically involves an analytic calculation over the incoming market data within some segment or window of time. For example, a moving average calculation smooths out the peaks and valleys – the volatility – of an instrument's price. Once the signal is detected, it's time to trade and manage the order's executions. This is a key distinction between the Apama platform and other CEP products for building trading strategies. While it's possible to define an event flow for data enrichment and data analysis (i.e. the signal detection) in most or all CEP products, with most other CEP products you have to switch to some other environment and language to build the rules that manage the executions. The Apama platform is about building complete event-driven applications. So trade-signal detection and order execution – whether it's a simple iceberg execution or something much more complex – can easily be designed, built and backtested in the same Apama graphical modeling environment. (Of course, for those more inclined to traditional development tools and methodologies, Apama offers a full suite of developer tools: an EPL, debugger, profiler and Java support.)

[Image: MovingCrossover – a moving average crossover strategy]
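
As a rough illustration of the signal-detection half of such a strategy, here is a hedged sketch in the Apama EPL of a moving average crossover. The Tick event, instrument, window lengths and action names are my own assumptions for this example, not the strategy shown in the webinar.

event Tick {
    string symbol;
    float price;
}

monitor MovingAverageCrossover {
    sequence<float> prices;        // rolling window of recent prices
    integer fastLen := 5;          // illustrative fast window length
    integer slowLen := 20;         // illustrative slow window length
    boolean fastAboveSlow := false;

    action onload {
        Tick t;
        on all Tick(symbol="ACME"):t {
            prices.append(t.price);
            if prices.size() > slowLen then {
                prices.remove(0);
            }
            if prices.size() = slowLen then {
                float fast := average(fastLen);
                float slow := average(slowLen);
                if fast > slow and not fastAboveSlow then {
                    fastAboveSlow := true;
                    log "Fast MA crossed above slow MA - candidate buy signal";
                }
                if fast < slow and fastAboveSlow then {
                    fastAboveSlow := false;
                    log "Fast MA crossed below slow MA - candidate sell signal";
                }
            }
        }
    }

    // average of the most recent 'len' prices in the rolling window
    action average(integer len) returns float {
        float sum := 0.0;
        integer i := prices.size() - len;
        while (i < prices.size()) {
            sum := sum + prices[i];
            i := i + 1;
        }
        return sum / len.toFloat();
    }
}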


The second example in the webinar demonstration was to build a small but working strategy from scratch, live in full view of the attendees. For this I built a basic price momentum strategy, which tracked the velocity of price movements. The trading signal was a parameterized threshold indicating when the price moved up (or down) by a specific amount within a specific duration.
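
A minimal sketch of that momentum idea in the EPL, reusing the hypothetical Tick event from the previous sketch, might look like the following; the threshold, time window and instrument are again my own illustrative values, not the ones used in the live demo.

monitor PriceMomentum {
    float threshold := 0.50;   // size of move that constitutes a signal (illustrative)
    float timeWindow := 10.0;  // seconds within which the move must occur (illustrative)

    action onload {
        Tick start;
        on all Tick(symbol="ACME"):start {
            // from each observed price, watch for a move of at least
            // 'threshold' upwards within 'timeWindow' seconds
            on Tick(symbol="ACME", price >= start.price + threshold) within(timeWindow) {
                log "Upward momentum detected from " + start.price.toString();
            }
        }
    }
}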

This webinar focused on highlighting the ever-present challenges investment firms face in high-frequency trading:
  • Fears of the Black Box
  • The simple fact that markets are continually evolving
  • First Mover Advantage
  • Customization is king
Together with my colleague Dan Hubscher, I described in the Build Quickly webinar how the Apama platform delivers solutions to the Capital Markets industry to meet these needs and challenges.

Stay tuned for a link to the recording, and don't forget to dial in to part II, where we focus on performance requirements and characteristics. Again, thanks for reading (plus watching the webinar). You can also follow me on Twitter, here.

A follow-up note: here's the link to the recordings of both part I and part II of Build Quickly, Run Fast.

Louie



Wednesday, September 30, 2009

EPTS, the Symposium of Trento

Posted by Louis Lovas

EPTS, the Symposium of Trento
How many angels can dance on the head of a pin? I suppose that was a question debated at the Council of Trent, which took place in Trento, Italy back in the 16th century. However, the Event Processing Technical Society's (EPTS) annual symposium just last week took up residence in Trento to discuss and debate a host of lofty topics on event processing.

  • CEP's role and relationship to BPM (or more appropriately event-driven BPM)
  • Event Processing in IT Systems management
  • Event-based systems for Robotics
  • EPTS Working Groups ...
While the sessions and discussions on event processing did not have the global significance of angels on pinheads or the Counter-Reformation, they did provide a clear indication of just how broadly and deeply event-based systems can reach. Whether it's a business application monitoring mortgage applications, IT management systems in a Network Operations Center, bedside monitoring systems in a hospital or a robot packing pancakes into boxes, they all share a common underpinning: consuming and correlating streaming event data.

Granted, not everyone approaches it from the same viewpoint. IT Systems Management people don't think about processing and correlating events; they think about device management, KPIs, alerts and the like. Someone building or managing a business process is likely concerned with managing orders – validating them, allocating stock, coordinating warehouses and shipments. Nonetheless, the common framework underlying these systems is event processing.

Two of my favorite sessions at the EPTS Symposium were a panel session on the EPTS Mission and an open forum on Grand Challenges, a brainstorming session focused on identifying barriers to the adoption of CEP.

EPTS Mission

Four panelists, myself included, presented their expectations of the EPTS and its role as an industry consortium, its goals and what improvements can be made. As a baseline, the EPTS does have an existing mission statement, defined as ...

To promote understanding and advancement of Event Processing technologies, to assist in the development of Standards to ensure long-term growth, and to provide a cooperative and inclusive environment for communication and learning.


Given this mission statement and my own expectations, there are a number of basic things the EPTS should provide to those uninitiated in event processing:

Awareness – Provide commercial business and industry with the necessary knowledge of event processing as a technology supported by numerous vendors and backed by continuing research in academia.
Definition – Provide a concise and definitive meaning of event processing, a Taxonomy of Event Processing so to speak. This is both from the horizontal technology perspective and also as a vertical focus for a handful of specific industries. It's often difficult for business people to understand technology without the context of a business or application focus.
Differentiation – Provide a clear distinction that defines event processing and distinguishes it from other technologies. Event processing is available in many forms; this symposium provided evidence of that.  Much of it is available in specialized form, as in IT Systems management. There are also pure-play event processing (CEP) vendors, such as Progress/Apama. But there are also rules engines, Business Intelligence platforms, analytic platforms, etc. This easily presents a bewildering world of choices and of conflicting, overlapping marketing messages. The EPTS is in the perfect position to provide clarity about what CEP is and what it isn't.
Cooperation – Event processing rarely operates in a vacuum. There are many synergistic technologies that pair closely with CEP. Often this has a specific vertical business flavor, but often it's other platform technology such as BPM and temporal databases.


The EPTS has four working groups that have been active for the last year: Use-cases, Reference Architecture, Language Analysis and Glossary. To a large extent the working groups have provided, and are still working towards, a clear definition of CEP. However, there is still a need to highlight the salient value of event processing. For specific vertical domains, the value of CEP is clear-cut simply because the fit and function are tailor-made. In Capital Markets, for example, algo trading has all the hallmarks of a CEP application - high performance, low latency, temporal analytics and a fit-for-purpose streaming data paradigm. However, there are other application domains where CEP is equally viable but much more subtle.  I believe the EPTS can provide a vendor-neutral taxonomy of event processing - from the basics to the advanced - explaining why it's unique and different, why language is important and how it is synergistic with a host of other technologies. To this end, the group has decided to form two new working groups to focus on many of these areas. Clearly a forward-thinking move.

The Event Processing Technical Society is an organization made up of both vendors and academics. We're held together by a common thread: a belief that the whole is greater than the sum of its parts and that our collective work will benefit all, even though many of us are undeniably competitors.

Once again, thanks for reading. You can also follow me on Twitter, here.
Louie



Sunday, August 09, 2009

Riding the Crest of the Wave... the Forrester Wave

Posted by Louis Lovas


In just a few short days since its announcement, news of the Forrester CEP Wave has spread to all corners of the globe. From trade magazines, online journals and blogs to Facebook and Twitter, the headlines are everywhere. A Google search yields thousands of hits.  "Independent Research names Progress® Apama® as a Standout Leader in CEP ..."

The Forrester team of Mike Gualtieri and John Rymer state 'The Fledgling CEP Platform Market Is Vibrant, Competitive, And Dynamic'. Of course, those of us who have been immersed in event processing for the past few years already knew that. It was our job to convince Mike and John. On behalf of Progress Apama and the CEP community, I would like to extend a word of thanks and appreciation to both of them for their efforts, diligence and patience in putting this Wave together. It was an enormous task, given that they reviewed nine CEP products and vendor strategies in depth. Considering this was the first CEP Wave, they also had to define an initial blueprint for CEP by which to evaluate vendors. They did a commendable job. Well done. You can get a complimentary copy of the pdf version from us here.

It was quite a few months ago that I and a few of my esteemed colleagues began the CEP Wave process. In the abstract it was not too much different from responding to the questions in a prospect's RFP/RFI, something my colleagues and I have much practice at. However, one difference I found unique was the format. A client proposal is generally a Word document, where one can provide plenty of written detail and diagrams to depict product architecture and function. Forrester Waves are MS Excel spreadsheets. Vendors' responses to the Wave's questions have to fit into an Excel cell. Being a long-winded person, I found it a challenge to achieve the succinctness dictated by the confines of a cell.  My colleagues were quite helpful to this end.

In short order, the benefits of the spreadsheet format became clear. While many documents - proposals, reviews, evaluations and the like - become static paper the moment they're published, that is not the case with Forrester's Wave. There is a clear intent behind Forrester's use of the spreadsheet format; it creates a living, dynamic document for their clients.  Spreadsheets by their very nature can be interactive. Spreadsheet formulas can accept user input and recalculate. This capability is exactly what Forrester leverages in the CEP Wave.

The Forrester CEP Wave is divided into three categories:
  • Current Offering: A platform feature breakdown, development and deployment tools and performance characteristics.
  • Strategy: The vendor's investment in the future.
  • Market Presence: Customer base.
Within each of these categories is an entire litany of subcategories containing features and criteria by which the product and vendor are measured. Each is assigned a weight as deemed appropriate by Forrester in reviewing the CEP industry at large.  Each vendor is then judged on its merits and scored. The most important aspect of this is the weighting. This is the key that gives the Wave its dynamic nature. From a client perspective, the weighting can be adjusted to suit your specific requirements. If, for example, your shop is Windows-only, you don't need a high weight on multi-platform support and can lower that value. Likewise, if you have a strong need for high availability/disaster recovery, you can increase that weighting. Making these adjustments will tune the Wave for your specific requirements. You will then see how vendors stack up against each other with your customized weights. What you will find is that the Apama platform pops to the top of the list time and again.

Once again, thanks for reading. You can follow me on Twitter, here.
Louie



Thursday, April 09, 2009

Scalable concurrency, a design pattern in the Apama EPL

Posted by Louis Lovas


This is my final installment in a series devoted to a specific example in the Apama EPL. I began this example by describing the basic design pattern of a consumer/producer.  Further enhancements enabled multiple consumers and, as a result, the instance idiom.  Finally, below I will enhance this consumer/producer once more by illustrating how one can leverage multi-core processors for massive scalability and parallelism.

As I have mentioned before, instances, or 'sub-monitors' as they're often referred to in the Apama EPL, define a discrete unit of work. That unit of work represents a set of business logic, however large (a complete application scenario) or small (a simple analytic).  Instances are created on demand using the spawn operator in the language. Each scenario instance is invoked with a unique set of input parameters that represent that occurrence. Each instance can then uniquely maintain its own reference data, timers and event streams - in effect, its own state.  In general programming patterns this is known as a factory behavioral model, but we've extended it to include an execution model.

To provide a means to leverage multi-core processors, the Apama EPL provides a syntax and a simple semantic to allow those instances to execute in parallel. We do this with a language feature called contexts. These are silos of execution which take the factory model to the next level. A context defines a logical container that holds and executes instances of a scenario (of the same or differing types). The EPL provides a semantic for inter-context communication; there is no need for mutexes, semaphores or other locking schemes, thus avoiding the deadlock-prone code patterns typical of imperative languages such as Java. Each context in effect has its own logical input queue, to which events are streamed from external sources or other contexts.  Behind contexts, our CEP engine squeezes the most out of operating system threads to make maximum use of multi-core processors.

The same CEP engine can create multiple contexts (a context pool, as you'll soon see in the code example below); they can be used to hold and execute multiple scenario instances, and those instances can in turn create sub-contexts for additional parallelism. If, for example, these instances are an application for pricing options and require a compute-intensive calculation such as Black-Scholes, additional contexts can be spawned for those calculations. Furthermore, sub-contexts can be designed as shared compute services to be leveraged by multiple scenario instances running in different (parallel) contexts.

Contexts take the factory model and extend it to include a parallel execution model with a few simple keywords in the EPL as you'll soon see below.

The enhancements to the Item consumer/producer include a Context Pool, for which I've listed the code below, and an enhanced Item Producer that leverages it. The interface is unchanged except for one new event, and the Consumer (client) has a minor revision (thus adhering to my belief, which I blogged about at the start of this series, that an EPL should follow the structured programming principles of modularity and encapsulation).  The complete example for this revision is available here and requires Apama version 4.1 (or later, of course).





The Context Pool

package com.apamax.sample;


event ContextPool {
    integer numContexts;
    sequence<context> contexts;
    integer idx;

    // create a pool of 'nc' private contexts sharing the given name
    action create(integer nc, string name) {
        self.numContexts := nc;
        while(nc > 0) {
            contexts.append(context(name, false));
            nc := nc - 1;
        }
    }

    // hand out contexts from the pool in simple round-robin order
    action getContext() returns context {
        context c := contexts[idx];
        idx := idx + 1;
        if(idx = numContexts) then {
            idx := 0;
        }
        return c;
    }
}


The ContextPool as implemented here is a general-purpose utility that provides a pool of contexts via a create method (i.e. action) and a means to distribute a workload across them in simple round-robin fashion each time the getContext action is called.

As I mentioned above, contexts are mapped to operating system threads, so judicious use of the create action is expected. The basic rule of thumb is that the total number of contexts should equal the number of cores on the server.  One noteworthy point: contexts can be public or private. A public context means that event listeners running within it can receive event streams from external sources (i.e. adapters); listeners within a private context can only receive events that are directed to the context via the enqueue statement in application logic running in another context. For my example, this context pool utility creates private contexts: context(name, false)

I've leveraged another general capability of the Apama EPL in the implementation of this context pool, that of actions on events. You'll notice these two actions are enclosed in an event definition which is part of our com.apamax.sample package.

In keeping with its charter of structured programming, actions on events provide a means to promote code modularity by encapsulating reusable utility functions (like a context pool).


 


The (parallel) Item Producer
package com.apamax.sample;


monitor ItemService {
   
  event ClearUserID {
      integer id;
  }

            
  integer count := 0;
  float price := 0.0;
   
  action onload {
      ContextPool cf := new ContextPool;
      cf.create(4, "ClientService");
   
      // list of subscriber (user) identifiers
      sequence<integer> ids := new sequence<integer>;
       
      SubscribeToItems s;
      on all SubscribeToItems():s {
          if ids.indexOf(s.subscriberId) = -1 then {
              context c := cf.getContext();
              ids.append(s.subscriberId);
              route SubscriptionResponse(s.subscriberId, c);
              on completed SubscriptionResponse() {
                  spawn startSubscriptions(s.subscriberId, s.item_name,
                                           context.current()) to c; 
              } 
          }
      }
       
      ClearUserID c;
      on all ClearUserID():c {
          log "in " + c.toString();   
          integer index := ids.indexOf(c.id);
          if index != -1 then {
              ids.remove(index);
          }
      }
  }

  action startSubscriptions(integer this_subscriberId, string name,
                            context mainContext) {
      log "in startSubscriptions";
       
      on all wait(0.1) and not UnsubscribeFromItems(subscriberId =
                                               this_subscriberId) {
          route Item(this_subscriberId, name, count, price);
          count := count + 1;
          price := price + 0.1;
      }

      on UnsubscribeFromItems(subscriberId = this_subscriberId){
          enqueue ClearUserID(this_subscriberId) to mainContext;
      }       
  }
 
}



To get a general sense of what the multi-instance Item Producer code is intended to do, I suggest a quick scan of my last installment; this revision does not change that basic foundation, it only parallelizes it. It is worth pointing out how little the code and design have changed, yet this implementation can scale massively to tens of thousands of instances across multiple processor cores.  Clearly this is just a simple example that does very little real work (producing Item events). Structurally, however, it's a model of how one would design such a scalable service in the Apama EPL.

The parallel Item Producer (like its previous incarnation) manages multiple uniquely identified Consumers. For that it must maintain a list of identifiers, one for each Consumer.  But this time, the Producer instance created on behalf of the Consumer is spawned into a context:  spawn startSubscriptions(s.subscriberId, s.item_name, context.current()) to c; We're still passing the subscriberId and item_name (the instance parameters), but we also pass the context handle of the main context (context.current()).  This is necessary for the inter-context communication.

The Consumer implementation has undergone a minor change to support this parallelized execution mode and match the Producer.  A good design pattern is to ensure that monitors that frequently pass events operate within the same context. This is not a hard-and-fast rule, only one that limits the amount of inter-context communication (i.e. enqueueing).  I've enhanced the interface slightly: there is a new event, SubscriptionResponse, used as a response to subscription requests (on all SubscribeToItems()).  This event is used to communicate back to the client the context handle of the Producer spawned on its behalf. Once the Consumer receives this event, it also spawns into this same context. By doing so, both the Producer and Consumer operate as they always did, sending Item events (route Item(this_subscriberId, name, count, price)) and handling termination (on UnsubscribeFromItems).  Within each context, the producer/consumer still adheres to the single-cast event passing scheme in which it creates and sends uniquely tagged Item events. The Consumer and the Interface are included in the download (not shown here for brevity's sake).
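
For completeness, here is a rough sketch of what that Consumer side might look like. Since the actual file ships with the download, the field names used here (for example the ctx field on SubscriptionResponse, and the subscriberId fields on the templates), the subscriber id and the item name are my assumptions for illustration, not the shipped code.

package com.apamax.sample;

monitor ItemConsumer {
    integer myId := 42;    // hypothetical subscriber identifier

    action onload {
        // subscribe to the Producer running in the main context
        route SubscribeToItems(myId, "sample-item");

        SubscriptionResponse r;
        on SubscriptionResponse(subscriberId=myId):r {
            // move into the same context as the Producer instance
            // spawned on our behalf, then consume Item events there
            spawn consumeItems() to r.ctx;
        }
    }

    action consumeItems() {
        Item i;
        on all Item(subscriberId=myId):i {
            log "received " + i.toString();
        }
    }
}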

Two additional noteworthy points to highlight in this Producer implementation.

1) The on completed SubscriptionResponse() listener.  The completed keyword indicates that this listener wakes up after the SubscriptionResponse event has been delivered.  This way we can guarantee that our Consumer has received this event and has the context handle before spawning the Producer.

2) To process UnsubscribeFromItems events, the statement enqueue ClearUserID(this_subscriberId) to mainContext; is executed.  This statement is used to send an event to the listener (on all ClearUserID), which executes in another context. Recall that the action startSubscriptions is the target of the spawn operator, so this is the main body of code whose multiple instances are parallelized, running in contexts (from the pool). The onload action, which controls all of this spawning, logically runs in the main context. Due to the strong semantic for inter-context communication, events must be enqueued to another context's input queue. Each context in effect has its own input queue, and with the context handle the inter-context communication mechanism is defined. So, to communicate the client termination request from the spawned instance running in a private context, the ClearUserID event must be enqueued to the main context where the appropriate listener is waiting.

Routing (i.e. route Item(...)) is still possible, but routed events stay within the boundaries of the context where the Producer and its corresponding Consumer reside.  To logically expand the example, multiple Consumers could reside in the same context (i.e. a multi-cast design pattern, as I described in the previous revision of this example).

 

This example is designed to illustrate the simplicity of parallelism in the Apama EPL. With just a few simple statements, one can quickly and easily leverage multi-core processor technologies for massive scalability.

As I mentioned earlier, this is the final entry for this specific example; if you're seeing it for the first time, you can start from the beginning (only three short segments) here. I hope this has been informative and provided some insight into the Apama EPL. I plan to have many more code examples in the future covering various use cases.

You can download the complete example here with the consumers, interface and producer. Any questions or comments, just let me know,
Louie


Monday, March 23, 2009

We're going on Twitter

Posted by Giles Nelson

Louis Lovas and I, Giles Nelson, have started using Twitter to comment and respond to exciting things happening in the world of CEP (and perhaps beyond, occasionally!).

The intent is to complement this blog. We'll be using Twitter to, perhaps, more impulsively report our thinking. We see Twitter as another good way to communicate thoughts and ideas.

We would be delighted if you chose to follow our "twitterings" (to use the lingo), and we'll be happy to follow you too.

Click here to follow Louis and here to follow Giles (you'll need to sign up for a Twitter account).

Monday, September 22, 2008

Reflections on the Gartner Conference and EPTS4

Posted by Louis Lovas


Like many of my colleagues in the event processing community, I thought I would share a few reflections on the recent happenings at the two back-to-back technology conferences of the past week. Gartner sponsored their annual vendor-fest known as the Event Processing Summit, and the EPTS held their fourth annual symposium. This being my first EPTS, I had some initial thoughts and reactions which I shared over the weekend.  Here, I'll delve more into the conferences' content.

I attended a number of the sessions at the Gartner conference. I did not have any set agenda, so I picked the sessions based more on personal appeal than on some well-thought-out plan. While I work in an engineering team, I have a customer focus, so I attended all the customer sessions. I always find it valuable to understand how customers are deploying event processing technology in real-world use cases. Their efforts clearly find their way into vendors' product roadmaps.

     
  • Lou Morgan of HG Trading, a lively speaker, described his use of event processing technology in high-frequency trading. Lou has been an Apama user for quite a few years and we've invited him to speak on our behalf on a number of occasions. He's an entertaining soul with a clear understanding of the Capital Markets business. We're delighted he presented his use of Apama at this conference.
     
  • Albert Doolittle of George Weiss Associates Inc. gave a talk on using event processing technologies at his firm.  Albert described his technique for picking a vendor for his CEP project, which, if I were to paraphrase, was a coin flip.  Towards the end of his talk, he digressed from CEP technologies to present a short discourse on high performance computing (HPC). The idea of leveraging supercomputing-like technologies and FPGAs for compute-intensive operations like Black-Scholes options pricing has certainly caught Mr. Doolittle's attention. Typically CEP and compute-intensive tasks don't mix well because of latency considerations. However, a marriage of CEP and HPC is possibly one made in heaven. I was intrigued.
     
  • The ebullient Marc Adler gave his brusque, no-holds-barred perspective on the CEP project he embarked on at Citi. Marc did a great job of explaining the challenges of introducing a new technology at a large corporation, one with a well-entrenched, bureaucratic IT organization.  I think most of us have faced the bureaucratic fortress at some time or another in our careers. Knowing how to play the game is a skill only a few master well; kudos to Marc for his successful venture.  As Marc unfolded his project's architecture, it was clear he had wisely chosen a course to prevent vendor lock-in.

The juxtaposition of these three use-cases was most curious. Lou Morgan jumped deep into CEP technology and bet the ranch on it. Albert Doolittle took a gamble with a coin flip in choosing a vendor, and Marc Adler kept his choice of a CEP product isolated and contained within his overall system architecture - a safeguard in case he felt the need to replace it.  Nonetheless, all are great examples of how CEP is gaining momentum in mainstream business.

One session I thoroughly enjoyed was Don DeLoach's "Extending the range of CEP". Don is the CEO of Aleri. I'm not sure whether I enjoyed this session more for its content or for Don's presentation skills. As is usually the case at technology conferences, it's death by PowerPoint: slideware jammed with an overabundance of barely readable text and dazzling graphics.  Don's slides, however, had a clear minimalist slant - a plain monotone background with either a single word or a (very) short phrase, well choreographed with his oration. He spoke of CEP as an evolving technology, from the simple ability to filter streaming data to managing complex application state. He used an example that has become the pièce de résistance of Aleri: order book consolidation.

There were many sessions on SOA and Event Driven Architectures - so many I lost count. 

I attended the panel discussion on low-latency messaging protocols. This was a Q&A session moderated by Roy Schulte of Gartner. The panelists were the crop of high-speed/low-latency messaging vendors - TIBCO-killers, as I've affectionately referred to them - such as 29West, RTI, Solace Systems, IBM and even TIBCO themselves (apologies to those vendors I've not mentioned). Each described how they have defied physics to achieve incredible speeds yet still provide reliable delivery, management tools and even application-level services (e.g. RTI's last value cache).  However, it's noteworthy to contrast these low-latency vendors, all focused on shaving microseconds off message delivery via proprietary, even hardware-based schemes, with the many standards-based messaging systems trumpeted in other sessions. Those SOA and EDA sessions paraded a whole barrage of Web Services-based standards (WSDL, WS-Eventing, WS-Notification, WSDM - the list goes on and on) as the right way to build applications. These certainly seem like opposing forces that will only foster confusion in the eyes of those who have a clear business need for low latency yet a desire to adhere to a standards approach.

The EPTS Symposium began its first day with a keynote address from a VC who had funded Event Zero.  I first met with Event Zero about a year ago; they appear to have recast themselves from an adapter/connectivity vendor to one delivering an Event Processing Network (EPN). An EPN can be defined as an infrastructure platform for event processing agents or services. Those CEP agents, acting both independently and in concert with other agents (or services), operate on streaming data sources. Together the whole becomes greater than the sum of the parts. Such is the grandiose vision of an EPN.  SRI was also promoting a similar notion of event processing as a service, which I would argue is a variation on the same theme.  Unfortunately, I think there is trouble ahead. The problem is simply timing, maturity and standards (or the lack thereof).  I don’t think customers will buy into EPNs or Event Zero's vision until there is a clear establishment of standards for CEP. As a perspective, Application Server vendors tried this and failed (anyone remember SilverStream? Apptivity?). It was not until the J2EE specification established a uniform model that a network or service infrastructure platform for AppServers became truly viable.  Until we see the formation of CEP standards for interoperability and integration, CEP will remain essentially a standalone application platform and vendors will continue to market a solutions approach - just look at any CEP vendor's website for proof of this. Nonetheless, Event Zero has embarked on a bold initiative and I wish them all the best.

Speaking of standards, moving slightly up the stack one could clearly detect the prevailing wind blowing against streaming SQL as the language of choice for CEP.  Going back to the Gartner conference, there were a few noticeable comments to that effect. Marc Adler described streaming SQL as making the simple things difficult to do.  Don DeLoach downplayed the SQL language in Aleri in favor of the SPLASH enhancements. The renowned Dr. Luckham, in his closing keynote address, outlined Holistic Event Processing as the future and implied it required a language beyond streaming SQL.

At the EPTS, Alex Koslenkov from Betfair castigated the streaming SQL approach for his use case of managing complex, long-running state. Alex is an advocate of the RuleML approach to CEP languages; as such, it stands to reason he doesn't hold streaming SQL in high regard, and it showed.

Susan Urban from Texas Tech University presented a research project on a language they've dubbed StreamCEDL. Susan denounced streaming SQL as lacking the algebraic expressiveness necessary to move beyond simple stream processing to true complex event processing. One example she mentioned in the description of StreamCEDL is its support for an APERIODIC operator, intended to process irregular or out-of-order data streams.

Lastly, Chris Ferris from IBM presented on Industry Software Standards. This was a great session that portrayed the far-reaching impact of adopting standards across our industry.  He stressed the importance of making every attempt to get broad vendor agreement and customer validation, and of being sure the adopted technology serves the needs of the community, because you'll have to live with it for years to come.  This is such an important message in the quest for standardization of CEP. Open, widely accepted standards are exactly what the CEP community needs; the sooner we embark on this journey the better.

Friday, September 19, 2008

A Truce at the CEP Front

Posted by Louis Lovas

A Truce at the CEP Front


I am a bit of a history buff, and oftentimes I'm reminded of some historical event when reading about current events. This inclination can easily be applied to the meltdown of the global financial markets we see happening all around us. The lessons of the past should be constant reminders of how we should behave now and in the future. I've always thought a degree in history should be a prerequisite for a political life; being armed with such knowledge would clearly provide guidance to govern wisely. Maybe our business leaders should follow a similar career path.

I've just attended my first EPTS Symposium. It was the Technical Society's 4th annual get-together. If you're unfamiliar with this organization, its purpose is to promote event processing technologies through academic research and industry participation. The organization has a number of working groups that have contributed greatly to the overall awareness of event processing.  You can read more about the EPTS at their website.


The symposium was well attended by members of both academia and industry. All the major CEP vendors were there, and it was the first time I've been in a setting where the atmosphere was completely non-competitive. It was a truce of sorts. While we typically wage war on the virtual battlefield in a land-grab for customers, for two days we discussed vision, standards and use-cases. We debated ideas, but we also laughed, ate and drank together. There was a general camaraderie. As I mentioned, history is one of my interests, and these two days reminded me of the 1914 Christmas Truce, when the Germans and the Brits crawled out of their trenches and met in no man's land to celebrate Christmas together. The guns fell silent that night in 1914, and for the two days of the symposium the virtual guns of competition also fell silent.

Come Monday we'll all be back at the war again. But for a short while it was fun. To see the face of the enemy unmasked, to get to know him, to share an idea and a drink was genuinely uplifting. We found common ground in our desire to see event processing become a mainstream technology.

Monday, September 08, 2008

SQL Standards - an impedance mismatch with reality

Posted by Louis Lovas

SQL standard for CEP - an impedance mismatch with reality
Well, the hype train has left the station. As I'm sure the whole of the CEP community knows by now, StreamBase has teamed up with Oracle to announce a streaming SQL standard. I am certainly in favor of standards in software technology; they clearly represent the tide that raises all boats. Customers and vendors alike are beneficiaries of communal standards, from ANSI-standard programming languages like C and C++ to open standards like XML. Many a consortium of vendors and customers has labored arduously to define well-known technology standards for the collective benefit of the greater worldwide community.  However, this recent announcement by StreamBase and Oracle is nothing more than the practice of the crafty art of deception and diversion.  While I see nothing wrong with StreamBase and Oracle teaming up to work on enhancing the streaming SQL language for their CEP products, to tout it as representing an emerging industry standard is simply brazen.

The streaming SQL language in today's CEP products finds its roots in academia.  The Aurora project is one such academic endeavor. SQL was the language of choice for this project for good reason: streaming data shares a number of common attributes with static data, so why not use a well-known data access, data filtering and data manipulation language? The Aurora authors clearly had this in mind when they chose SQL. I'm sure they also had an expectation that streaming SQL and the future products based on it would evolve in a manner similar to database or other back-end data service technology.

However, CEP platforms have matured into application platforms. This is in no small measure due to Progress Apama and our solutions approach to the market.  The Apama stack easily lends itself to the rigors and demands of the solutions or application environment. The Apama EPL, MonitorScript, has the expressive syntax needed to describe the semantics of complex logic in today's CEP applications.  As the saying goes, imitation is the sincerest form of flattery; many of our competitors have followed our lead by introducing a solutions approach themselves. But as a result, they've faced a challenge with SQL being the underpinning of their EPL.  SQL was never intended to be an application language, so they've chosen either to build application solutions in a mixed-language environment or to extend their base EPL with procedural constructs to support the needs of application semantics. In either case, something has to give.  The fine print of a StreamBase solutions datasheet - "Incorporates algorithms as Java or C++ plugins" - is an indication of the inefficacy of StreamSQL for the intended purpose.  With each new release of Coral8 and Aleri, they announce features in their SQL-based EPLs that add procedural constructs and imperative scripting constructs similar to Apama MonitorScript.  These language enhancements and mixed-mode development requirements clearly validate that CEP has evolved into an application platform and not just a back-end data service engine.  From a language standards viewpoint, this has only served to fracture the landscape: each vendor has carved its own course in this brave new world.

As a cautionary note, standards can be the opiate of the masses. They give customers a sense that they are protected against vendor lock-in.  Even the perception of an emerging standard can be hypnotic. This is all under false pretenses.  Real standards provide benefits to customers and vendors alike, covering a broad swathe of the market, not just a select few.  As the CEP community ventures into the standards world, we should focus on those same areas where standardization has a proven track record in other technologies: interoperability and integration.  There is plenty of fodder here, and I'm sure it will unfold in the coming months.