Matt Rothera

Wednesday, November 11, 2009

Putting the Smart in Smart Grid

Posted by Apama Audio

Listen to learn about the critical role of Event Processing in reshaping the delivery of energy and services to your customers.

Tuesday, April 14, 2009

Empower the Business, but Keep Control

Posted by Matt Rothera

I was reading a recent blog post from Opher Etzion at IBM in which he referred to some of the challenges with innovation and the IT department.  He states:

Richard's question about innovation and IT department is more complicated one, however, there is some truth in his observation, based on my own experience in an IT department of a large organization, IT departments may be more conservative than their users, and typically will be cautious about be anything that is giving a "programming capabilities" to the users (and is conceived as "losing the control"). Since many event processing applications' value to the business users is doing exactly this (giving more power to the business user to do their own programming) it is sometimes a challenge to get it through the IT department, but this is a longer discussion...


While I have had many great conversations over the past couple of years with IT visionaries about CEP and event processing, I would agree with Opher that, in the general sense, there is “conservatism” when it comes to exposing “programming capabilities” to users.  There is a paradox with CEP, because knowledge of the CEP rules that are actually useful to the business usually lies with the “Line of Business” or the “End User/Partner”.  While it definitely makes sense to push management and creation of CEP rules out to those with the most domain knowledge (such as business analysts and operational business users), the thought of completely losing control of this environment sometimes “trumps” the benefits that such an agile CEP environment can deliver.  I find this very similar to the early days of SOA, when web services were just coming of age.  There was a notion that end users or business users would simply consume these services at their desktops and orchestrate their own business processes.  However, the reality of “infinite loops”, “complex data mediation”, and the fundamental requirement to still write “programming logic” let pragmatism win the day.


At Progress Apama, we provide tools for three different sets of users to get involved in the creation and management of CEP rules: IT, business analysts, and end users.  However, the architecture recognizes that as more power is exposed to business analysts and end users, there is a greater chance of “loss of control”.  For this reason, we provide a “delegated model”, in which each layer of user (starting with IT) selectively exposes capabilities to the business analysts, who in turn selectively expose parameters to the end users.


For the business analyst, IT creates simple “Smart Block Interfaces”, which capture the necessary programming logic and expose information to the modeling environment in a straightforward manner.  IT keeps control by exposing the data, analytics, and operations that offer the best combination of power and control for the analyst.  Apama provides a platform for IT to, in effect, create a custom environment for the analyst: a Domain Specific Language completely “fit for purpose” for the task at hand.  This not only allows the analyst to be extremely productive in their specific area of expertise, but also allows IT to limit risk, as opposed to exposing all of the raw CEP capabilities to this class of user.
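The delegated model can be sketched in a few lines. This is a hypothetical illustration in Python, not Apama's actual API: the `SmartBlock` class and the `ma_block` example are invented names, showing only the idea that IT hides the implementation and publishes a fixed set of inputs the analyst may use.

```python
# Hypothetical sketch of a "smart block": IT wraps raw event logic and
# publishes only a named set of inputs; anything else is rejected.
class SmartBlock:
    def __init__(self, name, inputs, logic):
        self.name = name
        self.inputs = set(inputs)   # parameters the analyst may wire up
        self._logic = logic         # implementation stays under IT control

    def run(self, **params):
        unknown = set(params) - self.inputs
        if unknown:
            raise ValueError(f"not exposed to analysts: {sorted(unknown)}")
        return self._logic(**params)

# IT authors the logic; the analyst only ever sees 'prices' and 'window'.
def _moving_average(prices, window):
    return sum(prices[-window:]) / window

ma_block = SmartBlock("MovingAverage", ["prices", "window"], _moving_average)
print(ma_block.run(prices=[10, 11, 12, 13], window=2))  # 12.5
```

The point of the sketch is the boundary: the analyst composes blocks and sets their published inputs, but cannot reach the raw logic behind them.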


For the end user, the business analyst follows a similar process.  Based on the model, the analyst exposes parameters, with constraints, to the end user.  The end user can then simply set parameters within the given constraints, either to change existing CEP rule instances or to create new instances “on the fly” with the desired parameters.  Allowing the end user to set parameters through highly contextual dashboards again provides the right balance of power and control when getting end users involved in the process.
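The same pattern repeats one layer down. Again as a hedged, illustrative sketch (the `ParamSpec` and `RuleTemplate` names are invented, not part of any Apama product): the analyst publishes parameters with bounds, and the end user can only instantiate rules inside those bounds.

```python
# Illustrative sketch: parameters published with constraints; end users
# instantiate rules only within the analyst-defined bounds.
class ParamSpec:
    def __init__(self, name, lo, hi, default):
        self.name, self.lo, self.hi, self.default = name, lo, hi, default

    def validate(self, value):
        if not (self.lo <= value <= self.hi):
            raise ValueError(f"{self.name}={value} outside [{self.lo}, {self.hi}]")
        return value

class RuleTemplate:
    def __init__(self, name, specs):
        self.name = name
        self.specs = {s.name: s for s in specs}

    def instantiate(self, **params):
        # In a real system this would spawn a live rule instance;
        # here we just return the validated parameter set.
        return {name: spec.validate(params.get(name, spec.default))
                for name, spec in self.specs.items()}

template = RuleTemplate("PriceAlert", [ParamSpec("threshold", 1.0, 100.0, 50.0)])
print(template.instantiate(threshold=75.0))  # {'threshold': 75.0}
```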


Finally, the back-testing environment provides the last piece of the puzzle, allowing business analysts to validate their new models against historical production data.  By running the models through the historical event data, the model and its outputs can be verified prior to deployment into production.  User-supplied parameters can be validated against the historical data in the same way, providing assurance that the parameters will have the anticipated effect.


In summary, I believe that the business benefits of CEP will largely be realized “closer to the end user and the business”.  However, I don’t believe this means finding clever methods of bypassing IT.  A pragmatic approach incorporates a variety of users within the organization, and makes the most of each skill set in the enterprise.  The total result is measured by the collaboration of all the users, rather than the efforts of any specific user type within the organization.

Monday, March 23, 2009

We're going on Twitter

Posted by Giles Nelson

Louis Lovas and I, Giles Nelson, have started using Twitter to comment and respond to exciting things happening in the world of CEP (and perhaps, occasionally, beyond!).

The intent is to complement this blog. We'll be using Twitter to, perhaps, more impulsively report our thinking. We see Twitter as another good way to communicate thoughts and ideas.

We would be delighted if you chose to follow our "twitterings" (to use the lingo), and we'll be happy to follow you too.

Click here to follow Louis and here to follow Giles (you'll need to sign up for a Twitter account).

Monday, March 31, 2008

CEP ACTIONS Part 2: Separation Anxiety vs. Implementation Concerns

Posted by Matt Rothera

As a follow-on to a prior post, "CEP: What about the Action", as well as a few other related posts, I thought I would continue the discussion on this topic. Several blogs suggest that separation of action is an architectural goal when addressing whether actions should be performed from within a CEP system.  I agree that in theory this might be a worthy architectural goal, but in the real world, things are never that simple.

In a recent discussion with an organization, the subject arose of how CEP could detect impending exception conditions and handle them in a variety of ways. For example, if the action requires some type of process involving human intervention, the CEP engine could kick off a long-running workflow controlled by a BPM tool. Clearly this action would not be handled by the CEP platform, but the platform should make it easy to call out to these types of facilities and correlate responses with the original event.
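The "correlate responses with the original event" part is usually done with a correlation id attached to the outbound request. A hedged sketch, with invented names (`kick_off_workflow`, `on_workflow_response`) standing in for whatever integration layer actually carries the messages to and from the BPM tool:

```python
# Sketch of correlating an external workflow's eventual response with
# the event that triggered it, via a correlation id.
import uuid

pending = {}   # correlation_id -> originating event
outbox = []    # stands in for the channel to the BPM tool

def kick_off_workflow(event, dispatch):
    cid = str(uuid.uuid4())
    pending[cid] = event
    dispatch({"correlation_id": cid, "payload": event})  # hand off to BPM
    return cid

def on_workflow_response(response):
    origin = pending.pop(response["correlation_id"], None)
    if origin is None:
        return None  # late, duplicate, or unknown response
    return {"origin": origin, "result": response["result"]}

cid = kick_off_workflow({"type": "exception", "id": 42}, outbox.append)
reply = on_workflow_response({"correlation_id": cid, "result": "resolved"})
print(reply["origin"]["id"], reply["result"])  # 42 resolved
```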

But what if a real-time response is required? For example, when monitoring flight operations, CEP could dynamically re-adjust flight routes in response to an impending collision. One might recoil from delegating the action to an external component (and risking the window to adjust flight paths in time) purely for the sake of "architectural separation". Or should the "head of desk" in a bank’s proprietary trading group be content to absorb trade-execution latency in deference to the architectural notion that CEP doesn’t do action?

In addition to latency considerations, one must also consider the accessibility of the programming model. Speed is not simply a matter of how fast something executes; speed of deployment is equally important. A CEP development model that allows the developer to model the application holistically, as opposed to in "piece parts" (some designed within CEP and others handled elsewhere), can be an important consideration. A holistic approach does not preclude exposing "actions" as callable services that might be delivered elsewhere. A CEP infrastructure should provide the ability to build event-driven applications within its own confines, and then selectively expose actionable components as services or events as needs arise.

For a similar, non-CEP example, consider the J2EE application servers used to build traditional portal applications. Strictly speaking, every function in a portal would be implemented as a service, either as a pure web service or as a service exposed on an ESB. However, we know that real-world portal implementations do not strictly adhere to these principles, due to the ease of use of the portal programming model (and, to some extent, speed of execution). Most J2EE application servers do, however, offer the ability to expose EJBs as web services, which mitigates any risk to the overall reusability of the services in an SOA environment.

Actions are fundamental to CEP systems. I believe the decision as to whether the CEP system executes the action or hands it off to another system is not a matter of architectural abstraction, but rather a matter driven by the requirements of the business, with recognition of the systems already in place.

Friday, February 15, 2008

CEP: What about the action?

Posted by Matt Rothera

In recent posts, much has been made of various language and technology choices for CEP (SQL versus an imperative EPL, such as Progress Apama’s Monitor Script). As was also pointed out, the specific constructs of a CEP language benefit certain use cases. For example, the SQL variants of CEP focus on real-time event filtering. Use cases such as these put more emphasis on the “event streaming and filtering” piece of CEP, but less on the “act” portion. There is typically little or no ability to perform complex actions in response to the events, outside of building those pieces of the CEP application in conventional languages.

So what are the drawbacks of the SQL-based approach? An imperative EPL provides additional value beyond the initial event detection, and provides the ability to solve more complex problems. For example, for certain applications the actions themselves may need to spawn new rules on the event streams. There may be a need to call out to an external system, loop through the results, and conditionally spawn new rules, each with additional cascading actions requiring similar treatment. The actions themselves become new instances of event detection, and must operate in the context of the original initiating event.
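The "actions spawn new rules" idea can be made concrete with a toy correlator. This is illustrative Python, not Apama Monitor Script: the `Correlator` class and the order/fill events are invented for the sketch, showing only the shape of the technique, where an action registers a new listener scoped to the event that triggered it.

```python
# Toy event correlator: an action may register new handlers in response
# to an event, mirroring how an imperative EPL spawns listeners.
class Correlator:
    def __init__(self):
        self.handlers = []

    def on(self, predicate, action):
        self.handlers.append((predicate, action))

    def send(self, event):
        # Iterate over a snapshot, since actions may spawn handlers mid-send.
        for predicate, action in list(self.handlers):
            if predicate(event):
                action(event, self)

c = Correlator()
fills = []

def on_order(event, correlator):
    # The action itself spawns a new rule, scoped to this order's id and
    # therefore operating in the context of the initiating event.
    oid = event["id"]
    correlator.on(lambda e: e.get("fill_for") == oid,
                  lambda e, _: fills.append((oid, e["price"])))

c.on(lambda e: e.get("type") == "order", on_order)
c.send({"type": "order", "id": 7})
c.send({"fill_for": 7, "price": 101.5})
print(fills)  # [(7, 101.5)]
```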

Is a filtering-centric CEP approach potentially a “dead-end road”? The great promise of an event-driven enterprise is the ability to begin to automate business processes in response to potentially complex events. For this to become a reality, the CEP engine must not only detect, filter, and analyze a high volume of events, but also orchestrate complex actions within milliseconds of initial detection. Some of those actions may themselves involve complex correlation, as responses are coordinated across multiple systems to achieve the end result.

With an imperative EPL, not only can you perform the front-end event pattern detection, but you can also create arbitrarily complex actions that run in real time. A CEP platform must strive not only to make event detection performant, reliable, and easy from a development point of view, but also to provide facilities to build the corresponding real-time actions (which could be arbitrarily complex). As mentioned in previous posts, lower-level facilities such as looping and conditional constructs are important, but so are higher-level facilities and constructs. For example, the ability to model an arbitrary FSM (Finite State Machine) to handle the complex action portion of CEP is important, especially when higher-level, visual modeling tools are available to help developers build these types of applications.
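To make the FSM point concrete, here is a minimal sketch. The order-lifecycle states are an invented example (Apama's visual modeling tools are not shown here); the only claim is that making states and transitions explicit keeps a multi-step action inspectable.

```python
# Minimal finite state machine for the action side of a CEP rule:
# transitions are data, so the action's lifecycle is explicit.
class FSM:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # (state, event_type) -> next_state

    def feed(self, event_type):
        key = (self.state, event_type)
        if key in self.transitions:     # unknown events leave state unchanged
            self.state = self.transitions[key]
        return self.state

# Illustrative order-handling lifecycle.
order = FSM("NEW", {
    ("NEW", "ack"): "WORKING",
    ("WORKING", "partial_fill"): "WORKING",
    ("WORKING", "fill"): "DONE",
    ("WORKING", "cancel"): "CANCELLED",
})
for ev in ("ack", "partial_fill", "fill"):
    order.feed(ev)
print(order.state)  # DONE
```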

In a related blog post entitled “CEP as the EDA Application Server”, the author refers to a CEP engine as a potential choice for implementing an event-driven application. If that is the case, the CEP engine must have the characteristics described above. Even where certain tasks are offloaded to other components, such as a BPEL engine within an ESB environment, the CEP engine must still be able to correlate one or more responses in the context of the originating event, and conditionally launch additional actions based on the results. It seems an imperative EPL would be an absolute requirement to meet the needs of this use case.

At Progress Apama, we strive to create the complete complex event processing platform, which involves providing facilities not only to detect patterns in one or more high-volume event streams, but also to respond with complex actions in milliseconds. One classic event-driven application that demonstrates this is algorithmic trading, where the complex action is orchestrating a trade. However, other types of event-driven applications are clearly emerging, such as:

  1. Operations Center Monitoring and Control: Monitor a vast gas pipeline, energy grid, or water treatment plant and provide reactive control to events in real-time.

  2. Communications-Enabled Business Process: Based on complex events within the enterprise, set up voice or chat sessions between key parties to ensure that the issue is resolved quickly.

  3. Battlefield Operations: Monitor a vast number of objects (tanks, personnel, equipment), and take directed actions and orders within milliseconds based on the exact nature of the battlefield conditions.

These are just some of the examples. There are many others.


The point in calling them out is to suggest that we must not become so transfixed by the ability to tap into event streams that we presume a real event-driven application ends there. Arguably more important is what you can do with the information.