BAM

Friday, October 19, 2007

The Opportunity for Business Intelligence: Is it Evolution or Revolution?

Posted by John Trigg

Some recent news on improvements and changes in approaches to BI architectures caught my eye. New technologies suggest that there may be alternatives to traditional BI architectures (see the recent posting by Curt Monash on in-memory BI and Philip Howard of the Bloor Group on data warehouse appliances).  Though I am not intimately familiar with these new approaches, they seem to offer the kind of blazing speed, and applicability to areas such as in-memory analytics and activity monitoring, that overlaps with the capabilities of CEP applications.

Maybe a new turf war is on the horizon.

In an article in DM Review earlier this year, Larry Goldman of AmberLeaf took on the daunting task of determining whether new event processing technology is required to support a more responsive BI architecture. Larry posed a series of questions for deciding whether you should go the CEP route or can make do with existing technology. In light of the new commentary referenced above, I’d like to augment and question some of the thoughts in the Goldman article to show that there are other criteria that argue for going the CEP platform route and that, as we are fond of saying, it’s not just about ‘feeds and speeds.’

(Excerpted from DM Review, January 2007, “Customer Intelligence: Event Processing Alphabet Soup”), with comments interspersed:

1. Do I already have competencies in real-time messaging and streaming? If you do, you may not need an application [specifically designed for CEP]. If you don't, these products may decrease the learning curve.

Agreed that one may have competencies in real-time messaging and streaming in terms of accepting the data and storing it, but are you processing it as it arrives?  You must also consider what benefit you can draw from handling this data ‘in flight’ versus persisting, querying and analyzing it later.
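To make that contrast concrete, here is a minimal sketch in Python (purely illustrative, not any particular CEP product) of handling data ‘in flight’: each event updates a running aggregate and can trigger an action the moment it arrives, rather than being persisted first and queried later. The event shape and threshold are invented for the example.

    # Hypothetical illustration only: process each event as it arrives ("in flight")
    # instead of persisting everything and querying it later.
    def in_flight_monitor(events, threshold=100.0):
        """Maintain a running average and react immediately to large values."""
        count, total = 0, 0.0
        for event in events:                         # events arrive one at a time
            count += 1
            total += event["value"]
            running_avg = total / count
            if event["value"] > threshold:           # act at the moment of arrival,
                yield ("ALERT", event, running_avg)  # not after a later batch query

    # Example usage with a made-up stream of order events
    stream = [{"id": i, "value": v} for i, v in enumerate([40, 60, 150, 80])]
    for alert in in_flight_monitor(stream):
        print(alert)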

2. Can my reporting infrastructure handle operational BI, scaling to hundreds or thousands of users? If it cannot, these tools may be able to scale without forcing you to be a performance guru.

Can my infrastructure handle operational BI?  First, what is operational BI?  Traditional BI tools do a fine job of mining vast quantities of captured, processed and transformed data to produce graphs, charts and metrics.  But how do you transform those graphs, charts and metrics into actions?  That is what operational BI is looking at, and this is where the intersection with BAM, CEP and EDA comes into play.

3. Can users easily identify or specify events to track? If they can't, these tools may help you identify and monitor events without IT involvement.

Can users easily identify or specify events to track?  One of the things that I think is at the forefront of CEP is technology that can determine or detect meaningful patterns itself, rather than having to be programmed or set up to react to known, predefined patterns.  We see this as a major wave in the evolution of CEP.

4. What does real time mean to me? How fast do I need to make decisions? Do I have the people or the processes to react in real time?

I don’t disagree with that.  This was central to the recent Roy Schulte presentation on BAM at the Gartner CEP conference in Orlando (September 2007).  Roy has defined strata showing that different applications and verticals have different perceptions of real time, ranging from those measured in milliseconds (e.g. trading) to those measured in minutes and hours (e.g. supply chain management).

5. Perhaps there is a fifth question here, one that presents the unique capabilities of CEP to the audience.  Do I need to monitor event data across time windows (A and B happen within X of one another, or not)?  Do I need to monitor large numbers of permutations of each rule simultaneously?  Do I need to derive or infer activity from my event flows?  Traditional query-based approaches struggle with these requirements, especially if the demand or query refresh rate is high.
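To illustrate the first of those questions, here is a small hypothetical Python sketch (not tied to any particular CEP engine) that detects ‘A followed by B within a given time window’ over a single, time-ordered event stream; the tuple format and window length are assumptions made for the example.

    from collections import deque

    # Hypothetical sketch: detect "A followed by B within `window` seconds".
    def correlate(events, window=5.0):
        """events: iterable of (timestamp, type) tuples, in time order."""
        pending_a = deque()                 # recent A events still inside the window
        for ts, etype in events:
            # Discard A events that have fallen out of the time window.
            while pending_a and ts - pending_a[0] > window:
                pending_a.popleft()
            if etype == "A":
                pending_a.append(ts)
            elif etype == "B" and pending_a:
                yield (pending_a.popleft(), ts)   # matched pair (A time, B time)

    # Example: A at t=1 and B at t=4 match; A at t=10 with B at t=20 does not.
    matches = list(correlate([(1, "A"), (4, "B"), (10, "A"), (20, "B")]))
    print(matches)   # [(1, 4)]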

As the world of traditional BI architecture evolves and users look to determine whether CEP-based architectures are appropriate, it is important to note that there may be additional benefits to the use of CEP beyond just ‘trading up’. Why not look at the two technologies as two parts of a greater solution? Augmenting an existing BI infrastructure with CEP is one approach (in which one applies event processing logic to the streams before they are passed into the data warehouse/analysis layer), as is augmenting a CEP solution with analytics and KPIs from an existing BI infrastructure. There are opportunities for both sets of technology, and collaboration in this instance may help to clarify rather than obfuscate matters for the target user.
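Purely as an illustrative sketch of the first approach (applying event processing logic to the streams before they reach the data warehouse/analysis layer), the fragment below rolls raw events up into per-minute summaries in flight, so that only the much smaller summary rows are handed on for loading; the event fields and function name are invented for the example.

    from collections import defaultdict

    # Illustrative only: aggregate raw events into per-minute summaries in flight,
    # then hand the much smaller summary rows to the BI/warehouse layer.
    def summarise_per_minute(events):
        """events: iterable of (epoch_seconds, product, amount) tuples."""
        buckets = defaultdict(lambda: {"count": 0, "total": 0.0})
        for ts, product, amount in events:
            minute = int(ts // 60)
            b = buckets[(minute, product)]
            b["count"] += 1
            b["total"] += amount
        # One row per (minute, product) instead of one row per raw event.
        return [
            {"minute": m, "product": p, **vals}
            for (m, p), vals in sorted(buckets.items())
        ]

    raw = [(0, "book", 10.0), (30, "book", 12.0), (90, "pen", 1.5)]
    for row in summarise_per_minute(raw):
        print(row)   # these summary rows would be what the BI layer loads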

 

Tuesday, September 25, 2007

Thank You Gartner - Event Processing Conference #1 In the Books

Posted by John Trigg

Between September 19th and 21st, Gartner held its first conference on Complex Event Processing and Business Activity Monitoring in Orlando, Florida.  Some 200 people (by my estimation) came together to meet others interested in these technologies, and to see and hear presentations from a range of Gartner analysts, CEP vendors, educators and thought leaders and, most importantly, users of CEP.  The conference was bookended by impressive presentations from Roy Schulte and Bill Gassman on day one, setting out the current state of the CEP and BAM market, and by Dr David Luckham, who closed the conference with a thoughtful and insightful look at the future of CEP.

 

We’ll post entries about different aspects of the conference over the coming days and weeks.  But for now it is important to stress how timely this conference was and how its attendees have shown that the principles of CEP are beginning to take hold in a wide array of industries and solutions.  Between the three conference tracks organized by Gartner (Event Processing, Business Activity Monitoring, and Event Processing in Banking and Capital Markets) and the vendor-sponsored sessions, we heard descriptions of applications of CEP in a variety of scenarios ranging from algo trading to clickstream analysis to content distribution to manufacturing and many more.

 

Architectural presentations were also prevalent, with many sound ideas put forward on the relationships between the ever-evolving alphabet soup of CEP, BAM, SOA, EDA, BPM, OLTP, ESB and, I am sure, many others.  Bringing together an audience such as this to discuss both practical implementations and more theoretical research allows insight to flow around the CEP community, and helps us understand the ramifications when CEP is seen as more than just event feeds and event processing speeds. For true application infrastructures to be built on the principles and technologies of CEP, a wide understanding of how we can evolve the relationships between these disciplines will be key.  And that understanding will come from the continued holding of conferences such as this one (I am already looking forward to next year in New York) and from the interplay between the many disciplines, vendors and consumers of these technologies.

 

Dr Luckham posited that CEP will become a ubiquitous infrastructure technology in some 30 years.  For that to be true - indeed for it to happen sooner - we all have a lot of work to put in … but you can be sure that it will be worth it.

 

Monday, August 20, 2007

BAM Myth #4: Limit BAM to Monitoring Simple KPIs

Posted by Giles Nelson

Here’s another in our “BAM Myths” series, exploring some of the preconceptions behind BAM.

An uncontroversial definition of BAM’s role is “to provide real-time business visibility into important business data and processes”. Take the example of monitoring client behaviour on a Web site. Perhaps we wish to understand how end-users are interacting with the Web site and also aim for a certain service level to be delivered. A candidate KPI (key performance indicator) to measure is the average response time between a request being received and the response being dispatched back to the client.

This is certainly pretty straightforward to measure and put on a dashboard. We could also have a graph that goes red when the average response time goes above 1s. All very useful, but we should be able to go much, much further to give more relevant visibility. What if, for example, we could predict when our service level might be exceeded in the future based upon current trends, and therefore give ourselves time to provide more computing resource? And what if we could also use past activity levels at the relevant time of day to determine when the response time goes beyond two standard deviations from the historical average? We could also start correlating response times with an increase in clickstreams that fail to go all the way through to order placement. A lot of sophistication is possible once we have the capability to properly correlate and analyse multiple streams of information coming from our underlying systems. Very few BAM projects get anywhere near delivering this, though.
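To give a flavour of the second of those ideas, here is a hypothetical Python sketch of flagging response times that drift more than two standard deviations from a historical baseline; the figures and function names are invented for illustration and are not drawn from any particular BAM product.

    import statistics

    # Illustrative sketch: flag response times more than two standard deviations
    # away from a historical baseline (e.g. for the same time of day).
    def outliers(samples, historical):
        """samples/historical: response times in seconds."""
        mean = statistics.mean(historical)
        stdev = statistics.stdev(historical)
        for t in samples:
            if abs(t - mean) > 2 * stdev:
                yield t, mean, stdev       # candidate for an alert on the dashboard

    # Made-up numbers: the baseline hovers around 0.5s, today shows a 1.4s spike.
    baseline = [0.45, 0.50, 0.52, 0.48, 0.55, 0.47]
    today = [0.49, 0.51, 1.40]
    for t, mean, stdev in outliers(today, baseline):
        print(f"response {t:.2f}s deviates from mean {mean:.2f}s (stdev {stdev:.3f}s)")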

When this approach is not taken, valuable business context is lost. Instead, simple, technical KPIs are monitored, which are probably only of interest and use to IT. It is surely preferable that the people who are responsible for business performance should have a dashboard in front of them that gives them the information directly.

Such requirements exist throughout an organisation. Organisations should therefore ensure they use technology which can cope with a wide variety of different situations and which is agile enough that the BAM rules being applied can evolve as the organisation evolves.

Managers need to take decisions faster, with trust, consistency and depth. There is often no time for analysis of historical data to find out what happened. The decisions must be taken now, with a clear assessment of their potential organisational or business impact. This is why solutions that go beyond simple monitoring, with real-time analysis and action capabilities, are required. And this is also why solutions that are supposedly BAM-oriented, but are in fact just capable of simple KPI analysis and alerting, fall short.

Wednesday, May 16, 2007

Complex Event Processing at CERN

Posted by Giles Nelson

This week I visited CERN in Switzerland, the European Organisation for Nuclear Research, which is a customer of Progress. It was an astonishing and inspiring visit. CERN is in the final stages of building the Large Hadron Collider (LHC), which is due to go into production late this year. The LHC consists of a 27km loop in which protons will be accelerated and collided at unprecedented power levels to give us new insights into the building blocks of matter. In particular, the search is on for the Higgs boson, predicted originally in a paper dating from the 1960s. Finding this will fill a gap in the Standard Model of elementary particles and forces, and will help in furthering a "theory of everything". A particular highlight was to go down nearly 100m underground to look at the ATLAS experiment - a truly massive particle detector. Its enormous volume is made up of a number of different elements which detect different types of particles - muons, gluons and many others. The huge magnets which form part of the detector are cooled with liquid helium down to -269 degrees C to make them superconducting (and therefore more powerful). Viewing all this brought home what a remarkable engineering effort it all is.

Anyway, what has all this got to do with events? Well, through a number of presentations that CERN staff were kind enough to give us throughout the day, it became apparent that their whole world is to do with events and the processing of them. The term "event" is one they used often, to describe the information gathered by the detectors which sit around the collider. Every time a set of protons collides, sets of events are created which need to be analysed and filtered to determine which are of real interest. For example, there are two ways in which a proton collision can produce two Z particles. One is predicted to involve a Higgs boson, so the set of events to look for is something like "proton collision followed by a Higgs boson followed by two Z particles". To identify such sets of temporally correlated events, the raw events are propagated up through three levels of filter before finally being sent to a central computing resource for further research and analysis. Up to 40 million collisions per second take place. These are first analysed in FPGA hardware, reducing the 40 million collisions to a few thousand of interest. These are further filtered in software to finally produce a few hundred. These few hundred are then sent to other computing systems for further analysis.
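Purely as an illustration of that cascade, and obviously nothing like CERN's actual FPGA and software triggers, the toy Python sketch below chains progressively stricter filters so that only a tiny fraction of the raw events survives to the final analysis stage; the "energy" attribute and thresholds are invented.

    import random

    # Toy illustration of a multi-level trigger cascade: each stage applies a
    # stricter test, so only a small fraction of the raw events survives to the
    # final stage. (Invented thresholds; nothing like CERN's real trigger logic.)
    random.seed(0)
    raw = [{"id": i, "energy": random.expovariate(1.0)} for i in range(100_000)]

    level1 = [e for e in raw if e["energy"] > 5.0]       # coarse "hardware" cut
    level2 = [e for e in level1 if e["energy"] > 8.0]    # stricter software cut
    level3 = [e for e in level2 if e["energy"] > 10.0]   # final selection for analysis

    print(len(raw), len(level1), len(level2), len(level3))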

It's not only collider events that CERN needs to handle. CERN also has a newly built central control centre, part of which is used to monitor CERN's technical infrastructure. About 35,000 separate sensors exist to monitor everything from fire, to electricity substations, to coolant plants. All these sensors currently produce about 1.6M events per day, all of which have to be propagated to a central point for analysis. In turn these 1.6M are reduced to 600K events which are reviewed by human operators. Most are inconsequential (for example, the 18KeV power supply is still producing 18KeV) but some will require attention. By analysing these appropriately, CERN can ensure that the colliders are running as smoothly and as safely as possible. With billions of euros invested so far in the LHC, keeping the collider up and running as continually as possible is a top priority.
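A trivial, hypothetical way to picture that kind of reduction is report-by-exception: suppress readings that simply repeat a sensor's nominal value and pass on only those that need an operator's eye. The sensor names, set points and tolerance in the sketch below are all made up.

    # Hypothetical sketch of report-by-exception: drop readings that stay within a
    # nominal band (e.g. "the power supply is still at its set point") and forward
    # only the exceptions for human attention. All values here are invented.
    def exceptions(readings, nominal, tolerance):
        for sensor_id, value in readings:
            setpoint = nominal.get(sensor_id)
            if setpoint is None or abs(value - setpoint) > tolerance * setpoint:
                yield sensor_id, value          # needs an operator's attention

    nominal = {"psu-18kv": 18.0, "coolant-temp": 4.2}
    readings = [
        ("psu-18kv", 18.0), ("psu-18kv", 18.01),        # inconsequential
        ("coolant-temp", 4.2), ("coolant-temp", 5.9),   # the last one is not
    ]
    print(list(exceptions(readings, nominal, tolerance=0.05)))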

The visit proved a fascinating insight into the world of particle physics and the data processing challenges it produces. It really showed event processing at its most extreme.