
April 2007

Sunday, April 29, 2007

Can Events Yield Eternal Life?

Posted by John Bates

The quest for eternal life has fascinated people through the ages. Theorizing on the topic has usually centered on the physical body: how it might be continuously repaired, the brain transplanted into another body, or the body preserved cryogenically until medical science can repair it. Of course this discounts the many religious theories about “eternal life” after death – but I don’t intend to get into that one here!


However, consider a scenario in which, rather than preserving the actual physical person, you preserve a multi-dimensional digitized record of that person. So detailed a record, in fact, that it could be used to “reconstruct” the person. What do I mean?


Well, the closest we have come to this in recent history is a combination of physical evidence, still and moving images, and historical writings. Consider Lenin: his body was preserved in Red Square in Moscow, there are numerous writings about his behavior at certain points in his life, and there is even some early video. Probably not enough to reconstruct Lenin – but enough to understand something about his behavior and motivations.


So how do events fit into this scenario? Well, start by considering an event as the “digitization of a real-world occurrence”. For example, a portable sensor combining GPS and wireless communication can digitally capture the changing location of an object and communicate it as events describing that object’s X, Y and Z coordinates. Other, finer-grained technologies could track movements inside buildings. Overlaid on the coordinate system can be geospatial databases that interpret where the coordinates actually are – such as “Mark’s living room”. Apply this to a person on a continuing basis and you have captured one dimension of their life – where they are. All you have to do is record the events in time-order to have a historic view of their movements. Other digitized recordings can capture what the person is typing on a computer, whatever the person says, whatever the person hears, where the person’s eyes are looking, what the weather conditions are, and so on. Each sequence can be captured as events and recorded in time-order – for example, “at a particular point in time, Mark heard John say ‘event processing’”.
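
To make this concrete, here is a minimal sketch in Python – every name, coordinate and room boundary below is invented for illustration – of capturing location events, keeping them in time-order, and overlaying a toy geospatial lookup:

    import bisect
    import time
    from dataclasses import dataclass, field

    # Hypothetical event record: one digitized "real-world occurrence".
    # Ordering is by timestamp only, so a sorted list is a time-ordered log.
    @dataclass(order=True)
    class LocationEvent:
        timestamp: float
        person: str = field(compare=False)
        x: float = field(compare=False)
        y: float = field(compare=False)
        z: float = field(compare=False)

    # Toy geospatial lookup: named places as (min corner, max corner) boxes.
    PLACES = {
        "Mark's living room": ((0.0, 0.0, 0.0), (5.0, 4.0, 3.0)),
    }

    def place_of(e):
        for name, (lo, hi) in PLACES.items():
            if all(l <= c <= h for c, l, h in zip((e.x, e.y, e.z), lo, hi)):
                return name
        return "unknown"

    history = []  # time-ordered event log

    def record(e):
        bisect.insort(history, e)  # insertion keeps the log sorted by time

    record(LocationEvent(time.time(), "Mark", 2.0, 1.5, 1.0))
    for e in history:
        print(f"{e.timestamp:.0f}: {e.person} was in {place_of(e)}")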


Where things really get fun, though, is using the power of event processing on top of this. All sorts of interesting information can be discerned by correlating simple events recorded about a person. For example, event rules can determine that “at 9am, on a sunny Thursday April 19th 2007, Mark and John discussed event processing in Mark’s office” – because the system knew that Mark and John were together in a room, that the room was Mark’s office, that they were meeting between 9am and 10am, and that the topic of conversation was “event processing”.
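
Here is a hedged sketch, with an invented schema, of that kind of correlation – deriving the complex fact “A and B discussed topic T in room R” from simple, independently recorded presence and speech events:

    from dataclasses import dataclass

    @dataclass
    class Presence:          # a person seen in a room during [start, end]
        person: str
        room: str
        start: float
        end: float

    @dataclass
    class Utterance:         # a topic heard at a point in time
        speaker: str
        topic: str
        at: float

    def meetings(presences, utterances):
        """Derive 'A and B discussed T in room R' from overlapping events."""
        for a in presences:
            for b in presences:
                if a.person >= b.person or a.room != b.room:
                    continue               # each unordered pair only once
                lo, hi = max(a.start, b.start), min(a.end, b.end)
                if lo >= hi:
                    continue               # the two were never co-present
                for u in utterances:
                    if lo <= u.at <= hi and u.speaker in (a.person, b.person):
                        yield a.person, b.person, u.topic, a.room, u.at

    presences = [Presence("John", "Mark's office", 9.0, 10.0),
                 Presence("Mark", "Mark's office", 8.5, 10.5)]
    utterances = [Utterance("John", "event processing", 9.25)]
    for p1, p2, topic, room, at in meetings(presences, utterances):
        print(f"At {at}, {p1} and {p2} discussed '{topic}' in {room}")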


Some early projects (such as one that I ran with Mark Spiteri at Cambridge and another that my friends Mik Lamming and William Newman ran at Xerox Research) tried to capitalize on this fact – that by recording activities, complex things could be automatically “remembered”. Imagine, for example, that John can’t remember a key piece of information he wants to use in a paper; all he can remember is that he discussed it with Mark in an early-morning meeting on a sunny day within the last three months. Using complex event capture and query techniques, it is possible to retrieve the event sequences for every early-morning meeting between John and Mark in the last three months when the weather was sunny (that narrows it down a bit in New England :-)). John can then inspect the event sequences for those meetings – or narrow the search down further – and in the end find the relevant information, by correlating the relationships between a set of multi-dimensional events.
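
The query John needs might look something like the following sketch (schema and data invented for illustration) – filtering a recorded event log by time of day, attendees and weather:

    from dataclasses import dataclass

    @dataclass
    class MeetingEvent:
        day: str              # e.g. "2007-04-19"
        hour: int             # local start hour of the meeting
        attendees: frozenset  # who was present
        weather: str          # simple weather tag recorded at the time

    log = [
        MeetingEvent("2007-02-07", 9,  frozenset({"John", "Mark"}), "rain"),
        MeetingEvent("2007-04-19", 9,  frozenset({"John", "Mark"}), "sunny"),
        MeetingEvent("2007-04-20", 14, frozenset({"John", "Mark"}), "sunny"),
    ]

    hits = [e for e in log
            if e.hour < 10                        # early morning
            and {"John", "Mark"} <= e.attendees   # both were present
            and e.weather == "sunny"]             # and it was sunny
    for e in hits:
        print(e.day, e.hour)                      # -> 2007-04-19 9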


So, I know this isn’t quite eternal life! But it’s a start. Recording events from every possible angle and then being able to correlate them is a much richer way of recording a person than two-dimensional video. It enables all sorts of previously unanticipated scenarios and thought processes to be reconstructed about a person. More importantly, that person’s interactions with the rest of the world – in particular, with other people – can be determined. Unlike in the early experiments, sensor technologies are becoming ubiquitous and non-intrusive (no more wearing a range of Robocop-style equipment :-)). Of course this kind of capture opens up all sorts of privacy issues – but let’s park those for now.


And of course we haven’t even considered the real-time aspects of this technology. We used to have great fun with event-based rules, such as “When it’s coffee time and Mark and Scott are together, then play this video message on the nearest terminal to them”.
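
As a minimal sketch of such a real-time rule – the names and the notion of “coffee time” below are invented for illustration:

    COFFEE_START, COFFEE_END = 10.5, 11.0   # "coffee time": 10:30-11:00

    def on_location_update(now_hours, locations, play_video):
        """locations: person -> room, fed by the live sensor stream."""
        room = locations.get("Mark")
        if (COFFEE_START <= now_hours <= COFFEE_END
                and room is not None
                and locations.get("Scott") == room):
            play_video(room=room, message="good morning")

    on_location_update(10.75, {"Mark": "kitchen", "Scott": "kitchen"},
                       lambda room, message:
                           print(f"Playing '{message}' in {room}"))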


But back to eternal life for a moment…. If you capture events about an individual across a rich enough set of dimensions, have you captured that individual’s soul? Could you recreate that individual by modeling their responses to events? This is very much the “black box” approach: rather than actually understanding how a system works, we model it from its inputs and outputs. I typed the title of this article very much “tongue in cheek” – and I’m skeptical about whether we could ever model anything as complex as a human. However, at the very minimum, we can use event capture, replay and correlation to reconstruct a historical view of an individual from any “angle”. Your legacy could be preserved, even if your body couldn’t. But history often lends a dusting of romanticism to the imperfect individual – something event processing can’t do. It just gives you the facts, ma’am.

Wednesday, April 11, 2007

Apama Partners with ULLINK

Posted by Chris Martins

As those familiar with Progress may already know, the ability to work with partners is a longstanding core competency of the company. Further evidence comes in this week’s announcement of a partnership with ULLINK, a French firm that is integrating Apama to extend its order management system with algorithmic trading (see Apama-ULLINK announcement). We think this is a nice validation of the Apama value proposition, particularly the rapid application development capabilities and multi-asset-class support – both attributable to the flexibility of the underlying Apama architecture.

We are excited about the potential and confident that this will be a real win-win for both companies. Watch this space to see how that proves out.

 

Tuesday, April 10, 2007

C++/Java vs Apama MonitorScript

Posted by Mark Spiteri

A question we get asked regularly: why would you develop in MonitorScript as opposed to Java/C++/C#?

MonitorScript is Apama’s high-level, event-driven programming language. There are two key rationales for using it rather than a general-purpose language like C++ or Java.

1. MonitorScript is a high-level language whose design is optimised for specific tasks. This makes the development and testing cycles quicker and easier, leading to faster software delivery.

2. MonitorScript is an event-driven language, which makes it much better suited to computation in which the flow of the program is controlled by messages received from other programs. You can write event-driven programs in most languages, but it’s easier in some than in others – and MonitorScript was designed as an event-driven language first and foremost (see the sketch below).
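
To make the contrast concrete, here is a minimal Python sketch – illustrative only, this is not MonitorScript – of the inversion of control at the heart of event-driven programs: listeners are registered up front, and incoming messages drive the flow of execution:

    from collections import defaultdict

    listeners = defaultdict(list)      # event type -> registered callbacks

    def on(event_type, callback):
        listeners[event_type].append(callback)

    def dispatch(event_type, payload):
        for callback in listeners[event_type]:
            callback(payload)

    on("Tick", lambda p: print("price update:", p))
    on("NewsItem", lambda p: print("headline:", p))

    # In a real system this loop would block on a message bus or socket;
    # the point is that arriving messages, not the program, drive the flow.
    for event in [("Tick", 42.0), ("NewsItem", "rates unchanged"), ("Tick", 42.1)]:
        dispatch(*event)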

As a starting point, it’s correct to say that pretty much anything that can be written in MonitorScript can also be written in C++ or any other general-purpose programming language. So it’s not a competitor in the generic sense. What it is, though, is a means to get much faster results – at much lower risk, and in a manner far more acceptable to business management – in the specific circumstances it was designed for.

Algorithmic trading, business activity monitoring, complex event processing – this is where it excels. First of all, you get a program execution environment that is designed to seamlessly handle tens of thousands of structured event messages streaming in. These are time-stamped, ordered and processed for you, with deterministic semantics that your application logic can rely on. On top of that, you get a high-level, event-driven programming language whose core purpose is to let you create long-lived entities that seek complex sequences of events based on content and temporal constraints (both relative and absolute).
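
As a rough analogue – in Python rather than MonitorScript, with an entirely invented API – here is the kind of long-lived, content-plus-temporal pattern such an entity might seek: event A followed by event B within T seconds:

    class FollowedBy:
        """Seek pred_a, then pred_b arriving within `window` seconds."""

        def __init__(self, pred_a, pred_b, window, on_match):
            self.pred_a, self.pred_b = pred_a, pred_b
            self.window, self.on_match = window, on_match
            self.armed_at = None       # time at which pred_a last matched
            self.first = None

        def feed(self, event, now):
            if self.armed_at is not None and now - self.armed_at > self.window:
                self.armed_at = None   # window expired; disarm
            if self.armed_at is not None and self.pred_b(event):
                self.on_match(self.first, event)
                self.armed_at = None
                return
            if self.pred_a(event):
                self.armed_at, self.first = now, event

    watch = FollowedBy(
        pred_a=lambda e: e["sym"] == "ABC" and e["px"] > 100,
        pred_b=lambda e: e["sym"] == "ABC" and e["px"] < 95,
        window=60.0,
        on_match=lambda a, b: print("spike then drop:", a["px"], "->", b["px"]))

    for t, e in [(0.0, {"sym": "ABC", "px": 101}),
                 (30.0, {"sym": "ABC", "px": 94})]:
        watch.feed(e, t)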

So you get MonitorScript; a dynamic runtime environment into which you can load and unload programs at will; a very efficient message-processing sub-system; and a very scalable, proprietary, multi-dimensional indexing data structure that can seek tens of thousands of distinct event patterns concurrently. You end up with a pretty focused package. And, key in all of this, 90% of it is done behind the scenes for you in the most efficient way possible, allowing you to concentrate on what the business really cares about: developing your trading strategy.
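
A toy illustration of why that indexing matters – Apama’s actual data structure is proprietary and multi-dimensional, whereas this sketch indexes on a single, invented key field – is that events get routed only to the listeners that could possibly match, rather than being tested against every registered pattern:

    from collections import defaultdict

    by_symbol = defaultdict(list)     # index: symbol -> [(predicate, action)]

    def register(symbol, predicate, action):
        by_symbol[symbol].append((predicate, action))

    def on_event(event):
        # Cost is proportional to the listeners for this symbol,
        # not to the total number of registered patterns.
        for predicate, action in by_symbol[event["sym"]]:
            if predicate(event):
                action(event)

    register("ABC", lambda e: e["px"] > 100, lambda e: print("ABC breakout"))
    register("XYZ", lambda e: e["px"] < 10,  lambda e: print("XYZ slump"))
    on_event({"sym": "ABC", "px": 101})   # only the ABC listeners are consulted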

Could you reproduce all of that in C++ for a one-off algorithmic trading application? Of course you could. After all, the Apama correlation software is itself written in C++. Could you write something faster? Unlikely, but theoretically possible. Why wouldn’t you write your own? Well, it would take a long time to write, it would be extremely expensive to debug and test, and it would inevitably be focused on the application at hand. If you take the time and trouble to make it generic and reusable – well, that’s effectively what Apama’s team of CompSci PhDs has done over the course of several years.

So it’s really all about coming up with a package that makes life easier in specific circumstances.

Sunday, April 01, 2007

Sentient CEP and the Rights of Algorithms

Posted by John Bates

I have just returned from the UK where, as part of my duties for the week, I spoke at a conference on Fixed Income and another on Foreign Exchange. At both these events, the delegates were interested to hear of the latest trends in the industry – which include aggregating multiple market data streams from disparate trading venues into a single view, and using rules-based trading techniques to rapidly detect complex patterns and then make and execute trading decisions. In these areas and beyond, Complex Event Processing is being successfully used by Investment Banks and Hedge Funds to meet such complex, low-latency requirements.
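
As a hedged illustration of the aggregation trend – the venues, symbol and prices below are all invented – consolidating quotes from multiple trading venues into a single best-bid/best-offer view might look like this:

    books = {}   # symbol -> {venue: (bid, ask)}

    def on_quote(symbol, venue, bid, ask):
        book = books.setdefault(symbol, {})
        book[venue] = (bid, ask)
        # Consolidated view: highest bid and lowest ask across all venues.
        best_bid = max(b for b, _ in book.values())
        best_ask = min(a for _, a in book.values())
        print(f"{symbol}: best bid {best_bid} / best ask {best_ask}")

    on_quote("EUR/USD", "VenueA", 1.3520, 1.3523)
    on_quote("EUR/USD", "VenueB", 1.3521, 1.3522)   # VenueB tightens the spread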


While I was in the UK, one of the items in the news was Britain marking the 200th anniversary of the abolition of the slave trade with a service in Westminster Abbey, attended by the Queen. I hope none of my comments seem in any way to belittle this significant occasion, but it did rekindle some thoughts about “will algorithms ever have rights?”. We look back on the past and find it inconceivable that any class of people could be considered less human than others. Society has adopted that principle, quite rightly, as part of our programming. However, we’re quite happy to turn a blind eye to the suffering of animals in unnecessary cosmetic testing and in horrific factory-farm conditions. In the same way that most people in the 18th Century were quite happy to receive cheap cotton and sugar, we are now quite happy to receive cheap cosmetics and food. History suggests, however, that this will change eventually.


So what of the rights of the algorithm? Now you may think this is totally mad – and you’d probably be right. However, consider for a moment the field of algorithmic trading. While speaking at both of the conferences this week, I illustrated the changing role of the trader. Rather than watching the markets for key trading opportunities and then manually executing, he/she can now deploy an army of algorithms to look for and respond to those opportunities. The algorithms are the trader’s slaves. As one of Apama’s customers, Lou Morgan of HG Trading, put it recently: “…. They don’t need a lunch-break and they don’t need a bonus…!”. Of course these algorithms are not sentient, and therefore they don’t have any rights – but what if they were!?

Together with my colleague and mentor at Cambridge, Professor Andy Hopper, I ran a research group that looked into what we termed “Sentient Computing”. This is a very grandiose title for what Gartner might describe in commercial terms as the “Enterprise Nervous System”. It was all about how complex asynchronous stimuli in a wide-area computing environment could be detected and used to automate responses. There was lots of fun hardware in this environment, like “Active Badges”, which could detect your location to within 2 centimeters in 3 dimensions; “Active Surfaces”, which provided a high-bandwidth network to your wearable computers when in contact with the surface; and a variety of other ubiquitously deployed sensors, actuators, compute terminals and devices.


But the glue that made Sentient Computing possible was Complex Event Processing and Event-Driven Architectures – as they are called today. Sensors generated events that needed to be analyzed; actuators could be triggered but needed something to do the triggering. Event-based rules provided the glue that enabled complex circumstances to be modeled. For example: “When John Trigg and Chris Martins are within 10 feet of each other and there is a video terminal in their view, then play message ‘good morning’ on the nearest video terminal”. Some people described this kind of environment as an “Exploded Robot” – because rather than a single object having various sensors and actuators attached to it, the network cloud is the medium to which they are attached – and CEP and EDA are the mechanism through which the “neuron firing” is wired together. Nowadays, we are familiar with how CEP and EDA enable all sorts of “exploded robot” applications – such as logistics companies that monitor the wide-area movements of their fleets of trucks and optimize based on the perishability of their goods, the weather, traffic conditions etc.
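
A small, invented sketch of that proximity rule – positions are in feet, and the terminal names are hypothetical:

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def proximity_rule(positions, terminals, play):
        """positions: name -> (x, y) in feet; terminals: id -> (x, y)."""
        a, b = positions["John Trigg"], positions["Chris Martins"]
        if dist(a, b) <= 10.0 and terminals:
            midpoint = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            nearest = min(terminals, key=lambda t: dist(terminals[t], midpoint))
            play(nearest, "good morning")

    proximity_rule({"John Trigg": (0, 0), "Chris Martins": (6, 8)},
                   {"lobby-screen": (3, 4), "kitchen-screen": (40, 2)},
                   lambda terminal, msg: print(f"Playing '{msg}' on {terminal}"))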


Although we are at an early stage with Sentient Computing, the technology approaches of CEP and EDA definitely provide a great framework. An event is a nerve impulse from the body. CEP provides the brain – which can be changed with new rules (new thinking?) at any time, rather than being hardwired. The next stage, of course, is rules that are written and refined by the system itself. But the exciting thing is that there doesn’t have to be one brain; the powerful thing about events is that they can flow on to another brain, where they can be processed and analyzed in a different way (a different perspective on the information; a different opinion?). And then events resulting from decisions can be routed to actuators/services/humans to ultimately cause things to happen – the resulting nerve impulse. All the while, EDA provides a framework to bring these various nerve inputs, brains and nerve outputs together. Clearly we’re not there yet – CEP is mainly being used in individual, unconnected systems – but an exciting “event cloud” is coming, enabling various CEP systems to interoperate.
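
A minimal sketch of the “more than one brain” idea – all names invented: the same event stream fans out to several independent processors, each drawing its own conclusions, with their outputs emitted onward as new events:

    brains = []   # independent event processors sharing one input stream

    def attach(brain):
        brains.append(brain)

    def publish(event, emit):
        for brain in brains:
            brain(event, emit)        # every brain sees every event

    # Two "brains" with different perspectives on the same stream.
    attach(lambda e, emit: emit(("alert", e)) if e.get("temp", 0) > 30 else None)
    attach(lambda e, emit: emit(("log", e)))

    publish({"sensor": "lab", "temp": 35},
            emit=lambda output: print("routed onward:", output))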


So should algorithms have rights? Well of course not – and it is April 1st as I’m writing this. But I do ask you to think about what you would have been like if you’d been alive in the 18th Century and enjoyed cheap tobacco, sugar and cotton. Would you have “thought differently”? I remember an episode of “Star Trek – The Next Generation” in which a scientist wanted to take Mr Data apart to see how he worked. Captain Picard successfully proved to Star Fleet that Mr Data was sentient and thus an individual, rather than property. I don’t think you necessarily have to be sentient to have rights – there’s no reason to pull the wings off a butterfly just because it isn’t familiar with Descartes. I used to tell one of my friends – a Maths Professor – that his computer programs were so badly written that they were like strange mutated creatures that probably suffered. Of course this is very silly!


Anyway, I leave you with the thought that perhaps Event-Driven Architectures offer us the ability to have a truly global “information nervous system”. We are asynchronous; the world is asynchronous – so computing paradigms should support this. Of course, realistically this is just a set of inputs, rules and outputs. However, as we add more intelligence to the “brains”, we must be careful that this system doesn’t become too autonomous. After all – you know what happened with "Skynet" in Terminator!