
February 2009

Wednesday, February 25, 2009

Structured Programming in the Apama EPL

Posted by Louis Lovas



This is my second installment in a series devoted to the Apama Event Processing Language, MonitorScript. In my introduction to the Apama EPL I described the basic structure of the language, some concepts and terminology, and included the obligatory Hello World sample. In this entry I'll continue that basic tutorial, delving into the elemental structure of the EPL. I've chosen this focus to highlight a simple fact: the language adheres to the principles of structured programming. Just so we're all on the same page, here's a definition I found on the web:

DEFINITION - Structured programming (sometimes known as modular programming) is a subset of procedural programming that enforces a logical structure on the program being written to make it more efficient and easier to understand and modify.


That's a definition that is clearly irrefutable. For a language to live up to it, the language must provide a number of fundamentals. Code modularity means developing code as separate modules; this allows for parallel development, improves maintainability and lets modules be plug-replaceable. Encapsulation is a means of hiding the implementation or inner workings of a calculation or algorithm. Lastly there are interfaces and instances, an architectural pattern that may be applied to the design and implementation of applications which transmit events between loosely coupled software components. These are just a few of those fundamentals. Such essential elements of languages have been part of the scene for decades; they are present in procedural programming languages like C and in object-oriented languages like C++ and Java.

Lest we forget our roots as software engineers: with all the hype surrounding CEP and its constituent domain-specific languages, it's important to do a reality check and not get swept up and forget the basic principles necessary for creating long-standing, maintainable code. Last year I wrote a piece on readability as a criterion for a successful language, which rests primarily on a development language having these basic fundamentals: modularity, encapsulation, interfaces and instances.

Apama's EPL, MonitorScript, has all these capabilities, as I will describe below. They are what allow us to build reusable framework components and solutions, which we've done with our Capital Markets Framework and Solution Accelerators for FX Aggregation, Smart Order Routing, Surveillance, Bond Pricing, etc. These are components written in our EPL with the plug'n'play modularity to be redeployed across multiple customers.

To illustrate this idea of structure - modularity, encapsulation and interfaces - I'll use a short example of a producer and consumer. This is a common design pattern, or idiom, that we use extensively. The Apama EPL's event paradigm extends not only to the type of CEP applications we enable but also to the nature of the language itself. If you're familiar with message-passing languages such as Erlang this will be a familiar concept: the different modules that make up an application communicate with one another by passing messages (or events, as is the case here).

In this example I have a service or producer monitor that generates and sends Item events, a client or consumer monitor that consumes Item events, and an interface for the interaction between the two. If the term monitor seems strange, I defined a set of terms and concepts in my introduction; I suggest a quick review of that to get up to speed.

The interface between the monitors, defined below, is the set of events they receive and transmit. The event definitions are declared within a package name (i.e. com.apamax.sample); Apama's EPL supports Java-like package names for namespace isolation, which strengthens that modularity notion.


The Item Interface.

package com.apamax.sample;

event SubscribeToItems {
    string item_name;
}

event UnsubscribeFromItems {
    string item_name;
}

event Item {
    string  item_name;
    integer item_count;
    float   item_price;
}



The Item Interface is simply a set of prescribed events: two that let a consumer ask to start or stop receiving Items, plus the Item definition itself. As I mentioned, it uses a subscription idiom. We use this notion extensively where a monitor is a layer over some subscription-based service such as a market data adapter. A real market data interface would be much more extensive; here I've scaled it back for the sake of simplicity, but you can imagine a more robust interface including error handling and status events.
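To give a flavour of what such an extension might look like, here is a minimal sketch of a couple of additional events. The event names and fields below are purely hypothetical; they are not part of the sample interface above nor of any Apama framework component.

package com.apamax.sample;

// Hypothetical additions to the Item Interface - a sketch only
event ItemSubscriptionError {
    string item_name;
    string reason;
}

event ItemServiceStatus {
    string item_name;
    string status;      // e.g. "UP" or "DOWN"
}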

 



The Item Consumer.
package com.apamax.sample;

monitor ItemClient {

    SubscribeToItems subscribeToItems;
    Item item;
   
    action onload {
        subscribeToItems.item_name := "sample";
        route subscribeToItems;


        on all Item():item {
            log "Got an item: " +  item.toString();
            if (item.item_count > 10) then {
               route UnsubscribeFromItems("sample");
               log "All done.";
            }
        }
    }
}



The Item consumer is also a monitor in the com.apamax.sample namespace. It is a user of our event interface to the Item service and as such is interested in receiving events of type Item. The interface defines the means to do this by subscribing: the consumer simply has to create a SubscribeToItems event and forward it to the producer service.

As I mentioned earlier, the Apama EPL adheres to an event paradigm as a fundamental characteristic of the language. The route statement is the means by which monitors communicate; this is another precept that underpins the fundamentals of modularity and encapsulation.

Once a subscription request has been sent (route subscribeToItems), the consumer listens for Item events (on all Item()). In this simple example we're just looking to receive them all, without any filtering or pattern matching. I will explore event pattern matching - both simple and complex - in a follow-up blog.
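As a small taste of that, here is a sketch of a monitor showing a filtered listener and a simple followed-by pattern. It assumes the Item Interface above; the monitor name and the patterns themselves are illustrative only, not part of the sample.

package com.apamax.sample;

// A sketch only: a filtered listener and a followed-by pattern over the Item Interface
monitor ItemPatternSketch {

    Item item;

    action onload {
        // Match only Item events whose name is "sample"
        on all Item(item_name="sample"):item {
            log "Filtered item: " + item.toString();
        }

        // A followed-by pattern: a subscribe that is later followed by an unsubscribe
        on SubscribeToItems(item_name="sample") -> UnsubscribeFromItems(item_name="sample") {
            log "Subscription lifecycle complete.";
        }
    }
}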

To complete the picture, the sample tests a field in the Item event and terminates the subscription if it exceeds a constant value, (item_count > 10).


The Item Producer.
package com.apamax.sample;

monitor ItemService {
   
    SubscribeToItems subItems;
    UnsubscribeFromItems unsubItems;
    integer cnt := 0;
    float price := 0.0;
    listener l;
   
    action onload {

        on all SubscribeToItems():subItems {
            startItems(subItems.item_name);
        }

        on all UnsubscribeFromItems():unsubItems {
            stopItems(unsubItems.item_name);
        }
    }

    action startItems(string name) {
        l := on all wait(0.1) {
            route Item(name, cnt, price);
            cnt := cnt + 1;
            price := price + 0.1;
        }
    }

   
    action stopItems(string name) {
        // do something to stop routing events
        l.quit();
    }
}



The Item producer is also in the com.apamax.sample namespace. It defines listeners for SubscribeToItems and UnsubscribeFromItems, the two request events from our interface. Typically, subscriptions would be managed on a per-user basis, thus allowing multiple consumers to subscribe to our Item service. That is a detail I will outline in a subsequent installment, along with a few other related features such as instance management.

Once a subscription request has been received, the startItems action (i.e. a method) is invoked to continuously route Item events to the consumer every 0.1 seconds (on all wait(0.1) ...). Again, in a real-world scenario this portion of the service would be more involved, possibly managing the interaction with an external adapter, for example.

For terminating a subscription on behalf of the client (on all UnsubscribeFromItems()), we simply terminate the wait listener (a looping construct) set up in startItems.
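Coming back to the per-user point above: as a rough, purely hypothetical sketch (the monitor and action names below are illustrative, not the pattern I'll describe in the later installment), one way to give each subscriber its own state is to spawn a monitor instance per subscription.

package com.apamax.sample;

// A hypothetical sketch of per-subscriber handling using spawn - not the sample producer above
monitor ItemServicePerSubscriber {

    SubscribeToItems subItems;
    integer cnt := 0;

    action onload {
        on all SubscribeToItems():subItems {
            // Each subscription gets its own spawned monitor instance with its own copy of state
            spawn serveSubscriber(subItems.item_name);
        }
    }

    action serveSubscriber(string name) {
        // Terminate just this spawned instance when its subscriber unsubscribes
        on UnsubscribeFromItems(item_name=name) {
            die;
        }
        on all wait(0.1) {
            route Item(name, cnt, cnt.toFloat() * 0.1);
            cnt := cnt + 1;
        }
    }
}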


 


This example is designed to illustrate a few common principles in the Apama EPL:

  1. To stress that the fundamentals of structured programming are ever present: modularity, encapsulation and interfaces. Two benefits of Apama's modularity are noteworthy and relate to the plug'n'play idea I mentioned earlier. a) As is typical with modular programming, revised modules can be introduced with little or no impact as long as interfaces are left intact; this holds true for the Apama EPL. b) Those revised modules (monitors) can be replaced in a running system; a shutdown is not required. Furthermore, the modularity also extends beyond the boundary of a single CEP engine to multiple CEP engines in an Event Processing Network (EPN) with no actual code change.
  2. The event paradigm is inherent in the language not just for application events but for the entire application design center.

In future installments on the Apama EPL, I'll delve into a few more of the language constructs that extend this basic idiom (multiple consumers, spawning, and parallelism).

Once again thanks for reading,
Louie


Monday, February 16, 2009

Sun hosts Developers Workshop of complementary technology for low latency

Posted by Louis Lovas


Last month I was invited to speak at a Developers Workshop sponsored by Sun Microsystems on building low-latency trading applications. I had a 25-minute time slot to fill, with the goal of educating an audience of 110 architects, engineers and technical managers from Wall Street on CEP and Apama's vision of it. I'm usually able to sequester an audience for a lot longer than that to give them my perspective, and since I tend to ramble just a bit, it was a tall order for me to whittle it down to this shorter time slot.

To go one step further, I also did a demonstration of the Apama Algorithmic Trading Accelerator (ATA) and our graphical modeling tool, Event Modeler™. So I had to move fast to accomplish this.

Since this was a Sun-sponsored event, there were a number of sessions devoted to Solaris and Sun hardware. Sun has done some great work with the Solaris operating system to leverage multi-core processors for scaling low-latency applications. Still, you need ample knowledge and expertise to fine-tune the OS to unlock that capability. There were a few demonstrations of how, using a few of the Solaris OS command tools, you can better apportion processor cycles to starving applications and achieve tremendous gains in performance. One has to be quite scholarly in Solaris systems management so as not to shoot oneself in the foot.

Besides myself representing Apama and CEP, I thought Sun did a great job of bringing together a group of complementary vendors that touched upon the theme for the workshop - low latency. Just to highlight a few... Patrick May from Gigaspaces discussed distributed cache technology and Jitesh Ghai from 29West described the benefits of low-latency messaging to the trading infrastructure. I've always considered both of these technologies very complementary to CEP. Among many other uses, distributed caching engines provide a basis for a recoverability model for CEP applications, and low-latency messaging brings new possibilities for architecting CEP applications in a distributed model.

As for me, I presented a number of CEP themes in my talk:

1) Drivers for CEP Adoption. 

Fitting with the theme of the workshop, the drivers for CEP adoption are the increasing demands for low-latency applications. It's the race to the microsecond on Wall Street, whether we like it or not. Additionally, the need for rapid development and deployment of complex applications is pushing CEP technology into the hands of sophisticated business users. Graphical modeling tools empower these astute users; the ill-prepared will get left behind.

2) Evolution of CEP in Capital Markets.

From single-asset broker algos to cross-asset, cross-border smart order routing to real-time risk and surveillance, CEP is growing and maturing in Cap Markets on numerous fronts.


3) Anatomy of CEP Architecture.

I presented a macro-level anatomy of the Apama vision of that architecture. There are four key components: 1) the CEP engine itself; 2) integration to the external world; 3) tools for development and deployment; and 4) visualization across a wide spectrum, from richly interactive desktop click trading to widely distributed web-based user interfaces.

Lastly, I want to thank my pal Gareth Smith, the Apama Product Manager and slide designer extraordinaire, for these slides on the architecture. He's a wiz at putting ideas into compelling visuals.

You can download the slides of my Apama CEP presentation here.

As always thanks for reading,
Louie


Friday, February 13, 2009

Algos Go Global - How Do I Get a Ticket to Brazil?

Posted by Dan Hubscher

Carnival, or Carnaval as it's spelled in Portuguese, runs through the weekend of February 21st this year in Brazil.  Who do you think might be there?  Turns out it’s U.S. President Obama, in the form of thousands of plastic masks.  Wish you could send a likeness of yourself to the festival too?  You might not have made it to Brazil quite yet, but Apama’s algorithms have.

This week, we announced that Ágora Corretora, the largest broker in Brazil, is deploying the Apama platform to support algorithmic trading in both futures and equities on the BM&FBovespa, the leading exchange in Latin America and the world’s third largest in terms of market value.  Ágora is the first broker to develop and deploy unique trading strategies within the Brazilian market.  In the first phase of the project, Ágora has deployed the Apama platform in support of its proprietary trading desk. In the second phase, Ágora will provide access to the Apama platform for its buy-side clients to create, execute, and monitor customized strategies that are unique to their specific trading goals. Ágora represents the first broker in Brazil to offer custom algorithmic trading strategies to its clients.

Developments like these are important outside of Brazil too, as the buy side worldwide increasingly participates in multi-asset, cross-asset, and cross-border trading.  Historically, bringing order flow from market participants in the long-standing algorithmic trading centers such as New York and London to the Latin American markets has not been easy.  Doing so requires exceptional local market expertise and leadership.  As the Brazilian capital markets continue to become more strategic on the world stage, it is important for funds with expansion goals to see alignments of leaders.  In this case, the leading Brazilian broker has chosen to trade on Latin America's leading futures and equities exchange with the world’s leading algorithmic trading platform.

Maybe next year around this time, my customer visits will take me to São Paulo.  It will be interesting to see how trading firms have been able to innovate over the course of time on a flexible, open platform.  And then, a quick detour to Rio for some rest and relaxation. Maybe I’ll watch a parade.  No Obama mask required.  One more pin on the world map, with customers from New York to the Bank of China International.  How far can your trading platform take you?

-Dan

Monday, February 02, 2009

From Boston to Davos to Cross-Asset Trading

Posted by Dan Hubscher

Cold, snow, and crossing boundaries of all kinds turned out to be the theme for the week of January 26th.  One such boundary crossing is cross-asset trading, where counter-party risk can leave traders out in the cold, fast. Cross-asset trading, especially including multi-legged trades with multiple counterparties, carries very high risk and requires complicated risk calculations.  Multi-legged trading requires constant data updates to monitor each trade’s lifecycle and the firm’s corresponding risk exposure.

Dealing With Technology landed on subscribers’ desks on January 26th, with a special report on “Laying the Cornerstone for Cross-Asset Trading.”  In the report, business leaders including John Bates, founder and general manager of Progress Apama, discussed the common hurdles in deploying cross-asset trading platforms.  I thought about those deployments as I crossed a number of boundaries myself.

I attended the annual Progress Software Analyst Day event in cold, snowy Boston on the 29th.  A global representation of customers spoke to industry analysts about their use of Progress products like Apama to solve business problems.  We were far away from the World Economic Forum in Davos, Switzerland, also running the same week, where news coverage featured the cold, the snow, the global economic crisis, and an admission price of over USD 50,000 per person.  CNBC interviewed Thomas L. Friedman, who talked about investment in technology and business innovating its way out of crisis.

The DWT special report participants spoke more accessibly about how close capital markets firms are to deploying truly cross-asset trading platforms with advanced capabilities such as multi-leg trades.  But the Apama consultants operate where the rubber really meets the road.  Some of them had come to the cold, snowy, greater Boston area from around the world for a week’s work too.  I visited some of their meetings for a detailed discussion of how Apama is used in the field.  Here are some real world experiences that they have had, helping customers create cross-asset trading applications with Apama:

  • Trading options against futures
  • Trading cash treasuries against treasury futures
  • Algorithmic FX spot trading against FX futures
  • Trading cross-border listed equities with a spot FX leg
  • Adding FX futures as an asset class to an existing FX aggregation deployment
  • Trading exchange-traded spreads that consist of two or more legs

Friedman’s comments in the CNBC/Davos interview were more around broad notions of innovation - like new energy technologies - than cross-asset trading, but taken generally I have to agree.  There’s innovation happening in our industry every day, and this activity will be an essential part of economic recovery.  I just hope the jet-set attendees found Davos worth the price.

-Dan