CBOE: Integrated Extensibility

Published in Automated Trader Magazine Issue 24 Q1 2012

When it comes to senior technology personnel, not many exchanges enjoy the continuity of the Chicago Board Options Exchange (CBOE), the oldest and largest U.S. options exchange and creator of listed options. Gerald O'Connell, CBOE Holdings' Executive Vice President and CIO, has worked there since 1984 and has been in his present role since 1993. He talks to Automated Trader about the thinking behind, and evolution of, the exchange's current CBOE Direct technology - and its key role in supporting the exchange's business model both now and in the future.

When was the starting point for the exchange's current technology?

Our core CBOE Direct technology had its origins around 2000. We developed the platform in anticipation of competition in the options space from the ISE. However, when we designed it, we also wanted it to be capable of handling stocks and futures, as we anticipated that we might at some point expand into those businesses and wanted to ensure that our technology would be able to handle those markets from day one.

We didn't want to be in a situation where we would have to build everything from scratch for every new market that we entered. This obviously made the original development project longer and more demanding, but has saved us considerable time since, as we moved first into the futures business (about five years ago) and subsequently into stocks.

So the object model you were designing around 2000 must have been pretty extensive anyway to accommodate the possible future addition of futures and stocks, but did you also design it so it could be even more generally extensible, such as for OTC contracts?

Yes - absolutely. Back then the big thing was extensible objects. So we decided early on that we needed to take advantage of this concept so that we would be able to extend market/instrument definitions to cover pretty much anything. The intention was to make sure that everything we did was extensible and reusable.
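
As an illustration of that idea (not CBOE's actual object model), a minimal Java sketch of an extensible instrument definition might look like the following: a common base type carries the attributes every market shares, and product-specific subtypes add only what differs. All class and field names here are hypothetical.

```java
// Hypothetical illustration (not CBOE's actual object model): a common base
// type carries the attributes every market shares, and product-specific
// subtypes add only what differs, so options, futures and stocks can be
// handled by the same core.
import java.math.BigDecimal;
import java.time.LocalDate;

abstract class Instrument {
    final String symbol;
    final BigDecimal tickSize;

    Instrument(String symbol, BigDecimal tickSize) {
        this.symbol = symbol;
        this.tickSize = tickSize;
    }
}

class Equity extends Instrument {
    Equity(String symbol, BigDecimal tickSize) {
        super(symbol, tickSize);
    }
}

class Future extends Instrument {
    final LocalDate expiry;
    final BigDecimal multiplier;

    Future(String symbol, BigDecimal tickSize, LocalDate expiry, BigDecimal multiplier) {
        super(symbol, tickSize);
        this.expiry = expiry;
        this.multiplier = multiplier;
    }
}

class Option extends Instrument {
    enum Right { CALL, PUT }

    final String underlying;
    final LocalDate expiry;
    final BigDecimal strike;
    final Right right;

    Option(String symbol, BigDecimal tickSize, String underlying,
           LocalDate expiry, BigDecimal strike, Right right) {
        super(symbol, tickSize);
        this.underlying = underlying;
        this.expiry = expiry;
        this.strike = strike;
        this.right = right;
    }
}
```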

What technology platform were you using before?

Prior to 2000 we were an assembler shop, initially writing for the IBM 370 series (which in the mid-1980s was probably one of the fastest transaction processors in the world), but we also went on to code specifically for various other hardware platforms. So moving from there to a higher-level language was a very significant change for us.

Though the technology we use now is radically different, the low-level understanding we gained from writing assembly code has nevertheless been incredibly valuable going forward because it gave us a strong grasp of software/hardware interaction and how it can be tweaked to enhance performance.

It also gave us a good insight into mission-critical systems in that our mainframe-based platform was originally based upon an IBM operating system called Airline Control Program, an airline reservation technology.

So what technology did you switch to in 2000?

Java - which was obviously a much higher-level language than we had used before (although you could also argue that even C/C++ are high level compared to assembly language). We chose it because it gave us the opportunity to build, test and deliver applications in a shorter time frame, as we would no longer have to write code tailored to a very specific CPU architecture.

Gerald O'Connell

Back in 2000 many exchanges would probably not have considered Java for a high-performance environment such as an exchange matching engine, and would presumably have seen something like C++ as the more obvious choice?

Yes - at that time one of the features in Java that many people were concerned about was the garbage collector and its impact on performance. We certainly worked hard in the early years to mitigate its effects on our environment. On the other hand, the concept of Java HotSpot compilation meant that we were able to compile and run our code so that it would actually be faster than the C++ equivalent.
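
By way of illustration, one common technique for mitigating garbage-collection pressure in a latency-sensitive Java path is to pre-allocate and recycle message objects rather than allocating per message. The sketch below is our own hedged example of that general idea, not CBOE's code; the QuoteMessage and QuotePool names are hypothetical.

```java
// Illustrative only: pre-allocate mutable message objects and recycle them so
// the hot path generates little or no garbage. Class names are hypothetical.
import java.util.ArrayDeque;

final class QuoteMessage {
    long instrumentId;
    long bidPrice, askPrice;   // prices in ticks to avoid creating objects
    int bidSize, askSize;

    void clear() {
        instrumentId = bidPrice = askPrice = 0L;
        bidSize = askSize = 0;
    }
}

final class QuotePool {
    private final ArrayDeque<QuoteMessage> free = new ArrayDeque<>();

    QuotePool(int capacity) {
        for (int i = 0; i < capacity; i++) {
            free.push(new QuoteMessage());
        }
    }

    QuoteMessage acquire() {
        QuoteMessage m = free.poll();
        return (m != null) ? m : new QuoteMessage();  // grow only if exhausted
    }

    void release(QuoteMessage m) {
        m.clear();
        free.push(m);
    }
}
```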

Our relatively early Java adoption has meant that we have been able to gain a huge amount of experience as regards Java-based server technology for tasks such as high-frequency trading. We also benefited from being able to work closely with Sun Microsystems (now Oracle) to build a high-performance Java server application.

What hardware do you run the platform on?

We started out using Sun boxes with SPARC processors, but migrated to the x86 processor architecture a while back. That has engendered a healthy competitive atmosphere among Sun, HP and Dell, the server manufacturers we use.

We work closely with these manufacturers, who provide us with pre-release chip samples for advance testing. For example, we are currently testing the latest Sandy Bridge processors. We will typically start testing a new processor architecture about six months before it goes on general release so that we can tune our platform to it. That is a task that has been made considerably faster for us by the use of Java.

What are your views on parallel processing technologies such as GPUs and FPGAs?

We've obviously looked at GPUs and FPGAs, but haven't done a huge amount with either as yet. I think FPGAs could certainly be applicable to high-frequency processing of message traffic, but by comparison GPU technology seems less relevant to an exchange environment. It is perhaps better suited to some of our members who might need to run multiple instances of option pricing models (especially highly iterative models such as Cox-Ross-Rubinstein) across a large option portfolio.
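
For readers unfamiliar with it, the Cox-Ross-Rubinstein model referred to here is the standard binomial-tree option pricer. The minimal Java sketch below is a generic textbook implementation (not anything CBOE or its members run); it shows why the model is iterative - terminal payoffs are built and then stepped backwards through the tree - and why running it across thousands of series parallelises naturally.

```java
// Generic Cox-Ross-Rubinstein binomial pricer for a European call:
// S = spot, K = strike, r = risk-free rate, sigma = volatility,
// T = time to expiry in years, n = number of tree steps.
final class CrrPricer {

    static double europeanCall(double S, double K, double r,
                               double sigma, double T, int n) {
        double dt = T / n;
        double u = Math.exp(sigma * Math.sqrt(dt));    // up factor
        double d = 1.0 / u;                            // down factor
        double p = (Math.exp(r * dt) - d) / (u - d);   // risk-neutral up probability
        double disc = Math.exp(-r * dt);               // one-step discount factor

        // Payoffs at expiry for each terminal node.
        double[] values = new double[n + 1];
        for (int i = 0; i <= n; i++) {
            double terminalSpot = S * Math.pow(u, i) * Math.pow(d, n - i);
            values[i] = Math.max(terminalSpot - K, 0.0);
        }

        // Backward induction through the tree - the iterative part.
        for (int step = n - 1; step >= 0; step--) {
            for (int i = 0; i <= step; i++) {
                values[i] = disc * (p * values[i + 1] + (1.0 - p) * values[i]);
            }
        }
        return values[0];
    }

    public static void main(String[] args) {
        // Converges towards the Black-Scholes value as n grows.
        System.out.println(europeanCall(100.0, 100.0, 0.05, 0.2, 1.0, 500));
    }
}
```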

At present our focus is more on parallelism in terms of how we design our applications and message flows to take advantage of our symmetric multiprocessing multi-CPU environment. By contrast, some exchanges try to keep everything on a single core, which pretty much limits their platform to the speed of that individual core and misses out on the performance opportunity inherent in threading across multiple cores/CPUs.
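
As a hedged illustration of that kind of design (not CBOE's actual architecture), one simple way to thread work across cores while preserving per-instrument ordering is to shard messages by instrument onto single-threaded workers; the ShardedMatcher name and structure below are hypothetical.

```java
// Hypothetical sketch: shard messages by instrument ID so that traffic for a
// given instrument is always handled by the same single-threaded worker
// (preserving per-instrument ordering without locks), while different shards
// run in parallel across cores.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class ShardedMatcher {
    private final ExecutorService[] shards;

    ShardedMatcher(int shardCount) {
        shards = new ExecutorService[shardCount];
        for (int i = 0; i < shardCount; i++) {
            shards[i] = Executors.newSingleThreadExecutor();
        }
    }

    void submit(long instrumentId, Runnable matchTask) {
        int shard = (int) ((instrumentId & Long.MAX_VALUE) % shards.length);
        shards[shard].execute(matchTask);
    }

    void shutdown() {
        for (ExecutorService s : shards) {
            s.shutdown();
        }
    }
}
```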

It sounds as if the exchange has a strong preference for building rather than buying technology?

Yes - I think we are pretty hands on in that respect. My division consists of about 250 people, of which about 150 are on the development side, with the remainder on the operations side, looking after things such as hardware and network. We also augment the programming side with on average about 100 consultants at any one time.

However, our philosophy as regards consulting is that we never farm out anything that involves a core design or strategic technology. We primarily farm out assurance testing, configurations and so on. Where we do use outside consultants for programming, the completed work is always turned over to us and supported by us in-house.

How easy is it to upgrade your core matching engine and network capacity?

We can add servers or switches overnight because we designed our system architecture so that it is inherently scalable. We continuously monitor capacity, load and distribution and always look to have a significant cushion in excess of any anticipated demand. In the past we used to work on the basis of having processing and network capacity available that was approximately twice the previous peak load, and while we no longer rigidly adhere to that, we generally tend to be at about that level anyway.

The interesting thing as regards network capacity is that, with everybody now migrating to 10Gbit, more than sufficient bandwidth headroom is probably already available - plus there is the reduced latency advantage of migrating. As a result, we really don't see network bottlenecks as a potential issue.

What networking technology do you use?

We use Arista for network switching and Solarflare for network cards. On the switch side, Arista and a few other vendors are providing technology that is much faster than the mainstream. Every time you go through a switch a copy of the message is made, and how the vendor software within the switch processes that message obviously has a major effect on performance. It can easily make the difference between an in/out time of 4µs and one of 40µs. When choosing a switch vendor, we did a "bake-off" between Arista and several major network vendors, and Arista proved to be the fastest in terms of both 10Gbit switching and switching more generally.

On the network card side, one of the most significant changes in recent years has been the much higher performance available from specialist as opposed to generic vendors. Cards from companies such as Solarflare have a significant advantage through features such as their OpenOnload network stack, which allows kernel bypass. As a result, some or all of the TCP stack processing is done on the network card rather than by the operating system. The performance difference this delivers effectively makes using this sort of technology mandatory if you want to be competitive in terms of network latency.
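
One attraction of this approach, as commonly deployed, is that kernel-bypass stacks such as OpenOnload are typically enabled by preloading an accelerated socket library, so application code written against standard sockets usually does not need to change. The Java sketch below is a generic multicast market-data receive loop of the kind such acceleration targets; it is our illustration under that assumption, not a description of CBOE's deployment, and the multicast group and port are hypothetical.

```java
// Generic multicast market-data receive loop using standard Java sockets.
// Kernel-bypass stacks such as OpenOnload are commonly enabled by preloading
// an accelerated socket library, so code like this would typically not need
// to change (an assumption about common practice, not CBOE's deployment).
// The multicast group and port below are hypothetical.
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class FeedReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.1.1");
        try (MulticastSocket socket = new MulticastSocket(30001)) {
            socket.joinGroup(group);
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true) {
                socket.receive(packet);  // blocking receive of one datagram
                // ... decode quote/trade messages from packet.getData() ...
            }
        }
    }
}
```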

How do you feel you stack up against the competition in terms of network performance?

That's the key question. We think we stack up pretty well, but not just in terms of round-trip times. An order-based exchange deals with high-frequency short orders to and from the exchange. By contrast, with an options exchange you might have 2,000 series for a single option class, all with separate markets and bid/asks. So on the options side you are dealing with quote packets that can be of widely differing sizes, with as many as 400 different quotations in a packet. Therefore, when discussing latency and round trips you have to consider what sort of load you're talking about.

The important thing isn't so much what you might measure on the exchange side, but what the member firms actually see and experience. So when comparing ourselves with other exchanges, we always seek input from trusted firms who tell us what type of relative latency they see between us and our major competitors. Based on that information we're comfortable that at the moment we are doing well.

They report to us not just in terms of individual point-to-point latency but also as regards consistency of latency, as they are obviously very concerned to keep jitter to a minimum. The other thing they are concerned about is access to market data - how long is the period between a new piece of market information being created on the matching engine and them actually receiving it?

As the exchange expands, how do you manage issues such as rack space capacity?

We run multiple data centres, with the primary data centre here in our building in Chicago, while for two of our other exchanges the primary data centre is in the New Jersey area, in an Equinix facility. (We also offer colocation facilities for market participants in both of these locations.) We have already built out considerably more space over and above that currently needed by existing racks and cabinets.

What's your take on processor density? Has the dynamic of processing demand versus physical space changed?

I don't think we need to install new hardware as quickly as we might have had to in the past, because processor density has improved in terms of heat output and power consumption. As a result, the competition becomes more about the level and quality of the software you install. When you look at the most recent generations of x86 chips, there is less distinction between individual CPU generations in terms of outright speed than there used to be. The software that you use and the networks that you deploy make the difference in what is now a microsecond rather than millisecond competition.

Do you make any distinction between automated and manual traders?

No. If they are market-makers, manual traders on the floor are still required to stream in quotations. They may be standing on the floor hoping to get walk-in business, but they are also required to make markets, so they will have a server somewhere streaming quotations in to us. They are treated just the same as if they were not physically present on the floor.

Do you impose quotas on the ratio between trades consummated and messages?

No, we don't. However, we do charge for quotation traffic and packets, so participants have a cost incentive to send only realistic quotes in the first place.

Have you detected any perceptible slowdown in the growth of quote traffic?

Well, the past month hasn't been so high, but on November 15, 2011 we generated a quote peak of 847,000 messages per second. Obviously as an options exchange our capacity to create market data traffic is enormous.

I still think the ratio between trades and quotations will continue to increase because high-frequency traders are not yet as commonplace in options as they are in other markets, such as stocks. However, they are now starting to enter the options world and as they do so they transmit a very large volume of quotes. We actually see this high-frequency trading starting to come from two sources. On the one hand are existing high-frequency stock traders entering the options market. On the other hand, existing traditional options market-making firms are starting to employ high-frequency techniques.

What's your take on the effect of high-frequency trading on the market in terms of accessibility and actually getting trades done?

I think it's been good. The more liquidity represented in the marketplace, the better the marketplace.

Any particular category of trading entity or business you see the exchange attracting in the future?

I think these days you really do have to take some sort of risk in improving the market. In the US we have nine options exchanges and we cannot trade through each other. Therefore the really successful people are those who are willing to improve the national market and attract order flow to it. So whether you are trading high-frequency or doing something else, you still need a low-latency platform to do it on.

Most of our traffic still comes from people we have been dealing with over many years. Having said that, I think we could see legislation being an important influence on our participant demographic and instrument types in the future. Dodd-Frank will result in more exchange-based trading, but as yet it's not clear to what extent markets such as over-the-counter swaps will be affected. However, even without Dodd-Frank, the effect of the global financial crisis combined with continuing economic uncertainty has been to drive participants towards exchange trading in order to access guaranteed central counterparty clearing. That is obviously pretty key to managing counterparty credit risk.

We think there is also a processing standardisation opportunity inherent in trading OTC instruments on an exchange, which reduces the amount of manual intervention and paperwork often involved in the back office when dealing in these instruments off-exchange today. In fact, we are currently working with our clearing organisation (OCC) on precisely these questions of processing standardisation.

One of the things that we think will help us as regards attracting OTC activity is our flexible options concept, where our CFLEX technology makes it possible to quickly amend things such as expiry dates, strikes and instruments.

I think that combination of shifting demand and flexible technology makes it a realistic possibility that we might one day see OTC products trading on exchange at volumes similar to existing standardised products.