The Gateway to Algorithmic and Automated Trading

The Boys from Balchug

Published in Automated Trader Magazine Issue 02 July 2006

Dmitry Bourtov runs the US-based hedge fund Solaris, as well as heading a group at the Moscow offices of data vendor CQG that combines the activities of a quant shop and a specialist financial software developer. Bourtov talks to AT about the process by which he and his team design and manage a fleet of automated trading models across hundreds of markets around the globe.

Dmitry Bourtov, principal of hedge fund Solaris

Bourtov and St Basil's

You head an interesting group - partly building and trading automated models, and partly building software for CQG. How does this all work in practice?

The team consists of seven people, including myself. Two of them are what you might call researchers, both with a strong background in developing complex mathematical solutions. (One comes from the Landau Institute for Theoretical Physics, the other from the Russian Space Research Institute.) They have been responsible for refining some of the methods we use, such as cluster analysis (a mathematical procedure used to reveal groups of interdependent variables) and a variety of other sophisticated techniques.

There are four other members of the team who are usually more directly involved in trading model design and programming, including myself and one option specialist. Finally we have one dedicated programmer/developer.

The team has two roles. On the one hand we develop concepts and prototypes that may be included in the main CQG product line or just used internally. On the other, we build mechanical and automated trading models that we use for trading a variety of proprietary client accounts and an account for the US registered hedge fund, Solaris.


When did the team first become involved in automated trading?

We had been interested in the concept since the mid-1990s when the emergence of electronic exchanges began to make it feasible. However, it was not until mid 2000 that we actually went live with our first fully automated model. By that time most of us had been at CQG for a while, having first met at the company's original offices near Balchug Street.

Where it all started: Balchug Street

Is all your trading completely automated?

With the exception of those markets that are still pit traded, yes. However, the orders for those markets are all still generated by mechanical models - it is only the order entry that is done manually. We do no discretionary trading whatsoever.


What do you regard as the biggest challenges in developing automated models?

I think the main issue is the huge gap between back testing and real time trading. That is critical enough in a manual environment when you might be trading just a few markets. However, when you extrapolate this to a fully automated environment where hundreds of markets are being traded the potential impact is simply colossal.

For example, back testing makes the implicit assumption that everything operates in a synchronous fashion: an order appears and is filled on the next available tick, and no orders are lost due to communication failures, the exchange going down, or an additional 10 millisecond delay in the order reaching the exchange gateway, and so on. However, in the real world these things do happen.

Another problem with back testing is that it is commonplace to make assumptions about data that will never actually be replicated in real time trading. One trivial example is that when back testing many people use exit signals that take the market close as the exit price. In the real world it is very unlikely that you will be able to buy or sell exactly at the last price of the day; in practice you will need to execute your order a few ticks before the close of the final bar. To simulate that realistically when back testing, you need to use not conventional daily bars but adjusted daily bars that exclude the price ticks for, say, the last minute of trading.
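
By way of illustration, the short sketch below shows one way such an adjusted bar might be built from raw ticks. This is not CQG or Solaris code; the tick format and the one-minute cut-off are assumptions chosen for the example.

```python
from datetime import datetime, timedelta

def adjusted_daily_bar(ticks, session_close, cutoff=timedelta(minutes=1)):
    """Build a daily OHLC bar from (timestamp, price) ticks, ignoring the final
    `cutoff` of the session so the bar's close is a realistically tradeable price.
    Hypothetical helper for back testing close-based exit signals."""
    usable = [(ts, px) for ts, px in ticks if ts <= session_close - cutoff]
    if not usable:
        return None
    prices = [px for _, px in usable]
    return {
        "open": prices[0],
        "high": max(prices),
        "low": min(prices),
        "close": prices[-1],   # last price printed before the cut-off
    }

# Example: the final tick falls inside the last minute and is excluded
close_time = datetime(2006, 7, 3, 16, 0)
ticks = [
    (datetime(2006, 7, 3, 15, 58), 101.25),
    (datetime(2006, 7, 3, 15, 59), 101.50),
    (datetime(2006, 7, 3, 15, 59, 30), 101.75),  # dropped: within the last minute
]
print(adjusted_daily_bar(ticks, close_time))
```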

The Boys from Balchug

Meet the Boys: from left - Vlad Zhigalov, Eugene Dorofeev, Vitaly Kurbakovsky, Dmitry Bourtov, Sergey Repin, Grigory Gankin, and Alexander Finogenov


So you presumably spend a considerable amount of time working on that sort of data representation issue?

Yes, but that is only a comparatively simple example. There are more complex issues around event order. While database replay technology has moved on substantially in recent years, many of the popular model development platforms in use today still do not take advantage of these advances. They are incapable of replaying data at the tick by tick level. At the simplest level, this means that you cannot determine for certain the exact order of events within an historical price bar. For example, if a trading model trails entry stops above and below the market (and both stops are close enough to be hit during the same price bar), you will need to know which stop was hit first if you are to simulate historical performance accurately. If your development environment is incapable of tick by tick replay, this accurate simulation is impossible.
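
A minimal sketch of the point, in Python rather than any of the team's own tools: with only a bar's open, high, low and close you cannot tell which of two stops inside the range was touched first, but a tick-by-tick replay can. The function name and tick format below are illustrative assumptions.

```python
def first_stop_hit(ticks, buy_stop, sell_stop):
    """Replay ticks in time order and report which entry stop was hit first.
    `ticks` is an iterable of traded prices in chronological order (assumed format)."""
    for price in ticks:
        if price >= buy_stop:
            return "buy stop"
        if price <= sell_stop:
            return "sell stop"
    return None

# Both stops lie inside the bar's range (high 102, low 97.9), so a bar alone
# cannot resolve the order of events; the tick sequence can.
ticks = [100.0, 99.2, 98.4, 97.9, 101.1, 102.0]
print(first_stop_hit(ticks, buy_stop=101.0, sell_stop=98.0))  # -> "sell stop"
```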

The ability to scrutinise data at the finest granularity is also vital for many trades involving spreads or intermarket arbitrage conducted at high frequency in short time frames. Looking at less granular historical data, it is easy when back testing to fall into the trap of assuming "the price on this market was A, and on that market B, so the spread was C". Without looking at the tick data, you cannot be sure that prices A and B were ever available simultaneously. Yes, you might have been able to achieve the spread price by legging into the position, but that is by no means certain and of course also incurs additional market risk. That sort of discrepancy can make a substantial difference to model performance when you move from back testing to real time trading.
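
The simultaneity check Bourtov describes could be approximated along the following lines. Again, this is only a sketch; the tick format, timestamp tolerance and example prices are assumptions, not the team's actual method.

```python
from datetime import datetime, timedelta

def spread_ever_available(ticks_a, ticks_b, target, tol=timedelta(milliseconds=500)):
    """Check whether the spread (price_a - price_b) <= target was ever quoted with
    both legs' ticks within `tol` of each other. Illustrative brute-force scan."""
    for ts_a, px_a in ticks_a:
        for ts_b, px_b in ticks_b:
            if abs((ts_a - ts_b).total_seconds()) <= tol.total_seconds():
                if px_a - px_b <= target:
                    return True
    return False

t0 = datetime(2006, 7, 3, 14, 30, 0)
leg_a = [(t0, 105.0), (t0 + timedelta(seconds=5), 106.0)]
leg_b = [(t0 + timedelta(seconds=5), 100.0), (t0 + timedelta(seconds=10), 101.0)]
# Coarse data would suggest a 105 - 100 = 5.0 spread was available, but those
# two prices never existed at the same moment.
print(spread_ever_available(leg_a, leg_b, target=5.0))  # -> False
```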

Another issue is that when trading automatically, it is easy to overlook the significance of the fact that market data and order related data come through two different channels.

Since there are two distinct interfaces, you will receive order acknowledgements and quotes at different speeds. This means that you can have significant problems with order-cancels-order (OCO) orders if the orders concerned are close together. In the back testing environment, it is easy to assume that these orders will always be executed as intended. In a live situation, you can find that the automated cancel message does not arrive in time and both orders end up being filled.
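
A toy illustration of that failure mode: if the cancel only takes effect a couple of ticks after the first fill, a fast market can hit the second price in the meantime. The latency model and parameters below are deliberately simplistic assumptions.

```python
def simulate_oco(ticks, buy_limit, sell_limit, cancel_latency_ticks=2):
    """Toy simulation of an OCO pair when the automated cancel lags the market.
    `ticks` is a chronological list of traded prices; latency is expressed in
    ticks for simplicity. All parameters are illustrative assumptions."""
    fills = []
    cancel_effective_at = None
    for i, price in enumerate(ticks):
        if cancel_effective_at is not None and i >= cancel_effective_at:
            break  # the surviving order has finally been cancelled
        if price <= buy_limit and not any(side == "buy" for side, _ in fills):
            fills.append(("buy", price))
            if cancel_effective_at is None:
                cancel_effective_at = i + cancel_latency_ticks
        if price >= sell_limit and not any(side == "sell" for side, _ in fills):
            fills.append(("sell", price))
            if cancel_effective_at is None:
                cancel_effective_at = i + cancel_latency_ticks
    return fills

# With the two resting prices close together and a fast market, both legs can
# fill before the cancel message takes effect.
print(simulate_oco([100.4, 100.1, 100.0, 100.6, 100.7],
                   buy_limit=100.0, sell_limit=100.5))
```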


It sounds as if there is a whole layer of additional considerations you allow for when building automated models?

Yes, but a lot of these considerations apply to manual traders as well, and relate to the change from open outcry to electronic markets. For example, in the past, if you were not on the floor you had to pay the bid/offer spread. In electronic markets, that concept of "inside/outside" no longer applies, and people often assume that this simply means they no longer have to pay the spread. Up to a point that is true, but it overlooks the fact that they have no guarantee their order will ever be filled. If they join the bid or offer they are just joining the order queue and may never get to the top. Therefore, it is easy when back testing to fall into the trap of assuming that your orders are being filled when they may not be. That has significant performance implications when a model is transferred to the real time environment.
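
One common (though by no means the only) way to guard against that assumption in a back test is to treat a passive order as filled only when the market trades through its price rather than merely touches it. The sketch below illustrates that conservative rule; it is an assumption for the example, not necessarily the rule Solaris applies.

```python
def passive_buy_filled(ticks, limit_price):
    """Conservative back-test fill rule for a resting buy order: assume we were at
    the back of the queue, so require the market to trade *through* the limit,
    not merely touch it. Illustrative assumption, not a universal rule."""
    return any(price < limit_price for price in ticks)

# The bid at 100.0 is touched twice but never traded through, so this rule
# records no fill, whereas an optimistic back test would count one.
ticks = [100.5, 100.0, 100.0, 100.25]
print(passive_buy_filled(ticks, 100.0))  # -> False
```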

Moscow regional branch of the Central Bank of the Russian Federation (think NY Fed)


Would you say that the discrepancies between the back testing and real time environments you have outlined are always negative?

I would say that 95% of the time they are - i.e. the real time environment is tougher than the back testing one. It is not good enough just to pluck a figure out of the air and degrade your back testing results by that amount in order to simulate this. You have to be more specific, because different types of model are affected in different ways and to different extents by these "real world factors". We therefore spend a lot of time incorporating these real world negatives into our back testing environment, and then using that environment to stress test models with various combinations of costs, slippage and order routing failures.
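
A stripped-down sketch of that kind of stress test follows: it simply re-scores a back-tested trade list under a grid of cost, slippage and order-rejection assumptions. The parameter values and the random rejection model are illustrative only, not the team's actual framework.

```python
import itertools
import random

def stress_test(trade_pnls, costs, slippages, reject_rates, seed=42):
    """Re-score a back-tested trade list under combinations of per-trade cost,
    slippage (in P&L units) and a probability that the order was never filled.
    A simplified sketch of the idea; the parameter grids are assumptions."""
    rng = random.Random(seed)
    results = {}
    for cost, slip, reject in itertools.product(costs, slippages, reject_rates):
        total = 0.0
        for pnl in trade_pnls:
            if rng.random() < reject:
                continue                 # order lost: the trade never happened
            total += pnl - cost - slip
        results[(cost, slip, reject)] = round(total, 2)
    return results

trades = [120.0, -80.0, 45.0, 200.0, -60.0, 30.0]
grid = stress_test(trades, costs=[0.0, 5.0], slippages=[0.0, 10.0],
                   reject_rates=[0.0, 0.05])
for combo, pnl in grid.items():
    print(combo, pnl)
```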


So how do you go about actually deploying your automated trading models?

Having back tested a model satisfactorily, we will then paper trade it. The main intention is not just to check that the overall performance is comparable - we are looking for a trade by trade match. So if for example, we have paper traded a model for a month we will collate the results and then run a back test for the same month. The results are then cross checked and while there may be very minor discrepancies (such as the odd tick of slippage) any missing or significantly non-aligned trades will be investigated.

There are two objectives here. On the one hand, we want to make sure that we have not made any false assumptions as regards the specific model (such as a peek-ahead). On the other, this process gives us valuable general feedback about how accurately we are simulating the real market in our back testing process.
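
The trade-by-trade cross check might look something like the sketch below, which matches paper-traded fills against back-tested fills and flags anything significantly non-aligned. The trade format, tolerance and example symbol are assumptions made for the illustration.

```python
def reconcile(paper_trades, backtest_trades, price_tolerance=1.0):
    """Cross-check paper-traded fills against back-tested fills trade by trade.
    Each trade is a (symbol, side, price) tuple in execution order; prices within
    `price_tolerance` of each other count as a match (illustrative format)."""
    mismatches = []
    for i, (p, b) in enumerate(zip(paper_trades, backtest_trades)):
        if p[:2] != b[:2] or abs(p[2] - b[2]) > price_tolerance:
            mismatches.append((i, p, b))
    # Any extra trades on either side are automatically suspect
    extra = paper_trades[len(backtest_trades):] or backtest_trades[len(paper_trades):]
    return mismatches, extra

paper = [("ES", "BUY", 1270.25), ("ES", "SELL", 1274.50)]
back  = [("ES", "BUY", 1270.50), ("ES", "SELL", 1268.00)]
# The first trade differs by the odd tick of slippage; the second is
# significantly non-aligned and would be investigated.
print(reconcile(paper, back))
```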

If the model passes this test it will be deployed in the live market. Even here, we continue the cross checking process by having a separate machine still paper trading the model and comparing the output with the results generated in live trading. If any trades are not perfectly synchronised, this machine issues an alert.

We also run real time checking of all open position information and the available equity across all our accounts. This process is particularly important for options trading where we have also built a real time algorithm that provides a close approximation of the CME's SPAN™ engine. This allows us to model the margin expectations of any positions the model might take and thereby calculate pre-trade risk/reward/cost metrics. (While this sort of SPAN™ data is available from exchanges, it is only available at the end of day, which is obviously too late for our purposes.)
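
For readers unfamiliar with SPAN, the general idea of a scenario-scan margin estimate can be sketched as follows. This is emphatically not the CME's algorithm or its risk arrays, nor the team's approximation; it only illustrates the principle of revaluing a position across a grid of price and volatility shocks and taking the worst loss.

```python
def scenario_margin(position_value_fn, price_moves, vol_moves):
    """Very rough scenario-scan margin estimate in the spirit of SPAN: revalue the
    position under a grid of underlying-price and volatility shocks and take the
    worst loss as the margin requirement. A sketch of the idea only."""
    base = position_value_fn(0.0, 0.0)
    worst = 0.0
    for dp in price_moves:
        for dv in vol_moves:
            loss = base - position_value_fn(dp, dv)
            worst = max(worst, loss)
    return worst

# Hypothetical revaluation function: a short option position that loses money
# as the underlying moves in either direction and as volatility rises.
def short_option_value(dp, dv):
    return -100.0 * abs(dp) - 500.0 * dv

print(scenario_margin(short_option_value,
                      price_moves=[-3, -2, -1, 0, 1, 2, 3],
                      vol_moves=[-0.02, 0.0, 0.02]))
```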

Like it says - Balchug Street


What about monitoring the trading models?

We have a rota whereby one member of the team per day is dedicated to tracking all the models and ensuring that everything is running smoothly. As per the example above, they have a range of monitoring tools to assist them in this.


What is the typical development cycle for an automated trading model and what is the rejection rate?

As you might expect, it varies considerably, but somewhere between three and six man-months is probably the average time taken from concept to going live. It can be a lot longer though - we have one model that we have spent three man-years on so far.

As regards the rejection rate, I would say that about 95% of the models we come up with will, after extensive testing, be discarded. However, they may not stay discarded. Sometimes we find that a change or advance in technology makes a model we had previously rejected viable.

In-House Tool: Pattern Recognition


How would you say your models were split in terms of market making, arbitrage and directional logic? And is any particular category preferred for particular markets?

We have models in all three categories, but directional models probably make up the majority. In general, I wouldn't say that any particular approach was tied to a particular type of market. However, as regards options, there is perhaps a bias in favour of arbitrage approaches, as we feel that we have some techniques that give us an advantage in spotting mispriced options.


Are there any markets that you find are not amenable to automated model trading?

Not as regards the business logic, no. However, option markets are an interesting challenge for us in terms of technology. Historically, option market-making was always a request-for-quote market - a price was made only at the time it was requested. That is no longer the case, so our automated models now have to place orders across hundreds of strikes and expiries. If the underlying market moves rapidly, you cannot just move all five hundred orders in a millisecond, so keeping our portfolio balanced in that sort of environment is obviously very challenging and an ongoing area of research for us.
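
One plausible (and purely hypothetical) way to ration repricing bandwidth in that situation is to move the most price-sensitive quotes first, as in the sketch below. The data structure and the delta-based ranking are assumptions, not a description of the team's engine.

```python
def reprice_priority(quotes, max_per_cycle=50):
    """When the underlying jumps, not every resting option quote can be moved at
    once; one plausible rule is to reprice the most price-sensitive quotes first.
    `quotes` maps an (expiry, strike) key to the quote's absolute delta."""
    ranked = sorted(quotes.items(), key=lambda kv: kv[1], reverse=True)
    return [key for key, _ in ranked[:max_per_cycle]]

quotes = {("Sep06", 1250): 0.62, ("Sep06", 1350): 0.08, ("Dec06", 1275): 0.47}
print(reprice_priority(quotes, max_per_cycle=2))  # deepest-delta quotes move first
```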


Would you say that your higher frequency automated trading models require radically different business logic?

The same general principles in terms of risk/reward will still apply. The logic that generates buy/sell orders is also probably broadly comparable in approach to that used for longer time frames. However, I think the way in which electronic markets function has opened up additional opportunities for high frequency models that incorporate depth of market analysis.
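
As a simple example of the kind of depth-of-market feature such models might use, consider a size imbalance over the top few levels of the book. The calculation below is a generic illustration; the interview does not disclose which depth features the team actually trades.

```python
def book_imbalance(bids, asks, levels=5):
    """Simple depth-of-market feature: size imbalance over the top `levels` of the
    book, in [-1, 1]. Positive values mean resting buy interest outweighs sell
    interest. Each side is a list of (price, size) pairs, best price first."""
    bid_size = sum(size for _, size in bids[:levels])
    ask_size = sum(size for _, size in asks[:levels])
    total = bid_size + ask_size
    return 0.0 if total == 0 else (bid_size - ask_size) / total

bids = [(100.00, 250), (99.75, 180), (99.50, 90)]
asks = [(100.25, 60), (100.50, 75), (100.75, 40)]
print(round(book_imbalance(bids, asks), 3))  # -> strong bid-side imbalance
```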

The snag with this is that historical depth of market data is only just starting to become commercially available, which rather hinders rigorous back testing. That is one reason why we believe that while high frequency auto trading adds an extra layer of tradable opportunity to the market, it isn't the Holy Grail. In general, we feel there are more opportunities in the portfolio distribution possibilities (markets, models, parameter sets, timeframes etc) of automated trading.


So how many automated models are you running across how many machines?

We are probably running several hundred models at any one time. That may sound a lot, but we classify models that use the same basic logic but radically different parameter combinations as separate models.

For actual order execution we use just three servers, though there are additional machines used for real time risk management and alerts.

Dmitry Bourtov, principal of hedge fund Solaris

Bourtov on the trading floor


What tools and technology do you use?

We obviously use CQG and some of the complementary tools we have developed for that. That aside, we are completely agnostic as regards the technology we use. All we are concerned about is that it fulfils the needs that we have. We use a very wide range of languages/tools, including C++, C#, VBA and MATLAB. (MATLAB is also extensively used in a program we run for students at the Moscow Institute of Electronic Engineering. Of these students, some two or three per year take an additional diploma in trading model development.)

In addition, we have built a number of specialised tools to help us in our research, such as statistical libraries. We also use a number of libraries we have built for digital signal processing and certain mathematical transformations. As you might expect, if we find ourselves regularly writing the same piece of code, we will simply rewrite it as a callable library for ease of reuse.


In the past you have built some tools for CQG that use artificial intelligence techniques. How many of these are actually used in your automated trading?

We do use them - but not to try and predict where the market will close tomorrow! We have found them useful for discovering hidden relationships and lagged correlations, so we are primarily using them as sophisticated data mining tools. For example, we have a pattern recognition tool that we built that is based on a blend of various AI techniques. We use that extensively in our option trading as we have found it useful for identifying patterns that predict significant volatility shifts.
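
A plain statistical version of the lead/lag mining Bourtov mentions might look like the sketch below, which scans the correlation between one series and lagged copies of another. It is a toy stand-in for the AI-based tools described, run on made-up data.

```python
def lagged_correlation(x, y, max_lag=5):
    """Scan correlations between series x and lagged copies of series y to surface
    lead/lag relationships - the kind of data-mining step described above,
    reduced to a plain Pearson-correlation sweep."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        va = sum((ai - ma) ** 2 for ai in a) ** 0.5
        vb = sum((bi - mb) ** 2 for bi in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0
    return {lag: round(corr(x[lag:], y[:-lag]), 3) for lag in range(1, max_lag + 1)}

y = [1, 2, 1, 3, 2, 4, 3, 5, 4, 6]
x = [0, 1, 2, 1, 3, 2, 4, 3, 5, 4]   # x simply repeats y with a one-step delay
print(lagged_correlation(x, y))       # lag 1 shows a perfect correlation
```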


Who comes up with the ideas for models?

When it comes to the business logic, all the ideas originate from within the team - none of our models are "bought in". Everyone in the team tends to suggest ideas, and there is always a great deal of ongoing discussion taking place about possible techniques and approaches.


How does market connectivity affect the deployment of your models?

The improvements in market connectivity in recent years have allowed us much more flexibility as to how we deploy models. There is always something of a trade-off: housing models in a co-location facility close to the exchange reduces latency, but means there are more issues involved in managing them. As a result, we tend to operate on a case-by-case basis. In general, the most time-sensitive high frequency models will be housed in a co-location facility, while longer term models will be run from servers here in Moscow.