The Gateway to Algorithmic and Automated Trading

What do the traders say?

Published in Automated Trader Magazine Issue 21 Q2 2011

For the second instalment of our two-part opener for our new regular Buyside Beat feature, David Dungay spoke to traders and other buysiders about the many challenges they face in the quest for alpha. This month, the discussion ranges over the development and implementation of new trading ideas, and then moves on into the life-span of models before finally addressing the feasibility and usage of adaptive parameters in modelling. [Buyside Beat part one may be found in the Q1 2011 issue, page 18]

On the BUYSIDE BEAT this month

Dmitry Bourtov, CEO, Unimarket Corp
John Reeve, CTO and head of trading, BlackCat Capital
Miles Kumaresan, Principal and Head of Trading, Algonetix
Fred Pederson, Business Development Manager, Vincorex AG
Thomas Parry, FX Trader, Algotecture
Dr Peter Wiesing, Founder and CEO, Global Arbitrage Group

David Dungay: Do you have a formal pipeline process for the development of new trading ideas?

Dmitry Bourtov: Yes we do. We have found our best software has come from within. Once you develop one system you have an idea of the process.

John Reeve: We have a formal approach to testing and introducing new strategies into production, and we have developed a comprehensive set of tools to speed this process. However, the creative process of making new discoveries cannot really be formalised, as inspiration arrives in its own time. We have also developed tools to aid market characterisation, and this helps with the discovery process.

Miles Kumaresan: The brains of most quant traders never stop working, looking for new opportunities or improving existing ones. Whether we are in a meeting or helping our kids with homework, we are always thinking; we live and breathe models. So the solution to a problem comes at the most unexpected moment, and you then refine that solution in your head for days.

Only at this point, when you know a fair bit about the new trading opportunity, do the long lunches come to a sudden end and the formal pipeline kick in. The team will enthusiastically work around the clock to analyse the opportunity methodically and rigorously before finally modelling it.

Fred Pederson: Yes. If we enter a new market and we don't know exactly what our slippage will be, how we actually perform in this market, or how stable the correlations we see from the historical data are, back tests are not sufficient.

We let the strategy trade in a dummy mode, which means it is fed a live feed but does not send its trades to the exchange. We see how it performs for a couple of days, and after that we switch it into live trading mode but with a decreased risk limit; we won't enter into large positions in this phase. Then, when this works for a few days, we turn on the full algo and trade normally. It is about a week-long process.
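For readers who want to picture the mechanics, a minimal sketch of such a staged go-live follows. The phase names, day counts and risk fractions are illustrative placeholders, not Vincorex's actual settings.

```python
# Sketch of a staged go-live as described above: dummy (shadow) mode on a live
# feed, then live trading at reduced risk, then full size.
# All thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    send_orders: bool      # dummy mode: consume the live feed, send nothing
    risk_fraction: float   # fraction of the normal risk limit
    min_days: int          # days of acceptable behaviour before promotion

PHASES = [
    Phase("dummy",        send_orders=False, risk_fraction=0.0, min_days=2),
    Phase("reduced_risk", send_orders=True,  risk_fraction=0.2, min_days=3),
    Phase("full",         send_orders=True,  risk_fraction=1.0, min_days=0),
]

def next_phase(current: int, days_ok: int) -> int:
    """Promote the strategy once it has behaved as expected for long enough."""
    if current < len(PHASES) - 1 and days_ok >= PHASES[current].min_days:
        return current + 1
    return current
```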

Thomas Parry: Not really; I think the best way to describe it would be iterative or waterfall. It's a constant process. I tend to stick with some core principles (i.e. "buy low/sell high") and then optimise the models according to the instrument being traded and the market conditions. There are usually a number of trade-offs in the actual development process. When I first started developing models I thought everything had to be 'perfect' before it was released to the market. Now I feel it's probably more important to be in the market trading, and have the market tell you whether you're right or wrong, than to try to build a perfect model for imperfect markets. One caveat is that certain features of the system, such as PnL and position calculations, do need to be perfect. Once the models are in the market trading, I spend a lot of time digging through logs and market data, making sure they are doing what we intended them to do and looking for possible ways to improve the models in the next generation.

Peter Wiesing: Yes, our investment ideas are entirely rule-based and undergo a clearly defined process.

In general we pursue the development of investment strategies based on sophisticated quantitative analysis and modelling of capital market characteristics. We have a strong focus on the systematic examination of financial data from international markets at different temporal granularity.

The process always starts with a very precise idea/hypothesis. Given the idea, we collect and prepare the necessary historical data to test it. We then examine which models represent the underlying effects in an optimal way and assess the models' predictive capability. We then run the strategy on a paper trading account and finally on a prop trading account. Once we are convinced of the superior quality of an investment strategy, it can be tailored to the individual needs and constraints of our clients.

David Dungay: In your view does proprietary technology accelerate the process of getting viable trading ideas into production? Or is off the shelf technology adequate?

Dmitry Bourtov: If you mean for the decision making between two typical trades, on entry and exit, then off the shelf is OK. You will find tons of publicly available software and you won't need to develop anything else. You might be working on some model-specific mathematics and need to develop something, but infrastructure-wise this stuff is all available. Your entire work in this case relates only to system logic. If you are trying to compete in practice with various spreading applications and develop something really high frequency, I would say this would normally be built from scratch. You would interface directly to the provider and you would develop in C++ or Java, or equivalent. The system infrastructure required to support the trading systems' development will be much larger than the trading system logic itself.

John Reeve: Proprietary technology is key to our trading performance. The code base we employ has been developed over nine years and has benefited from continuous use and improvement, enabling effective and rapid deployment in a fully automated execution environment. A difficulty with off-the-shelf technology is that the person who designed it may not have been thinking in the same way, and key functionality may be missing from the APIs. If this happens, a large amount of time can be wasted implementing a work-around and the results will likely be non-optimal.

Miles Kumaresan: I only use proprietary technology (with the exception of our tick database and Matlab) as this hugely affects time to market and accuracy. I can pick the core of the model used in analysis and plug it into our proprietary system for accurate simulation and later go live with no additional coding. Getting data to and from the model is seamless as we have a generic framework for all models. This type of convenience comes at the cost of initial investment of several man-years and experience, but once done it is invaluable.

There are some classes of systematic strategies that would benefit from off the shelf technology and it would also be the most cost efficient option. Furthermore, the maintenance overheads are small and speed to market is short. The last time I looked there were some really good ones out there.

Fred Pederson: I can only really speak for us here. If you have a strategy then you try to develop everything to support that strategy in the best possible way. In my opinion this is usually easier when you do everything yourself. If you want to optimise something later and adjust the code, it's easier to do so if you have done everything yourself. The other benefit is that you are in control. If you buy software and something goes wrong then it can be difficult to fix; if you do everything yourself then you know where to look for the solution. When you buy off the shelf HFT technology I doubt you could rely on it to be bug free.

If you get off the shelf technology you might be faster deploying your ideas, but it does not mean you are better. The performance won't be better. Later on, when you want to make a minor modification to the code, or to how your trader gets the feed, or anything like that, you will be able to do that much faster if you own all the code. Another advantage is that you learn more about the microstructure of the markets when you dig into different exchange-specific protocols yourself. This can also help in optimising order execution.

Thomas Parry: I think that there is some great off the shelf technology available right now, but I hate buying off the rack. Granted, I use a number of open-source packages for Matlab/R for statistical and econometric analysis, but all of our actual trading software and market data analysis tools have been custom built around our needs. I think the key is to really know what tools are available, to avoid re-building common components unless they need to be customised/optimised for a particular need. There is probably an overabundance of tools out there, and finding which ones are right for you and being able to utilise them effectively is key.

Peter Wiesing: Proprietary technology helps a lot to speed up your product development. Within our firm all trading software (data API, data cleansing, trading models, FIX execution) is solely proprietary. We have limited experience with off the shelf technology, but feel that most of the time it does not fulfil your individual needs/requirements and you always have to extend/modify it. Obviously this does not apply to hardware or to services like dedicated lines and co-location servers.

David Dungay: What criteria/metrics do you use to determine whether a particular trading model is coming to the end of its productive life?

Dmitry Bourtov: I would look for a significant change in the risk/reward. I am speaking about a statistical factor, and the system would already have a risk tolerance laid out. If you have a good system and have normalised the return, you should get a normal distribution. If you see that your return has significantly shifted, and you measure it in terms of sigma, you get a good feeling for what is natural to the system. Or the data may significantly change the structure of the system, and it can adapt to these types of scenario. Statistically you can develop this metric and apply it continually to your systems. You can't conclude on the first day that a system is not valid any more, but after some time lag classical statistical analysis may lead you to conclude that the system has reached the end of its life.
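As a rough illustration of the sigma-based check Bourtov outlines, the drift of the live mean return away from the backtest distribution can be expressed in standard errors. The snippet below is a sketch under the assumption of roughly normal, independent daily returns; the review threshold and window length are left to the reader.

```python
# Illustrative sigma-shift check on normalised daily returns, in the spirit of
# the approach described above. The threshold is arbitrary.
import numpy as np

def return_drift_sigma(backtest_returns, live_returns):
    """How far (in standard errors) the live mean return sits from the
    backtest mean, assuming roughly normal, independent daily returns."""
    bt = np.asarray(backtest_returns, dtype=float)
    lv = np.asarray(live_returns, dtype=float)
    se = bt.std(ddof=1) / np.sqrt(len(lv))   # standard error of the live mean
    return (lv.mean() - bt.mean()) / se

# Example policy: flag the system for review only once the shift exceeds,
# say, 3 sigma over a reasonably long live sample -- not on the first bad day.
```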

John Reeve: Understanding why a model makes money is key to understanding the conditions under which it might no longer work. We use this knowledge to select parameters to monitor in production trading as an ongoing health check for each strategy.

Miles Kumaresan: This is a tough problem. Simple measures of tracking live performance stats against those from backtests do not work in most cases, due to the error in calculating stats without taking the underlying market state into consideration. Since we experience frequent structural changes in the market, more sophisticated measures should be used. This becomes particularly critical for lower-Sharpe strategies, which really benefit from a metric to spot their end.

Instead we work with performance signatures. All equity curves have a set of key signatures. It is important to link these signatures to the underlying market conditions or drivers that your model is sensitive to. This is a non-trivial exercise.

Once done, conditional performance drift measures will break down the performance, or lack of it, in a way that permits classification.
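One possible reading of "conditional performance drift", sketched below, is to compare live and backtest returns within each market regime rather than unconditionally. The regime labels are assumed to come from a separate classifier, and the grouping is illustrative only, not Algonetix's actual method.

```python
# Illustrative conditional performance breakdown: compare live and backtest
# returns within each market regime rather than unconditionally.
from collections import defaultdict
import numpy as np

def conditional_drift(live, backtest):
    """Mean live return minus mean backtest return, per regime.

    live, backtest: iterables of (daily_return, regime_label) pairs.
    """
    live_by_regime = defaultdict(list)
    ref_by_regime = defaultdict(list)
    for ret, regime in live:
        live_by_regime[regime].append(ret)
    for ret, regime in backtest:
        ref_by_regime[regime].append(ret)
    common = set(live_by_regime) & set(ref_by_regime)
    return {regime: np.mean(live_by_regime[regime]) - np.mean(ref_by_regime[regime])
            for regime in common}
```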

Fred Pederson: We are working towards changing these models on a daily basis; for now we just change them weekly or fortnightly. In the end we don't want to think about criteria any more. All the trading configurations will change themselves and their model based on the new historical data feeding into them.

Thomas Parry: Besides some of the more common performance metrics, such as net PnL or maximum drawdown, I like to look at each model's edge per USD million traded for my spot FX models (or edge per contract for futures). We also look at our return in terms of exposure in the market, and at the ratio of our net PnL to commissions, to determine how effectively we are allocating our risk capital.
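These metrics reduce to straightforward ratios. The sketch below shows one hypothetical way to compute them; the field names are invented for illustration rather than taken from Algotecture's definitions.

```python
# Minimal sketch of the ratios mentioned above; all field names are hypothetical.
def edge_per_million(net_pnl, notional_traded_usd):
    """Net PnL per USD 1 million of notional traded."""
    return net_pnl / (notional_traded_usd / 1_000_000)

def return_on_exposure(net_pnl, avg_gross_exposure):
    """Return expressed relative to average gross exposure held in the market."""
    return net_pnl / avg_gross_exposure

def pnl_to_commission(net_pnl, commissions_paid):
    """How many dollars of net PnL each dollar of commission buys."""
    return net_pnl / commissions_paid
```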

Peter Wiesing: We use sophisticated statistical tests to determine whether the recent risk/return profile of our trading model is consistent with the expected risk/return profile of the particular strategy. However, you need a significant number of trades to verify whether your realised profit/loss is consistent with your expectations. Especially for long-term strategies, such as global macro strategies, it is impossible to predict when a particular strategy is coming to the end of its productive life. This is different in the high-frequency space, where you collect a lot of trade profits/losses in a short period of time.
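Wiesing does not name the tests used. A common starting point would be a one-sample t-test of realised per-trade PnL against the expected per-trade mean, as sketched below with SciPy; this is an assumption for illustration, not necessarily the firm's method.

```python
# One possible consistency check (not necessarily the tests used at Global
# Arbitrage Group): a one-sample t-test of realised trade PnL against the
# expected per-trade mean from backtesting.
import numpy as np
from scipy import stats

def consistent_with_expectation(trade_pnls, expected_mean, alpha=0.05):
    """Return (is_consistent, p_value) for H0: realised mean == expected mean."""
    t_stat, p_value = stats.ttest_1samp(np.asarray(trade_pnls, dtype=float),
                                        expected_mean)
    return p_value >= alpha, p_value
```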

David Dungay: What criteria/metrics do you use to determine whether such a model might be revivable through tweaking or needs scrapping completely?

Dmitry Bourtov: We would use the same criteria. Normally, the fewer parameters a model has, the better it is for final usage. If your model has two dozen parameters which you need to tweak every single day, or every week, the model probably has zero life expectancy.

John Reeve: If the fundamental reasons why one of our strategies was profitable failed, then the only action would be to stop trading it. In the time we have been trading, we have not seen this happen to a single strategy though some have been replaced by better strategies as we have made new discoveries.

Miles Kumaresan: It is in many ways the reverse of the previous question. You need to know what market conditions drive your models, or what they are sensitive to. Once you know this, it is a question of having a simple tracker model looking for these conditions to be met, to determine whether it is a good period for a particular model. In my experience, some market opportunities simply vanish forever while many others recur.

Fred Pederson: That would be the same. I wouldn't be thinking of reviving anything.

Thomas Parry: They are the same as above.

Peter Wiesing: We constantly work on existing models. Sometimes a particular approach does not work anymore; the opportunity might still exist, but it needs a different method (e.g. linear vs. non-linear) to utilise the potential profit. If the opportunity disappears then we might be forced to scrap the model completely.

David Dungay: Parameter-less self-adaptive models - reality or fantasy? If reality, do they have longer profitable life expectancy?

Dmitry Bourtov: It's a reality. A good system is one that is not trying to tweak parameters that are optimal for a particular data series; it is normally something that performs well on average. Practically speaking, you don't want your morning results to come from the far tail of your distribution. You want them to sit in the middle of the distribution and not deviate too much. You want to know the typical return over the range, and that there is not a better range. Usually this comes when the system doesn't have too many parameters to adjust and is built around some sort of self-adaptation mechanism.

A lot of these things were developed years ago and people have been using elements of self-adaptation successfully for a while. I think lots of people are using it, but the question is how many people are utilising it.

John Reeve: All our models contain an element of self-adaptation. It is not possible, from a risk perspective, to have unconstrained adaptive models. Parameters are required to place limits on behaviour.

Miles Kumaresan: It is a reality but I have to clarify this first. You can create models that use adaptive parameters. This means models have parameters but they can autonomously change them in response to changes in market conditions. We have been doing this for years and there are others out there who do this too.

We view adaptive parameters as critical to creating resilient models. Adapting to changes in market dynamics is one thing, but responding optimally to many changes is another. Therefore, I also calibrate the models manually once or twice a year to keep them as close to optimal as possible. The notion of responding to feedback from the environment is a very common process; it is fundamental to survival and refinement.

However, what I do not believe in is the existence of an uber-model that monitors its behaviour and responds to its impending death with a clever fix all by itself.
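As a toy illustration of an adaptive parameter in the sense Kumaresan describes, a lookback window might be tied to realised volatility. The mapping and bounds below are invented for the example and are not drawn from Algonetix's models.

```python
# Toy example of an adaptive parameter: a lookback window that shortens when
# realised volatility rises and lengthens when it falls. Scaling constants and
# bounds are invented for illustration.
import numpy as np

def adaptive_lookback(returns, base_window=100, ref_vol=0.01,
                      min_window=20, max_window=400):
    """Scale the lookback inversely with recent realised volatility."""
    recent_vol = np.std(returns[-base_window:], ddof=1)
    if recent_vol == 0:
        return base_window
    window = int(base_window * ref_vol / recent_vol)
    return max(min_window, min(max_window, window))
```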

Fred Pederson: It's a reality for us. We have something like a hybrid. We have a way of deriving the trading configurations from historical data. On a daily basis you can get the newest historical data that came in the day before and then use that to adjust your models.

You can do all this in an automated way, and then you end up with a model that adjusts itself daily. This frees you from a manual working process, so the whole thing becomes simpler because you don't have to change your models and other parameters and configure them "manually". We just do this on a daily basis, using up-to-date data from the last 30 days, and let the model adjust itself. You then have a fixed recipe for how you got your current trading configuration from past historical data, and it can do this every day.

You won't even have to think about all these models any more. It is very much a reality, and I think it is even fairly straightforward to do. It's hard to say whether others are doing the same because this area is a bit secretive and no one has told me what they are doing. It comes quite naturally to us, so I guess it will come naturally to others too.
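The hybrid Pederson describes amounts to a fixed, fully automated recipe applied each day to a trailing 30-day window. A schematic sketch follows, with the fitting step itself left abstract since it is not disclosed.

```python
# Schematic of a daily self-adjusting configuration: a fixed recipe applied
# every day to the trailing 30 days of data. fit_configuration stands in for
# whatever estimation the recipe actually performs.
import datetime as dt

WINDOW_DAYS = 30

def recalibrate(history, today, fit_configuration):
    """Re-derive today's trading configuration from the trailing window.

    history: dict mapping date -> that day's market data
    fit_configuration: the fixed recipe turning a data window into parameters
    """
    window = [history[d] for d in sorted(history)
              if today - dt.timedelta(days=WINDOW_DAYS) <= d < today]
    return fit_configuration(window)
```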

Thomas Parry: I'm currently not using any AI or self-adaptive models, although we use a lot of data mining techniques to identify and optimise the majority of our models' underlying parameters. This is usually done as an end-of-day batch process rather than in real time. Although I believe many of these techniques are viable and that the market will ultimately gravitate towards them, there is also a huge speed/latency trade-off that needs to be considered. The logic needed to implement some of these techniques drastically increases the complexity of the code supporting the models, which makes them much harder to optimise as well as maintain.

Peter Wiesing: We use parameter-less methods such as various statistical tests, kernel methods and Bayesian frameworks a lot. These approaches do have clear advantages, but they are prone to overfitting. They are a reality, but a clear fantasy without a smart quant analyst: algorithms and methods are only as good as the person using them. We do not observe that these models have a longer or shorter profitable life expectancy.