Strategies: Optimisation Algorithms for Automated Trading

First Published in Automated Trader Magazine Issue 04 January 2007

Automation opens up the possibility of trading multiple models or the same/similar model with multiple parameter sets. However, that raises the question of how best to optimise those parameter sets. Chris Donnan, who works in equity derivatives trading technology at a top Wall Street firm, answers it.

Chris Donnan

Much work in algorithmic trading has been in the area of trade execution. VWAP, TWAP, Implementation Shortfall algorithms and their brethren have become ubiquitous techniques for executing trades. It is inevitable that algorithms will become componentised elements in other parts of the automated trading landscape, in much the same way as these execution algorithms have componentised the execution side.

It is just a matter of time before optimisation algorithms are an integral part of this space. These algorithms can be used to tune or train entire trading systems and/or any element ranging from risk parameters, to VWAP parameters, to entry and exit rule parameters.

Optimisation algorithms are tools that can be applied to the training or tuning of automated trading systems. This training typically happens during the tail end of the development phase of trading systems, but it is also possible to use optimisation algorithms continuously during real time trading. In this article we will focus on using optimisation algorithms as an integral part of the trading system development process.

You have developed your latest and greatest trading system. You are certain that this model will be put into production trading. Are you doing yourself justice by rolling it out as it is?

  • Could this very system be tuned to deliver greater returns?
  • Could this very system be tuned to have a better risk profile - smaller drawdowns, larger trades?
  • Could you avoid losing money on a system that is already manually curve fit and will fail in real time?
  • Could you create more than one system from this candidate system?

Wouldn't it be ideal if you could simply express your goals to the computer and have it adjust your trading system to meet your goals?

This is where optimisation algorithms fit in. The optimisation of trading systems is a crucial step, but you must know what you are getting yourself into! There is a whole world of powerful techniques that could be used for optimising your trading system, but each technique has its own benefits and baggage. Not only do you have to choose a particular mechanism of optimisation, but just as importantly, you must exercise an appropriate process so that you do not shoot yourself in the foot.

Between the point at which your system is developed and the point at which it is running in production, the process of optimisation aims to improve your chances of success and profitability.

What is optimisation?

In the simplest sense, "optimising your trading system" means making the desirable numerical attributes of your trading system go up, and/or making the undesirable numerical attributes of your trading system go down. Making money is desirable - so you want to maximise how much money your system makes. Losing money is undesirable, so you want to minimise how much money your system loses. That is, of course, simply stated in English - yet not so simply stated to a computer in many cases. We call these desirable/undesirable attributes "fitness objectives" and they are either minimised or maximised. We can think of them simply as goals. We want to apply an 'optimisation algorithm' to the task of optimising our trading system(s).

What does 'best' or optimal mean for your trading system?

The first thing you must do is to decide what "best" or "good" or "optimal" means to you. This seemingly simple goal can often wind up being a difficult task in practical reality. To start, you might decide that "best" means: "makes the most money". This certainly sounds reasonable for a trading system. As soon as you do this, you start optimising and find out that your 'best system' now has one zillion penny trades! From here, you refine your vision of "best" to: "makes the most money per trade." This sounds fine as well, until you see that you get one giant winning trade and a million losing ones - and you are at a net loss. This process goes on for a while as you think about how to tell the computer what you want from it.

Next - you might begin looking at things like Sharpe ratio, Sortino ratio, Sterling ratio etc. These are all fitness calculations that combine your fitness attributes into a single numerical value - one goal. You could of course come up with your own calculations that attempt to combine all the desirable features of a "good" trading system into one number. At the end of the day - the important thing to note is that you need to express your goal to the optimiser.
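As a rough illustration of rolling several attributes into one number, a Sharpe-style fitness could be computed from a list of per-trade returns. The sketch below is minimal and assumes a non-empty list; the function name and signature are illustrative, not taken from any particular library:

    import math

    def sharpe_like_fitness(trade_returns, risk_free=0.0):
        """Roll several attributes into one number: mean excess return over volatility."""
        n = len(trade_returns)                      # assumes at least one trade
        mean = sum(trade_returns) / n
        variance = sum((r - mean) ** 2 for r in trade_returns) / n
        std = math.sqrt(variance)
        if std == 0.0:
            # Degenerate case: no variability in returns at all
            return float("inf") if mean > risk_free else 0.0
        return (mean - risk_free) / std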

Whatever goal you set for your optimiser, it should do one (or both) of two things: make the numbers you want to go up actually go up, and/or make the numbers you want to go down actually go down. Again, in practical reality it is often quite difficult to express these goals all rolled up into one tidy number.

Possible Techniques

There are many devices you could use to optimise your trading system. People often start out doing it manually; this is by far the most common mechanism of optimisation. Traders and/or quants develop a model, watch it, change some input parameters, and see whether it does more of what they want and less of what they don't. This is an iterative process that often consumes a lot of time and effort, goes in circles and is difficult to measure.

You might choose to do a brute force optimisation. This is the kind of optimisation that exhaustively tries every single combination of system parameters to see which one is 'best'. A brute force optimisation is only practical/possible when you have a relatively small number of inputs and/or small amount of data to evaluate. If you have lots of inputs you could be looking at literally eons to complete your exhaustive search!
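As a minimal sketch, a brute force search over a parameter grid might look like the following; the backtest function is an assumed placeholder that returns a single fitness number for a given parameter set:

    from itertools import product

    def brute_force_optimise(backtest, param_grid):
        """Exhaustively evaluate every combination in param_grid and return the best.

        param_grid maps each parameter name to the list of candidate values to try;
        the number of combinations is the product of all the list lengths."""
        best_params, best_fitness = None, float("-inf")
        names = list(param_grid)
        for values in product(*(param_grid[name] for name in names)):
            params = dict(zip(names, values))
            fitness = backtest(params)  # run the trading system over historical data
            if fitness > best_fitness:
                best_params, best_fitness = params, fitness
        return best_params, best_fitness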


There is a long list of optimisation techniques you could choose from including:

  • Ant colony optimisation
  • Particle swarm optimisation
  • Simulated annealing
  • Tabu search
  • Genetic and other evolutionary algorithms
  • Coordinate descent
  • Artificial Immune Systems
  • Dynamic Programming

There are many, many more. Optimisation algorithms each have their own characteristics. Some of the characteristics you care about are:

  • How long does it take to "do its thing"?
  • Does it really get me the "best" possible answer among my alternatives?
  • Can it handle problems as complex as the one I am about to give it?
  • Can it handle optimising towards multiple objectives?
  • Can it deal with any constraints I may have?

Goals

In life in general - it is good to have clear goals. As humans, we make goals, strive for them, update our goals and go after them continuously until we have met them. We want our optimiser to be a goal seeking machine that works on our behalf.

Imagine how much more complex it would be to manage our lives if our goals were all stated something like this:

"Maximise this self: 'take the average weekly hours with family, plus with the total income per month, plus the square root of the average income of citizen of nation X, divided by….. '"

Ugh! We would never be able to strike the right balance in our lives at all. Instead we would like to say:

  • I want to spend time with my family.
  • I want to earn as much money as possible.
  • I want to work in an area that I am passionate about.
  • etc.

This is a bit of an absurd example, but the point is that you would ideally like to state your goals clearly and send the computer on a mission to get them directly.

Multiple objective optimisers

A multi-objective optimiser is a very useful tool. These kinds of optimisation algorithms allow you to optimise with multiple stated goals in mind at once, without having to 'roll them up' into one numerical value. There are a few problems with rolling your objectives into one number, but put simply, it is not expressive enough of what you really want. If, for example, you roll your goals into a Sharpe ratio, the optimiser will not necessarily give you what your REAL goals are - it will simply try to give you a good Sharpe ratio. Instead, it is better to tell the optimiser your real goals and have it look for them directly.
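One common way such optimisers compare candidates without collapsing the goals into one number is Pareto dominance. The snippet below is a minimal sketch, assuming every objective has already been converted to "larger is better":

    def dominates(a, b):
        """True if solution a is at least as good as b on every objective
        and strictly better on at least one (all objectives maximised)."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    # Example objectives (hypothetical values): (net profit, -max drawdown, -bars in market)
    print(dominates((120_000, -8_000, -1_870), (110_000, -9_500, -2_100)))  # True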

UberTrader


The general mechanics of an evolutionary optimisation algorithm

We have a fictitious trading system: UberTrader. UberTrader trades multiple assets - its composition is X parts put options, Y parts call options and Z parts units of the underlying asset. UberTrader also has other parameters, so let's treat them all as a list of possible input parameters.

We need to tell the optimiser what ranges it can use for each parameter. For example, our first parameter was "put option contracts"; in all likelihood it would not be reasonable to accept a value of 9,823,423,597,823. For each parameter, we tell the optimiser the acceptable minimum and maximum bounds - say 1-10,000 for parameter 1, "put option contracts".
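In code, these bounds might be expressed as a simple mapping from parameter name to (minimum, maximum); the names and values below are purely illustrative:

    # Hypothetical UberTrader parameter bounds: name -> (minimum, maximum)
    PARAM_BOUNDS = {
        "put_option_contracts": (1, 10_000),
        "call_option_contracts": (1, 10_000),
        "underlying_units": (0, 50_000),
    }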

Let's also decide on some goals. We need to express our goals to the optimisation algorithm so that it can go and get them for us. Our goals or 'fitness criteria' will be:

  • Maximise the capital gained
  • Minimise the capital lost
  • Minimise the time exposure (time spent in the market)

The idea is we take UberTrader, vary its parameters somehow and see if we can meet our goals any better than with its current parameter values. In this example we want to make the most possible capital, lose the least possible capital, and have the least time exposure possible. This is fairly direct and you could imagine that it would be somewhat difficult to aggregate these goals into a single expressive 'fitness function' that would roll them up into a single numerical value.


Evolutionary Computation

Evolutionary computation is a subfield of artificial/computational intelligence typically used for solving optimisation problems. These algorithms are used to learn and to search - just what we need to train our trading systems.

The basic idea of an evolutionary algorithm is to mimic the evolutionary process, just as the name implies. The trading system and its parameters map onto the evolutionary model in a straightforward manner. We do this by creating a group of potential solutions - called, in evolutionary computation parlance, "a population of individuals". We then evaluate the population in our environment - the trading system running on some data. The environment tells the optimiser how well each individual did on each of its goals.

A potential solution or "individual" in our example above would be evaluated and have data appended to it, for example:
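For illustration only (all names and numbers are hypothetical), an evaluated individual might carry data like this:

    # A hypothetical evaluated individual: its parameter values plus its fitness results
    individual = {
        "params": {"put_option_contracts": 120,
                   "call_option_contracts": 300,
                   "underlying_units": 5_000},
        "fitness": {"capital_gained": 84_000,
                    "capital_lost": 22_500,
                    "bars_in_market": 1_870},
    }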


With this information, the optimiser will select 'good' individuals and 'mate' them together by combining information from their parameters. For example:

If we have 2 'good' individuals that are being bred together you might see something like:
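A minimal sketch of a simple one-point crossover, with hypothetical parameter values shown in the comment:

    import random

    def one_point_crossover(parent_a, parent_b):
        """Splice two parameter lists together at a random point to form two children."""
        point = random.randint(1, len(parent_a) - 1)
        child_1 = parent_a[:point] + parent_b[point:]
        child_2 = parent_b[:point] + parent_a[point:]
        return child_1, child_2

    # e.g. parents [120, 300, 5000] and [80, 450, 2000] might yield
    # children [120, 450, 2000] and [80, 300, 5000]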

This example of crossover is a very simple one. There are a wide variety of specific crossover techniques that range from simple crossovers as shown above - to probabilistic sampling techniques, to Bayesian networks, etc. The important point is the generalised idea; information is taken from the current population based on their fitness and used to make new, hopefully "better", individuals.

The final standard element of evolutionary computation is mutation. Mutation happens in order to keep the population of possible samples diverse. It is possible that all that breeding becomes inbreeding. At this point - all the individuals start to look alike, and we are no longer learning or evolving much. Mutation simply throws some noise into the mix every now and then and keeps the population of potential solutions diverse.
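A minimal mutation sketch, assuming real-valued parameters with known bounds; the mutation rate and noise scale here are arbitrary choices:

    import random

    def mutate(params, bounds, rate=0.05):
        """With probability `rate`, nudge each parameter with Gaussian noise,
        clamped back inside its (minimum, maximum) bounds."""
        mutated = []
        for value, (lo, hi) in zip(params, bounds):
            if random.random() < rate:
                value += random.gauss(0.0, 0.1 * (hi - lo))
                value = max(lo, min(hi, value))
            mutated.append(value)
        return mutated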

There are many specific evolutionary algorithms. Some rely more on crossover, some more on mutation, and some use very complex hybrid techniques, so for our example let's just consider a generalised evolutionary algorithm with the following characteristics:

  • It can accept multiple objectives
  • It uses reproduction (it will take good solutions and 'mate' them in an effort to make other 'good' solutions)
  • It uses mutation

In order to meet our goals, our evolutionary algorithm will do the following:
  1. Generate a random set of possible solutions (a population)
  2. Evaluate each solution by trading it over some time/data window (an environment)
  3. Assign fitness values to each solution (an individual)
  4. Combine the most 'fit' individuals to hopefully breed 'better' offspring (crossover/reproduction).
  5. Apply some mutation to the individuals in order to keep the population diverse.

  6. Go back to step 2 until we decide we are done.
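Putting those steps together, a bare-bones sketch of the loop might look like this; the random_individual, evaluate, select, crossover and mutate helpers stand in for whatever specific representation and operators are chosen, and are assumptions rather than part of any particular package:

    import random

    def evolve(random_individual, evaluate, select, crossover, mutate,
               population_size=100, generations=50):
        """Generalised evolutionary loop: generate, evaluate, breed, mutate, repeat."""
        population = [random_individual() for _ in range(population_size)]    # step 1
        for _ in range(generations):                                          # step 6: loop
            fitnesses = [evaluate(individual) for individual in population]   # steps 2-3
            parents = select(population, fitnesses)                           # pick 'fit' individuals
            offspring = []
            while len(offspring) < population_size:
                a, b = random.sample(parents, 2)
                offspring.extend(crossover(a, b))                             # step 4
            population = [mutate(ind) for ind in offspring[:population_size]] # step 5
        return population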

The perils of over optimisation

We have selected our candidate system. We have told it about our input parameters. We have told the optimiser our goals. Finally - we need to select some data, and we are ready to roll. We tell the optimiser to evaluate the trading system over that window of data and 'evolve' the parameters until the system meets our goals.

Our optimising ally dutifully goes about its business and evolves the parameters until we run out of time/patience or it finds the same solution(s) over and over again. When an optimiser finds 'the end', we say it has converged. This means that it is not finding any different answers. When an optimiser has converged, it should stop or be stopped.

In trading systems, it is very likely that your problem is extremely complex. If you have a system with 50 input parameters and you are testing it on 6 months of tick data for 3 instruments, it will take some time/computation to 'do its thing'. We could impose some time and computation limits on this work. In this scenario, however, we do not necessarily expect the optimiser always to converge. We expect to use as good an answer as it can give in as much time as we have. Often, as desired and expected, our optimiser finds a great set of possible solutions for us. It has minimised and maximised our goals and the system reports look great!


This sounds great, doesn't it? We have simply told our computer our goals and let it go and get them. What could be better? Unfortunately, this is where many people stop. It is important that you are not one of those people, as it is a sure way to burn a pile of cash!

The importance of process

In the above example, we let our powerful search technique loose on some parameters over some window of data. We have effectively 'fit' the parameters to that data. This is sometimes called 'curve fitting'. What we really want is not just to fit the parameters to this particular window of data, but to uncover parameters that are 'better' across many windows of data. We want to find effective and robust parameters. The basic process that is usually followed is "Train, Test, and Verify". An alternative is to optimise and select the 'best' individuals continually, but this is typically much more challenging to do.

Train - Test - Verify

What we did in the above example is training. We tried to train our population to be effective in the environment we gave it - the window of data. What we need to do now is to test our population over some other data. This can be some different instruments, some different time windows, or both. Ideally our parameters should work just as well in other windows of data - and perhaps (depending on the system) on different instruments. Ideally we would like to test many, many times. Often we find that we train and fail the tests, train again and fail again, until we find a niche that starts passing the tests.

We have effectively combined a useful process with a useful algorithm to solve our problem. If you use the algorithm without the process, you will send your system into production and in all likelihood it will fail. It is also possible to train your system continually on some data and then automatically "walk it forward" into unseen data. At the end of your process, you will have a system that has been trained on some data and tested on other windows of data and potentially other instruments.
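A rough sketch of the walk-forward idea, assuming an optimise routine that fits parameters on one window and a backtest that scores them on the following, unseen window (both are hypothetical placeholders):

    def walk_forward(data, optimise, backtest, train_len=1000, test_len=250):
        """Repeatedly train on one window of data, then test the result
        on the unseen window immediately after it."""
        results = []
        start = 0
        while start + train_len + test_len <= len(data):
            train_window = data[start : start + train_len]
            test_window = data[start + train_len : start + train_len + test_len]
            params = optimise(train_window)                 # train: fit parameters in-sample
            results.append(backtest(params, test_window))   # test: score them out-of-sample
            start += test_len                               # walk forward into fresh data
        return results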

In the verification step the potential solution is typically put on new real time data and allowed to "fake trade" in real time, then allowed to trade small with real money.

Curve fitting and over optimisation

This process ensures that we have tuned and trained our system, but not over-optimised it. It ensures that the optimiser has not simply been allowed to curve-fit to some data that will have nothing to do with the future. The goal of the entire exercise has been to extract "better" parameters that remain robust in real time, and the process is what enables this. Optimisation is curve fitting. That is not necessarily a bad thing, but the term 'curve fitting' is often used pejoratively when applied to trading system optimisation. The real problem is over-optimisation and/or not following an effective process.



Some tips:

  • Instead of selecting extremely wide parameter ranges (like 1-99999999999) for every parameter, constrain your parameter ranges to areas that are likely or known to be effective.
  • Do not have 1000 parameters that you optimise. This many parameters will make it easy for the optimiser to tune to any data, but not very likely that any one parameter set will be effective as a generalised parameter set solution.
  • Have realistic expectations. I have seen many smart people curve fit their systems, not really follow their process - and burn money. It never ceases to amaze me - but it does happen. People want to believe that the optimiser has found something and that it will always work - since it has done so in the past.
Evaluating optimisation algorithms

As stated earlier, there are many different optimisation algorithms to choose from. This article introduces multi-objective evolutionary algorithms as an excellent choice. When evaluating different optimisation algorithms there are several items to consider:

  • Interoperability - how will I map my trading solution onto this optimisation technique?
  • Speed - how fast is the core algorithm? Is it parallelisable/distributable across a grid or a farm?
  • Power - is it powerful enough to handle the problem? You should never send a boy to do a man's job. Choosing a state of the art optimisation algorithm will matter, just as choosing a state of the art execution algorithm matters.


A weak optimiser will:

  • Converge fast - and never get near the 'true best' or global optimum ever. This is common with 'vanilla' genetic algorithms and lower class local searchers.
  • Get confused if there are multiple optimal areas, and force you into a no-man's land in the middle.
  • Get stuck in a local optimum. This is the most common. Weak or misapplied optimisers will often get stuck standing on top of a foothill - declaring it the tallest peak, while there is a grand mountain range directly behind it!

Other things an optimisation algorithm can tell you

Sometimes optimising your trading system can tell you interesting things. For example, often the optimiser will 'optimise out' certain rules via its process.

If you had a simple rule that said something like...
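A hypothetical sketch of such a rule (names purely illustrative), where "Some Input" is an optimisable parameter and "Some Condition" gates the trade:

    def entry_rule(some_input, some_condition):
        """Hypothetical rule: 'Some Condition' gates the trade, 'Some Input' sizes it."""
        if some_condition and some_input > 0:
            return some_input    # trade this many contracts
        return 0                 # no trade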

...and your "Some Input" always gets the value 0 in your optimisation runs, this means it is a bad rule. Also - if some inputs contribute to the "Some Condition" variable, then it is likely they are fairly useless. This happens a lot in practice! I have often said that if software developers got paid for deleting code - there would be lots of better software out there. The same goes for many trading systems. Lots of wasted rules could be removed - let the optimiser help you.

"I have often viewed an optimizer as a sort of system harvester..."

A Harvester - a system research tool

I have often viewed an optimiser as a sort of system harvester. It is possible to take your trusty optimiser and use it to harvest trading systems. You can use it creatively in many ways. You can use it to turn rules on and off. You can use it to find out what rules seem to have no value as we have seen above. You can use it to combine different systems into optimal portfolios or combinations of portfolios. Since evolutionary algorithms are good general purpose tools - the list of data mining type tasks they can be used for is a long one. Let's say we have a trading system with very few parameters, but lots of internal variables. It is possible to expose some of these internal variables and use an optimiser to find what is useful and what is not. It is very common to use evolutionary algorithms as data mining, machine learning or knowledge discovery devices. Data Mining and Knowledge Discovery with Evolutionary Algorithms by Alex A. Freitas provides a good review of how these same tools can be used to learn.
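One simple way to do this kind of harvesting (a sketch, with hypothetical rule names) is to expose each rule as an on/off switch in the parameter set, so the optimiser can enable and disable rules just like any other parameter:

    # Each rule gets an on/off switch alongside the numeric parameters; rules whose
    # switches converge to False across the 'good' individuals are candidates for deletion.
    individual = {
        "use_breakout_rule": True,
        "use_mean_reversion_rule": False,
        "stop_loss_ticks": 12,
        "take_profit_ticks": 30,
    }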


Enterprise Optimisation

In a world where algorithms are componentised elements of the automated trading landscape, it is just a matter of time before optimisation algorithms are an integral part of this landscape. These algorithms can be used to tune or train any element ranging from risk parameters, to VWAP parameters, to entry and/or exit rule parameters. When this "enterprise optimisation" boom begins, a whole new level of adaptation will come alive in trading agents all over the computational trading universe. Today many of these techniques are applied as a system generation pre-process, but as techniques become more and more sophisticated and specialised, the real time optimisation world will kick in and leave the competing trading bots behind.

As long as componentised elements of the trading infrastructure can report their health or fitness, and accept parameters that govern their functionality, enterprise optimisation can govern their working and keep everything running according to discrete goals.

Conclusion

Evolutionary optimisation algorithms and other optimisation techniques offer a wide array of utility to the automated system building and management processes. These tools can help to bring systems back to life, and to spawn multiple children from good base systems. They can help tell you what is not of use in your system. The primary feature of this variety of optimisation algorithm is the ability to specify your goals and have it work to achieve them for you. This model sits well with people from any discipline across any organisation. It can be very difficult to get "buy in" for complex quantitative techniques and measurements, as they cannot necessarily be understood by all the decision makers involved. Everyone has goals and can understand them. Once you can express your goals to your optimisation software, it can go to work and help make them happen.

