The Gateway to Algorithmic and Automated Trading

Algorithmic trading: fact or fiction?

Published in Automated Trader Magazine Issue 05 April 2007

The rise in the popularity of algorithmic trading has been accompanied by a commensurate increase in the amount of marketing material relating to it. Unfortunately, a significant proportion of this material tends to confuse rather than inform. Dr Usman Malik, of algorithmic trading specialists P.E. Lynch LLP, debunks nine popular algomyths.

Usman Malik, Algorithmic Specialist, P.E. Lynch LLP


1. Fiction: Communication latency has a huge effect on algorithmic performance

This may have been a fact three years ago, but it is certainly not the case now. Every investment bank states a time to market of less than a second, with almost all quoting times in milliseconds. If latency had a quantifiable effect on algorithmic performance, a graph could be plotted showing VWAP performance improving as latency decreases. That graph is conspicuous by its absence from the marketing of investment banks, brokers and communication suppliers alike: nobody quotes VWAP performance in relation to changes in time to market. Even without published data, one can make a common sense analysis. A 20% reduction in latency saves 100ms when the time to market is 500ms, but only 30ms when it is 150ms; no one really believes the two have the same effect.

Improving latency between a bank and an exchange is definitely worthwhile if there is a proven quantifiable effect on performance. However, the performance of a strategy is also based on the statistical intelligence embedded in the algorithm. Improving latency whilst ignoring statistical intelligence may not lead to a significant performance improvement. Fundamentally, achieving marginal reductions in latency comes with a large implementation time and cost, but without a guaranteed VWAP performance improvement.

"Many of these development enviroments charge extra for consulting"


2. Fiction: A specialist computer language provides a trading algorithm

There are many commercially available algorithmic trading development environments in the marketplace. Whilst many of these products come with eye-pleasing GUIs, there is nothing these specialist languages can do that cannot be achieved in Java or C++, both of which are free to use. Typically, once the specialist language has been purchased, one still needs to spend considerable time creating a statistically intelligent strategy that performs on the market. Hence, a team of financial engineers will still need to be employed.

Many of these development environments charge extra for consulting on how to get the best out of the product. Furthermore, if the vendor decides to upgrade their products, additional training courses will most probably be required at further cost.

Finally, one needs to be aware of potential subtle differences between upgrades and new products from the vendor. The new product may look to the customer like a marginal improvement on the existing software, but may not be backwards compatible.



3. Fiction: Ten trading strategies are better than five

When algorithms first became popular in Europe, the talk was of providing algorithms with the best performance. Over the last two years the sales pitch of many algorithmic providers has moved away from performance and instead focuses on the variety of new strategies on offer. The argument is that the more strategies clients have at their disposal, the greater their control over their orders and their behaviour. This is rather illogical, because strategy proliferation alone guarantees nothing - the ultimate arbiter is whether or not the individual algorithms perform well enough to save the client money. Quality rather than quantity is the critical factor, so expecting clients to use a suite of multiple algorithms with poor overall performance appears rather perverse.

Ultimately a client will want to trade with respect to liquidity or with respect to time. If your aim is to beat the price over a time interval (comparing the final execution price against the VWAP price) you will trade with respect to time. If your aim is to beat the arrival price (comparing the final execution price against the arrival price) you will trade with respect to liquidity. If your aim is to beat the close price (comparing the final execution price against the close price) you will trade with respect to either liquidity or time. Any strategy that does not compare well against any of these three benchmarks is of limited use.
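As a purely illustrative sketch of these three comparisons, the Python fragment below measures a hypothetical buy order against the VWAP, arrival and close benchmarks. Every price, fill and variable name is an assumption made for the example, not real market data.

    # Minimal sketch: scoring an execution against the three common benchmarks.
    # All prices and fills below are hypothetical illustration data.

    def vwap(fills):
        """Volume-weighted average price of a list of (price, quantity) fills."""
        total_qty = sum(q for _, q in fills)
        return sum(p * q for p, q in fills) / total_qty

    # Hypothetical market tape and client fills for a buy order.
    market_tape = [(100.00, 5000), (100.10, 3000), (100.05, 7000)]
    client_fills = [(100.02, 1000), (100.08, 1500)]

    arrival_price = 100.00   # mid price when the order arrived (assumed)
    close_price = 100.06     # official close (assumed)

    exec_price = vwap(client_fills)
    market_vwap = vwap(market_tape)

    # For a buy order, a negative number means the benchmark was beaten.
    print(f"vs VWAP:    {(exec_price - market_vwap) / market_vwap * 1e4:+.1f} bps")
    print(f"vs arrival: {(exec_price - arrival_price) / arrival_price * 1e4:+.1f} bps")
    print(f"vs close:   {(exec_price - close_price) / close_price * 1e4:+.1f} bps")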


"Quality rather than quality is the critical factor..."

4. Fiction: Every investment bank has algorithms

Whilst many investment banks have the marketing material, only a few have a real global algorithmic product. It is certainly true that volumes traded algorithmically on exchanges in Europe have been steadily increasing and this trend looks set to continue. Deal flow is naturally gravitating towards banks with established algorithms, because these banks are more efficient and can deal with larger volumes.

The gap between those that have algorithms and those that do not is growing ever wider. Banks with good algorithmic products have offered out their algorithms on an agency basis. Banks without algorithms cannot, and those with poorly performing algorithms have not.


5. Fiction: Optimisation techniques are easily integrated into algorithmic trading

There is much written about applying optimisation techniques to algorithmic trading. There are three steps to carrying out any optimisation:

  • Formulate a mathematical problem to solve
  • Select the correct numerical technique (the optimiser) and apply it to test data to find a solution
  • Check to see if the answer really is optimal

The first danger lies in making incorrect assumptions when formulating the objective. For example, the task may be to carry out a buy order on a particular stock intraday. The price impact of this order is reduced by trading slowly, but trading slowly increases exposure to price volatility. The objective is therefore to find, for a given initial order size, the execution horizon and the size of each slice sent to the market that together minimise impact and volatility. The optimiser is used to find the best choice of horizon and slice size (the parameters). The assumption made in this formulation is that a one-off optimisation can solve the problem. Unfortunately, bid/ask spread and liquidity in the order book both vary greatly intraday, so trading behaviour needs to adapt as and when changes occur in the market. A trading strategy based on an optimisation objective that does not adapt to intraday conditions is likely to be inefficient.
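A minimal sketch of this kind of one-off optimisation is given below, assuming a simple impact-plus-volatility cost model in the spirit of the formulation just described. The cost function, every parameter value and the grid search are illustrative assumptions, not a production calibration.

    # Sketch of a one-off optimisation: choose an execution horizon T (and
    # hence a slice size) trading off market impact against volatility risk.
    # All parameter values and the cost model itself are assumed for illustration.

    ORDER_SIZE = 100_000     # shares to buy (assumed)
    IMPACT_COEF = 2.5e-7     # impact cost per unit of trading rate (assumed)
    SIGMA = 0.02             # per-minute price volatility, currency units (assumed)
    RISK_AVERSION = 1e-6     # penalty on variance of execution cost (assumed)

    def expected_cost(horizon_minutes):
        rate = ORDER_SIZE / horizon_minutes        # constant trading rate
        impact = IMPACT_COEF * rate * ORDER_SIZE   # impact grows with trading rate
        # Variance of exposure under a linear schedule (assumed functional form).
        variance = (SIGMA ** 2) * horizon_minutes * ORDER_SIZE ** 2 / 3.0
        return impact + RISK_AVERSION * variance

    # Grid search over horizons: a simple method is enough for a 1-D problem.
    best_T = min(range(1, 391), key=expected_cost)   # up to one full trading day
    print(f"optimal horizon ~ {best_T} min, "
          f"slice ~ {ORDER_SIZE / best_T:.0f} shares/min")

The weakness described above is visible in the sketch: nothing in the objective reacts to intraday changes in spread or order book liquidity, so the schedule is fixed the moment it is computed.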

A further problem lies in employing a numerically intensive optimiser when a simpler method exists - using a sledgehammer to crack a nut. State-of-the-art optimisers based on genetic/evolutionary algorithms are only appropriate when no alternative exists. Even for a very difficult problem, applying a complicated numerical technique does not guarantee a result within seconds, which may be necessary for intraday trading. Applying the wrong optimiser to a problem can often lead to a result which is not the global solution (the best across all possible candidates) but merely a local solution, i.e. the best within a subset of candidates. There is also the possibility that the optimiser will return a choice of parameters unique to the test data and unrepeatable on other data sets, so rigorous out of sample testing is required to validate its answer. Whilst optimisation provides very useful tools, great care must be taken in their application.
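The local-versus-global point can be shown with a toy example. Below, a crude local search on a deliberately multi-modal function is run once and then restarted twenty times; the objective function and the search procedure are purely illustrative.

    # Sketch: why a single optimiser run can return a local solution, and why
    # multiple random restarts are a cheap safeguard. Toy objective only.
    import math
    import random

    def objective(x):
        # Toy function with several local minima.
        return x * x + 3.0 * math.sin(5.0 * x)

    def local_descent(x, step=0.01, iters=2000):
        """Crude local search: accept a small move only if it lowers the objective."""
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)
            if objective(candidate) < objective(x):
                x = candidate
        return x

    random.seed(1)
    single = local_descent(random.uniform(-3, 3))   # one run: may settle locally
    multi = min((local_descent(random.uniform(-3, 3)) for _ in range(20)),
                key=objective)                      # keep the best of 20 restarts
    print(f"single start: x={single:.3f}  f={objective(single):.3f}")
    print(f"multi start:  x={multi:.3f}  f={objective(multi):.3f}")

The same discipline applies to data: whatever parameters the optimiser returns should be re-scored on data it has not seen before they are trusted.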


There are algorithms - and algorithms...

6. Fiction: The next step in algorithmic trading is machine read news

The concept of machine read news is that a piece of software scans news data feeds and reacts if it thinks a news event will affect the market. The idea of machine read news as the next step in algorithmic trading is mostly hype, because all good execution algorithms already react to news that affects the market. If a news event causes a reaction in the market, a change will occur on the order book, and a good algorithm will react to that change faster than the human eye can perceive it. If a news event does not cause the order book to change, the algorithm carries on as normal. In other words, the only thing that matters for execution is the market's reaction to news, not the news event itself.
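A minimal sketch of this behaviour is given below: the execution logic watches the order book, not the news wire, and pauses when the mid price jumps or touch liquidity collapses, whatever the cause. The data layout and thresholds are illustrative assumptions.

    # Sketch: an execution algorithm reacting to the book, not to the news feed.
    # Field names and trigger thresholds are assumed for illustration.
    def react_to_book(prev_book, book, mid_move_bps=10, depth_drop=0.5):
        prev_mid = (prev_book["bid"] + prev_book["ask"]) / 2
        mid = (book["bid"] + book["ask"]) / 2
        mid_jump = abs(mid - prev_mid) / prev_mid * 1e4 > mid_move_bps
        depth_collapse = book["bid_size"] < prev_book["bid_size"] * depth_drop
        if mid_jump or depth_collapse:
            return "pause and reassess"   # the book moved, for whatever reason
        return "continue schedule"

    prev = {"bid": 100.00, "ask": 100.02, "bid_size": 8000}
    after_news = {"bid": 99.80, "ask": 99.84, "bid_size": 2500}
    print(react_to_book(prev, after_news))   # -> "pause and reassess"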

The confusion arises because execution strategies are being mistaken for tools that detect a new trading opportunity. Looking for combinations of keywords in news feeds and then deciding to buy or sell a stock before anyone else reacts is trade detection, not trade execution. For machine read news to detect a new trading opportunity, proof is needed that combinations of keywords relate to market moves and the direction of those moves. If an investment bank has this technology, it is far more likely to reside on a statistical arbitrage desk than to be provided to clients: the bank will use it to make money for itself rather than for them.


7. Fiction: Crossing/internalisation is always good for an agency client

The idea behind internalisation is to match electronic order flow (orders to buy and sell the same stock) in a crossing server before the order reaches the exchange. Crossing two agency orders is a very desirable objective. The ability to cross at the mid price will benefit agency clients and the reduction in ticketing costs will benefit the broker. However, crossing an agency order with an order generated from a bank's proprietary system is undesirable in the long run. Unless the bank provides a clear audit trail showing that every fill occurs at the mid price, the possibility always remains that one side will end up paying the spread. Post trade analysis needs to be provided showing the state of the order book when the crossing occurred and where the price moved after the trade. The post trade analysis must validate that the crossing was worthwhile.
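A sketch of the kind of audit this implies is shown below, assuming each crossed fill is recorded alongside the touch at the moment of the cross. The data layout and tolerance are hypothetical.

    # Sketch of a crossing audit: verify that every internalised fill printed
    # at the prevailing mid price. Record format is assumed for illustration.
    def audit_crosses(fills, tolerance=1e-9):
        """Each fill is a tuple (fill_price, bid_at_cross, ask_at_cross)."""
        failures = []
        for price, bid, ask in fills:
            mid = (bid + ask) / 2
            if abs(price - mid) > tolerance:
                failures.append((price, mid))
        return failures

    fills = [(100.01, 100.00, 100.02),   # at the mid: fine
             (100.02, 100.00, 100.02)]   # at the offer: one side paid the spread
    for price, mid in audit_crosses(fills):
        print(f"fill {price} away from mid {mid}: spread paid by one side")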


"...Look at the market's reaction to news, not the news event itself."

8. Fiction: Predicting future VWAP performance is easy

Using a set of evaluation orders to estimate the future VWAP performance of an algorithm is not a simple task. Whilst it is convenient to place a hundred orders generated from random order flow on the trading floor, this approach will produce an inconclusive and unrepeatable result. Furthermore, this type of evaluation cannot be used to distinguish effectively between two algorithms.

There are four components to measuring VWAP performance:

  • choosing the total number of orders used in the evaluation
  • choosing the size of each order
  • choosing the stocks used for the evaluation
  • removing the direction of the market on each day

The more orders placed with an algorithm, the more confident the user can be in the measured performance: an average calculated over fifty orders carries far less confidence than an average over ten thousand. When comparing two algorithms the problem becomes more acute - it is not sensible to compare the performance of algorithm A, which has traded tens of thousands of orders, against algorithm B, which has traded only one hundred. It is also important to realise that large orders and small orders do not behave in the same way. Each order must be sufficiently large, otherwise its measured performance is dominated by the volatility of the stock, so it is not possible to place a small test order on a stock and deduce from it how the algorithm would perform on a larger order in the same stock. A useful evaluation therefore needs a wide variation in order size.
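The point about order counts follows from elementary statistics: the standard error of the average performance shrinks with the square root of the number of orders. A minimal sketch, assuming an illustrative per-order dispersion of 25 basis points:

    # Sketch: confidence in average VWAP performance versus sample size.
    # The per-order dispersion figure is an assumption for illustration.
    import math

    def ci_half_width_bps(stdev_bps, n, z=1.96):
        """Approximate 95% confidence half-width of the mean performance."""
        return z * stdev_bps / math.sqrt(n)

    stdev = 25.0   # assumed per-order dispersion versus VWAP, in basis points
    for n in (50, 1000, 10_000):
        print(f"n={n:>6}: mean known to +/- {ci_half_width_bps(stdev, n):.1f} bps")
    # n=    50: +/- 6.9 bps  -- far too wide to rank two similar algorithms
    # n= 10000: +/- 0.5 bps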

An evaluation universe should be split into three tiers: highly capitalised stocks with low spreads, medium capitalised stocks with medium spreads and low capitalised stocks with large spreads. It would not be sensible to divide the evaluation turnover equally between the three tiers. If an algorithm were employed full time on all the flow coming into the trading floor, then over a period of more than six months the distribution of turnover across stocks would track market capitalisation. Since the idea of an evaluation is to simulate how the algorithm performs over a long period of time, the turnover traded in each tier should be directly related to the market capitalisation of that tier.
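A minimal sketch of this allocation rule, with purely illustrative tier weights:

    # Sketch: evaluation turnover per tier in proportion to that tier's share
    # of market capitalisation, not an equal three-way split. Values assumed.
    def allocate_turnover(total_turnover, tier_market_caps):
        total_cap = sum(tier_market_caps.values())
        return {tier: total_turnover * cap / total_cap
                for tier, cap in tier_market_caps.items()}

    caps = {"large cap / tight spread": 70e12,    # illustrative tier weights
            "mid cap / medium spread": 22e12,
            "small cap / wide spread": 8e12}
    for tier, turnover in allocate_turnover(100e6, caps).items():
        print(f"{tier}: {turnover / 1e6:.0f}m of evaluation turnover")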

Finally, it is desirable that the orders are placed in a long-short, cash-neutral, country-neutral manner to remove any bias caused by the direction the market moves on any given day.


9. Fiction: Algorithmic trading systems are commodity specific

A common misconception is that different algorithmic techniques are required to trade different commodities. However, there are standard intraday statistical properties that are measurable in all markets irrespective of instrument: bid-ask spread, volatility, trade volume (where available) and liquidity at different prices in the order book. The differences between markets lie predominantly in exchange rules: the behaviour of auctions, short selling rules, name changes in relation to dates, market hours, tick sizes and pricing. Many of these differences already exist between equity markets, so moving to other commodity markets is not as complicated as many believe. Fundamentally, the existence of an electronic order book is all that is required to trade different commodities.
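As a minimal sketch, the measurements listed above can be computed in exactly the same way for any instrument with an electronic order book. The snapshot layout and values below are illustrative assumptions.

    # Sketch: instrument-agnostic intraday statistics from book snapshots.
    # Snapshot records are assumed for illustration.
    import statistics

    snapshots = [   # (bid, ask, bid_size, ask_size, last_trade_qty)
        (100.00, 100.02, 5000, 4000, 300),
        (100.01, 100.03, 4500, 5200, 150),
        (99.99, 100.02, 6000, 3800, 500),
    ]

    mids = [(b + a) / 2 for b, a, *_ in snapshots]
    spread_bps = statistics.mean((a - b) / ((a + b) / 2) * 1e4
                                 for b, a, *_ in snapshots)
    mid_vol = statistics.stdev(mids)            # crude intraday volatility proxy
    volume = sum(s[4] for s in snapshots)       # traded volume, where available
    touch_liq = statistics.mean(s[2] + s[3] for s in snapshots)

    print(f"avg spread {spread_bps:.2f} bps, mid stdev {mid_vol:.4f}, "
          f"volume {volume}, avg touch liquidity {touch_liq:.0f}")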
