
Strategies: Building a Better Bear Trap

Published in Automated Trader Magazine Issue 02 July 2006

One of the most critical elements in algorithmic trading lies in accurately modelling trading costs, yet this remains a rather inexact science. While certain cost elements are relatively stable and/or easy to predict, others are not. As a result, models for estimating trading costs have tended to be reasonably predictive when viewed across a very large sample of trades, but decidedly indifferent performers on individual trades. This has in turn made the task of minimising these costs through the selection, tuning and scheduling of appropriate execution algorithms difficult. Dan diBartolomeo, president of Northfield Information Services, discusses the current limitations and suggests some additional elements that can be used to improve forecasting of trading costs and trade scheduling.

Current methods for estimating trading costs tend to consist of a fairly standardised set of components, including agency costs, bid/ask spread, market impact and trend costs (the effect of other participants' activity on the stock price). Some of these components, such as agency costs and the bid/ask spread, are relatively stable and predictable. Others, such as market impact and trend costs, are not. Simple methods for estimating trading costs use some form of linear regression equation that attempts to incorporate these latter components (usually as the multiplicative and power coefficients), often in terms of some form of liquidity estimate.
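
As an illustration, the basic shape of such a model can be sketched in a few lines of Python. This is a minimal sketch, not a production model: the coefficients below (the multiplicative factor k, the power beta, and the fixed agency and spread terms) are hypothetical placeholders that would in practice be fitted by regression on historical trade data.

    def estimated_cost_bps(trade_size, adv, spread_bps, daily_vol_bps,
                           agency_bps=1.0, k=0.6, beta=0.5):
        """Illustrative trading cost estimate in basis points.
        k and beta are the multiplicative and power coefficients,
        normally fitted by regression on historical trades."""
        participation = trade_size / adv        # trade as a fraction of daily volume
        impact_bps = k * daily_vol_bps * participation ** beta
        return agency_bps + spread_bps / 2.0 + impact_bps

    # e.g. a trade of 10% of ADV in a stock with 150 bps daily volatility
    print(estimated_cost_bps(100_000, 1_000_000, spread_bps=4, daily_vol_bps=150))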

However, this approach omits a number of significant factors, including a rigorous method for estimating short term trends during the trade life cycle, the effect of any other concurrent trades and the risks inherent in delaying trade execution in the hope of securing a more favourable execution price.


The portfolio concept

An alternative approach that can incorporate these additional factors is to consider the problem in the context of portfolio management, and more specifically as a long/short portfolio that has to be liquidated. This conceptual portfolio essentially consists of long positions in stocks that are not wanted and need to be disposed of, and short positions in stocks that are wanted and need to be acquired. The objective is to liquidate this portfolio over a time frame consistent with available market liquidity. Inevitably this involves balancing the usual cost constraints: trading too fast runs the risk of moving the market away and increasing costs. Trading too slowly avoids this problem but introduces two others - market risk due to general volatility and the opportunity cost that the stock (if being bought) will start to move up as anticipated before a position can be taken. However, the additional consideration is that these risks should not be treated in isolation on a per stock basis; they also need to be balanced in the context of the whole conceptual portfolio or trade list, to allow for any interaction.
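
That trade-off can be made concrete with a simple scan over candidate liquidation horizons. The sketch below assumes a power-law impact term and uses illustrative coefficients throughout; it is intended only to show how impact, market risk and trend costs pull in opposite directions as the horizon lengthens.

    import math

    def horizon_costs(q_adv, vol_bps, trend_bps_day, k=0.6, beta=0.5,
                      risk_aversion=0.5, max_days=10):
        """Expected cost (bps) of liquidating an order of q_adv (size as
        a multiple of ADV) over each candidate horizon. All coefficients
        are illustrative placeholders."""
        costs = {}
        for days in range(1, max_days + 1):
            rate = q_adv / days                               # daily participation
            impact = k * vol_bps * rate ** beta               # falls as trading slows
            risk = risk_aversion * vol_bps * math.sqrt(days)  # rises with delay
            trend = trend_bps_day * days / 2.0                # average adverse drift
            costs[days] = impact + risk + trend
        return costs

    costs = horizon_costs(q_adv=2.0, vol_bps=150, trend_bps_day=5)
    print("cheapest horizon:", min(costs, key=costs.get), "days")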

For example, buying a large block of GM will have an effect on the price of Ford. However, modelling that in the context of the risks outlined above will obviously depend upon the direction of the trades; buying GM and buying Ford has very different implications from buying GM and selling Ford. There are a number of possibilities here - for example:

  • If the two correlated stocks in question have very different levels of liquidity, it may actually not be expeditious to execute the more tractable stock at the first opportunity, as it is providing a hedge for the less tractable one.
  • In certain rare situations, it may be better to deliberately execute a trade so as to maximise market impact, with the intention of driving a correlated stock in the trade list in the desired direction. (For example, buying one stock so as to inflate the price of another that is to be sold.)

(These are of course relatively simple examples - in a trade list there may be multiple cross correlations, which will obviously shift as trades are executed.)
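
The interaction effect is easy to demonstrate with a small covariance calculation. The volatility and correlation figures below are invented for illustration; the point is simply that the risk of the unexecuted trade list depends on the signs of the remaining positions, not just their sizes.

    import numpy as np

    vols = np.array([0.020, 0.022])               # daily vols: GM, Ford (illustrative)
    corr = np.array([[1.0, 0.7], [0.7, 1.0]])     # assumed correlation
    cov = np.outer(vols, vols) * corr

    def residual_risk(unexecuted):
        """Volatility of the remaining trade list; positive amounts are
        still to buy, negative amounts are still to sell."""
        w = np.asarray(unexecuted, dtype=float)
        return float(np.sqrt(w @ cov @ w))

    print(residual_risk([1e6,  1e6]))   # buy GM and buy Ford: risks compound
    print(residual_risk([1e6, -1e6]))   # buy GM, sell Ford: the legs hedge each other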


[Figure: Intra Sector Impact]

De-skewing the data

A further opportunity to improve the accuracy of market impact modelling lies in incorporating more realistic data in the calculations. A significant flaw in many existing market impact models is that they have been calibrated purely on empirical data - i.e. actual trade records. At first glance this appears logical, but it overlooks one important point. The problem is that traders are aware that they cannot execute very large trades without incurring excessive cost - so they don't attempt to do so.

This effectively skews the empirical data set by truncating it at a certain size threshold. Without this data, impact models lack a realistic excess cost function for large trades. The remedy is to include a term in the trading cost equation that estimates the volume, within a given timeframe, at which a serious liquidity breakdown will occur.

Some assumptions have to be made in order to arrive at a value for this term. For example, assume that the intention was to execute a sell that was ten times the average daily volume (ADV) of a stock. One way of doing this would be to execute a principal bid trade with an investment bank (i.e. an old style "block positioning" trade). Assume that the investment bank believes the most it can liquidate either in the open market or by private trades with clients is half the ADV per day. It will therefore take 20 trading days to unload the position.

Allowing for the fact that the amount of the trade outstanding over time is declining, the investment bank will have to:

a) Finance the position at some cost of capital (typically at the broker call rate).

b) Reserve capital to cover possible losses from a decline in the value of the stock during the liquidation process. (The stock could go up, thus providing a windfall to the bank, but it will generally assume not, as it would suppose that the client had some information to support such a large selling decision.) The more volatile the stock, the more money has to be reserved. Historically, this was done using methods such as the RAROC procedure originally developed at Bankers Trust. The required rate of return on this money is very high.

c) Allow for the market impact of the liquidating trades. The investment bank may be able to shave this a little by placing some stock privately with its clients. However, it might still have to assume the impact of a half ADV trade, exacerbated by the fact that this trade will have to be conducted repeatedly over several days, which makes it almost certain that the market will realise what is going on.

Given items a, b and c above, it is possible to make estimates of what a principal bid might look like for trades of 2, 3, 4 times ADV and so on. With these estimates in hand, it also becomes possible to extend the cost function beyond the data available from empirically observed trades.
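
A rough sketch of such an estimate is shown below. Every coefficient here is a hypothetical placeholder rather than market data; the structure simply combines the three items above into a single discount figure that can be evaluated at trade sizes beyond the empirical record.

    import math

    def principal_bid_discount_bps(size_adv, daily_liq_adv=0.5,
                                   financing_bps_day=2.0, daily_vol_bps=150,
                                   reserve_mult=0.2, clip_impact_bps=30,
                                   leak_bps_day=3.0):
        """Rough principal bid discount combining items a, b and c above.
        All parameter values are hypothetical placeholders."""
        days = size_adv / daily_liq_adv                           # days to unwind
        financing = financing_bps_day * days * 0.5                # (a) ~half the position outstanding on average
        reserve = reserve_mult * daily_vol_bps * math.sqrt(days)  # (b) capital charge against a price decline
        impact = clip_impact_bps + leak_bps_day * days            # (c) half-ADV clips plus information leakage
        return financing + reserve + impact

    for mult in (2, 3, 4, 10):
        print(f"{mult}x ADV -> {principal_bid_discount_bps(mult):.0f} bps")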


Short term trends

As mentioned earlier, a significant component in predicting trading costs is the accuracy of the prediction of short term trends generated by the activities of other market participants. There are a variety of ways of predicting such trends, including technical and volume analysis. A possible alternative method is to use option prices.

Classical option pricing theory maintains that there is only one implied volatility associated with each stock - i.e. that implied volatility remains the same across the entire range of option strikes. This is clearly not true in practice, as witnessed by the volatility smile across strikes commonly seen in live markets. One possibility is to use the shape and curvature of the smile to back out the short term expected price return from the price distribution it implies. Where stocks do not have listed options, it is still possible to estimate the short term trend based on covariance with the trends derived from stocks that are optionable.

These short term trends cannot of course be estimated with certainty - if they could, participants would simply arbitrage the effect away. It therefore makes sense that where this method is used as an input to trading cost estimation, it should include a user-configurable setting for the confidence level of the prediction.
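
One plausible, deliberately simple reading of this idea is sketched below: fit a quadratic to the smile in log-moneyness, treat its slope as a crude proxy for the short term drift implied by the option prices, and scale the result by the user's confidence setting. Both the functional form and the scaling are assumptions made for illustration, not a specified formula.

    import numpy as np

    def smile_trend_signal(strikes, implied_vols, spot, confidence=0.5):
        """Quadratic fit of the smile in log-moneyness; the slope serves
        as a crude proxy for implied short term drift, scaled by a
        user-configurable confidence level. Illustrative only."""
        x = np.log(np.asarray(strikes, dtype=float) / spot)
        curvature, slope, atm_level = np.polyfit(x, implied_vols, 2)
        # A steep negative slope (expensive downside puts) is read as the
        # market pricing in a move lower, and vice versa.
        return confidence * slope * atm_level

    strikes = [90, 95, 100, 105, 110]
    vols = [0.28, 0.25, 0.23, 0.22, 0.215]
    print(smile_trend_signal(strikes, vols, spot=100, confidence=0.6))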



[Figure: Sticky Footprints]

Stickiness

A further element that can add value to the cost modelling process is "stickiness". This refers to the persistence of market impact caused by a trader or algorithm. For example, if an algorithm buys a slice of stock and moves the price up, how long before that effect is dissipated? While a large individual trade in a relatively liquid stock executed today is unlikely to still have an effect a month ahead, it may nevertheless have a very pronounced effect during the rest of the order lifecycle - both for the stock concerned and correlated issues.

The decay rate of this effect is extremely variable and is influenced by a broad range of factors, including company size, average volume traded, average volatility, recent price movements and so on. Evaluating these factors also suggests that the decay is not a linear function of time, but approximately follows the square root of the number of periods since the trade took place - i.e. a relatively sharp initial decay that then flattens out.

Put plainly, the value of estimating this stickiness factor lies in helping to prevent algorithms and traders from tripping over their own footprints - in effect averting cumulative market impact. It is therefore an important element in trade scheduling (see below).
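
As a sketch of how a scheduler might use this, the function below sums the residual footprint of earlier child orders under the square-root decay described above. The decay form and the per-fill impact figures are illustrative assumptions.

    import math

    def residual_footprint_bps(fills, now):
        """Residual impact of earlier child orders, decaying with the
        square root of elapsed periods: a sharp initial decay that
        then flattens out."""
        total = 0.0
        for t, initial_impact_bps in fills:
            elapsed = max(now - t, 0)
            total += initial_impact_bps / math.sqrt(1.0 + elapsed)
        return total

    # Three child orders, each assumed to move the price ~5 bps when executed
    fills = [(0, 5.0), (1, 5.0), (2, 5.0)]
    print(residual_footprint_bps(fills, now=3))   # footprint still in the price before the next slice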


Adaptive scheduling

When the elements outlined above (and others) are combined with data on trade urgency, it becomes possible to construct a trade schedule that is self-adaptive. This can be envisaged as a spreadsheet where each row represents an order, while the columns represent time blocks, which may vary in size. For example, each time block might represent the average time taken for 5% of the ADV to trade. At the start of the order execution process the optimal trade sizes per time block are calculated. At the end of the first time period they are then recalculated based upon the over/under execution achieved and the desired weighting given to the various execution variables, which can of course also be adjusted in real time. This process is then repeated at the end of each time block until the order is completed.
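
In skeleton form, with a deliberately naive even-split allocation standing in for the full weighting of urgency, impact and risk, the recalculation loop might look like this:

    import random

    order_qty, n_blocks = 100_000, 10   # e.g. each block ~ time for 5% of ADV to trade
    filled = 0.0
    for block in range(n_blocks):
        remaining_blocks = n_blocks - block
        # Naive allocation: spread the shortfall evenly over the blocks left;
        # the real weighting of the execution variables would go here.
        target = (order_qty - filled) / remaining_blocks
        actual = target * random.uniform(0.8, 1.1)   # simulated over/under execution
        filled += actual
        # The plan is then recalculated at the end of each block from the new shortfall.
    print(f"executed {filled:,.0f} of {order_qty:,}")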


Conclusion

Taking the factors outlined here into account should allow participants to improve the quality of their trading cost estimates. Using these estimates as inputs to a dynamic execution schedule will ultimately result in reduced frictional costs and a consequent improvement in net return.
