
AT Round Table: Pre-trade analytics - expectations and reality

Published in Automated Trader Magazine Issue 02 July 2006

Pre-trade analytics have become an integral part of the workflow for algorithmic traders. We asked four major sellside banks for their views on some of the current and emerging themes in analytics. With:

- Chris Biscoe, Head of US Ecommerce, Barclays Capital
- Mike Duff, executive director, UBS
- Andrew Freyre-Sanders, Head of Algorithmic Trading for EMEA and Asia, JP Morgan
- Timothy Reilly, Co-Head of Alternative Execution at Citibank, Citigroup Global Equities

Apart from accuracy, what do you see as the main competitive differentiating factors for pre-trade analytics?

Biscoe: Currently, the high-end expectation from pre-trade analytics is that they will not only provide liquidity, volatility, and market-impact analysis, but also benchmarking and a recommendation of the most appropriate algorithm. The conventional attitude is that pre-trade analytics will give you some short-term insight into historical patterns and/or short-term mean reversion of volatility or shifts in order book liquidity. Much of this is limited and, to a large extent, the outcomes are positively selected (that is, the more you put into the analysis, the more likely you are to steer it towards a particular benchmark), but nevertheless the expectation remains that pre-trade analysis will provide this service.

I think the real value of pre-trade analysis should be in the ability to lay off risk more effectively and in particular, to discover or suggest contingent or alternative liquidity. Going forward, pre-trade analysis needs to look beyond the immediate implementation goals towards assisting investment objectives through smarter hedging analysis and offsetting risk. It becomes particularly interesting when you consider pre-trade analysis evolving into suggesting possible proxies for trades. As a result, something that can recommend an alternative solution that provides similar economic exposure might be of more value. For example, instead of trying to predict the liquidity of a particular commodity stock this afternoon, it might recommend the purchase of a proxy basket of commodity futures.

Duff: I think showing breadth and depth of information in an easily assimilable format is one important factor, as it assists the trader in making rapid and well-informed decisions. What you might term good "information ergonomics".

In equity markets we have a wealth (one might almost say a surfeit) of rich market data that describes the market and individual instruments. Extracting the key facts from this and presenting them in an easily comprehensible format for the viewer is therefore a competitive differentiator.

A key part of achieving this lies in taking into account the business flow from the point when an order from a portfolio manager arrives at the dealing desk, to the point when it actually starts being worked in the market. The way in which pre-trade information is presented to the trader should streamline that process as much as possible.

Freyre-Sanders: No matter how good the underlying algorithmic models are, if the pre-trade analytics aren't easy to use, they will be of limited value. For example, if it takes ten minutes to log into the system, select the necessary options and obtain the desired analysis, then the overall trading process will be impaired. It is absolutely critical that pre-trade analytics are a fast, user-friendly source of relevant information for decision support.

Ideally the analytics also need to be well integrated into the client's workspace - perhaps in a pop-up window. It is a well-established fact that clients do not want to have to juggle multiple applications on their desktop.

Reilly: If you assume the top pre-trade models are already accurate, then the main differentiators are ease of use, integration, and flexibility of the platform. An important aspect of platform flexibility is the ability to examine multiple execution strategies in advance of trade execution.

However, given the number of trading strategies and algorithms used in the market (some 500 in the US alone) this is no small challenge.

An important accuracy component is global data quality. In order to have a truly global solution, clients need to be working with vendors or broker dealers that have the ability to capture a robust global database. Managing data on this scale is extremely demanding - it is definitely a key competitive differentiator.

Do you feel that buyside expectations as regards pre-trade analytics have risen significantly over the past year?

Biscoe: Yes and no. Expectations have risen in that pre-trade analytics are now automatically part of the buyside checklist. Having said that, I'm not sure that the buyside is using sellside analytics to the extent that you would infer from the media. It seems to me that they view pre-trade as something perhaps better deployed as an in-house tool.

If the buyside doesn't have the confidence to approach a broker and say "We want to buy 25 million shares of IBM, can you help us do it?", then they might not trust any sellside analytic tools. To some extent, pre-trade analytics have become primarily a sellside marketing tool - in terms of utility for the buyside community, non-broker analytics will probably continue to increase in importance.

Duff: We have seen increasing interest in pre-trade analytics emerging over a variety of time frames. For example, in portfolio trading we've been seeing interest for a number of years.

Though there has been strong uptake of pre-trade analytics, I think there is a lot more growth yet to come. The buy side is becoming increasingly sophisticated and asset managers are looking to dealing desks to trade intelligently in order to boost fund performance. Then of course there are the changes in the regulatory environment, which are obviously raising the execution accountability bar for dealers.

In an environment where factors such as these are pushing performance expectations, information is the key to making the correct trading decision. Good pre-trade analytics empower the trader to make the correct choices and justify the decisions made.

Freyre-Sanders: Yes I think they have, which is entirely understandable in view of the way that the range and complexity of the possible trading options has also grown.

One of the earliest uses of formal pre-trade analytics was in basket trading. The objective was for the analytics to sort the basket of stocks and highlight the "problem" stocks that were harder to execute and would therefore require more individual attention.

Now you have a situation where the trader is confronted with a broad selection of possible execution methods, which may also have user-configurable parameters. Inevitably that means that clients expect pre-trade analytics to do a lot more than just identify problem stocks. They want the analytics to give them a streamlined path to optimal algorithm selection.
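As a rough illustration of the basket-screening use Freyre-Sanders describes above, the sketch below ranks a hypothetical basket by each order's size relative to average daily volume and by quoted spread, and flags the hardest names. The thresholds, field names and data are invented for the example and are not drawn from any particular broker's model.

```python
# Illustrative sketch only: flag "problem" stocks in a basket by how large the
# order is relative to average daily volume (ADV) and how wide the spread is.
# Thresholds and sample data are hypothetical.

def flag_problem_stocks(basket, max_adv_pct=0.15, max_spread_bps=25.0):
    """Return the basket sorted hardest-first, with a 'problem' flag per order."""
    scored = []
    for order in basket:
        adv_pct = order["shares"] / order["adv"]          # fraction of ADV
        problem = adv_pct > max_adv_pct or order["spread_bps"] > max_spread_bps
        scored.append({**order, "adv_pct": adv_pct, "problem": problem})
    # Hardest names first: biggest ADV participation, then widest spread
    return sorted(scored, key=lambda o: (o["adv_pct"], o["spread_bps"]), reverse=True)

basket = [
    {"symbol": "AAA", "shares": 500_000, "adv": 2_000_000, "spread_bps": 8.0},
    {"symbol": "BBB", "shares": 40_000,  "adv": 6_000_000, "spread_bps": 5.0},
    {"symbol": "CCC", "shares": 90_000,  "adv": 300_000,   "spread_bps": 40.0},
]

for o in flag_problem_stocks(basket):
    print(f'{o["symbol"]}: {o["adv_pct"]:.1%} of ADV, '
          f'{o["spread_bps"]:.0f} bps spread, problem={o["problem"]}')
```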

Reilly: I don't think buyside expectations have necessarily increased. I think clients have always expected pre-trade tools to be accurate, thorough, and actionable. However, I do believe their interest in these tools has increased substantially over the past two years. I attribute this change to three main factors:

The global increase in electronic trading has introduced more technology into the decision making and execution process.

Regulation - over the last two years there has been far more scrutiny of how and where clients trade. In a best execution environment, this regulatory factor has undoubtedly spurred the interest of the buyside.

The products and the tools available have improved in terms of functionality and ease of use. Today there are fewer barriers to clients adopting a pre-trade toolset.

What do you feel is the most effective way of benchmarking the predictive performance of pre-trade analytics against actual trade outcomes?

Biscoe: I think you have to be able to distil major factors, such as volatility and liquidity changes, into normalised parameters and benchmark against those. That information can then be used for scenario analysis, which should also be of value in anticipating the impact of those factors on a trader's list of trades and flagging any strategy adjustments required. Ultimately these could be coupled with algorithms so they can respond to these factors automatically in real time.
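To make that idea concrete, a minimal sketch follows that expresses each execution's realised shortfall in units of the prevailing volatility, scaled by a square-root participation term (a common, but by no means universal, impact assumption). The numbers and the functional form are purely illustrative, not Barclays' methodology.

```python
# Minimal sketch (hypothetical data and functional form): normalise realised
# shortfall by prevailing volatility and participation so executions done in
# very different market conditions can be compared on a single scale.
import math

def normalised_cost(shortfall_bps, daily_vol_bps, participation):
    """Shortfall in units of daily volatility, per sqrt of participation."""
    return shortfall_bps / (daily_vol_bps * math.sqrt(participation))

trades = [
    {"id": "T1", "shortfall_bps": 12.0, "daily_vol_bps": 150.0, "participation": 0.10},
    {"id": "T2", "shortfall_bps": 30.0, "daily_vol_bps": 400.0, "participation": 0.25},
]

for t in trades:
    z = normalised_cost(t["shortfall_bps"], t["daily_vol_bps"], t["participation"])
    print(f'{t["id"]}: normalised cost {z:.2f}')
```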

Duff: On any particular trade or subset of trades the actual traded level can deviate widely from predicted performance. Any news events or imbalances in the market may not be reflected in the prediction model and can have a significant and unpredictable impact on the actual trade outcome. Therefore, in order to fit prediction models to real trades, one needs a large population of trades and descriptive market data, which assist in eliminating general market noise.

In view of the amount of noise in the market, it is very hard to benchmark individual trade executions against their predicted execution. There is therefore a balance that has to be struck. On the one hand, one has to be sure that one is accurate in average terms and that any outliers are explicable and acceptable. On the other, one has to be sure one is not systematically generating these outliers because a gap in the model means that one subset of trades is always falling through the net.

Normalising the data to allow for the market conditions (e.g. volatility and liquidity) prevailing at the time of a historic trade helps to provide a more usable dataset for analysis and tuning of models. However, it is important to be aware of the risks of over-fitting the data to the desired predicted outcome, as this can result in a model that cannot generalise properly and that therefore performs poorly out of sample or in real time.

In order to avoid this, we adopt a hybrid approach by normalising the data to some extent so that meaningful comparisons can be made, but still excluding a large number of non-representative outliers from the tuning process. That excluded data set can then be used to model unusual trades or market conditions. However, care is required during this operation in order to ensure that genuine outliers are not removed, thus causing an over-fit.
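A toy version of that workflow might look like the sketch below: exclude gross outliers from the tuning set, fit a simple one-parameter impact model on half of the remaining trades, and check the error on the held-out half. The data, thresholds and the square-root cost form are assumptions made for the illustration, not UBS's actual model.

```python
# Illustrative only: fit a simple impact model on "typical" trades, keep gross
# outliers aside for separate study, and measure fit quality out of sample.
import math
import random

random.seed(1)
trades = [{"participation": random.uniform(0.01, 0.4)} for _ in range(200)]
for t in trades:
    t["cost_bps"] = 25.0 * math.sqrt(t["participation"]) + random.gauss(0.0, 3.0)
for t in trades[:5]:
    t["cost_bps"] += 80.0            # a few extreme prints (news days, imbalances)

# Exclude gross outliers from the tuning set (they can be modelled separately)
typical = [t for t in trades if abs(t["cost_bps"]) < 40.0]
train, test = typical[: len(typical) // 2], typical[len(typical) // 2:]

# One-parameter least-squares fit of k in cost = k * sqrt(participation)
k = (sum(t["cost_bps"] * math.sqrt(t["participation"]) for t in train)
     / sum(t["participation"] for t in train))

rmse = math.sqrt(sum((t["cost_bps"] - k * math.sqrt(t["participation"])) ** 2
                     for t in test) / len(test))
print(f"fitted k = {k:.1f}, out-of-sample RMSE = {rmse:.1f} bps")
```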

Freyre-Sanders: The simplest (and also most simplistic) way is by comparing the pre-trade prediction against the executed result. However, pre-trade analytics typically include standard deviation error bands, so pretty much any prediction can appear satisfactory, because the trade result is likely to fall within those bands.
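As a tiny illustration of that caveat, the sketch below simply tests whether each realised cost falls inside the predicted two-standard-deviation band; with bands this wide, almost everything "passes". The figures are invented.

```python
# Tiny sketch with invented numbers: does the realised cost fall inside the
# pre-trade error band? Wide bands mean almost every outcome looks acceptable.
predictions = [
    # (predicted cost bps, predicted std dev bps, realised cost bps)
    (15.0, 20.0, 32.0),
    (8.0, 12.0, -5.0),
    (25.0, 30.0, 70.0),
]
for mu, sigma, realised in predictions:
    z = (realised - mu) / sigma
    verdict = "inside" if abs(z) <= 2 else "outside"
    print(f"z = {z:+.2f} -> {verdict} the 2-sigma band")
```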

I think really effective benchmarking has to be able to allow for the very organic nature of the trading process. The original objective might have been to participate at a third of the volume with a limit price set at "x". However, an unexpected slice of volume might appear in the market and the execution strategy would change in order to take advantage of that. Then trading might later have to be temporarily suspended due to the release of an economic report.

All these strategy changes in response to circumstances mean that a direct comparison between the anticipated and actual trade will be fairly meaningless, unless a means of capturing and time stamping all the various changes is available. If it is, and the various "sub-strategies" can be replayed from a database, then a more realistic benchmarking process becomes possible.

Reilly: Benchmarking is an interesting topic, because it is unique to the investment manager. The naive assumption would be to examine only individual trades. However, any single trade may or may not be representative, because markets are dynamic. Therefore, looking at transactions over time and examining trends that stand out with individual broker dealers, execution venues, or regions becomes a valuable process. The time window over which this scrutiny is conducted depends upon the profile of the individual client - i.e. are they high or low turnover, and what is their trading style?

Examining transaction data in this framework provides benchmark information, which can be used for performance attribution. It can also be used to identify trends in an investment manager's trading, such as return analysis, universe comparison, or execution channel utilization. This process can be a tool for improving their overall trading process, which obviously adds value.


Chris Biscoe, Head of US Ecommerce, Barclays Capital

"…the real value of pre-trade analysis should be in the ability to lay off risk more effectively and in particular, to discover or suggest contingent or alternative liquidity"


As the usage and sophistication of algorithms continues to increase, does the effect of this on market behaviour actually make it harder to improve the effectiveness of pre-trade analytics?

Biscoe: Yes, I think it makes it a more complex task. However, I don't believe it necessarily reduces the effectiveness of the analytics, just as long as they are treated as essentially a support tool, rather than a fundamental decision maker.

Duff: There are certain identifiable traits and there have been instances where we have seen people using the same algorithms in the same way and they have somewhat dominated the market. That can raise a few concerns. However, in general I think there is enough variety in the range of market participants for this not to become an issue.

One occasionally observes interesting eccentricities. For example, on a US bank holiday earlier this year European volumes still ticked up at 2:30pm, even though the US market was closed. That suggests there are lots of models out there trading a static VWAP curve.
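That behaviour is what a static, historically averaged volume curve would produce: if the schedule is built from average intraday volume, including normal days when US trading lifts European activity mid-afternoon, an algorithm slaved to that curve will still accelerate at 2:30pm on a US holiday. A minimal sketch of such a static schedule follows; the bucket weights are made up.

```python
# Minimal sketch of a static VWAP schedule built from a historical intraday
# volume curve (weights are invented and sum to 1.0). An algorithm following
# this curve blindly still accelerates at 14:30 even when the US is closed.
volume_curve = {
    "08:00": 0.12, "09:00": 0.09, "10:00": 0.08, "11:00": 0.07,
    "12:00": 0.06, "13:00": 0.07, "14:00": 0.10, "14:30": 0.14,  # US-open bump
    "15:00": 0.12, "16:00": 0.15,
}
order_size = 100_000  # shares to work over the day

schedule = {bucket: round(order_size * weight) for bucket, weight in volume_curve.items()}
for bucket, shares in schedule.items():
    print(f"{bucket}  target {shares:>6} shares")
```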

However, none of this represents a major obstacle to improving the effectiveness of pre-trade analytics. By definition, markets are always changing and evolving so refining models and analytics to cope with that process is simply part of the territory.

Freyre-Sanders: I think there is little question that algorithmic trading activity influences the way in which markets behave - you only have to look at average trade sizes to appreciate that.

However, the ultimate determinant is how the algorithms are deployed. If used intelligently and appropriately then they can potentially reduce volatility - if used clumsily, the opposite applies.

This doesn't necessarily make it harder to produce effective pre-trade analytics. The analytics may identify that a particular stock is very illiquid, but if you then trade it aggressively and move prices, you could argue that it was obvious that this would happen and factor that into the analytics.

Ultimately, if algorithms are already well designed to cope with changes in market behaviour, then the pre-trade analytics should still be able to generate a reasonably accurate prediction of how those algorithms will perform in real time.

Reilly: Algorithms clearly do influence market behaviour. At a superficial level, you only have to look at the decline in average equity trade size in the US.

In my opinion algorithms actually make pre-trade analytics an easier exercise. These strategies add structure and discipline to the trading process. If a client understands their execution strategy ahead of time, they are in a better position to predict its performance. By contrast, a human trader introduces a different level of variability to the process.

As usual, this opinion rests on the requirement for a high-quality, robust data platform. Any pre-trade model or algorithm is only as effective as the underlying data it uses. This requirement is an under-appreciated hurdle for vendors and broker dealers new to these types of products.


"Though there has been strong uptake of pre-trade analytics, I think there is a lot more growth yet to come"

Mike Duff, executive director, UBS


Historically there has been a perception from some on the buyside that broker-provided pre-trade analytics will always favour the broker's own algorithms. Do you think this perception is changing?

Biscoe: This suspicion is probably inevitable, and with some good reason, given that this is a commission/liquidity driven business. If the broker's algorithm, which is presumed to be the best, cannot attract the customer's commissions or liquidity, then it will have failed to serve its purpose.

Duff: I'm not sure whether this is changing or not. Inevitably, we can only model our own algorithms, as we cannot know the details of how other brokers' algorithms function. Where we don't know the facts we have to make assumptions and they tend to be conservative ones. It may therefore be that we see more competitive numbers from our own algorithms.

However, I would caution against using just predicted impact as a means of broker selection - other factors also need to be considered. The models concerned will almost certainly make very different assumptions and the output the user receives from the analytics may not be directly comparable. For example, they may not necessarily be predicting the same kind of trading environment for the trade, which is something you cannot control when you come to the market.

Clients may presume that brokers either lowball pre-trade impact estimates for their algorithms to attract business or highball them to look good post trade relative to expectation. In reality, we have no reason to do anything other than attempt to be as accurate as possible.

Freyre-Sanders: Many of the pre-trade analytics now distributed by brokers were previously used internally by those same brokers, so the forecasts they generate are inevitably more accurate for, and likely to reflect favourably on, the algorithms concerned.

I don't believe that is due to a deliberate bias, but more the way in which the analytics and algorithms have been tightly integrated since their inception.

Another important factor in the algorithm performance forecast by pre-trade analytics is the source of the underlying data. Changing the source of the liquidity/volatility data fed to the pre-trade analytics can have a huge impact on the output. Broker analytics designed and tested upon a particular broker-provided data set will understandably predict stronger performance for the broker's algorithms.

Reilly: I do think that this is changing but there is a structural issue that should not be ignored. Most brokers' pre-trade analytic models are the same as those used in developing and designing their algorithms. If the intellectual capital is a shared resource between the pre-trade analytics' modelling team and the algorithmic development group, then a client should expect consistency between the pre-trade prediction and the performance of the trade.

However, a competitive pre-trade system should allow a client to examine execution strategies well beyond algorithms. The tools traders need will address not only which algorithm to use, but every execution channel and venue available to them.

A broad set of scenarios needs to be considered: should the order be routed via a sales trader? Should the trade be done over an extended timeframe? Which venue is the most liquid? Should I even use an algorithm?

Do you find that the limitations of some OMSs (e.g. no support for real-time tick data) hamper the effective deployment of pre-trade analytics and algorithms?

Biscoe: Without a doubt there are limitations within some order management systems, but that should not be perceived as an indictment of all OMSs. To a large extent, we are in the midst of a technology arms race, and the ability to roll out differentiating functionality puts a heavy burden on third-party vendors.

Duff: I would say that the availability of real-time tick data isn't an issue for us as we would want to put our own data into our own models anyway. (We know where it originates and we can pre-filter it.)

A bigger question is the integration of models with vendors for analysis and even control of specific algorithms. There isn't a standard API with which to make the connection, so the main challenge we face is integrating with a great many OMS vendors in a variety of ways. Therefore it is the lack of a standard with which to interact that is the problem, more than anything relating directly to the individual OMS vendors.

Freyre-Sanders: That ultimately depends upon the degree of integration required. For example, if the OMS is Web friendly then it is not particularly difficult to embed the analytics as a browser page. Depending upon the interoperability of their OMS, a much bigger challenge can arise if the client demands tight integration with their desktop application.

I think a significant part of the integration problem is that OMSs were originally designed with compliance and order tracking (rather than order execution) in mind. (One might logically have expected EMSs to be a natural evolution of OMSs for the dealer, but they have in fact become segregated.) The net result is that trying to embed pre-trade analytics in some OMSs will always be problematic, as they were never intended to handle the level of messaging involved.

Reilly: In general, I would say the implementation of algorithms poses much less of a challenge than implementing pre-trade analytics. However, there can be issues relating to throughput - some OMSs struggle to accommodate the level of fills from algorithms and become "locked up" by the volume of execution data.

The integration cost to the OMS vendors when initially implementing our pre-trade analytics is significant, so we must demonstrate that Citigroup analytics will add overall value to their platform. In my experience, client input is essential. In a number of cases, our clients have made it very clear to their OMS vendors that they need to include our BECS pre- and post-trade tools. This has had a huge impact on OMS prioritization.

In practice, I would say the biggest issue relating to the OMS implementation of pre-trade analytics is screen real estate. There is a finite amount of space available on the OMS desktop and obviously no shortage of vendors keen to place their pre-trade analytics there.


Andrew Freyre-Sanders, Head of Algorithmic Trading for EMEA and Asia, JP Morgan

"I think really effective benchmarking has to be able to allow for the very organic nature of the trading process"


How is the demand for customising/optimising algorithms affecting pre-trade analytics? For example, do you think it is becoming essential to be able to optimise algorithm parameters as part of pre-trade analysis?

Biscoe: I think that is a lot to ask of a system. Custom algorithms are gaining popularity as a way of more tightly replicating the trader's thought process. To then successfully extend that to the pre-trade space with any consistency seems like a major stretch. If you are able to do that, you are approaching a real black-box environment, rather than the commoditised world of pre-trade tools.

Duff: As the sophistication of algorithms continues to increase, they are becoming more self-adaptive. There is therefore less need to optimise model parameters. The quality of model execution with just the default settings will be very good anyway, so the question becomes more one of appropriate algorithm selection rather than algorithm optimisation.

Freyre-Sanders: I think this ties back to my point earlier about how pre-trade analytics need to be quick, user-friendly and relevant. Introducing too much flexibility in terms of tweaking parameters may ultimately defeat the original objective of providing effective decision support and instead result in analytical paralysis.

Ultimately the analytics and algorithms should interact in such a way that the analytics provide the optimal setting for the algorithm automatically. For example, rather than the user having to input the thickness of the order book and adjusting the analytic's settings to suit this, the analytics sense the current state of the order book and recommend the most suitable algorithm and configuration.
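A toy version of that kind of automatic configuration is sketched below: it looks at quoted spread and displayed depth relative to the order size and suggests an execution style. The rules, thresholds and strategy labels are invented for the example, not JP Morgan's logic.

```python
# Toy illustration only: sense the current state of the order book and suggest
# an execution style and participation setting. Thresholds and labels invented.
def recommend(order_shares, displayed_depth, spread_bps):
    depth_ratio = order_shares / displayed_depth   # order size vs visible liquidity
    if spread_bps < 5 and depth_ratio < 1:
        return "aggressive sweep", {"participation": None}
    if depth_ratio < 10:
        return "volume participation", {"participation": 0.15}
    return "passive VWAP-style schedule", {"participation": 0.05}

algo, params = recommend(order_shares=250_000, displayed_depth=20_000, spread_bps=12.0)
print(algo, params)
```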

However, I can see a case for such a degree of flexibility offered in a separate tool that would allow users to gain a better insight into the way the various algorithms work and their performance sensitivity to various market factors.

Reilly: Pre-trade tools must allow their users to examine trades across different time horizons and strategies. The algorithmic packages that are available need to allow the flexibility to add or exclude different parameters so the trader is empowered to implement their desired trade instructions.

I think better integration between the pre-trade analysis and the actual algorithmic trading package is the direction the business is evolving. Nevertheless, I'm a strong advocate of the buy side trader owning the final decision about which execution strategy to use and when to pull the trigger.

Optimisation within pre-trade analytics is available today. However, the client is often limited to the execution choices offered by the broker who designed the pre-trade model. There are a number of products claiming to provide multi-broker algorithmic analysis, but this is quite challenging: for the process to be thorough, all brokers would have to disclose the logic behind their algorithms to the analytics provider, and for confidentiality reasons that isn't likely to happen.

In the past, there has been reluctance on the part of some buyside firms to use broker-provided pre-trade analytics for fear of disclosing their trading intentions. Do you think this attitude is changing?

Biscoe: To some extent the attitude is relaxing, but not entirely. The issue is not only about exposing trades, but also about revealing the size of the order, which larger firms think works to their detriment.

Duff: I think that is still something of a mixed bag. Some clients remain very wary of disclosing their intentions. However, if they are seriously concerned about this, it is easy enough to deter any improper behaviour by simulating performance for size they have no intention of trading, thereby broadcasting noise rather than information. However, such concerns are symptomatic of a bigger problem. If a buyside organisation believes that its relationship with its executing counterparty is adversarial, perhaps it should be reconsidering the entire relationship? Is it placing its orders in the right way with the right people? Ultimately, agency execution shouldn't have any conflicts of interest.

Freyre-Sanders: I think you can split this into two concerns - one associated with generic network/Internet security and one associated with deliberate misuse of confidential information by a broker.

The purely technological concern has by now largely dissipated - most users are comfortable with the idea of secure network access.

On the second point, there is much less actual cause for concern than some people believe. Granted, in the early days of algorithmic trading I can recall one pre-trade analytical tool which simply consisted of clients emailing their intended trades to a broker's desk and receiving a suggested execution list in return! However, this sort of situation simply no longer exists, as most brokers now operate processes that make sensitive client trade data physically inaccessible to the broker's employees.

An increasing number of clients appreciate this and are content to use broker-provided analytics. A few remain opposed, while some others are comfortable on the middle ground, whereby the analytics are run as a local application rather than being Web based.

However, in general, I would say that this is much less of a client concern than it used to be.

Reilly: I think this is changing for a number of reasons:

The broker dealer community has the resources to invest in the data and infrastructure to develop a more sophisticated and robust platform than most clients can build independently.

The regulatory environment maintains a zero tolerance policy related to a broker dealer viewing or interacting with any type of client data in relation to their own proprietary trading operations. For example, at Citigroup we have gone to the length of having an independent third party handle and manage all client pre and post trade data. This structure provides the necessary arm's length reassurance for clients.


"…a pre trade model should access a robust global tick database, but that needs to be coupled
with actual transaction data."

Timothy Reilly, Co-Head of Alternative Execution at Citibank, Citigroup Global Equities


Do you believe that actual transaction data might be a more effective input to pre-trade analytics than historical prices from exchanges?

Biscoe: In the end, I think both are key to successful modelling. The more analysis you can factor into your models, the more likely you are to be able to divine, and then prepare for, anomalies.

Duff: Absolutely - exchange data is very useful and has its place, but it is not rich enough, particularly in terms of context. By contrast, actual captured transaction data describe some of the intentions and motivations and even the constraints that exist around each individual trade. By selectively choosing the most appropriate data to fit models, one can derive far more accurate constants that can in turn be applied in modelling for more accurate output.

Freyre-Sanders: I think actual live traded tick data is of more relevance than historical prices. That applies both in terms of immediate pre-trade analytics, but also as regards larger scale trade simulations.

Particularly with very large trades, being able to simulate probable trade outcomes over a large range of market conditions based upon actual traded data is extremely valuable. It can obviously never be a perfect substitute for real time reality, but it is particularly useful when evaluating and comparing algorithms.

Reilly: I think actual transaction data is an essential ingredient. The best-case scenario is a combination of both actual trade history and tick data from exchanges. On the one hand, a pre-trade model should access a robust global tick database, but that needs to be coupled with actual transaction data. This model allows the client to leverage the best of both worlds.

Some analysis will be better represented by pure tick data, but other scenarios may be normalised by using actual transaction data.

Do you think pre-trade analysis might benefit from including artificial intelligence techniques in the algorithm recommendation/selection process?

Biscoe: As I mentioned earlier, I believe that suggesting alternative and contingent liquidity is a potential frontier here. This will require a substantial change of mindset on the part of the portfolio management community, but will potentially offer tangible long-term benefits. However, as a way of more efficiently divining the expected liquidity on a single line of stock, I doubt pre-trade analytics will ever be truly effective in the long run. In a sense, the market is designed to prevent any one person from influencing the direction of events and, to a large extent, from predicting it.

One of the most interesting areas for pre-trade analytics lies in broader based scenario analysis that can suggest appropriate reaction to a disparate range of possible events. For example, you might be able to model how best to react to a spike in volatility for a commodity stock while factoring in shifts in the copper price or news stories showing that China's demand for commodities is slowing. The possibilities seem endlessly intriguing…

Duff: Yes. There are a lot of applications of artificial intelligence in the execution space. The first place it tends to arrive is in the execution strategies themselves. They are increasingly intelligent and sophisticated and are definitely a driving factor in differentiating one set of algorithms from another. As the discipline increases in sophistication we will be deploying more and more artificial intelligence in the strategy selection and algorithm parameterisation domain.

However, I do think there will always be a role for humans. Machines may be better at crunching numbers, but there is still a role for human traders to interpret machine output and overlay their own judgment regarding current market circumstances and trade objectives.

There is also the sanity check factor. Even allowing for the emergence of news filter technology, when machines make decisions they may still not reflect everything there is to know about the market and/or the trade. In certain situations on certain days there will always be something that is dominant to which a machine may not attribute the appropriate weight. I think that the human element will also become even more important as regulatory pressures regarding best execution increase.

Freyre-Sanders: There is a lot of technology available and currently coming into the workspace. I am a big believer in technical analysis and behavioural finance, as the market is very biological and influenced by crowd psychology and other such behavioural theories. I am sure therefore, that AI will have a role to play and could make a valuable contribution.

Reilly: I think we are already there. Citigroup currently offers algorithms that learn from market data and single stock behaviour. The next chapter is delivering list based strategies that not only dynamically interact with the market, but also learn from the behaviour of other stocks in the portfolio.