
Dark Pool Delivers

Published in Automated Trader Magazine Issue 10 Q3 2008

Quantitative Services Group's (QSG) June research note on NYFIX Millennium trade execution has caused quite a stir in the industry. The research, which examined three months of detailed NYFIX execution data, will leave buy-side traders, who have long sought empirical evidence on the performance of dark pools, praying that it is just the tip of the analytical iceberg. Andy Webb talks to Tim Sargent, President and Co-founder of QSG, about the report and its findings.

Tim Sargent

How did the research come about?

Our institutional clients have obviously long been interested in dark pools, and particularly in some objective measure of their execution potential. NYFIX had initially commissioned QSG to help analyse its Millennium execution data and identify any information that would be valuable from a client perspective. Through this research we saw results that were pretty compelling, so we asked for authorisation to use the data for our own research. From our and our clients' perspective it was a tremendous opportunity to gain a comprehensive view of a dark pool. While we have access to the execution performance of more than a hundred institutional clients across multiple execution venues (including the major exchanges and largest alternative trading systems), the opportunity to gain a more holistic picture across an entire venue was extremely attractive.

What was the basic thrust of the research?

Using a variety of metrics, we were essentially comparing the cost of executing trades on NYFIX Millennium with the standardised transaction costs of our institutional client base operating across multiple trading venues (excluding Millennium).

What were the overall findings?

In general, across a range of capitalisation sizes and industries, NYFIX Millennium delivered reduced average execution costs, and particularly market impact costs. Even with names that were difficult to trade, we did not observe the cumulative impact effects of child orders that are typically associated with signalling risk in displayed markets. Obviously the value proposition of midpoint matching was responsible for some of the saving, but the lack of cumulative impact when compared with the aggregated execution data we collect from clients was striking.

Given the number and variety of transactions (the Millennium sample contained more than 5.3m fills across 5,796 names), there was no question that this was a statistically valid exercise. However, it should be remembered that the sample covered only a three-month period. I think it would therefore be unwise to draw any firm conclusions until the analysis can be extended over a longer time frame and to other trading venues, which will hopefully be possible. In the meantime, a certain degree of scepticism is probably advisable.


Presumably there was a risk that the data could have been skewed if only trades of a particular difficulty were put through NYFIX, so how did you deal with selection bias?

That certainly is a possibility and we are in fact conducting further research on that point. However, I have to say that on first inspection it does not appear to have been a problem. NYFIX Millennium has always been positioned as an open, neutral pool and this level of accessibility was certainly apparent. One of the things that impressed us when we analysed the NYFIX data was the overall breadth and consistency of coverage that we saw on a day-to-day basis. We were pleasantly surprised by the depth, the range of capitalisation and the breadth of industry coverage, which reduced the risk of executions being 'cherry picked'. There certainly didn't appear to be any obvious over-representation of a particular type of execution.

An additional safeguard is that we only analysed trades that fell into one of two 'motivation categories'. Traditional benchmark analysis has its own embedded faults as far as day-to-day comparisons go, in that it does not take into account the motivation of the trade and how that affects efficient trade execution. Consider the concept of price momentum, which is a popular trading strategy across a broad range of managers. By its very nature, a momentum strategy will flag up trades in stocks that are already starting to exhibit trending behaviour, so it is a given that such stocks will be trading away from your desired execution. The corollary is that because this is by definition not a contrarian strategy, most participants will be on one side of the trade; an imbalance that causes disruption when sourcing liquidity. We have defined five generic strategy classifications - price momentum (referred to above), earnings momentum, historical growth, deep value and relative value. Stocks that display price momentum and earnings momentum characteristics are typically the most difficult to trade, and in preparing the research note we focused on stocks in just these two categories. We therefore felt comfortable that the Millennium sample did not contain an excess of 'easy to trade' stocks that would have skewed the results to flatter performance.
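In practice, restricting the sample to those two motivation categories is a simple filtering step. A minimal sketch in Python, assuming a hypothetical trade table in which each name has already been assigned one of the five classifications under a 'strategy' label (the column name and category labels are illustrative, not QSG's schema):

```python
import pandas as pd

# The five generic strategy classifications described above.
ALL_CATEGORIES = {
    "price_momentum", "earnings_momentum",
    "historical_growth", "deep_value", "relative_value",
}

# The two categories described as the most difficult to trade.
HARD_TO_TRADE = {"price_momentum", "earnings_momentum"}

def restrict_to_hard_categories(trades: pd.DataFrame) -> pd.DataFrame:
    """Keep only executions whose parent order is flagged as price or
    earnings momentum, so 'easy to trade' names cannot flatter the results."""
    assert set(trades["strategy"]).issubset(ALL_CATEGORIES)
    return trades[trades["strategy"].isin(HARD_TO_TRADE)]
```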


How do you calculate the cumulative impact of trading stocks that fall into these two categories?

We use two separate metrics: Liquidity Charge and Timing Consequence. The former calculates the difference between the last price and the actual trade price for each child order in a given execution, and then sums these differences to produce a cumulative indication of their effect. Once this Liquidity Charge is separated from the overall slippage between the price of the stock at the time of the first execution and the final execution price (namely the average price of all the child order executions), the residual price change is the Timing Consequence. From an individual trader's perspective, the Liquidity Charge represents his/her own market impact, while the Timing Consequence is the net effect of all other participants' market impact and represents the absolute value of intra-execution price drift.
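As a minimal sketch of that decomposition (in Python, with the sign convention, per-share weighting and field names assumed for illustration rather than taken from QSG's published methodology), the Liquidity Charge can be accumulated child order by child order and the Timing Consequence recovered as the residual:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChildFill:
    last_price: float  # prevailing last price just before this child order executed
    fill_price: float  # price actually achieved by this child order
    shares: int

def decompose_slippage(fills: List[ChildFill], first_exec_price: float, side: str) -> dict:
    """Split total per-share slippage into Liquidity Charge and Timing Consequence."""
    sign = 1 if side == "buy" else -1
    total_shares = sum(f.shares for f in fills)

    # Liquidity Charge: the trader's own impact, i.e. the share-weighted sum of
    # (fill price - last price before the fill) across all child orders.
    liquidity_charge = sign * sum(
        (f.fill_price - f.last_price) * f.shares for f in fills
    ) / total_shares

    # Overall slippage: average execution price of all child orders versus the
    # price of the stock at the time of the first execution.
    avg_exec_price = sum(f.fill_price * f.shares for f in fills) / total_shares
    total_slippage = sign * (avg_exec_price - first_exec_price)

    # Timing Consequence: the residual, i.e. the net effect of everyone else's impact.
    timing_consequence = total_slippage - liquidity_charge

    return {
        "liquidity_charge": liquidity_charge,
        "timing_consequence": timing_consequence,
        "total_slippage": total_slippage,
    }
```

On this sketch, a buy that repeatedly pays up relative to the prevailing last price accumulates a positive Liquidity Charge, while adverse drift in the stock between the first and last child order shows up as Timing Consequence.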

Do you anticipate that your research into execution costs on Millennium will trigger opportunities to analyse execution data from other dark trading venues?

It was great to see NYFIX take the first step and share their execution data, and we certainly hope their willingness will trigger more venues to do the same, because (perhaps ironically given the 'dark' soubriquet) it is in their interests to be as transparent as possible regarding the potential price improvement they can offer. At the very least, since their value proposition relates to the provision of liquidity and the reduction or removal of signalling risk, they should be able to support this with some kind of objective, quantitative data. It is very apparent from our conversations with asset managers that they find it disappointing that only broad volume characteristics are currently available - they definitely require greater granularity to be able to make informed decisions about execution strategy.