Will Winzor-Saile, Fidessa
"A clever algo is no good without a decent SOR, but even the best SOR is no good without reliable, performant market access and market data."
There are many decision points in the life of an order. At the start of the day, the portfolio manager decides what to trade and passes this on to the trader (or algo), who breaks the order up over the course of the trading day, making the decision about when to trade each slice.
The role of the SOR is to decide where to trade. It needs to find the best price or the lowest trading fees, achieve the most rebates or minimize market impact, but above all else it has to get the order filled. Achieving this simple objective in today's ever-changing landscape is, however, incredibly complex.
A vital part of a firm's market access infrastructure, the SOR needs to provide a normalized view of markets across all regions and remove the multi-market complexities from upstream systems. It also needs to understand the nuances of every market - order types, queue priority, varying liquidity of stocks, etc. Walking the line between global consistency and regional specificity is just one of the challenges facing SOR developers.
Everything is relative
In science relativity teaches us that there is no such thing as a universal frame of reference; what you observe depends on where you are.
Most SORs start by combining order books from every exchange into a single, consolidated view of the markets. With each exchange located in a different city, a different data centre, or at the end of a line with a different latency, the way these combine will vary between locations. Consequently, each SOR is looking at a completely different view of the market.
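The consolidation step itself is straightforward to sketch. The snippet below is a toy illustration only, not any particular SOR's implementation: the venue names, prices and book layout are invented for the example.

```python
from collections import defaultdict

# Hypothetical per-venue books: venue -> lists of (price, quantity) levels.
books = {
    "LSE":  {"bids": [(99.98, 500), (99.97, 300)], "asks": [(100.02, 400)]},
    "BATS": {"bids": [(99.99, 200)],               "asks": [(100.01, 250), (100.03, 600)]},
}

def consolidate(books):
    """Merge per-venue price levels into one view, keeping venue attribution."""
    bids, asks = defaultdict(list), defaultdict(list)
    for venue, book in books.items():
        for price, qty in book["bids"]:
            bids[price].append((venue, qty))
        for price, qty in book["asks"]:
            asks[price].append((venue, qty))
    # Best bid first (highest price), best ask first (lowest price).
    return sorted(bids.items(), reverse=True), sorted(asks.items())

cons_bids, cons_asks = consolidate(books)
print(cons_bids[0])  # best bid: (99.99, [('BATS', 200)])
```

The hard part, of course, is not the merge but the fact that each copy of this book is assembled from feeds with different latencies, so no two copies agree.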
Regardless of latency or strategy, two identically-programmed SORs located in different places could react very differently to the same market signal. This inescapable fact exerts itself more profoundly as markets spread more widely across the globe, such that no single view of the market is 'real'.
Deploying an SOR successfully is about understanding the objectives of the firm and appreciating that where an SOR is located makes a real difference in terms of the market that it sees.
Myths and legends
Some firms seek to be the fastest, while others have no interest in time-to-market. Most are somewhere in the middle, asking "how fast is fast enough and at what point does the investment required to reduce latency outweigh the benefits?". To address the more fundamental questions around what latency you are reducing and why, we first need to dispel some persistent myths.
First, that tick-to-trade is only relevant for HFT firms. The time between seeing a price appear on the market and being able to enter an order against it is the only important latency measurement, regardless of your strategy. Being able to send an order to the market in a few microseconds is of little use if your view of the market is already out of date. Reducing market access latency from 5ms to 1ms might appear to be an 80% improvement, but if your market data, algo, SOR and risk checks take a combined 35ms this drops to just 10%.
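The arithmetic behind those percentages is worth making explicit; the figures below are the illustrative ones from the text.

```python
# Illustrative figures from the text (milliseconds).
market_access_before, market_access_after = 5.0, 1.0
rest_of_stack = 35.0  # market data + algo + SOR + risk checks combined

saved = market_access_before - market_access_after
component_gain = saved / market_access_before                    # 4ms off a 5ms hop
end_to_end_gain = saved / (market_access_before + rest_of_stack)  # 4ms off 40ms tick-to-trade

print(f"{component_gain:.0%}")   # 80%
print(f"{end_to_end_gain:.0%}")  # 10%
```

The same 4ms saving looks dramatic in isolation and marginal end-to-end, which is why tick-to-trade is the number that matters.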
Second, that reducing latency stops orders being gamed by predatory HFTs. There is some substance to this: if the time between two orders arriving at two different venues is long enough, the signal from the first order can cause the market to move away from the second. Reducing latency lessens the chance of signalling by shortening this gap, but a suitably intelligent SOR can negate this effect regardless of latency.
So the real motivation for reducing latency is to ensure that what you see is what you get. Any strategy is useless without a true view of the market.
Need for speed
The price a retail trader sees on his screen is typically guaranteed for around 30 seconds, so any time taken to make a decision and submit an order to the broker is insignificant: he'll get that price. However, if a high-frequency trader spots a good price (or wants to cancel a bad one), chances are another HFT firm has spotted it too, and a head-to-head race ensues in which a few nanoseconds determine whether he makes the trade or misses it.
The rest of the market lies somewhere in the middle, with most brokers generating orders throughout the day according to a client's strategy. As each order is produced, it is sent to an SOR which will try to trade it at the best available price at that time. Certain algos will attempt to hit specific prices; others will trade according to a pre-defined volume curve. Each individual order looks much like one from an end user, but the choice of strategy makes a big difference to the impact of latency.
The SEC's Market Information Data and Analytics System shows that over 18% of orders fill in under 50 milliseconds, so if your latency is 50ms and you're trying to hit a specific order, there's only an 82% chance it will still be there when your order arrives. An over-simplification maybe, but this shows just how quickly the reliability of the data drops as latency increases.
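As a rough illustration of that calculation, the sketch below treats fills as spread uniformly over the 50ms window — a crude linear model of our own, not how MIDAS measures anything.

```python
P_FILLED_WITHIN_50MS = 0.18  # MIDAS figure quoted in the text

def chance_still_there(latency_ms, horizon_ms=50.0, p_gone=P_FILLED_WITHIN_50MS):
    """Probability the target order survives until our order arrives,
    assuming fills are spread uniformly over the measurement window."""
    return 1.0 - p_gone * min(latency_ms / horizon_ms, 1.0)

print(chance_still_there(50))  # 0.82
print(chance_still_there(10))
```

Cutting latency from 50ms to 10ms lifts the survival probability from 82% to over 96% under this simple model, which is the "what you see is what you get" argument in miniature.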
For more passive strategies, the impact is far less pronounced. If the strategy is generating orders according to a pre-defined schedule, the chance of the market moving drops to under 1% in the same time. This can make the difference between beating and falling short of a key price benchmark, but measuring this impact is not always easy. Traditional benchmarks such as VWAP and Implementation Shortfall will be impacted by the performance of an SOR, but the effect is so overwhelmed by the impact of the volume curve that it's hard to distinguish. Even lower level metrics such as Spread Capture and Market Impact focus on the execution strategy rather than the SOR. While it's possible to directly measure the SOR by comparing the price, volume and time taken to execute each slice, the effort required means it is often overlooked.
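Measuring the SOR directly, as described, amounts to comparing each routed slice against the market observed at the moment it was routed. A minimal sketch — the function name, data layout and basis-point convention are our own, not a standard TCA metric:

```python
def slice_shortfall_bps(slices):
    """Volume-weighted slippage of routed slices against the consolidated
    mid price observed when each slice was sent (buy-side convention:
    positive means we paid through the mid)."""
    cost = sum(qty * (exec_px - arrival_mid) for qty, exec_px, arrival_mid in slices)
    notional = sum(qty * arrival_mid for qty, _, arrival_mid in slices)
    return 1e4 * cost / notional  # basis points

# (quantity, execution price, mid at routing time) -- invented numbers
slices = [(100, 100.02, 100.00), (50, 100.05, 100.01)]
print(round(slice_shortfall_bps(slices), 2))  # 2.67
```

Unlike VWAP or Implementation Shortfall, a per-slice measure like this isolates the SOR's contribution from the volume curve's, which is precisely why it is worth the extra effort.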
Even without huge geographical distances between exchanges, comms lines and system latency can impact the reliability of market data and therefore the performance of an SOR. Not only is everyone looking at a different view of the market, but without a performant system that view is inaccurate a significant amount of the time.
The next logical step is to imagine latency away entirely, so that every market participant sees exactly the same set of prices and anything they can see, they can trade. Even then, a large section of the market remains out of sight, with volume hidden in dark pools or in hidden orders on lit order books.
Hidden orders may be iceberg orders, where only a small volume of a much larger order is shown, or completely hidden order types where nothing appears on the book. They may be operated by the exchange or maintained on an external system. On top of this, some exchanges give visible orders priority while others preference hidden orders. These subtle differences affect the way an SOR needs to interact with the exchange.
If there are 100 shares visible at a certain price and the SOR needs to trade 150, it has a choice to make. If there's hidden volume at that price, the best strategy is to submit the full 150 immediately and take both visible and hidden volume. If there's no hidden volume, it should take the 100 and wait to see if any more volume appears at the price. But there's no way to tell if there is hidden volume until it trades, by which time it's too late, so the best an SOR can do is to try to predict hidden volume based on past behavior. That's not easy, because trades against hidden orders are not always specifically flagged, so their presence needs to be inferred from discrepancies between quoted and traded volumes. Our own analysis suggests that for liquid stocks hidden volume runs at around 25% of the visible volume.
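That choice can be framed as a simple expected-value rule. The sketch below is purely illustrative — in practice the probability and the hidden-volume estimate would come from the kind of historical analysis just described:

```python
def slice_size(target_qty, visible_qty, p_hidden, hidden_estimate):
    """How much to send now, given a prediction about hidden volume.

    p_hidden        -- predicted probability that hidden volume rests here
    hidden_estimate -- predicted hidden quantity if it does
    """
    expected_available = visible_qty + p_hidden * hidden_estimate
    if expected_available >= target_qty:
        return target_qty                # sweep visible and hidden together
    return min(target_qty, visible_qty)  # take what is shown, wait for more

# Need 150 with 100 visible: send all 150 only if hidden volume looks likely.
print(slice_size(150, 100, 0.9, 60))  # 150
print(slice_size(150, 100, 0.1, 60))  # 100
```

The value of the prediction is exactly the difference between these two branches: sweeping hidden volume when it is there, and not signalling unfilled size when it is not.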
A new approach
Seeing isn't always believing. Even if a single, consolidated view of the market did exist, every millisecond of latency reduces its accuracy and the lowest latency systems can still only see 75% of the market with the rest concealed behind hidden orders.
Instead of relying solely on the information it sees, a truly intelligent SOR needs to take an analytical, predictive approach to look at how the market has been performing and calculate the probability of the price moving and the amount of hidden volume on each market.
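One way to fold those predictions together is to score each venue by the quantity the SOR actually expects to trade there, discounting quotes likely to be stale and crediting predicted hidden volume. Everything here — the function, the weights, the venue labels — is a hypothetical sketch, not a recipe:

```python
def venue_score(visible_qty, p_stale, p_hidden, hidden_estimate):
    """Expected tradable quantity at a venue: visible volume discounted by
    the chance the quote is already stale, plus predicted hidden volume."""
    return visible_qty * (1.0 - p_stale) + p_hidden * hidden_estimate

# Invented inputs: venue A shows more, but its quotes go stale more often.
venues = {
    "A": venue_score(500, p_stale=0.05, p_hidden=0.30, hidden_estimate=125),
    "B": venue_score(400, p_stale=0.01, p_hidden=0.60, hidden_estimate=100),
}
best = max(venues, key=venues.get)
print(best)  # A
```

Ranking on expected rather than displayed quantity is what distinguishes an analytical SOR from one that simply chases the biggest number on the screen.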
By understanding the reliability of the data and analyzing market trends and microstructure, firms can add real value through their SOR. Not only will they avoid missed volume, but they will be able to leverage hidden volume and alternative liquidity pools to provide price improvement and more efficient alpha capture. All this comes at a cost, however. A flexible, analytical SOR may perform better when it's working, but unless it is part of a resilient global infrastructure it will be rendered useless.
A clever algo is no good without a decent SOR, but even the best SOR is no good without reliable, performant market access and market data.