The Gateway to Algorithmic and Automated Trading

Market Data Maelstrom

Immense market data volumes and a relentless latency race to zero have created plenty of challenges for high (and low) frequency trading firms as they attempt to trade modern markets, not least the need to spot faults at trading speed. Bob Giffords talks to experts from leading buy- and sell-side firms to explore the world of FPGAs, microsecond timestamping, clustered trading engines and more...

The speed and volume of market data has been ratcheting up year on year. "Five years ago market data delivery platforms were one to three milliseconds through the ticker plant," recalls Tony Kingsnorth, director of operations at Fixnetix. "Today with some of the legacy architectures they are still around 1 millisecond. Meanwhile, with some FPGA solutions - albeit with little or no data enrichment - we are heading for under ten microseconds. This is a huge gap."

Tony Kingsnorth

"Five years ago market data delivery platforms were one to three milliseconds through the ticker plant."

Data volumes have similarly grown despite often thin markets. Hirander Misra, co-founder and chief executive of Algo Technologies, notes that, collectively, the major equity exchanges in Europe were seeing rates of 5,000 to 10,000 messages per second before MiFID went live in 2007. "Now Chi-X alone can peak at over 80,000 messages per second and we're seeing total market peaks approaching 150,000," he adds. "With LSE's Millennium and Chi-X both able to handle more than the whole market we could soon see huge growth. If some of the MTFs start offering single stock options we could see a data explosion in Europe."

In the US the rates have, of course, already exploded, with total equity, options and futures markets topping 4 million messages per second according to marketdatapeaks.com. "Recently OPRA reached 2.1 million messages per second and continues to climb," says Michael Tobin, managing director at Knight Capital, responsible for algorithms, DMA, EMS, OMS and listed derivatives technology. "We maintain capacity headroom up to 8 to 10 million messages per second to cover the millisecond bursts. Now we worry less about volumes, and more about latency and local bottlenecks. Equities also continue to hit new peaks. Volumes on Nasdaq TotalView alone for example have risen rapidly to over 450,000 messages per second."

Indeed speed and scale are linked. "As trading platforms improve the data feeds get spikier," says Nick Morrison, head of market data technology at Nomura. "The exchanges are moving to 10gig Ethernet internally and gigabit to the colo centres. Eventually we'll see 10gig to the trading engines as well. The microbursts will inevitably just become higher and more frequent."

"You need lots of headroom to cope with spikes," insists Kingsnorth, "at least 10x for infrastructure and 20x for exchange and connectivity bandwidth." He argues that spikes come with volatility, which is where most high frequency traders make their money. "If you can't deal with spikes," he concludes, "you're not really in the market."

"The classic latency problems occur at the European start of day," says Morrison at Nomura, "the opening of the US cash and derivatives markets, and end of day. You have to plan capacity to meet these peaks and they have been rising." However, the challenge is not just speed but also the complexity of processing. "Normalising across fragmented data sources is also a serious issue," adds Morrison, "different symbologies, currency differences, tick sizes, time-stamps, trading status indicators etc. It's a real headache, especially in Europe and Asia. With the consolidated tape and quote system in the US it's somewhat easier."

QuantHouse is one solution provider that normalizes feed and trade status data across venues, while its clients can still use the local information. "For example," says Pierre-François Filet, the firm's CEO and co-founder, "LSE might show 40 different status values while other MTFs might have only 5."
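To make the idea concrete, here is a minimal Python sketch of that kind of status normalisation; the venue names, codes and common vocabulary are invented for illustration and bear no relation to QuantHouse's actual mappings.

# Sketch: normalising venue-specific trading-status codes into a common set.
# All venue names and codes below are invented for illustration only.

NORMALISED_STATUSES = {"TRADING", "AUCTION", "HALTED", "CLOSED"}

# Each venue publishes its own vocabulary; the mapping collapses it.
STATUS_MAP = {
    "VENUE_A": {  # e.g. an exchange with a rich status model
        "CONTINUOUS": "TRADING",
        "OPENING_AUCTION": "AUCTION",
        "CLOSING_AUCTION": "AUCTION",
        "VOLATILITY_HALT": "HALTED",
        "POST_CLOSE": "CLOSED",
    },
    "VENUE_B": {  # e.g. an MTF with only a handful of states
        "OPEN": "TRADING",
        "AUCT": "AUCTION",
        "HALT": "HALTED",
        "CLSD": "CLOSED",
    },
}

def normalise_status(venue: str, raw_status: str) -> str:
    """Map a venue-specific status onto the common vocabulary.

    Unknown codes are surfaced rather than silently dropped, so the
    original (local) information is never lost downstream.
    """
    mapped = STATUS_MAP.get(venue, {}).get(raw_status)
    if mapped is None:
        return "UNKNOWN"  # flag it; don't guess
    return mapped

if __name__ == "__main__":
    print(normalise_status("VENUE_A", "VOLATILITY_HALT"))  # HALTED
    print(normalise_status("VENUE_B", "OPEN"))             # TRADING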

Complex analytics are also required. "Quants are increasingly demanding curves rather than raw data, so we build and update the curves from the raw data and then distribute the curves, which might mix data from different sources," says Jeremy Green, global head of market data services at Standard Chartered Bank. "However, we use the same market data feed handlers with a publish and subscribe model." Green also points to the major challenges around the commercial management of data. "Some vendors impose restrictions on how their data may be used, for example processed by algorithm or displayed to human traders," says Green. "They may change the packaging or valuation of data by use or for delayed data. So contractual compliance is a huge issue and very complex; we have to track usage carefully, including blended data. Tracing the origin of data may also be difficult. Most banks will have a significant team managing these issues."

Nick Morrison

"Latency challenges differ depending on the trading venue, their volumes, connectivity solutions and data distribution strategies."

Keeping up the pace

"Latency challenges differ depending on the trading venue, their volumes, connectivity solutions and data distribution strategies," explains Morrison at Nomura. "Some markets are very spikey with huge microbursts, while others shape the traffic flow, trimming the peaks but generating delays. Each one is different."

"Exchanges differ hugely in their delivery mechanisms," adds Tony Kingsnorth at Fixnetix. "Some provide TCP, some IP multi-cast, some offer re-request channels to backfill, others don't, some are single, others multi-stream. It's highly varied and it all impacts on latency as well as reliability."

However, not everyone needs to go with the flow. Stuart Plane, managing director of Cadis, the enterprise data management supplier, explains: "Most of our long only fund clients use very cautious algorithms and don't chase every tick, relying on the markets or their brokers to apply the circuit breakers. Where they validate data it is usually out-of-band, using a copy of the real-time feed. It's easier to work across data sources and apply more complex analytics with a transparent rules engine that traders can understand and change without software programming."

Plane describes how big buy side and some sell side firms use these out-of-band systems for intraday TCA and best execution, for validating their execution policy rules and for preferencing or blocking certain data sources for smart order routing. "With time-stamps we can also check for stale or delayed data and cross-market consistency or cross-overs," says Plane. "It's like a second opinion providing reasonableness checks, and normally only runs a few seconds behind the actual trading. If problems arise the software will send instant messages or email alerts and traders can take action. Even the flash crash evolved over minutes not microseconds."
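A rough sketch of such an out-of-band reasonableness check might look like the following; the thresholds, field names and alert list are assumptions for the example rather than a description of Cadis's rules engine.

# Sketch of an out-of-band reasonableness check running a few seconds
# behind trading: flag stale quotes and crossed prices across venues.
# Thresholds and field names are illustrative assumptions.

import time
from dataclasses import dataclass

STALE_AFTER_S = 5.0          # quote considered stale after 5 seconds
ALERTS = []                  # stand-in for IM/email alerting

@dataclass
class Quote:
    venue: str
    symbol: str
    bid: float
    ask: float
    ts: float                # venue timestamp, seconds since epoch

def check_quotes(quotes, now=None):
    now = time.time() if now is None else now
    # 1. Staleness: no update within the threshold.
    for q in quotes:
        if now - q.ts > STALE_AFTER_S:
            ALERTS.append(f"STALE {q.symbol}@{q.venue}: {now - q.ts:.1f}s old")
    # 2. Cross-market consistency: one venue's bid above another's ask.
    for a in quotes:
        for b in quotes:
            if a.venue != b.venue and a.symbol == b.symbol and a.bid > b.ask:
                ALERTS.append(
                    f"CROSSED {a.symbol}: {a.venue} bid {a.bid} > {b.venue} ask {b.ask}"
                )

if __name__ == "__main__":
    t = time.time()
    check_quotes([
        Quote("VENUE_A", "XYZ", 10.02, 10.03, t - 0.5),
        Quote("VENUE_B", "XYZ", 10.00, 10.01, t - 12.0),   # stale and crossed
    ], now=t)
    print("\n".join(ALERTS))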

"Out-of-band metrics are ok for scalability and capacity issues," says Frédéric Ponzo, managing partner, Greyspark Partners, "but not for quality issues. If all else fails of course, you still pull the plug."

"What matters for a client," says Filet at QuantHouse, "is to be able to do everything and to detect any situation through the API. Its program trading needs to react in microseconds. To do so the API needs to provide all information in micro-seconds, such as feed status, latency, network status etc." However, everyone is different. "Traders can take market by level with limited depth, or by order with everything, where available, and we store everything for back tracking where necessary," says Filet. "We then build the aggregate book across source feeds as needed for each size and price together with current latencies to reach the price to aid decision-making. This is where it all comes together." Filet admits that market by order involves huge amounts of data, but insists it is critical if traders need to synchronise pricing feeds with order flow.

"Everything has to be measured," adds Misra at Algo Technologies, "latencies, quote rates, update gaps, and book deviations especially for parallel quotes from multiple venues. It's quite complex. With our early warning systems the flash crash problems just wouldn't happen here for our customers."

Silicon Solutions

As volumes and speeds rise, some firms are turning to hardware solutions. "FPGA chips are very fast for data format conversion," says Filet at QuantHouse, "but they can't really cope with packet loss and retransmits and many other issues, such as managing market by order. We only use them on our OPRA feeds in the US to reduce the hardware footprint. FPGAs only incur a latency of one to two microseconds, whereas the latest Intel chips can get down to 3 or 4 microseconds and recover lost packets, maintain the latest prices and many other enrichments. So we use standard servers for everything outside OPRA. What matters is not the pure decoding time but the overall latency from the exchange to the final client program callback; all FPGA solutions need additional processing, which makes them slower overall."

Other high frequency players are equally cautious. "We have looked at hardware accelerators like FPGA, but a single multi-core server with two sockets each with six cores can handle the whole of the OPRA feed so we don't think FPGA is currently justified," says Tobin at Knight. He reckons the differences are in microseconds and stability is key. Moreover, some of the cutting-edge hardware doesn't have the same reliability. "Even an active-active failover can be costly to the trading algorithm," says Tobin, "and detecting a problem can sometimes take seconds."

Nevertheless, new FPGA solutions keep coming to market. "Our pure FPGA feed handler was launched last year," says Yves Charles, CEO of NovaSparks. "It cuts feed normalization from tens of microseconds down to a latency of less than one microsecond, handling most of the European live markets with gigabit Ethernet in from the router switching to 10 gigabit Ethernet out to the trading engines." Charles maintains that the best software-based feed handler that they have seen for FAST protocol market feeds takes 20 microseconds to convert the feed, but claims that this rises to over 100 microseconds during microbursts. Similarly, hybrid solutions that mix FPGA and multicore chips can take well over 10 microseconds during volume spikes. "The NovaSparks solution takes less than a microsecond even in bursts," he says. "The curve's completely flat, providing a huge advantage."

Fred Ponzo

"FPGA technology is very fast but there's a high cost to modify the algorithms and logic once it has been coded on the card."

"FPGA technology is very fast but there's a high cost to modify the algorithms and logic once it has been coded on the card," says Ponzo from Greyspark. "The market data spec in the US is relatively stable and the volumes are huge, so FPGA can work well. In Europe, every exchange is different and the specs keep changing, so using FPGA is more of a challenge."

Charles at NovaSparks counters that their customers don't need to program the FPGA chip, since his firm does that. "80% of the code is generated by our own proprietary compiler," says Charles, "which lowers time to market, improves efficiency and is more adaptable to client requirements."

Time travels

At ultra low latencies, time becomes a key issue. "Time-stamping at millisecond level is the de facto standard for exchanges although some are less granular," says Morrison at Nomura. "The latest matching engine technology uses microsecond time-stamping." He admits however that synchronising between exchanges is very difficult. "If they give you a time source it is easier, but otherwise it is nearly impossible to model accurately because of the network delays, jitter and other effects." He notes that the FIX community is trying to establish standards, but progress has been slow.

Calls for regulatory intervention are growing. "In a fragmented market with multiple execution venues the manipulation of markets through sophisticated timing arbitrage strategies or deliberate 'choking' of the order input systems of venues or communication networks is a real possibility," wrote Mike Riley, CEO of Endace Technology, a provider of data capture and time-stamping technology based in Auckland, New Zealand, in his February 2011 submission to the European Commission DG Internal Markets and Services regarding their MiFID II consultation. Endace therefore recommended that the Commission require "all Regulated Market Exchanges, MTFs or Organised Trading Facilities introduce time stamps in pre and post data feeds that support on nano time stamp increments." Such time stamps should "synchronise themselves to the GPS timing clock … in such a way that they provide better than 0.1 microsecond absolute timing accuracy." This would be an ambitious standard.

Some firms, however, are already meeting similar standards. "Time stamping is down to the microsecond every time we receive, process or send a message," says Filet at QuantHouse. "Everything is transparent, and all servers in our global network are synchronized via GPS. This is crucial to efficient and prompt error handling. You must have confidence in your time stamps. It is also crucial when it comes to replay feeds for back testing purposes."
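In spirit, that kind of per-hop timestamping might look like the sketch below, which stamps a message in microseconds at receive, process and send so end-to-end latency can be decomposed afterwards; it assumes the host clock is already disciplined and is not a description of QuantHouse's handlers.

# Sketch: stamp a message at each hop (receive, process, send) in
# microseconds so end-to-end latency can be broken down afterwards.
# Assumes the host clock is already disciplined (e.g. GPS or PTP).

import time

def now_us() -> int:
    return time.time_ns() // 1_000

def handle(message: dict) -> dict:
    message["recv_us"] = now_us()
    # ... decode / enrich the update here ...
    message["proc_us"] = now_us()
    # ... hand off to distribution ...
    message["send_us"] = now_us()
    return message

if __name__ == "__main__":
    m = handle({"symbol": "XYZ", "price": 10.02})
    print("processing:", m["proc_us"] - m["recv_us"], "us,",
          "handoff:", m["send_us"] - m["proc_us"], "us")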

Victor Yodaiken

"Unix may take 10 to 20 seconds to recognize a fault; but if you have precise timing data you can recognize the faults in milliseconds or for some key faults even microseconds."

Victor Yodaiken, CEO of FSM Labs, a provider of time management technologies, agrees: "Unix may take 10 to 20 seconds to recognize a fault; but if you have precise timing data you can recognize the faults in milliseconds or for some key faults even microseconds." He explains how his Timekeeper product enables timestamping at a precision of a few microseconds or better, by rapidly synchronizing local server clocks with reference time, and reducing the system overhead for delivery of time to the application.

"Strange behaviours occur at low latency," says Yodaiken, "when algorithmic loops easily magnify minor delays. For example, when a cluster of trading engines share a time stamp log, but one falls out of time synchronization, what may appear to be a rising price could in fact be a falling price. Or a network fault may result in flow switching to alternative routing with a 10 to 50 millisecond delay instead of a 2 millisecond delay. How long does it take to recognize that latency has changed and compensate in the application?"

Some behaviours are particularly subtle. "The local clock on servers may speed up or slow down due to the temperature of the processor," says Yodaiken, "especially with the older silicon chips. At the microsecond level this makes a difference. Timekeeper catches these fluctuations and can resynch, typically within microseconds."
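The general idea behind such drift monitoring, periodically comparing the local clock against a reference and flagging when the offset exceeds a microsecond budget, is sketched below; this is an illustration of the concept, not FSM Labs' Timekeeper implementation.

# Sketch of the concept: periodically compare the local clock against a
# reference source, track the offset, and flag when it exceeds a
# microsecond budget.  The budget and sample values are invented.

OFFSET_BUDGET_US = 5.0

class DriftMonitor:
    def __init__(self):
        self.last_offset_us = None

    def on_reference_sample(self, local_us: float, reference_us: float) -> None:
        offset = local_us - reference_us
        if abs(offset) > OFFSET_BUDGET_US:
            print(f"clock offset {offset:+.1f} us exceeds budget; resync needed")
        if self.last_offset_us is not None:
            drift = offset - self.last_offset_us
            print(f"offset {offset:+.1f} us, drift since last sample {drift:+.2f} us")
        self.last_offset_us = offset

if __name__ == "__main__":
    mon = DriftMonitor()
    # Simulated samples: the local clock slowly runs ahead of the reference.
    for local, ref in [(1_000_000.0, 1_000_000.5),
                       (2_000_003.0, 2_000_000.5),
                       (3_000_007.0, 3_000_000.5)]:
        mon.on_reference_sample(local, ref)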

In low latency systems time is also quality. "Most problems turn out to be infrastructure issues or packet delays," says Victor Lebreton, managing director at Quant Hedge. "It's not that the feeds don't work, but the data is stale so trading computations are wrong. Then the data might come all at once. Avoiding bottlenecks is crucial." He notes that high frequency strategies are much more sensitive to small delays than conventional strategies.

Tony Kingsnorth at Fixnetix describes the 'Catch 22' challenge: "The biggest quality issues are stale data and missed messages; yet any interrogation of the data takes time and introduces further delays. So most low latency solutions often do very little, except perhaps to flag it for the trading algorithm to sort out. That said, data quality has improved significantly over the past 5 years as exchanges have upgraded their infrastructures so errors are now quite rare."

"If you have stale data from an exchange, you simply can't involve it in the decision making process," says Misra at Algo Technologies. "You have to exclude it. Similarly, when exchanges upgrade their technology, we've seen message rates shoot up and some firms initially aren't able to keep up with the resulting data explosion."

However, how do we recognize stale data? "If no updates arrive for 5 or 10 seconds, is the price stale?" asks Lebreton. "So you request a new quote. Algorithms have to be designed to run even when there are no price updates, which might happen at start of day. Some delays to the pipeline are probably also inevitable."

"Designers use various tactics to recognize stale data," explains Ponzo from Greyspark. "You can sample the feed, measure delays directly, compare prices with other feeds, or compare time stamps. However, it can take a long time to detect a problem and decision tree lags can themselves be significant. Once you detect a problem most people simply flag it as a potential issue, otherwise you might change the source or even stop trading. There's always the worry about false positives, so typically more radical intervention will be a human decision."

Victor Lebreton

"If no updates arrive for 5 or 10 seconds, is the price stale?"

Since event timeouts outside your thresholds are usually the first indication of a problem, Yodaiken at FSM Labs argues that key parameters should be tracked with the same precision as the normal speed of activities. "If you're trading in microseconds," he says, "your recognition of faults has to be operating in the microsecond range, at least."

"As computers get faster, the spikes get higher," says Tobin at Knight. "We're spending more time monitoring the peaks, but the markets are also getting more efficient; prices are actually more stable as the number of liquidity providers grows. So there are gains and losses."

Tobin refers to the May 6 flash crash last year, when some stat arbs had to pull out of the markets because their datafeed handlers couldn't cope or traders worried about stale prices and busted trades. "It's a critical customer issue," says Tobin, "so we had already focused on it 2 years earlier with much more capacity, instrumentation, and stress testing facilities. It paid off."

Riley in his submission on MiFID noted that European consumers have more information on the 'quality' of a car they buy than on the execution quality of their trading venues. Endace therefore urged the European Commission to mandate that all trading venues "should at a minimum provide their prospective and actual customers with real time information on a minimum set of quality figures - eg. fill ratios, quote to tick trade latency, and order to completion trade latency figures… against an agreed consistent definition … on a near real time basis - for example hourly, rather than just providing daily averages that can mask a multitude of issues."

As Yodaiken at FSM Labs put it, "If you're not sensitive to time deviations, you're ultimately going to get bad trading decisions."

Such quality issues are beginning to have a real impact on the markets. "The fast MTF platforms are still taking around 300 microseconds to ack an order with that more than doubling in some cases with spikes," says Misra at Algo Technologies. "There's an asymmetry between the co-located price feed latencies and the order routing, especially if hedge funds have to go via the broker's data centre. So some brokers are beginning to move out to the colo centres themselves, but they can't afford to be everywhere."

"In a fast market, an approximate price is often good enough," says Lebreton at Quant Hedge, "since any price you see may disappear before you get to market. You may only have 50 milliseconds to take a decision. If you don't react to a price move, it will be too late. You can only afford to do so many checks and computations that add latency; so you go with the flow." If you then miss a trade Lebreton argues you just have to hedge the position and trade your way out of it. "That is one of the strategies to avoid impact on your performance," he says. If the markets continue to speed up, such hedging may become a serious part of trading.