
Exploring a distributed world

Published in Automated Trader Magazine Issue 30 Q3 2013

Anna Reitman speaks to a range of experts to get to the bottom of some of the more common issues that come up when trading firms adopt distributed computing systems.

Leon Diamond, Mansard Capital

"The trends in the markets that can have a strong risk-reward payoff we try to spot are occurring anywhere from five to 30 days."

Traders are an increasingly intrepid group these days.

The days of making money simply by being super-fast and focusing on a few markets are dwindling, with reports such as Automated Trader's Trading Trends survey showing ever-rising interest among firms in adding asset classes and geographies. As the trading community discovers those new opportunities, it has to navigate its way through the world of distributed technology.

They will run into significant challenges, from the need to keep a lid on trading fees related to fragmented liquidity to more nuts-and-bolts issues such as network infrastructure complications. And those challenges go well beyond latency. Automated Trader spoke to a range of firms involved with distributed systems, as well as some users, to gain insight.

One firm that has wrestled with some of the issues is Mansard Capital. Its Mansard Macro Systematic fund covers 60 individual instruments in developed and emerging markets across asset classes including fixed income, FX and commodities. It uses systematic strategies, with trading based on signals generated from closing prices.

Leon Diamond, a founding partner and chief investment officer, said the firm does not rely on low latency strategies (he said playing the high frequency "game" required throwing money at it to constantly become faster). But it still needs to make sure that trading signals flow seamlessly across the network and generate the correct trades.

"The trends in the markets that can have a strong risk-reward payoff we try to spot are occurring anywhere from five to 30 days," Diamond said. "We are constantly modelling our slippage and taking that into account into our trading, taking model signals and seeing what impacts each of them has on the portfolio."

Mansard uses a vendor as well as banking systems for execution, but it has built the rest of its systems in-house.

A major task is to make sure there are no differences between the signals and what gets executed. "The consequences are being overweight or underweight that instrument in the market," he said.

It has been a significant hurdle, Diamond said, because an account could fluctuate by tens of millions of pounds in response to model signals throughout the day. Or the system could be signalling to trade a contract month where there is less liquidity, making the next month's contract a better option.
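To illustrate the kind of check this implies, the sketch below compares a model's target positions against what has actually been booked and flags the drift. It is a minimal illustration only; the instrument names, quantities and tolerance are hypothetical rather than anything Mansard uses.

```python
# Minimal sketch: flag instruments where the booked position has drifted from
# the model's target, i.e. where the fund is overweight or underweight.
# Instrument names, quantities and the tolerance are illustrative assumptions.

def position_breaks(targets, booked, tolerance=0):
    """Return {instrument: drift} where drift = booked - target exceeds tolerance."""
    breaks = {}
    for instrument, target_qty in targets.items():
        drift = booked.get(instrument, 0) - target_qty
        if abs(drift) > tolerance:
            breaks[instrument] = drift  # positive = overweight, negative = underweight
    return breaks

if __name__ == "__main__":
    targets = {"bund_sep": 120, "eurusd_sep": -45}   # model signal, in contracts
    booked = {"bund_sep": 118, "eurusd_sep": -45}    # what actually got executed
    print(position_breaks(targets, booked))          # {'bund_sep': -2}
```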

Diamond said one of the most crucial execution aspects is having back-up systems for when things go wrong. Errors, and the reasons they occurred, need to be found and identified quickly, he said.

"Especially when we are running give-up agreements with brokers, often on the counterpart side to banks that process has to be checked thoroughly on the operations side. We have picked up mis-bookings from different counterparts when we are running tapes through accounts," he said.

An investment bank that asked not to be identified echoed this sentiment.

Where the action is: H1 2013 volumes growth vs year-earlier trends

There is a trend, said a person at the bank, for high-volume trading to be double-checked. The bank provides a separate drop copy of trades that goes via a different route to the buy-side firm's back-end systems. The front end receives trade confirmations in real time, while the drop copy from the bank's downstream system goes to the buy-side firm's back office, where it can then be reconciled.

"If you do high volumes of trading, you do find errors in network packets, you get dropped trades. The quicker you can find the breaks, the less operation overhead there is," the person said.

Connecting the dots

Typically, the wider the area, the more difficult mapping it out can be.

"The biggest challenges we see are with connecting the data, the services, the architecture, so that there is a seamless communication across architectures," said Jeremy Hurwitz, principal and founder of InvestTech.

Many firms run software locally or from a central service, using technologies such as Citrix or SaaS servers either to connect to a common architecture or to logically link distributed architectures. The shift in interest towards hosting and cloud technology, Hurwitz said, takes away the need to link wide area networks. But in the trading and investment world there are still concerns about the security and reliability of this approach.

"You have challenges around firms releasing highly confidential data into the cloud, because they are dealing with client information and trading strategies," he said. For now, cloud storage is mostly being used for dealing with big data and generic common hosted services and then distributed consumption, though the performance and costs of moving data need to be considered.

James Roberts, an enterprise architect at InvestTech, added that in the automated trading space, and especially for high frequency trading, the network infrastructure in some markets is difficult to operate in. He cited Hong Kong and Toronto as examples.

"Having libraries in the cloud is advantageous in some technologies and environments and not in others depending on what your opportunities are in communications, and also co-location if you have to get closer and closer to trading entities," he said.

Roberts identified liquidity fragmentation as another consideration.

The pervasiveness of dark pools and smart order routers that execute orders in fragments can lead to higher trading costs. Roberts said this is best addressed by having well-defined agreements with providers - for example with target execution points or perhaps a more "lock, stock and barrel" approach.

"If a firm starts seeing fees and commissions starting to mount because of partial fills due to errors coming from multiple destinations, expectations can be contractually set out for what you are and are not going to pay for," Roberts said. "That means negotiating agreements with your EMSs (execution management systems) and folks responsible for execution."

Spreading the cost

NYSE Technologies has seen a surge of interest in distributed solutions in Asia, particularly in Japan, where it has a team of 40. When a Chicago derivatives shop approached NYSE to deploy in Tokyo, its original intention was to build the infrastructure itself, but as the need to test algorithms emerged, it decided to use NYSE's testing connectivity, said Jeff Drew, Global Liquidity Center programme director at the company.

In another case, a New Jersey-based firm tested its algorithms and quickly realised that it had to go back to the drawing board. In the US, algos are finely tuned; the environment is so competitive that primitive algos were victims of natural selection, he said. That means they need to be recalibrated for a new market.

In the New Jersey firm's case, the issue was that propagation delay distorted the quote-to-quote delay sufficiently that the firm shifted testing to Tokyo, Drew said. "It is common sense for these traders to want to be in fast growing markets. There is more volume, activity, retail," he said. At the same time, "they don't want to incur the high capital expense of building the infrastructure before they have tested the profitability of their algorithms".

This need is part of the larger push to outsourcing that has been gaining steam since 2009. Research from TABB Group predicts that by next year, 61% of all financial firm IT expenditures will be directed to external providers, up from an estimated 56% last year and less than 50% in 2005.

Larger firms tend to own, operate and host their own technology, although there are a number of vendors eyeing opportunities to provide a hosting environment. InvestTech's Hurwitz said leading buy-side platforms are moving towards offering distributed, hosted services that also enhance systems resilience across business continuance and recoverability.

But it is the middle tier of the market in particular that is seeking to leverage those services, as those firms grapple with the desire to stay agile while becoming well-established, he said.

Front-office trading is lagging behind the full outsourcing curve. "Even if trade settlement and post-execution is being outsourced, trading platforms are still being kept in-house. What I see is more of hosting on someone else's technology platform but keeping control of the trading venue," Hurwitz said.

Despite the uptake of outsourcing of post-trade services, the disconnect between the front, middle and back offices continues to be one of the areas where both Roberts and Hurwitz see buy-side firms being least prepared.

Some commentators said the entire concept of a front, middle and back office is set to become outdated, particularly as regulatory mandates make real-time reporting a necessary reality. Considering the major technological investment and upgrades occurring across clearing and settlement structures, is real-time settlement also around the corner?

Hurwitz said: "This operational shift could manifest between the front and middle office, assuming true real-time settlements are achievable. The reality of this is still a significant leap."

After all, same day settlement at present is still challenging - not just across asset classes, but also across global boundaries.

Jeff Drew, NYSE Technologies

"It is common sense for these traders to want to be in fast growing markets. There is more volume, activity, retail."

Faster, better, stronger

Even if firms are coordinated across the enterprise, consuming market data from across the world requires high-performance infrastructure, and several observers have pointed to multicast technology as inevitable in this space.

Multicast allows market data to become available to many systems simultaneously. That means allowing different applications to "listen" to the same piece of information across multiple, geographically distributed locations by sending just one message which the hardware automatically replicates to other subscribers.

Multicast typically runs over UDP (user datagram protocol) and can carry data at rates of up to 100 gigabits per second. The technology has been around for many years and is a key component of the messaging stacks behind providers such as TIBCO and 29West.
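As a rough picture of what "listening" to a multicast group involves, the sketch below joins a group over UDP using standard socket options. The group address and port are made up, and production feed handlers obviously sit much closer to the hardware.

```python
# Minimal sketch of a multicast subscriber: join a group via IGMP and receive
# the single copy of each message that the network replicates to all listeners.
# The group address and port below are illustrative, not a real feed.

import socket
import struct

GROUP, PORT = "239.1.1.1", 5000  # administratively scoped multicast group (example)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel (and, via IGMP, the network) for membership of the group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {addr}")
```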

One person at an investment bank said using multicast technology in conjunction with his firm's firewalls had made life complicated from an engineering perspective. So what about for firms that want the benefits of multicast technology but also want to be able to trade quickly across wide areas?

Graeme Burnett, a network engineering specialist and FPGA architect at Hatstand, said it is achievable but it may require trading without a firewall. "Do you want to trade quickly or don't you? If you don't, put a firewall in place. Do you think CME and ICAP are hacking into your computers? Of course they aren't, there is no threat here. You have more threat from people in data centres taking photos of your racks."

As an example of an advanced design, he said a network provider can enable multicast on VPLS (virtual private LAN service), a construction on top of MPLS (multi-protocol label switching) that allows many different networks to run on the core network switches. To make multicast an even more attractive option, a firm can use IGMP (internet group management protocol) snooping, which avoids sending unnecessary data traffic through some switches, all of which gives "tremendous performance gain", Burnett said.

"Add to this a software multicast router such as Xorp and you save a hop through a switch enabling you to connect to the trading venue at the data place saving considerable latency. Ditch HSRP (hot standby router protocol) you lose 1.5 milliseconds. Then drop NAT (network address translation) and you gain another 500 micros," Burnett said.

Underpinning this construction, core switches need to be set to pass multicast traffic over all their hardware queues; otherwise a misconfigured core switch can become a bottleneck.

He added that in the HFT world, too many network engineers sell themselves better than they design systems.

"Very few people like multicast because they don't understand it - they think it is quite complicated," he said. "The danger is it can flood the network if you do not know what you are doing or if a program screws up… But you shouldn't be testing in production anyhow - you should have a dedicated test network because the advantages of multicast are phenomenal."

Unicast and Multicast

Burnett said that as market data gets faster, intelligent architecture is going to become an integral part of the solution.

"The majority of people trade on events and events are getting quicker," Burnett said, adding that trading firms will have to invest in new technology to keep up or they will not be in the trading game. There was a time when everyone had 100 megabit pipes. Now, 40 gigabit is coming out and two years after that becomes standard, it will be 100 gigabit, he said.

A 40Gb Ethernet connection means firms are receiving about one message every microsecond. "If you have a regular three millisecond jitter, as most servers do, you could potentially have dropped messages."
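The arithmetic behind that point is straightforward, as the back-of-the-envelope sketch below shows; the message size and socket buffer figures are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope check of the jitter problem: at roughly one message per
# microsecond, a 3 ms stall in the consuming process leaves ~3,000 messages
# queued. Message size and buffer size are illustrative assumptions.

msg_rate_per_sec = 1_000_000      # ~1 message every microsecond
jitter_sec = 0.003                # a 3 ms pause, as Burnett describes
avg_msg_bytes = 512               # assumed average message size

backlog_msgs = int(msg_rate_per_sec * jitter_sec)   # 3,000 messages
backlog_bytes = backlog_msgs * avg_msg_bytes        # ~1.5 MB

so_rcvbuf = 1_048_576  # assume a 1 MiB receive buffer
print(f"backlog: {backlog_msgs} messages, {backlog_bytes} bytes")
print("drops likely" if backlog_bytes > so_rcvbuf else "buffer absorbs the burst")
```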

That is where FPGA technology comes in. Though substantial investment is required to implement hardware acceleration, it can be done in a staged approach for just applications that need the speed, he said.

FPGAs allow data to be shredded across more processors and servers, shortening queues and improving periodicity, ultimately adding more space between messages.
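A software analogue of that shredding is to fan one inbound feed out across several worker queues keyed by symbol, as sketched below; an FPGA does the equivalent in hardware at line rate, so the code is purely illustrative.

```python
# Software analogue of "shredding" a feed: route each message to one of several
# worker queues based on its symbol, so each queue sees fewer messages with
# more space between them. Symbols and worker count are illustrative.

from queue import Queue
from zlib import crc32

NUM_WORKERS = 4
queues = [Queue() for _ in range(NUM_WORKERS)]

def dispatch(message):
    """Send a message to the worker responsible for its symbol."""
    shard = crc32(message["symbol"].encode()) % NUM_WORKERS
    queues[shard].put(message)

for sym in ("VOD.L", "BARC.L", "HSBA.L", "BP.L"):
    dispatch({"symbol": sym, "price": 100.0})
print([q.qsize() for q in queues])
```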

The ability of FPGAs to cope with bandwidth spikes means participants can analyse full market depth consistently. That means getting a feel for demand in the marketplace and devising a better strategy to predict price movements, Burnett said.

There are several caveats, such as FPGAs not being particularly suitable for implementing complex trading strategies, Burnett added.

InvestTech's Roberts said that FPGA does have a role as a specific point solution, such as high-performance messaging, but it has drawbacks. One is that, from a communications or locale perspective, it needs to be close to the trading venue. Moreover, across the world there is a move away from high-frequency towards mid-frequency trading and, as a result, a shift to more diverse solutions for maintaining margins.

Graeme Burnett, Hatstand

"Very few people like multicast because they don't understand it - they think it is quite complicated."

Less is more

Simon Morse, business development manager for Intel in London, said that as new generations of technology emerge, financial industry clients in the City are increasingly looking at efficiency as well as speed.

He points to the compatibility across generations of Intel's processors and the virtualisation technologies incorporated in its server processors. Not long ago, each application required a dedicated system, but virtualisation means that multiple applications can run on one server.

"We have seen a significant change in the way organisations provide technology to their users and the growth of virtualisation has fuelled a lot of that. It isn't just speeds and feeds, clients are increasingly interested in some of the less obvious technology available within a processor," Morse said.

Earlier this year, Intel launched Xeon Phi, providing a processor with dozens of simple cores, each with multiple threads.

It is well suited to breaking down tasks with large numbers of calculations that can be spread across many cores, benefiting applications such as Monte Carlo simulations.
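The sketch below shows why that workload parallelises so naturally: each batch of simulated price paths is independent, so batches can be farmed out to separate cores and only the averages combined. The model and parameters are illustrative.

```python
# Minimal sketch of an embarrassingly parallel Monte Carlo workload: each batch
# of simulated terminal prices is independent, so batches run on separate cores
# and only their means are combined. Model and parameters are illustrative.

import random
from concurrent.futures import ProcessPoolExecutor
from math import exp, sqrt

def simulate_batch(seed, n_paths=100_000, s0=100.0, mu=0.02, sigma=0.2, t=1.0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        total += s0 * exp((mu - 0.5 * sigma ** 2) * t + sigma * sqrt(t) * z)
    return total / n_paths

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:             # one batch per worker process
        means = list(pool.map(simulate_batch, range(8)))
    print(sum(means) / len(means))                  # estimated expected terminal price
```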

In terms of power consumption, server virtualisation has meant fewer boxes, reductions in energy use and fewer network points, ultimately resulting in lower operating costs. And, in its mobile unit, Intel is providing core processors with low enough power consumption to make them suitable for use in fan-less tablets. The concept is finding its way into Intel's data centre products.

It's also creating compilers that simplify the optimisation of code by parallelising loops or identifying hot spots. "There is a lot of performance being left on the table here…and I know of (several banks) in the City that have got active programs around Xeon code optimisation."

There are a range of other issues that come into play with distributed systems and in many cases the technological landscape is evolving quickly, often in ways that could make it easier for more firms to adopt them.

Data normalisation is a case in point. One person at an investment bank said using a vendor to normalise market data is notoriously difficult because of the number of standards in use and concerns about making a major commitment to a vendor.

That could change if more firms adopt the OpenMAMA open standard - developed by NYSE Technologies with the Linux Foundation - so that multiple vendors use the same API.

On its broker side, the bank uses an internal format to encode raw market data. But open standards in other areas are becoming ubiquitous, the person at the investment bank said. "On the client side, everything is FIX Protocol. It gives you an idea of how far open standard has moved; we can talk to many vendors very easily."
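To give a flavour of what that common language looks like on the wire, the sketch below assembles a FIX 4.2 New Order Single from standard tags; the CompIDs and order details are invented, and a real session would also carry sequence numbers, timestamps and a logon exchange.

```python
# Minimal sketch of a FIX 4.2 New Order Single built from standard tags.
# CompIDs and order details are illustrative; real sessions add sequence
# numbers, SendingTime and session management on top of this.

SOH = "\x01"  # FIX field delimiter

def fix_message(fields):
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    header = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    checksum = sum((header + body).encode()) % 256
    return f"{header}{body}10={checksum:03d}{SOH}"

order = fix_message([
    (35, "D"),        # MsgType: New Order Single
    (49, "BUYSIDE"),  # SenderCompID (made up)
    (56, "BROKER"),   # TargetCompID (made up)
    (55, "VOD.L"),    # Symbol
    (54, 1),          # Side: buy
    (38, 10000),      # OrderQty
    (40, 2),          # OrdType: limit
    (44, 192.5),      # Price
])
print(order.replace(SOH, "|"))
```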

Also, algorithm developers want to run their software globally without customising the interface with each market's protocols. Programmers want to be able to focus on their trading logic rather than the subtleties of how each exchange formats data, NYSE's Drew said.

Looking ahead, Intel's Morse said he expects to see greater adoption of solid state disks in data centres, which will improve performance of processors while driving down power consumption. And in the next 18 months, software defined networks (SDNs) will get more attention. SDNs are networks decoupled from underlying physical infrastructures and will do for networks what server virtualisation did for computing.