The Gateway to Algorithmic and Automated Trading

Trading Hubs Reach for the Clouds

Published in Automated Trader Magazine Issue 17 Q2 2010

The e-trading landscape is being radically transformed by two irresistible forces: proximity trading and cloud computing. Together they create rich opportunities for e-services providers. So what are the drivers and how can firms stay ahead of the curve?

Bob Giffords - Independent Banking & Technology Analyst

"As the price of bandwidth has come down, latency has become the real differentiator both for market access and for real-time data replication," says John Donaldson, managing director at AboveNet. "In the past people just asked about bandwidth and network topologies, now they focus on latencies and connectivity."

"Though all trading firms want low latency," argues Mark Casey, president of CFN Services, a managed telecom infrastructure services company, "there is a balancing of price, latency and time-to-market that all trading firms must prioritize."

Time-to-market is increasingly important. "For resilience you have to have two routes into every data centre or building," says Donaldson at AboveNet, "and the last metre can be very difficult with all the traffic legislation, the utilities under the road, and the landlord hierarchies common in London buildings. So people are starting to take existing network connectivity into account when they choose where to locate."

However, the focus on network latencies can be overdone. "It's the end-to-end turnaround including the trading applications and OS that matters," says Ali Pichvai, CEO of Quod Financial, an adaptive execution technology provider. "We have integrated our smart order routing and slicing algorithms to dynamically adapt to the real-time market context covering both lit and dark venues. Consequently we achieve much higher hit rates on Euronext Paris from servers in London than local Paris brokers despite the 10 millisecond transport delay." The positioning of distributed trading engines is clearly becoming a key but complex skill.
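
Pichvai's point is easy to see with simple arithmetic. The sketch below compares a notional Paris-local broker running a slower stack against a London-hosted router paying a transport penalty; every figure is an invented assumption, not a Quod Financial measurement, and it only illustrates why total turnaround can favour the remote but faster application.

```python
# Back-of-the-envelope comparison of end-to-end order turnaround.
# All figures are invented for illustration, not Quod Financial measurements.

def turnaround_ms(transport_ms, app_ms, os_stack_ms):
    """Total turnaround: network transport plus application and OS processing."""
    return transport_ms + app_ms + os_stack_ms

# Hypothetical Paris-local broker: negligible transport, slower trading stack.
paris_local = turnaround_ms(transport_ms=1.0, app_ms=25.0, os_stack_ms=5.0)

# Hypothetical London-hosted adaptive router: 10 ms transport, faster stack.
london_remote = turnaround_ms(transport_ms=10.0, app_ms=3.0, os_stack_ms=1.0)

print(f"Paris-local turnaround:   {paris_local:.1f} ms")    # 31.0 ms
print(f"London-remote turnaround: {london_remote:.1f} ms")  # 14.0 ms
```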

There is an immediate impact on e-services. "Most of the industry is moving to closer, real-time interconnection with critical vendors," says John Knuff, director of global financial markets at Equinix. "Faster applications are paramount in supporting huge volumes of actionable data. Similarly, vendors want to be closer to their customers and reduce network hops." So Knuff sees the ecosystems of the capital markets rapidly converging in multi-tenanted data centres. "This is driving demand for pay-by-use cloud computing, network storage and low latency connectivity for the whole value chain," he says.

Equinix hosts these ecosystem hubs in the US, Europe and Asia, and Knuff argues that by remaining network-neutral they can offer the best connectivity at any speed. "Broker-dealers, exchanges or pure-play net vendors can all plug in," he says. "That makes it easy for people to connect through us to their favourite vendors, and it drastically reduces time-to-market."

Knuff points to the huge spikes in market data seen recently - over a million messages per second for options, for example. This means data aggregators have to distribute their servers much closer to the trading engines and market sources to cope with throughput or they get left behind. "This is really changing the market," says Knuff, "creating many opportunities for e-services providers to add value. Exchanges too have been forced to retool their matching engines to cope with massive quote volumes, which means that the aggregators face a never-ending acceleration."

Al Moore, one of the founders of Fixnetix, a connectivity services provider, agrees that the low latency space is growing rapidly as more and more tier one players recognize how much alpha they are losing. "Initially most traders were very reluctant to outsource the front office infrastructure piece," says Moore, "but now that the cost of keeping trading infrastructure performant is increasing exponentially, to the point where combined capex and opex can exceed the potential trading benefit, many realize they have to outsource just to stay in the game."

Connectivity is key for proximity data centres. One hosting provider, Interxion, has between 50 and 150 network offerings in its data centres, including Internet public peering points. Additionally, its City of London data centre hosts several trading venue matching engines and access points. "Together with being in close proximity to most major London-based trading venues, this allows market participants to gain low latency access to these markets from a single location," says Anthony Foy, Interxion's group managing director. "Furthermore, we have a community of service providers to manage the servers or offer capacity on demand or other added value services."

Why do firms not simply co-locate at the exchange itself? "There's also a lot of politics in the exchange co-location space," says Moore of Fixnetix. "Some exchanges allow co-location, others don't. Some limit telco access - the network providers and data feeds that trading firms can use - whilst others are more inclusive." He thinks that in some cases this might force some participants to opt for a proximity data centre, which is better connected or more flexible, rather than the official exchange 'colo'.

Connecting the Dots

Since some of these trading hubs host the markets themselves while others provide low latency access to them over fibre networks, there is growing competition between the extranets to connect them all together.

Exponential-e is a fibre infrastructure provider with a focus on the capital markets. By switching messages at Ethernet layer 2 it can turn global WANs into a single, cohesive, LAN-like structure. "Our liquidity-in-a-pipe offering," explains Mark Cooper, capital markets specialist at Exponential-e, "allows multiple trading applications to share a single pipe and then we separate them out at optimal routing points. Since we manage the network we can control the whole stack end-to-end. This is important where customers want to see our physical cable routes to work out latency times. We provide this under a non-disclosure agreement."

"It has taken us 10 years to build out the network around London with large fibre count cables in dedicated, protected ducts including connections to all the major multi-tenanted, carrier neutral data centres," says Donaldson of AboveNet. Rather than adopt a hub and spoke strategy where traffic is slowed at the exchanges, AboveNet uses a ring topology and dedicates individual strands of optical fibre to users with direct connections, so there are no delays and negligible jitter. "Predictable network times are essential when managing service providers, for example, exchanges or prime brokers," says Donaldon.

Where investment firms want a customised solution, CFN Services specializes in network design, planning, deployment, and managed services, including ultra-low latency networking, local access transport and mobile backhaul optimization. "CFN Services leverages FiberSource®," says Casey, "a global knowledge-based platform that identifies all available dark and lit fiber, submarine systems, colocation, and lit buildings, providing the ability to quickly identify, design and operate optimal low latency solutions for the largest brokers and proprietary traders."

Location, Location, Location

If everything is so interconnected, how do traders then decide where to locate their servers? "Availability and cost of power are key to data centre positioning, balanced against connectivity," says Donaldson at AboveNet. So he is not surprised, for example, that Equinix has chosen to locate in Slough, very close to the power station. "With the pedigree and reach Equinix has, the location is also attractive to MTFs such as Chi-X," he argues. "Brokers can co-locate trading servers in Slough but still access the London markets and replicate back to their London data centres. Anything further away would test the technology, but anywhere within the metropolitan fibre net of London is pretty much the same in terms of access."

Brian Taylor, managing director of BTA Consulting, thinks there will still be room for optimization. "Once round trip matching engine latency is normalised around 200 microseconds for most venues by the end of 2010, the competitive differentiator between venues in the high frequency trading space will be network latency, which in turn means location," he says. "In a fragmented market, the optimal location for buy or sell side servers will not necessarily be co-located with exchange matching engines. Instead the epicentre will sit between competing matching engines, based on the weighted average value of the liquidity that is important to the firm."
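
As a rough illustration of that weighting, the sketch below computes a liquidity-value-weighted centroid across three hypothetical venues. The venue names, coordinates and notional liquidity values are all invented, and real placement would of course be constrained to actual fibre routes and available data centres.

```python
# Minimal sketch of a "liquidity-weighted epicentre" between competing venues.
# Coordinates stand in for positions in latency space; all numbers are invented.

venues = {
    # name: (x_km, y_km, daily_liquidity_value_eur)
    "Venue A": (0.0, 0.0, 6_000_000_000),
    "Venue B": (35.0, 5.0, 3_000_000_000),
    "Venue C": (20.0, 40.0, 1_000_000_000),
}

total_value = sum(value for _, _, value in venues.values())
epicentre_x = sum(x * value for x, _, value in venues.values()) / total_value
epicentre_y = sum(y * value for _, y, value in venues.values()) / total_value

# The more liquidity a venue carries, the harder it pulls the epicentre towards it.
print(f"Liquidity-weighted epicentre: ({epicentre_x:.1f} km, {epicentre_y:.1f} km)")
```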

"The state of the art is to optimize low latency networks for multi-asset class arbitrage between, say, futures markets in Frankfurt and FX ECNs in New Jersey on a point to point or native multicast basis," says Cooper at exponential-e. "Emerging markets in Asia are also gaining prominence. Traders want to squeeze out every microsecond. This is serious stuff." Cooper explains, for example, how exponential-e creates multi-cast groups by asset class or trading strategy to simplify running multiple models with different connectivity over the network.

Distributed trading is becoming highly complex. "Customers also have a choice of cost and performance options at our various data centres," says Foy at Interxion. "As energy costs rise we should start seeing users splitting applications between those that have to be close to markets and the rest without latency constraints that can run where costs are lower."

Investment firms that have so far resisted the low latency attraction of the trading hubs may soon join in. "London was the first European market to really fragment, so there was a lot of investment in low latency networks and co-location," notes Pichvai at Quod. "Exchanges tried to sell their real estate at huge prices and limit connectivity to competitors. So everyone moved to proximity data centres with connectivity. Now the law of gravity is pulling all the exchanges to London."

eServices Clusters

However, market proximity is not the only attraction of the multi-tenanted data centres. Easy access to e-services is also important. "The proximity hosting facilities play a big role in low latency and are already partnering to promote 'Software as a Service' (SaaS) add-on solutions for their members," says Cooper at Exponential-e. "They market themselves as a community, including instant access to exchanges, brokers, market data, real-time analytics, post trade services - everything you need, like a shopping mall under one security and power umbrella."

In a trading hub e-services have one big commercial advantage. "As regulations force market participants to do more intraday risk assessments, cost becomes a key issue," says Songnian Zhou, CEO of Platform Computing, a cloud technology provider. "The more you run the risk assessments, the better traders understand them, but demand is very bursty, so pay-as-you-go SaaS is increasingly attractive. Market data providers are looking at these kinds of solutions, for example."
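
The economics behind that "bursty demand" argument can be sketched with crude arithmetic. Every figure below is an invented assumption, used only to show why short, sharp risk bursts favour pay-as-you-go capacity over a fleet sized for the peak.

```python
# Rough monthly cost comparison: own a fleet sized for the intraday peak,
# or pay per compute-hour only while risk runs are actually executing.
# All prices and utilisation figures are invented for illustration.

PEAK_SERVERS = 200                 # fleet sized for the worst intraday burst
OWNED_COST_PER_SERVER_HR = 0.90    # amortised hardware, power and space
CLOUD_COST_PER_SERVER_HR = 2.50    # pay-as-you-go premium per server-hour
HOURS_PER_MONTH = 720
BUSY_FRACTION = 0.08               # risk bursts occupy ~8% of the month

owned = PEAK_SERVERS * OWNED_COST_PER_SERVER_HR * HOURS_PER_MONTH
cloud = PEAK_SERVERS * CLOUD_COST_PER_SERVER_HR * HOURS_PER_MONTH * BUSY_FRACTION

print(f"owned fleet:   ${owned:>10,.0f} / month")   # ~$129,600
print(f"pay-as-you-go: ${cloud:>10,.0f} / month")   # ~$28,800
```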

Have these cost issues really begun to drive decisions? "Within our multi-tenanted data centres users can provide their own hardware, use managed service providers or buy capacity on demand from cloud service providers," says Knuff at Equinix. "They can also access electronic services linked to any of the extranets we support. Since ecosystems vary by asset class and geography - equities, derivatives, bonds etc. - it makes enormous sense to use a shared facility to minimize the costs and lead times to switch suppliers. Indeed in many cases you can get one or two day connectivity to a choice of new suppliers, which is unheard of in your own data centre."

According to Moore at Fixnetix, some data centres are building communities and cross-selling services to the users who are present, while others, which have not focused on financial services, just provide space, cooling and power.

Foy at Interxion highlights an important consequence. "Within the data centre, communities of interest form between users and feed off each other," he says. "Traders, for example, have a choice of market data providers and can change vendors very quickly. That keeps everyone on their toes because there are no lock-ins."

"Organising disaster recovery is equally straight-forward," says Knuff at Equinix, "and firms can deploy globally with a single contract and local account team for support. However, the best of breed suppliers in Tokyo will be different from those in London or New York. Pre trade analytics, specialist market data, post trade operations, they're all there. Customers often find these firms are already connected to our relevant data centres which makes life so much simpler and often gives our customers real competitive advantage."

Cloud Power

To stay ahead of the curve, everyone wants to have a footprint in the proximity hosting centre. Yet, while cloud computing may solve the cost problem, is there not an inevitable tension between flexibility and performance?

"Cloud computing - both public and private clouds - is a key advantage of a multi-tenanted data centre," says Foy at Interxion. "Research studies suggest cost savings of 40% on data centres, 30% on networks and up to 50% on services in these shared environments. Customers like the flexibility to expand or contract their compute capacity to suit their business activity." Indeed, Foy notes that some customers claim to cover their data centre costs through reduced connectivity costs alone."

"These application services issues in private clouds are becoming key to co-location discussions since firms do not want to lock down too many servers to meet peak loads, and volumes are spiking ever higher," says Zhou at Platform. "Pay-as-you-go capacity is much more attractive to tier 2s and the hedge fund space as it reduces costs. Tier 1 players are mainly experimenting with private clouds, but they still see many issues around performance and security."

"The investment banks are still finding it very difficult to build internal compute clouds because every user wants capacity at peak times such as market open and close," notes Moore at Fixnetix. "In addition banks do not want to share processing power cross industry either for security reasons." So internally banks wind up pricing by time of day, according to Moore, but still have a lot of spare capacity outside the peaks. "The economies of scale don't work unless you mix different demand profiles," he believes. "It's still evolving."

"Where people are using a co-located trading application any layer of software will increase latency," says Zhou. "On the other hand, cloud solutions can instantiate more copies of the trading software on additional servers to keep up with higher market data rates. If volumes continue to increase, firms will probably be forced to look at these solutions even for latency sensitive applications." For less latency sensitive applications, Zhou finds that it is already happening and response times are much more dependable. "In a flash of a second, low priority VMs can be suspended or rolled off," he says.

Getting data on or off the cloud for complex portfolio or risk calculations can be a major performance challenge, and this has limited the use of clouds for these data rich applications, but solutions are emerging. "We see it as essential to schedule processing close to where data is already cached," says Zhou. "The latest versions of both our grid and cloud offerings therefore include data affinities in their scheduling policies to control distance, in terms of network hops, from the cache. The same could be applied to SaaS access points." Platform is working with the data grid suppliers, for example, to ensure this works effectively. Zhou thinks this should also make cloud solutions much more acceptable to firms who are sensitive to latency and elapsed time. He cites one example of a benchmark application for US treasuries, which takes calculations directly to the nodes that hold the data. "This can speed up calculations by a factor of ten," says Zhou.
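
The data-affinity idea can be sketched in a few lines: prefer the node whose cache already holds the required data set, otherwise pick the node with the fewest network hops back to that cache. The node names, data sets and hop counts below are purely illustrative and are not Platform Computing's actual scheduler API.

```python
# Toy data-affinity scheduler: run work where the data already sits, or as few
# network hops from it as possible. Invented example, not Platform's API.

nodes = {
    # node: set of data sets cached locally
    "node-a": {"us_treasuries_curve"},
    "node-b": {"fx_spot_ticks"},
    "node-c": set(),
}

# Assumed hop distance from each node to the cache holding each data set.
hops = {
    ("node-a", "us_treasuries_curve"): 0,
    ("node-b", "us_treasuries_curve"): 2,
    ("node-c", "us_treasuries_curve"): 4,
}

def schedule(dataset: str) -> str:
    # Exact affinity first: run the task where the data is already cached.
    local = [n for n, cached in nodes.items() if dataset in cached]
    if local:
        return local[0]
    # Otherwise minimise the number of hops back to the cache.
    return min(nodes, key=lambda n: hops.get((n, dataset), float("inf")))

print(schedule("us_treasuries_curve"))  # -> node-a (data already cached there)
```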

Perhaps we need to rethink our whole concept of clouds. People usually talk about cloud computing as a nebulous resource somewhere beyond the proprietary, fortress data centre and WAN. Instead it appears to be emerging as an integral part of dynamic, high-speed provisioning of secure local capacity and e-services in a globally distributed trading architecture. Increasingly, market participants and vendors are vying to get their software avatars into these proximity hubs. The age of the robot marketplace has arrived.