The Gateway to Algorithmic and Automated Trading

AT webinar: Optimising systems for latency isn't a 'one-trick pony'

First Published 5th June 2014

Technology that reduces latency has to avoid a one-size-fits-all approach, said tech experts during an AT webinar

Mark Skalabrin, CEO, Redline Trading Solutions

"The typical system falls way behind, and that is really where most systems are today."

Different market players hold widely divergent views towards the importance of latency to their operations. And that can translate to varying budgets for optimising performance of trading applications.

Automated Trader conducts an annual survey of global trading trends. Last year's report showed that nearly 30% of investment banks said they were entirely dependent on latency, while only 20% reported it made no difference. Other sell-side firms rated latency's importance similarly highly.

On the flip side, asset managers showed the lowest dependence on latency, as their time horizons are longer. Individual traders and hedge funds were also at the lower end of the scale.

So how do vendors accommodate the variety of user groups with differing latency sensitivities?

John Encizo, senior technical sales specialist for IBM, said that different systems target specific latency-sensitive workloads. Configuring components optimally means that customers can determine to what extent they want to "push the latency envelope" internally or in a distributed way.

"We have worked with customers around how to actually tune their applications. So, where best to distribute threads? Where best to distribute IRQ (Interrupt Request) so that they can achieve the maximum performance with lowest latency," he said.
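The thread-placement side of this tuning can be sketched in code. The snippet below is an illustration of the general technique on Linux, not IBM's tooling; the `pin_to_core` helper is a hypothetical name, and the choice of core is arbitrary.

```python
import os

def pin_to_core(core: int) -> set:
    """Pin the calling process/thread to a single CPU core so its cache
    state stays warm and it is not migrated by the scheduler.

    Uses the Linux-only sched_setaffinity API; returns the resulting
    affinity set, or an empty set where the API is unavailable.

    IRQ placement is the complementary step, done outside the process,
    e.g. by writing a CPU mask to /proc/irq/<N>/smp_affinity so that
    NIC interrupts land on a core near the consuming thread.
    """
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core})  # 0 = current process
        return os.sched_getaffinity(0)
    return set()
```

In practice a low-latency deployment would pin each hot thread to its own isolated core and steer the relevant NIC interrupts to an adjacent core on the same NUMA node, which is the kind of distribution Encizo describes.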

Encizo was speaking alongside tech experts in an Automated Trader webinar hosted by Intel, which included executives from Azul Systems and Redline Trading Solutions.

The CEO of Redline, Mark Skalabrin, said that he expects more firms to care about latency going forward, both to reduce costs and to improve trading performance.

"The typical system falls way behind, and that is really where most systems are today," he said. "They do well under a light load but under a heavy load fall way behind the market. There is a lot of room across the board for improvement in performance."
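Why a system that copes with light load "falls way behind" under heavy load can be shown with a toy queue model. This is an illustrative sketch, not Redline's method: a feed handler with fixed per-tick capacity keeps up while arrivals stay below that capacity, and accumulates an ever-growing backlog once they exceed it.

```python
def backlog(arrivals_per_tick, serviced_per_tick):
    """Track queue depth over time for a consumer that can process at
    most `serviced_per_tick` messages each tick."""
    q = 0
    depths = []
    for arrivals in arrivals_per_tick:
        q = max(0, q + arrivals - serviced_per_tick)
        depths.append(q)
    return depths

# Capacity of 10 messages/tick in both scenarios (arbitrary numbers).
light = backlog([5] * 100, 10)   # arrivals below capacity: queue stays empty
heavy = backlog([20] * 100, 10)  # arrivals above capacity: backlog grows by 10/tick
```

Here `light[-1]` is 0 while `heavy[-1]` is 1000: every message at the back of that queue sees the market as it was hundreds of ticks ago, which is exactly the "falling behind" Skalabrin describes.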

Aside from latency, Skalabrin pointed out that customers also need to find solutions for recording, archiving and using data in a validation environment, as well as for regulatory analysis.

"People can't rely on latency alone. It needs to be there but the best systems build on that," he said.

Azul Systems' CTO, Gil Tene, argued that everything is latency sensitive; it just depends on how users define the time horizon, whether that's nanoseconds, milliseconds, or even minutes and days. The important aspect is how firms feel they compare to the competition, or to what extent they are challenged by latency behaviour.

Azul tends to work with systems where trade latency is critical, though there are other considerations as well, such as updating risk models or distributing large amounts of data.

"In all cases it is a situation where people want the best latency for the machine operation that is happening. We focus not just on the speed itself, but on the consistency of that speed."
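The distinction between speed and consistency of speed usually shows up in how latency is reported: tail percentiles rather than the mean. The sketch below is a generic illustration of that idea, not Azul's methodology; the timed workload is a stand-in.

```python
import time
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(pct / 100.0 * len(s)) - 1))
    return s[k]

# Time 1,000 runs of a stand-in operation (here: a trivial computation
# in place of handling one market-data message).
latencies = []
for _ in range(1000):
    t0 = time.perf_counter_ns()
    sum(range(100))
    latencies.append(time.perf_counter_ns() - t0)

mean = statistics.mean(latencies)
p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
```

A system with a good mean but a 99th percentile orders of magnitude worse is fast but not consistent; it is the gap between `p50` and `p99` (and beyond) that tuning for consistency tries to close.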

In terms of trends in programming languages and APIs used to develop low-latency trading applications, there has been a shift away from traditional C and C++. For software-based systems, Tene said, Java, C#, Scala and Clojure are showing up in trading infrastructure.

"Whether in algos, where we see the most value in this because of the productivity needs, or in other systems like smart routers and matching engines, we see these languages pop up all over the place," he said, adding that he believes this is primarily driven by productivity and cost factors rather than speed.

If you missed the live webinar, a recording can be accessed HERE.