The Gateway to Algorithmic and Automated Trading

Speed to Market

Published in Automated Trader Magazine Issue 07 October 2007

How is the latest technology accelerating the development and implementation of algorithms and automated trading systems? AT asks leading solutions providers to share their views.

With:

  • Dave Bloom, vice president, product management, MarketPrizm
  • Stephen Engdahl, director, product management, Charles River Development
  • Steffen Gemuenden, co-CEO, RTS Realtime Systems Group
  • Ali Pichvai, managing director, Quod Financial
  • Philip Slavin, head of European product strategy, Fidessa

What are the main hurdles to increasing the speed of algorithm and automated trading model development?

Bloom: The sheer change rate of virtually every aspect of the algo trading space is the essence of the problem. Main hurdles centre on the need for: flexible, ultra-low latency, multi-source, information-handling technology; ultra-low latency complex event processing that does not suffer higher latency with increased complexity; a workbench-like approach to algorithm development that combines event processing with advanced calculations and statistics; robust 'live' simulation environments that facilitate full testing; and a more advanced real-time risk management paradigm with associated tools. In short, the big barrier is that the speed of trading is now driving the cost of algo trading beyond the infrastructural and technical capabilities of even the largest organisations. What is needed is a new generation of development and information management technology.

Gemuenden: From the first spark of a trading idea you're in a race against time; design and delivery need to take place extremely quickly after discovery. One of the key issues is the ability to deal with massive volumes of data. Your platform must be able to focus only on user-selectable, relevant aspects of market data. Data selection, before the routine needs to 'work' it, is a crucial element of being able to see through the forest that data has become: does the application need to see the whole book? Is the traded volume relevant? Is every trade update relevant? Moreover, the ever-increasing speed at which market data is returned to the trader's application means that not only does the system need to process this, but the logic of the trading strategy needs to be interpretive as well as responsive. A good example is that order delivery is now so fast that a system can find itself responding to market data generated by its own orders. The system needs to be aware of its own orders to avoid this, but only the best systems today have the required high-performance interpretive skills.
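
To illustrate Gemuenden's point about self-awareness of order flow, the sketch below (Python, with hypothetical names rather than any vendor's API) tags outbound orders and filters market-data updates generated by the system's own activity, as well as discarding data the strategy has not asked for.

```python
# Minimal sketch of own-order awareness in a market-data handler.
# All names are illustrative; this is not any vendor's API.

class OwnOrderAwareHandler:
    def __init__(self):
        self.live_order_ids = set()   # orders this system has submitted

    def on_order_sent(self, order_id: str) -> None:
        # Record our own orders before the exchange echoes them back.
        self.live_order_ids.add(order_id)

    def on_order_done(self, order_id: str) -> None:
        self.live_order_ids.discard(order_id)

    def on_market_data(self, update: dict) -> None:
        # Skip updates caused by our own orders to avoid feedback loops.
        if update.get("order_id") in self.live_order_ids:
            return
        # Data selection: only react to the aspects the strategy needs,
        # e.g. top-of-book changes rather than every depth update.
        if update.get("type") != "best_bid_offer":
            return
        self.evaluate_signal(update)

    def evaluate_signal(self, update: dict) -> None:
        pass  # strategy logic would go here

handler = OwnOrderAwareHandler()
handler.on_order_sent("ord-1")
handler.on_market_data({"type": "trade", "order_id": "ord-1"})  # ignored: our own order
```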

Dave Bloom
Dave Bloom, Cicada

"…the speed of trading is now driving the cost of algo trading beyond the infrastructural and technical capabilities of even the largest organizations."

Slavin: The main hurdles can be divided into four main areas. First, the number of lines of code required impacts not only on coding time, but also on the length of time required to test and retest. Second, as the number of parameters increases, so does the number of possible routes through the code that have to be tested and retested. Third, the number of simultaneous order slices can impact development time because each slice has to be monitored and adjusted if required, and slices may even interact with each other. The final issue is user-interface customisation: by enabling automated deployment of user interfaces, the provision of 'dialogs on demand' can greatly reduce the time to market. Developing against an API that has been specifically designed to support algorithmic trading clearly reduces these hurdles.
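
The 'dialogs on demand' approach Slavin refers to can be thought of as user interfaces generated from parameter metadata rather than hand-built screens. The following sketch is purely illustrative - the parameter structures and rendering function are assumptions, not Fidessa's implementation.

```python
# Hypothetical sketch: generate an order-ticket dialog from parameter metadata,
# so a new algorithm's UI can be drawn 'on demand' rather than hand-coded.

from dataclasses import dataclass

@dataclass
class Param:
    name: str
    kind: str          # "int", "float", "time", "enum"
    default: object
    choices: tuple = ()

VWAP_PARAMS = [
    Param("start_time", "time", "09:00"),
    Param("end_time", "time", "16:30"),
    Param("max_participation_pct", "float", 10.0),
    Param("urgency", "enum", "medium", ("low", "medium", "high")),
]

def render_dialog(algo_name: str, params: list[Param]) -> str:
    """Return a plain-text description of the dialog a GUI layer could build."""
    lines = [f"[{algo_name}]"]
    for p in params:
        widget = "dropdown" if p.kind == "enum" else "input"
        lines.append(f"  {p.name}: {widget} (default={p.default})")
    return "\n".join(lines)

print(render_dialog("VWAP", VWAP_PARAMS))
```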

What evidence do you see of client pressure for faster time to market for algorithms / automated trading models?

Bloom: The type and scale of the inquiries we and our partners receive has significantly evolved over the past few years, with the emphasis moving from milliseconds to microseconds for processing and from months to days (and, we soon expect, hours) for development cycles on new algo trading strategies.

Pichvai: Algorithmic trading is still a comparatively immature segment. We consider the market to be in its third generation, with the most prevalent algorithms today being those in the scheduling category (e.g. VWAP). However, market forces are introducing complex requirements for best execution and eliminating centralised exchanges, thus removing the centralised execution and market data streams that VWAP algorithms require. Meanwhile, leading banks and vendors are developing the next generation of algorithms, which take a liquidity-seeking, adaptive approach. As adoption rates accelerate for these new algorithms on both the buy- and sell-sides, innovation will inevitably take hold and reduce time to market.

Slavin: More clients want to implement an algorithmic trading solution, but they're wary of complex system integration issues. Clients are increasingly asking for customised algorithms. Markets can change rapidly and clients need to meet the requirements of prevailing market conditions with appropriate new models and be in a position to deploy these as quickly as possible. For MiFID, clients need to ensure that any new algorithms they use continue to meet their best execution policy requirements. Another key consideration is user confidence - both in terms of ownership of the order and in terms of confidence in the model itself.

What data management or compression techniques are most valuable in expediting the processing of historical data as part of development and live testing?

Bloom: The problem of testing the next generation of algo models is the need to be highly accurate about the complete replay of history for back testing. More precisely, the challenge is in capturing the dynamic resolution of the order book from the multiple trading venue/liquidity sources for an instrument, as well as how that instrument fits within the instrument universe. The other issue is stress testing with more sophisticated simulations that are not based on history. As August's volatility demonstrated, a new concept of stress testing and simulation is required. Sudden market discontinuity becomes harder to detect and defend against in a multi-venue, liquidity-fragmented environment. Compression is not the issue: the challenge is simply capturing the content in the correct structure and preserving the subtle (and sometimes not so subtle) interplay between the order books of different markets contemporaneously.

Pichvai: Compression is not always advised, since the compression/decompression of data adds some latency. The principle we follow is 'in-memory processing' (instead of direct database calls). From this principle, a whole set of data management techniques can be devised and implemented. These include synchronous/asynchronous retrieval and saving, e.g. importing large quantities of data into the cache for historical data processing.
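
A minimal sketch of the in-memory principle Pichvai describes: reads are served from a cache, and writes are persisted asynchronously off the hot path. Class and method names are illustrative assumptions.

```python
# Sketch of in-memory processing with asynchronous write-behind persistence.
# The cache and queue names are illustrative, not a specific product's API.

import queue
import threading

class InMemoryStore:
    def __init__(self, persist_fn):
        self.cache = {}                    # all reads served from memory
        self.write_queue = queue.Queue()   # writes drained asynchronously
        self.persist_fn = persist_fn
        threading.Thread(target=self._drain, daemon=True).start()

    def load_history(self, rows):
        # Bulk-import historical data into the cache up front, so the
        # algorithm never makes direct database calls on the hot path.
        for key, value in rows:
            self.cache[key] = value

    def get(self, key):
        return self.cache.get(key)

    def put(self, key, value):
        self.cache[key] = value            # update memory immediately
        self.write_queue.put((key, value)) # persist later, off the hot path

    def _drain(self):
        while True:
            key, value = self.write_queue.get()
            self.persist_fn(key, value)    # e.g. a database INSERT/UPDATE

store = InMemoryStore(persist_fn=lambda k, v: None)  # stand-in for a real DB write
store.load_history([("2007-08-16", 1.0)])
store.put("2007-08-17", 2.0)
```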

Ali Pichvai
Ali Pichvai, Quod Financial

"Compression is not always advised, since the compression/decompression of data adds some latency."

Slavin: For the large-scale handling of data, the most effective method is to pre-calculate the required historical data where possible and store it as summarised data that can be accessed quickly when handling thousands of orders simultaneously. This reduces both the bandwidth and the network resources required. Ultra-fast memory-resident database technology is also needed to support fast analytics requirements.
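
Slavin's pre-calculation approach can be sketched as rolling raw ticks up into interval summaries (volume and VWAP below) that the live system queries cheaply. The tick fields and function are assumptions for illustration.

```python
# Sketch: pre-aggregate tick history into interval summaries so live
# analytics read small summary rows instead of raw ticks.
# Tick fields (ts, price, qty) are illustrative assumptions.

from collections import defaultdict

def summarise(ticks, interval_s=60):
    """ticks: iterable of (ts_seconds, price, qty). Returns per-interval bars."""
    buckets = defaultdict(lambda: {"volume": 0.0, "notional": 0.0})
    for ts, price, qty in ticks:
        b = buckets[int(ts // interval_s)]
        b["volume"] += qty
        b["notional"] += price * qty
    return {
        k: {"volume": v["volume"], "vwap": v["notional"] / v["volume"]}
        for k, v in buckets.items() if v["volume"] > 0
    }

# Example: three ticks collapsing into one 60-second summary bar.
bars = summarise([(0, 100.0, 50), (10, 100.5, 100), (59, 101.0, 50)])
print(bars[0])   # {'volume': 200.0, 'vwap': 100.5}
```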

Are there any hardware technologies that offer particular benefits for accelerating the development and deployment of algorithms/automated models?

Bloom: The impact of hardware ebbs and flows. Who would have predicted the impact of multi-core processors just a few years ago? Many believe that firmware-based or partly firmware-based solutions (i.e. where business logic is coded into the hardware) will dominate. The answer is more closely correlated with the attributes of the algo trading strategy and its rate of change. Strategies intended to generate pennies for million-dollar exposures held for milliseconds will gravitate to hardware-centric 'firmware' solutions when they are available. However, the vast majority of algo trading is and will continue to be more sophisticated and dynamic, which will lead firms towards software solutions that are capable of migrating quickly to the fastest hardware solution.

Gemuenden: Obviously the need for speed has a bearing on the hardware that forms part of the overall 'platform'. But it remains to be seen whether the hardware-based limitations on modifying programme structures (beyond the bread-and-butter components of the trading application food-chain) are greater or lesser than those in software, and whether, with optimal operating systems and processing chips, greater flexibility in modification (agility and adaptability that mimic the changing world) will be beaten by low-level grunt (brute force and proximity).

Pichvai: Execution algorithms are not built for parallelised software, which reduces the scope for utilising grid computing hardware (this is not the case for risk/pricing applications, however). At the same time, algorithms are still software-based rather than hard-coded into the processor itself, which would greatly reduce latency. You will likely see a few vendors adopting this hard-wired approach in the not-too-distant future.

How much time can be saved by having a common data model within the application where the algorithm/automated model will be deployed?

Bloom: The trade-off is development time versus performance, and performance needs always win. The better strategy is to employ data-handling technology that delivers the required content with low latency, but in a format optimised for the specific algorithmic trading process. Common data models are intrinsically generalised and as such are rarely the optimal structure for a particular requirement.

Slavin: A common data model for all forms of internal data is fundamental to rapid algorithm development and deployment, allowing you to leverage all the data model aspects of the underlying trading system. These data structures can be re-used from model to model, along with utility functions such as price-tick handling. By sharing the same algorithmic data model with the OMS, instead of merely interfacing with an OMS, the mapping required between systems is reduced, which clearly has an impact on the speed of deployment.
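
As one concrete example of the shared utility functions Slavin mentions, a common price-tick helper can be written once and re-used by every model. The tick-size bands below are assumed for illustration only.

```python
# Sketch of a shared price-tick utility that every algorithm re-uses
# from the common data model. The tick-size bands are illustrative.

from decimal import Decimal, ROUND_DOWN, ROUND_UP

# (upper price bound, tick size) - assumed bands for illustration only
TICK_BANDS = [(Decimal("1"), Decimal("0.001")),
              (Decimal("10"), Decimal("0.005")),
              (Decimal("9999999"), Decimal("0.01"))]

def tick_size(price: Decimal) -> Decimal:
    for bound, tick in TICK_BANDS:
        if price < bound:
            return tick
    return TICK_BANDS[-1][1]

def round_to_tick(price: Decimal, side: str) -> Decimal:
    """Round a working price onto the tick grid: down for buys, up for sells."""
    tick = tick_size(price)
    rounding = ROUND_DOWN if side == "buy" else ROUND_UP
    return (price / tick).to_integral_value(rounding=rounding) * tick

print(round_to_tick(Decimal("5.2345"), "buy"))   # 5.230
```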

What back testing and simulation techniques combine the required levels of speed and robustness to bring algorithms/automated models to market quickly?

Bloom: Back testing now demands very low tolerance levels and capture of the evolution of the various order books, as well as the replay of that evolution in a way that facilitates recreation of the actual market. In the U.S., due to the various consolidated tapes, this process is significantly easier than in the post-MiFID world in Europe, where there is no consolidated tape and the clearing solution of each market is different, forcing adjustment to the composite book. Monte Carlo simulations can be useful, but outlier days that exhibit sector and market discontinuities are more important when stress testing new trading models. This is particularly true with fragmented liquidity, which can make such discontinuity harder to detect and properly safeguard against.

Gemuenden: It is key that the backtest exchange simulation is accurate, e.g. supporting FIFO and pro-rata trade models and transaction costs, and mimicking market behaviour through individual configuration of latency per exchange. Where possible, the backtest should also suitably mimic market impact and use full, unaggregated market depth to provide maximum accuracy.
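
A simplified sketch of the FIFO (price-time priority) matching and per-exchange latency Gemuenden describes; real simulators would add pro-rata allocation, transaction costs and market impact, which are omitted here.

```python
# Simplified backtest matcher: price-time (FIFO) priority plus a configurable
# per-exchange latency before a simulated order reaches the book.
# Structures are illustrative, not any vendor's simulator.

import heapq
import itertools

class FifoBacktestExchange:
    def __init__(self, latency_ms: float):
        self.latency_ms = latency_ms
        self._seq = itertools.count()           # arrival order breaks price ties
        self.resting_asks = []                  # min-heap of (price, seq, qty)

    def add_resting_ask(self, price: float, qty: float) -> None:
        heapq.heappush(self.resting_asks, (price, next(self._seq), qty))

    def submit_buy(self, sent_ms: float, limit: float, qty: float):
        arrive_ms = sent_ms + self.latency_ms   # order only acts after latency
        fills = []
        while qty > 0 and self.resting_asks and self.resting_asks[0][0] <= limit:
            price, seq, avail = heapq.heappop(self.resting_asks)
            traded = min(qty, avail)
            fills.append((arrive_ms, price, traded))
            qty -= traded
            if avail > traded:                  # put remainder back, same priority
                heapq.heappush(self.resting_asks, (price, seq, avail - traded))
        return fills

ex = FifoBacktestExchange(latency_ms=2.0)
ex.add_resting_ask(100.00, 300)
ex.add_resting_ask(100.00, 200)     # later arrival, lower FIFO priority
print(ex.submit_buy(sent_ms=0.0, limit=100.00, qty=400))
# -> fills 300 from the first resting order, then 100 from the second
```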

Pichvai: Execution algorithms are very sensitive to dynamic market conditions. The most common approach is to apply historical data to validate the success of the algorithm. This approach is not time consuming per se, but its actual relevance is a big question, and the availability and accuracy of the data used determine the quality of the test run. In terms of relevance, the only true test is the market itself, whereas for risk/pricing, simulation methods such as Monte Carlo do a good job. The problem comes from the fact that execution (in the market) behaves like a complex dynamic system, which is, in essence, non-deterministic (or too complex to be deterministic with our current toolset).

Can off-the-shelf software technologies support rapid development and deployment or does proprietary technology always have an edge?

Engdahl: Rapid deployment to the trader's desktop requires that an algorithm can be defined within the scope of the order tickets of his or her trading system. Algorithmic providers who adhere to the FIX protocol with minimal use of user-defined fields for the inputs to their algorithms can take advantage of more rapid deployment than those requiring additional non-standard integration work. The range of algorithmic parameters and capabilities supported within the FIX standard is quite broad.
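
Engdahl's point can be illustrated by contrasting two deliberately incomplete tag=value fragments: one parameterises the algorithm through standard FIX 4.4 strategy fields, the other through a user-defined tag (9001 below is hypothetical) that each OMS/EMS must map by hand.

```python
# Illustrative tag=value FIX fragments (not complete, session-ready messages).
# Standard FIX 4.4 fields: 35=MsgType, 55=Symbol, 54=Side, 38=OrderQty,
# 40=OrdType, 847=TargetStrategy (1 = VWAP, 2 = Participate), 849=ParticipationRate.
# Tag 9001 is a hypothetical user-defined field, shown only for contrast.

SOH = "\x01"

def fix(pairs):
    return SOH.join(f"{tag}={value}" for tag, value in pairs)

# Algo parameters carried in standard strategy tags: easy for any FIX-aware
# OMS/EMS to map without bespoke integration work.
standard = fix([(35, "D"), (55, "VOD.L"), (54, 1), (38, 100000),
                (40, 1), (847, 2), (849, 20)])

# Same intent expressed through a user-defined tag: every counterparty
# has to read the broker's spec and hand-map it.
custom = fix([(35, "D"), (55, "VOD.L"), (54, 1), (38, 100000),
              (40, 1), (9001, "Participate;MaxPart=20")])

print(standard.replace(SOH, "|"))
print(custom.replace(SOH, "|"))
```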

Steffen Gemuenden
Steffen Gemuenden, RTS

"Commoditised tools are the key to rapid deployment as they enable algo providers to focus on their areas of expertise."

Gemuenden: Commoditised tools are the key to rapid deployment as they enable algo providers to focus on their areas of expertise. Using advanced scripting languages geared towards algo trading means that algos can rapidly be prototyped, tested, backtested and rolled out into the live market. Another key tool is exchange connectivity; having an algo engine that quickly determines when to trade is important, but is wasted if you can't get it to the market quickly. A solution that provides direct exchange connectivity using proximity hosting will be the fastest.

Pichvai: We believe 'best of breed' technology provides the best long-term solution, and that budget, time and expertise are limited enough that we shouldn't always re-invent the wheel. It is most important to select the right vendors and to integrate them well. The edge of proprietary technology lies in solving one specific problem well instead of trying to resolve the generic industry problem, and in having a dedicated and simple architecture/design. But if you can apply the principles that make proprietary technology successful, the end result is a better, cheaper and more adaptable solution.

Slavin: Super fast real-time database technology, with fault tolerant support, is essential. You also need a compressed, self-describing communications protocol for all communications with databases and exchange gateways (non-FIX based). An object-orientated programming language is required for rapid model development, together with a good integrated development environment for debugging and testing, and dynamic dialog technology is essential for instantaneous model deployment.

What are the main challenges today in deploying third-party algorithms?

Bloom: The key issue today is the lack of standards for how third-party algorithms are to be integrated within their operating platforms. The Java and .Net worlds have their respective container models which have largely solved the issue of how third-party code can be integrated within a host environment, but there is nothing yet in the industry which codifies how algorithms are supposed to interact with their environments.
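
To make the gap concrete, the sketch below shows what such a contract might look like - a hypothetical host-side interface that any third-party algorithm would implement, analogous to the Java/.Net container models Bloom cites. It is not an existing standard.

```python
# Hypothetical sketch of an algorithm/container contract - the kind of
# standard the industry lacks. Method names are invented for illustration.

from abc import ABC, abstractmethod

class AlgoContainerContract(ABC):
    """What a host platform might require every third-party algorithm to implement."""

    @abstractmethod
    def on_start(self, params: dict) -> None:
        """Called once with validated parameters when the parent order arrives."""

    @abstractmethod
    def on_market_data(self, update: dict) -> None:
        """Called for each (pre-filtered) market-data event."""

    @abstractmethod
    def on_fill(self, fill: dict) -> None:
        """Called when a child order executes."""

    @abstractmethod
    def on_stop(self, reason: str) -> None:
        """Called when the host cancels, completes or force-stops the algorithm."""
```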

Engdahl: We see three main hurdles to deploying third-party algorithms: ambiguity in FIX specification documentation among providers; lack of standardised FIX specification documentation across sell-side organisations; and the plethora of algorithm rollouts aimed at the buy-side.

Ambiguity - Some sell-side algorithm providers have incomplete documentation regarding their algorithmic FIX interfaces, while others lack concise information regarding certain aspects of their interface, such as validation requirements. This requires more verbal communication between the algorithm provider and the vendor before the algorithm can be delivered to the client.

Lack of standardisation - There is little consistency among providers in terms of how algorithmic FIX interface specifications are formatted and communicated to vendors. Within large organisations, different types of specifications and interfaces may exist for separate geographical offerings, as well as for algorithmic trading of different asset classes.

Stephen Engdahl
Stephen Engdahl, Charles River Development

"… many new algorithms are often rolled out before they are 'fully cooked' … adding a costly second cycle to development efforts."

Plethora of Rollouts - When rolling out new algorithms, there's a tendency to meet buy-side needs by providing as many choices as possible, rather than a single algorithm that can gain traction. Furthermore, many new algorithms are rolled out before they are 'fully cooked'. As a result, changes are often required soon after the release, adding a costly second cycle to development efforts.

Gemuenden: There are of course many, but we would prioritise three: first, understanding how the algorithm may be affected by market conditions; second, recognising its strengths compared with competitors; and, finally, identifying risk and managing market response during the algo lifecycle (e.g. a three-hour volume participation algorithm may require user intervention).

Slavin: EMS software vendors are faced with considerable challenges arising from the sheer number of algorithms and the increasing complexity of parameters. It takes time to build or enhance a dialog and to roll out a new version, but this overhead can be greatly reduced by redrawing dialogs on demand.

How much does the complexity of an algorithm impact the speed of the deployment process in an OMS or EMS?

Engdahl: It's the simplicity of implementing a strategy - not the complexity of the algorithm itself - that impacts deployment speed. An extremely complex algorithm that requires a trader to enter only a few simple criteria may have very little impact on the speed of deployment. On the flip side, a rudimentary algorithm that requires a plethora of conditional user-defined fields and complex validations can significantly impact deployment speed.

Gemuenden: In theory, the more complex the algorithm the longer the rollout process, but the key is for the EMS to provide tools to modularise and simplify this. By using EMS capabilities designed for algo trading, a complex algo can be deployed as quickly as a basic one. Concepts such as custom functions, order agents, automated algo shutdown and real-time parameter updates are examples of features that enhance the toolkit available to the algo provider.
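
Two of the features Gemuenden lists - real-time parameter updates and automated shutdown - can be sketched as hooks on a base algorithm class. The class below is hypothetical, not RTS's actual toolkit.

```python
# Hypothetical base class showing real-time parameter updates and an
# automated shutdown guard - two of the EMS features mentioned above.

import time

class ManagedAlgo:
    def __init__(self, params: dict, max_runtime_s: float):
        self.params = dict(params)
        self.deadline = time.time() + max_runtime_s
        self.running = True

    def update_params(self, changes: dict) -> None:
        # Applied between events, so the next slice uses the new values
        # without restarting the parent order.
        self.params.update(changes)

    def check_auto_shutdown(self) -> None:
        # Automated shutdown: stop working the order once a guard trips
        # (a time limit here; a real guard might watch fills, price or volume).
        if time.time() >= self.deadline and self.running:
            self.running = False
            self.cancel_children()

    def cancel_children(self) -> None:
        pass  # child-order cancellation would go here

algo = ManagedAlgo({"participation_pct": 10}, max_runtime_s=3 * 3600)
algo.update_params({"participation_pct": 15})   # trader intervention mid-flight
```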

Slavin: Given a good algorithmic development framework, there is little or no correlation between the complexity of the model and the speed of its deployment. Where model complexity does have an impact is in the testing phase.

What impact will FIX's FIXatdl protocol have on time to market for algorithms?

Engdahl: The FIXatdl protocol is designed to help sell-side providers develop and deploy algorithms more quickly onto OMS and EMS desktops used by buy-side traders. To the extent that the protocol can standardise how algorithmic FIX interface specifications are documented and communicated by the sell-side, this could bring a substantive improvement in turnaround times. It would reduce the ambiguity and time involved in interpreting and implementing a new specification. However, idealists should be cautioned not to believe that the FIXatdl protocol is the answer to all problems. The protocol does not automatically eliminate the requirement for sell-side algorithm providers and OMS/EMS vendors to test and certify their integration before it is put into use by the first client.

Philip Slavin
Philip Slavin, Fidessa

"Super fast real-time database technology, with fault tolerant support, is essential"

Additionally, FIXatdl introduces the controversial concept of graphical user interface (GUI) standards. Multi-broker systems must maintain consistency of user interface principles across a set of brokers, but FIXatdl does not provide for this consistency. To give an example, it would be very likely for each provider of algorithms to select their own particular preference for the location, operation and labelling of the limit price field in the GUI. This inconsistency across providers would make it very difficult for a trader to learn and use a multi-broker system efficiently and could lead to mistakes and trading errors. Improvement in specifications, including clear details surrounding validation rules and conditional field requirements, is needed. But FIX is overstepping if it gets deep into GUI design.

Gemuenden: There are two aspects to algorithmic trading: the algorithm's underlying model and the parameters that define its operation. By standardising the second of these, FIXatdl will make it easier for people to design algorithms and provide them to their clients. Rather than affecting the time to market for algorithms, it will probably increase the breadth of people developing algorithms. This makes it easier for people to find the algorithmic models that they choose to run and is also appealing to software vendors looking to provide toolkits to algorithm providers.

Pichvai: FIXatdl provides a standard language for the parameters of the algorithm. This simplifies the communication between the buy- and sell-side institutions. Yet it is not designed as an algorithmic coding language, which may limit its effectiveness.