Jukka Ruuska: "… using FIX FAST means lower latency than proprietary protocols, less bandwidth usage, and a simpler message structure that will facilitate system upgrades at lower costs."
The Nordic Exchange has reported record share turnover recently. To what extent has an increase in algorithmic and automated trading played a role?
The Nordic Exchange comprises several exchanges operating on a single technology platform and under one set of rules, and as such there are different levels of algorithmic and automated activity across the exchanges. There is a higher level of algorithmic trading in Helsinki, for example, because that's where Nokia, our biggest and most liquid stock, is listed. It is difficult to be precise about levels of algorithmic and automated trading, not least because there are differences in how the two are defined, but our working assumption is that both are growing very rapidly, currently accounting for at least a fifth of overall trading volume on the Nordic Exchange and possibly quite a lot more.
In total, daily trade value at the Nordic Exchange increased to SEK50,523m (EUR5,446m) in the first half of 2007, from SEK43,722m (EUR4,713m) in the first six months of 2006. Over the same period, daily transaction volume increased to 177,423 from 129,461. It's worth noting that for every one unit increase in trading volume, we estimate a 2.5 unit increase in order traffic. So what's driving this growth? Clearly, we're seeing different trading patterns from investment banks and some types of investors; execution is increasingly taking place via algorithmic means to limit risk and make it more efficient. From an exchange's point of view, this is excellent news. Algorithmic and automated trading are bringing heavy flows to the Nordic Exchange and improving liquidity significantly, thus making electronic order books much more efficient.
How is OMX handling the increased volume in message traffic that accompanies growth of algorithmic and automated trading?
There is an ongoing process of continuous improvement to remove capacity bottlenecks from our systems, on both the network side and the core system side. We have invested quite a lot in our hardware in order to handle the capacity increases that have been needed. We have also taken the strategic step of investing in the development of a next generation trading platform called GENIUM, which will also be the core of our market technology offering. This will enable us to increase capacity to a completely different level at much lower cost. We currently operate with at least 70 per cent capacity headroom, but GENIUM will put us into a completely different ballpark.
What are the key differences between GENIUM and the current platform?
At the core of GENIUM is a new matching engine which uses a patented messaging solution called High Speed Message Bus, which will play a major part in improving capacity and making the system much more efficient. We have taken three main parameters into account in designing GENIUM: speed (we have some high but undisclosed ambitions for reducing latency); flexibility (GENIUM is component based, so the whole system isn't impacted when we need to make changes to specific areas of functionality); and reliability, which of course is crucial to any exchange. On top of this, the new system is designed to bring new products and services to market much more quickly than has been the case previously. We expect to implement GENIUM fully on the Nordic Exchange by 2009, but are not committing ourselves to any more precise a timetable at present.
We conducted a thorough analysis of the middleware and transaction technology market, but concluded that no third-party vendor offered - or was likely to offer - the capacity and robustness required by the financial marketplace industry in the timeframe applicable to GENIUM. So OMX decided to develop a high-speed delivery platform, purpose-built for the exchange industry to deliver maximum performance and robustness. It provides peerless high-performance transaction handling and availability as well as all common functional features, like reference data and session management. The platform is based on industry standard operating environments and deploys unified models for business and technical operations to secure maximum efficiency.
So what initiatives are planned to improve latency and performance on the Nordic Exchange prior to the implementation of GENIUM?
We will continually invest in improvements to the existing platform while GENIUM comes on stream. A major milestone for us will be the introduction of a FIX FAST interface for our information services customers in the next quarter (Q4 2007) and for our trading customers in spring 2008, as this will bring remarkable reductions in latency. Open standard protocols such as FIX FAST are crucial because they facilitate efficient connectivity, interoperability and automation, as well as allowing for greater customer choice in hardware and software and ease of integration with new solutions. In choosing open standards, OMX makes no compromises in terms of performance. As well as enabling a high-speed interface with clients, using FIX FAST means lower latency than proprietary protocols, less bandwidth usage, and a simpler message structure that will facilitate system upgrades at lower costs. Moreover, many ISVs already support FIX and are in the process of adding the FAST protocol as well.
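The bandwidth savings attributed to FIX FAST come largely from its field-encoding approach, where each message carries only the fields that changed since the previous one. A minimal sketch of that delta idea (hypothetical field set and layout, not the actual FAST specification or OMX's implementation):

```python
# Sketch of the field-delta idea behind FAST encoding (hypothetical):
# each update carries a presence map plus only the fields that changed
# since the previous message on the stream.

FIELDS = ["symbol", "price", "size", "side"]

def encode(prev, msg):
    """Return (presence_map, changed_values) relative to prev."""
    pmap, values = [], []
    for f in FIELDS:
        changed = prev is None or msg[f] != prev[f]
        pmap.append(1 if changed else 0)
        if changed:
            values.append(msg[f])
    return pmap, values

def decode(prev, pmap, values):
    """Rebuild the full message from the delta and the previous message."""
    msg, it = {}, iter(values)
    for f, bit in zip(FIELDS, pmap):
        msg[f] = next(it) if bit else prev[f]
    return msg

m1 = {"symbol": "NOK1V", "price": 24.10, "size": 500, "side": "B"}
m2 = {"symbol": "NOK1V", "price": 24.11, "size": 500, "side": "B"}

pmap, vals = encode(m1, m2)      # only the price changed
assert pmap == [0, 1, 0, 0] and vals == [24.11]
assert decode(m1, pmap, vals) == m2
```

In a quote stream where most fields repeat from tick to tick, this kind of encoding shrinks each message to a small presence map plus a handful of changed values, which is where the reduced bandwidth and simpler decoding path come from.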
We recently announced the launch of our colocation offering, Proximity Services, which will also reduce latency significantly for clients who choose to colocate next to our servers in Stockholm. We're at the early stages, but we expect to reduce latency for messages sent from colocated servers to the exchange's matching engine and back again to below a millisecond. It's also worth remembering that our current system is one of the fastest in Europe, according to regular conversations with clients, particularly those based in London. We always accept of course that everyone seems to be able to find a way of defining latency in a way that suits them. It's now the case that there are lies, damned lies and latency statistics!
What impact will the NASDAQ merger have on GENIUM?
The ultimate ambition is to use only one platform for all markets. That platform is going to be a combination of NASDAQ's platform and GENIUM, using the best parts of each. Both companies have very robust technologies so that's a very good starting point.
What hardware changes have been or are being made to underpin performance of the trading system?
In safeguarding the efficiency of the exchange's systems, a core priority is to ensure that the hardware does not become a bottleneck. Reliability is also a crucial issue, so having up-to-date hardware which can support capacity growth is an important factor.
GENIUM is based on a mainstream infrastructure and runs on Linux. We will use an industry standard Intel/AMD-based hardware platform in order to leverage widely available industry skills and optimise total cost of ownership. GENIUM will provide the Nordic Exchange with one consolidated trading system for cash and derivatives, and we will be able to consolidate all trading operations onto a unified and easily extendable hardware platform, whilst protecting investments already made in key components such as storage and core networks. The new GENIUM infrastructure is also easily integrated with additional member services offered by the exchange, such as the new Proximity colocation services for optimising member latency.
How quickly can additional capacity be deployed, now and in the future?
Our strategy of continuous development means that we have been able to increase our capacity by a third each year. With GENIUM, capacity will be many times current levels. Its extensible architecture means that relatively speaking it will be simple to increase capacity as and when demand arises. Although we do not foresee any difficulty meeting capacity requirements with the current system, it can be a complex process. GENIUM will make capacity management a lot simpler.
As with OMX's current solutions, GENIUM can scale dynamically across several servers without affecting latency or other critical system properties. Using multiple blade servers, we will be able to add extra business processing capacity to our configuration in a timely and cost-efficient manner instead of trying to predict large investments in infrastructure and processing capacity far in advance. In our view, GENIUM will thus strengthen OMX's position as a supplier of the most scalable trading solution in the market.
Have matching rules on the Nordic Exchange changed to accommodate algorithmic and automated trading? If so how and why?
In terms of matching rules, it's very much a question of whether there are any exceptions from the principle of price-time priority. We have an exception for orders from the same broker, in that these can be matched even if they do not follow time priority. Clearly this is not done in order to facilitate algorithmic trading; it's more of a cost-efficiency issue for our members. One broader market infrastructure change that is very important from this point of view is that we have tightened our tick size tables to allow potentially narrower spreads for our most liquid shares.
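The price-time priority baseline referred to here can be illustrated with a short sketch: resting orders are ranked by best price first, then by arrival time, and an incoming order fills against them in that sequence. This is a hypothetical illustration of the general rule only, not OMX's matching engine, and it omits the same-broker exception described above:

```python
import heapq
from itertools import count

# Sketch of price-time priority matching (illustrative, not OMX's engine).
# Resting bids are kept in a heap ordered by best price, then arrival time.

class BidBook:
    def __init__(self):
        self._heap = []                 # entries: (-price, seq, size)
        self._seq = count()             # arrival order = time priority

    def add(self, price, size):
        heapq.heappush(self._heap, (-price, next(self._seq), size))

    def match_sell(self, limit_price, size):
        """Fill an incoming sell against bids at or above limit_price."""
        fills = []
        while size and self._heap and -self._heap[0][0] >= limit_price:
            neg_price, seq, avail = heapq.heappop(self._heap)
            traded = min(size, avail)
            fills.append((-neg_price, traded))
            size -= traded
            if avail > traded:          # leave the remainder resting
                heapq.heappush(self._heap, (neg_price, seq, avail - traded))
        return fills

book = BidBook()
book.add(24.10, 300)    # arrives first
book.add(24.11, 200)    # better price jumps the queue
book.add(24.10, 100)    # same price, later arrival -> behind the first bid

# A sell for 450 at 24.10 hits the best price first, then earliest arrival.
assert book.match_sell(24.10, 450) == [(24.11, 200), (24.10, 250)]
```

The same-broker exception would relax only the time-priority leg of this ordering for orders sharing a member identity; price priority itself is untouched.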
Also, at the beginning of June, we launched pre-trade anonymity for the first time. Traditionally, Nordic markets have been - and indeed still are - extremely transparent. Broker identities have always been visible on a pre-trade basis, but this will no longer be the case going forward. We expect the introduction of pre-trade anonymity to remove any fears that trading firms might have of being front-run by other market participants, i.e. firms that use trading strategies based on following the trading patterns of others. This should result in more volume through the order book instead of trades being matched somewhere else. We have also launched a consultation exercise on post-trade anonymity, in which we are proposing that broker identity will no longer be visible on the post-trade data feed. We expect the consultation to conclude in the very near future.
What is your platform's typical response time and process in terms of acknowledgement of limit and market orders?
We send immediate confirmation on all orders, but we're in the midst of reviewing these issues and are not able to disclose any detailed figures at this point. Performance and capacity are more than sufficient; any variations are largely dependent on order flow, which varies from day to day.
What depth of historical data sets is OMX providing to users of algorithmic/automated trading models?
Today, the depth of both real-time and historical data is up to 20 levels of equity cash information. Exchange members may use full order book information for algorithmic and/or automated trading in our system. However, with effect from November 12, we will launch the GENIUM Low Latency feed for equity cash, which will enable all customers that are not members, including vendors, to access full order book information, i.e. all levels of bids and asks, order by order.
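The distinction drawn here is between an order-by-order feed (every resting order visible individually) and an aggregated depth view (orders summed into price levels, truncated to the top N). A minimal sketch of deriving the latter from the former (hypothetical code; the actual GENIUM feed format is not described in this interview):

```python
from collections import defaultdict

# Sketch: aggregate an order-by-order feed into an N-level depth view
# (hypothetical illustration, not the GENIUM Low Latency feed format).

def depth_view(orders, side, levels=20):
    """orders: iterable of (price, size) pairs for one side of the book.
    Returns the top `levels` price levels with aggregated sizes."""
    by_price = defaultdict(int)
    for price, size in orders:
        by_price[price] += size
    best_first = sorted(by_price.items(),
                        key=lambda pv: pv[0],
                        reverse=(side == "bid"))   # bids: highest price first
    return best_first[:levels]

bids = [(24.10, 300), (24.11, 200), (24.10, 100), (24.09, 50)]
# Two orders at 24.10 collapse into one 400-share level.
assert depth_view(bids, "bid", levels=2) == [(24.11, 200), (24.10, 400)]
```

An order-by-order feed is simply the raw `orders` list before this aggregation step, which is why it carries strictly more information than any fixed-depth view.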
Do you see mostly execution improvement algorithms or is there more sophisticated multi-asset class algorithmic trading (e.g. across equities and options/futures)?
It's always very difficult to estimate the relative levels of different types of algorithmic trading. Clearly the main activity, from both investment banks and buyside firms, is the use of algorithms to adjust risk positions in the most efficient manner, but arbitrage between different asset classes also takes place and is quite sizeable. The field of use for computer-based trading is getting broader all the time, and there will soon come a time when there is no trading without some kind of computer-based support.
And what kind of challenges do these changes pose for exchanges in the future?
The challenges are twofold. On the one hand, exchanges need to be able to provide sufficiently low latency, enabling higher frequency and higher velocity trading and allowing firms to manage their risk positions by exploiting market opportunities that open and close very quickly. The other key challenge arises from the fact that the continuous increase in volumes puts further strain on capacity, not just at exchange level but along the entire transaction chain. In our case, we don't control the clearing and settlement infrastructure, but clearly our competitiveness and desirability as a venue for algorithmic trading depend on the efficiency of settlement and clearing providers.
As President of FESE (Federation of European Securities Exchanges), how are European exchanges adapting to the growth of algorithmic and automated trading?
From FESE's point of view, algorithmic trading is a wonderful thing because it is bringing welcome liquidity to central order books and demonstrating the efficiency of existing market mechanisms. From a European point of view, there are a number of regulatory issues to be resolved. Currently, the rules are the same across Europe, but they are implemented and interpreted differently at the local level. That is clearly a concern from FESE's perspective. Another issue that the growth of algorithmic trading raises is the fragmented nature of the European clearing and settlement infrastructure - or, to be more precise, the lack of one. It is almost as fragmented now as it ever was, and as such is a major obstacle to the effectiveness of European financial markets at a time when algorithmic and other traders can act regardless of borders and national rules and procedures.