Exchange Views: LSE Multi Asset Futureproofing
Technology first came to the forefront at the London Stock Exchange in 1986 when the exchange abandoned floor trading. Since then, technology has only continued to increase in importance, particularly since the exchange's switch to listed company status in 2001. AT talks to David Lester, the exchange's CIO, about how it has responded to challenges such as algorithmic trading, and about the futureproofing of its new trading platform.
Reduced scalability costs were one of the reasons for the exchange's switch from Tandem to commodity Intel/AMD technology. Was anticipated/actual growth in message traffic due to automated/algorithmic trading a major factor in this need to reduce scalability costs or was it just a case of growth in general?
Our increased focus on scalability dates from 2003, so I think you could say it was a combination of growth in general and also more specifically algorithmic trading. We were trying to align the technology cost base with increases in market volume. We wanted the ability to perform an upgrade almost immediately rather than as a six- or nine-month project, and for that upgrade to cost hundreds of thousands of pounds rather than millions, as it did in the Tandem environment. With the benefit of hindsight, our efforts in this area have certainly put us in a strong position today as regards the growth created by algorithmic trading.
The data platform element (Infolect) in your upgrade strategy is already live, but what is the status of the new trading platform?
We are about to embark on customer testing, which is due to start in October. The platform has already been built and fully tested internally. The customers now have a timetable for the next nine-month period of testing. We also have an early access service that is starting in November. Once customers have passed conformance testing, they can move on and start testing against the early access service, which is essentially the production system itself. We have more than three hundred customers to take through all that testing between now and our projected production go-live date, some time in the second quarter of 2007.
When the new trading platform launches, how quickly will it be possible to implement significant capacity upgrades?
"It would be possible to upgrade capacity by ten to twenty percent over a weekend if not over an evening."
How much capacity headroom do you allow?
At this point in time we have at least fifty percent headroom over and above the busiest day seen to date. Since we put Infolect into production last September we have seen forty-nine of the fifty busiest trading days in the exchange's history. We have fifty percent spare capacity over and above those busiest days. However, because we can now perform upgrades far more quickly than in the past, we can effectively have as much headroom as we want, up to levels far in excess of what we see today in any market globally. The use of commodity hardware and the plug-and-play manner in which it can be fitted into the infrastructure has also given us far more flexibility as to exactly how and what we upgrade. For example, we can now expand horizontally to accommodate more securities, but also vertically in order to address the capacity needs of a particular security that is very actively traded.
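The headroom policy described here reduces to simple arithmetic. A minimal sketch, using hypothetical message rates (the interview quotes only the fifty percent policy, not absolute throughput figures):

```python
def headroom_pct(capacity: float, observed_peak: float) -> float:
    """Spare capacity as a percentage above the busiest day seen to date."""
    return (capacity - observed_peak) / observed_peak * 100

def capacity_for_headroom(observed_peak: float, target_pct: float) -> float:
    """Capacity required to maintain a target headroom over the observed peak."""
    return observed_peak * (1 + target_pct / 100)

# Hypothetical rates in messages per second -- not figures from the interview.
print(headroom_pct(15_000, 10_000))       # 50.0
print(capacity_for_headroom(10_000, 50))  # 15000.0
```

Because upgrades are now incremental, the second function is the operationally interesting one: each new busiest day simply implies a new capacity target.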
How much fluctuation in performance as regards finalising trades do you experience on the current platform? And how does that differ from the anticipated performance of the new platform?
It usually takes around sixty milliseconds to finalise executions to an order on the current platform and we expect that to drop below ten milliseconds on the new platform. We also anticipate that the new matching engine's performance will be sub-two milliseconds, and we should be able to confirm that in near-production conditions when we run various dress rehearsal tests with customers over weekends throughout the spring of 2007. As to performance variance, I would say that we currently experience more than we would like. On the existing Tandem architecture, under peak load on very busy days, trade finalisation times undoubtedly increase. However, the new trading platform architecture will be far less susceptible to this, so its performance will degrade far less under load. In fact market activity would have to go to very severe extremes before there would be any discernible degradation.
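The variance question is really about tail behaviour under load, which is usually summarised with percentiles rather than averages. A hedged sketch of the idea, using nearest-rank percentiles; the sample values below are invented, and only the sixty/ten/two millisecond targets come from the interview:

```python
def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Invented finalisation times in milliseconds; note the single slow outlier.
samples = [1.4, 1.6, 1.8, 1.9, 2.1, 1.7, 1.5, 9.5, 1.6, 1.8]
print(percentile(samples, 50))  # typical (median) latency: 1.7
print(percentile(samples, 99))  # tail latency under load: 9.5
```

A platform whose median is low but whose 99th percentile balloons on busy days exhibits exactly the degradation-under-load pattern described for the Tandem architecture.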
Many exchanges have found that the growth in algorithmic and automated trading has resulted in a substantial rise in the ratio between message volumes and completed trades. Have you found that growth in this ratio has had implications more for your hardware platform or for bandwidth?
Probably more for the bandwidth, but while we have seen a massive increase in our message and trade volumes, the ratio between the two has remained relatively constant - or even declined slightly. I think the growth in that ratio has been more typical of derivatives markets, such as futures and options exchanges. One probable reason for our relatively stable order/trade ratio is the reduced latency of the Infolect platform. Now that the information platform for the exchange runs so much faster than before, people are able to get the trades they want. They can achieve their desired trade execution without having to spray a lot of contingent orders at the market.
Some exchanges and trading platforms have experienced difficulties with ghost orders (orders placed and then almost immediately withdrawn). Have these caused any problems for the LSE in capacity terms, and do you actively monitor the level of such orders or use technology that prevents their withdrawal within a certain time limit?
Again, this hasn't really been a problem for us and we don't have any formal rule about how long an order must stay in the market once it has been placed. This may have something to do with the way in which we handle customer orders, which differs from many other exchanges. When an order add/delete message is entered we don't issue an acknowledgement for this (as many systems do) and then tell you some time later how it has traded. By contrast, we process every message we receive from start to finish so when a trader receives an acknowledgement back, he/she can be sure what is on the book and can then cancel it immediately if desired. There is therefore no time limit between a trader knowing that their order is on the book and being able to remove it.
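The process-to-completion acknowledgement model described above can be sketched in a few lines. This is a hypothetical single-side illustration, not the exchange's actual implementation: a buy order is matched fully against the resting sell side before its acknowledgement is returned, so the acknowledgement already states exactly what traded and what is resting on the book.

```python
from collections import deque

# Resting sell orders as (price, qty), best price first. A real book would
# keep both sides and insert any unfilled remainder at its own price level.
book = deque([(99.0, 5), (100.0, 5)])

def process_buy(price: float, qty: int) -> dict:
    """Match a buy order to completion, then return its acknowledgement."""
    filled = 0
    while qty > 0 and book and book[0][0] <= price:
        rest_price, rest_qty = book.popleft()
        take = min(qty, rest_qty)
        filled += take
        qty -= take
        if rest_qty > take:
            book.appendleft((rest_price, rest_qty - take))
    # The ack reflects the final state of the message: the trader knows
    # immediately what executed and what is live, and can cancel at once.
    return {"filled": filled, "resting": qty}

ack = process_buy(100.0, 8)
print(ack)  # {'filled': 8, 'resting': 0}
```

Contrast this with systems that acknowledge receipt first and report executions later: there, a window exists during which the trader cannot know whether a cancel will beat a fill.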
In the past, you have mentioned the need to enrich the data the exchange provides to facilitate IS revenue growth. Do you see the idea of providing machine readable news feeds (similar to Dow Jones News for Algorithmic Applications) as part of that?
We have no immediate and specific plans to introduce such a feed, but we are actively exploring possible added-value information services of this type. Both data vendors and their clients are definitely looking to exchanges to provide more enriched data services such as this, and we have been providing a Level 1 Plus enriched data service since 2004. This was one of the reasons why, when we built Infolect, we were keen to ensure that it was as flexible and as agnostic with regard to data type as possible. As a result, there is no technical reason why we could not use it to produce and distribute an algorithmic newsfeed.
One growing requirement for those engaged in the building of automated and algorithmic models is access to historical depth of market data. Does the LSE provide this?
In addition to distributing such market depth data in real time we also offer both historical trade and order data packages going back to the implementation of SETS in 1997.
The timing of your switch to commodity hardware seems felicitous given AMD's increasing competitiveness versus Intel. Has it brought much in the way of practical benefits?
Absolutely - it can only be good to have a genuine choice among suppliers at any level in the hardware hierarchy. That applies whether it be Dell/HP/IBM at the machine layer or AMD/Intel at the processor layer. The introduction of AMD's Opteron processor was obviously welcome in terms of price/performance ratio and we will be deploying 250 HP servers containing Opteron 275 and 875 processors to power the new trading platform. The performance characteristics of these processors will assist us in terms of both space optimisation and heat dissipation. However, while we have obviously been impressed with AMD's Opteron, we have also been equally impressed by Intel's response with its Core 2 Duo technology. The bottom line for us is that competition in this CPU space has indeed combined particularly well with our new architecture in terms of giving us exceptionally high levels of scalability at a very low price.
When you were designing the exchange's new architecture in 2002/3 many would have argued that Linux was the OS platform with the stronger pedigree in the type of high performance computing and clustering technology applicable to a securities exchange. Nevertheless, you selected Microsoft - why?
Once we decided to move to new generation technology we examined the various ways of accomplishing this. Our technical team found that Microsoft .NET outperformed on a whole series of tests that they applied. From a business perspective, I was also particularly interested in the partnership that we were able to form with Microsoft all the way up and down their organisation. I think that is something that would have been harder to do with the provider of a Linux distribution. One common argument in favour of Linux is its lower cost. In practice, we simply didn't see that it had a cost advantage once one included the price of support. We also felt that .NET outshone the alternatives in the areas that really mattered to us. The partnership with Microsoft has served us phenomenally well, as both organisations are obviously keen to be seen to be part of a high profile success. It has also been quite gratifying, three years after we took the original decision to go with .NET, to see other exchanges following suit.
The timing of your decision was also convenient in that in 2002 Microsoft was pushing particularly hard for a footprint in financial services?
I think that is true but there was still an element of risk in that Microsoft itself was in the process of moving technology and leaving the Windows NT era behind. So for us it wasn't about leveraging Microsoft's desire to get into financial services, but more a combination of the partnership possibilities and the fact that our technical personnel were adamant that Microsoft's technology would do the business for us.
There has been growing interest in multi asset class algorithmic trading of late - will the new trading platform be capable of supporting that?
Scalability was one of the big things for us, as was latency in terms of turnaround time both for the distribution of information and trades. Latency is today's story, and scalability is clearly something that benefits the exchange and services for its customers going forward by ensuring that the costs scale in line with revenues.
"It will be technically possible for any combination of asset classes to coexist and interact with each other, both individually - and also as synthetic instruments ..."
However, the forward looking element in the trading platform is that it can indeed handle multiple asset classes, which we think is something that could be key to the way in which the securities industry develops in the future. You can already gain an inkling of this from the way the sell side are now putting their desks together, with some personnel cutting across asset classes, when in the past they were far more compartmentalised. Customers also appear to want a more homogeneous multi asset capability, and vendors are responding to this by producing technology that provides a single interface to multiple asset classes and liquidity pools. Hedge funds of course are adding to this trend.
So can the new trading platform be configured to allow one to execute a multi asset trade as a single synthetic product rather than having to execute the various legs separately?
It is designed with that very much in mind, though we haven't as yet built all the functionality on top to do this. We have assembled the necessary underlying architecture, database, and message structures around events rather than specific types of security. That has been one of the fundamental design principles and also one of the most challenging and complex tasks in putting the platform together. When estimating the size of in-memory databases we have considered the characteristics of markets such as options, which are hugely demanding in this respect. Therefore we believe that the architecture we have built will not be constrained in capacity terms to just cash instruments. It will be technically possible for any combination of asset classes to coexist and interact with each other, both individually - and also as synthetic instruments that can be dealt in their own right via a single order.
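The event-centric, asset-agnostic design described above could be sketched as follows. All names here are hypothetical illustrations rather than the platform's actual data model: the point is simply that a synthetic instrument is just another instrument, and a single execution event on it fans out into per-leg events that may span asset classes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Leg:
    instrument_id: str   # any asset class: cash equity, option, future...
    ratio: int           # signed quantity of this leg per unit of the synthetic

@dataclass(frozen=True)
class SyntheticInstrument:
    instrument_id: str
    legs: tuple          # tuple of Leg

def execution_events(synth: SyntheticInstrument, qty: int) -> list:
    """Expand one execution on the synthetic into per-leg execution events,
    so the trader deals the combination via a single order."""
    return [(leg.instrument_id, leg.ratio * qty) for leg in synth.legs]

# Hypothetical two-legged spread: buy one VOD.L, sell one BT.L per unit.
pair = SyntheticInstrument("VOD-BT-SPREAD", (Leg("VOD.L", 1), Leg("BT.L", -1)))
print(execution_events(pair, 100))  # [('VOD.L', 100), ('BT.L', -100)]
```

Because the matching structures are keyed on events rather than on a security type, nothing in this scheme changes when a leg is an option or a future instead of a cash instrument, which is the capacity-sizing point made about options markets above.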