Tech Forum: Data - The Exploding Supernova

First Published in Automated Trader Magazine Issue 04 January 2007

With:
- Mark Palmer, general manager and vice president of Apama Products, a division of Progress Software
- Dr John Bates, founder and vice president of Apama Products, a division of Progress Software
- John Coulter, vice president of marketing and business development, Vhayu Technologies
- Kirsti Suutari, global business manager for algorithmic trading in the enterprise business division at Reuters
- Paul Geraghty, director of customer propositions for the Reuters Tick Capture Engine
- Ary Khatchikian, president and CTO of Portware


What pressures are there on firms to invest aggressively in improving their current market data infrastructures?

Suutari: Market conditions are causing enormous pressures on today's market data infrastructures. First is the general trend over time for market data update rates to double annually, forcing capacity expansion for not only the throughput of real-time market data systems, but also the capacity of storage systems attached to them.

Second are regulatory reforms. The Securities and Exchange Commission's rules of 1997 encouraged a proliferation of electronic markets (ECNs and ATSs), which triggered the fragmentation of liquidity and increased the number of markets that must be monitored to observe liquidity properly. Decimalisation in 2000 reduced spreads, margins, and fill sizes, making the economics less attractive while increasing market data update rates and adding pressure for efficiencies, of which the core element is the market data system. And with the imminent implementations of Reg. NMS and MiFID, best execution obligations will be added to the list: more data will need to be processed to be compliant, and more data will need to be stored to prove it. Regulations say you need to hold on to data for five years, and with annual storage already measured in terabytes and doubling annually, the pressure on systems to accommodate that data is immense.

Third is the competitive factor. More buy sides are transacting with fewer brokers, causing the broker community to compete more strenuously for their business. Competition has had the effect of increasing automation, broadening the brokers' range of services, of which the growth of algorithmic trading offerings is one example, and driving costs out of operations to be price competitive without eradicating margins. For all of these the market data system is key. With traders striving to be the best, to cover more asset classes, and to deploy more technology, the pressure on what systems need to do is increasing.

Fourth is technology itself. There is a noticeable migration from humans consuming information as part of the trading process, to applications. And once an application is the consumer, the rules change. Speed becomes paramount. More data inputs can be processed simultaneously, more forms of data become interesting, and more of the trading process can be automated. So the pressure mounts on the market data system to deliver the throughput at the performance levels demanded by machines.

Khatchikian: I think there's a lot of pressure for several reasons: one is algorithmic trading, another is regulatory issues, and a third is multi-asset platforms. Market data really drives execution algorithms, because algorithms are driven by current market conditions, so you need to support accurate, low-latency data feeds. If data is delayed you are not going to be able to hit the market at the right place, so you're late to the game, or you're dealing with inaccurate market data and are therefore behind the market. With regard to multi-asset, you need products that can combine and aggregate all market data from futures, options, and foreign exchange, to enable people to see everything in real time.

Mark Palmer, general manager and vice president of Apama Products

Coulter: Regulatory pressures are probably the most glaring problem forcing companies to look at market data infrastructures. They are looking at best execution, which comes down to a resource and technology challenge for most. Typically firms are used to storing trade data, but now they have to store quote data as well to prove they achieved the best price.

Palmer: What is different about algorithmic trading is the style of the data, which differs from traditional types of data: it is streaming data, news feeds and so on. There are different needs here, such as the need to pre-flight test algorithms and to benchmark new ones, so preserving the time series of the data is critical. Using this approach, you can constantly improve the effectiveness of an algorithm by benchmarking it against historical data.

In this market algorithms are fighting algorithms, so firms are constantly evolving and making their algorithms stronger; you have to improve yours to be effective. The real point is to be able to analyse the data without storing it. We really view it as two different halves of the whole - you have the act of monitoring, analysing and acting on data as it flows by, and then, as a separate thing, you record it and store the raw and derived data, just like you would using a TiVo or Sky Plus. You can then replay specific parts to see what occurred, and ask questions like, 'Why did I lose 5 million in the last five minutes? What happened?' This is not just a metaphor, it's happening in today's algorithmic trading market.
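To make the 'TiVo' idea concrete, here is a minimal sketch of recording ticks in time series order and replaying a window of them, for example the five minutes before a loss. The TickStore class and its fields are illustrative assumptions, not any vendor's implementation:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class TickStore:
    """Minimal record-and-replay store: append ticks in time order, replay a window."""
    timestamps: list = field(default_factory=list)   # seconds since epoch
    ticks: list = field(default_factory=list)        # (symbol, price, size) tuples

    def record(self, ts, symbol, price, size):
        # Appends must arrive in time order to preserve the time series.
        self.timestamps.append(ts)
        self.ticks.append((symbol, price, size))

    def replay(self, start_ts, end_ts):
        """Yield every tick in [start_ts, end_ts], e.g. 'the last five minutes'."""
        lo = bisect.bisect_left(self.timestamps, start_ts)
        hi = bisect.bisect_right(self.timestamps, end_ts)
        for i in range(lo, hi):
            yield self.timestamps[i], self.ticks[i]

# Usage: replay the window before a loss to inspect what the market did.
store = TickStore()
store.record(1000.0, "ABC", 101.25, 500)
store.record(1000.5, "ABC", 101.20, 200)
for ts, tick in store.replay(995.0, 1000.5):
    print(ts, tick)
```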

Bates: Market data rates have been doubling year on year over the last 10 years, so there is massive pressure to ensure your infrastructure can handle that kind of throughput. Secondly, you have to think about how you're going to store that data and preserve the time series order, so later you can go back to it to back test and run through trading scenarios. The impact that increasing market data volumes have on infrastructure is that it has to store the data and also be able to analyse it in real time with low latency. If you can spot patterns in that data as it's flowing past, before your competitor does, you have an advantage.

Dr John Bates, founder and vice president of Apama Products


Why has it become increasingly difficult for many firms to build in-house data analysis and storage solutions?

Coulter: The sheer volume of data, which globally is doubling year on year, has become the biggest worry for most. In the past, the firms that first started out in algorithmic trading had the advantage over others in terms of resources, and so they built their own proprietary systems to gain competitive advantage. Now, it makes less sense to build out infrastructure yourself. Building your own data feed handlers for hundreds of different exchanges and databases, and publishing all of that out everywhere, isn't sensible.

Khatchikian: Disk storage is the most challenging aspect of a highly transactional system when trying to attain optimal performance. If you're routing hundreds of transactions to the marketplace, you want to ensure that your data is persisted. Traditionally, transactional performance correlates directly to the performance of the underlying storage, which is why distributed caching has become more prevalent in highly transactional environments.
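As an illustration of the caching pattern described here, the sketch below acknowledges each write from memory and persists it on a background thread, so transaction latency no longer tracks disk performance. It is a single-process toy (a real distributed cache would replicate across nodes), and the class and file names are assumptions:

```python
import queue
import threading

class WriteBehindStore:
    """Acknowledge writes from memory immediately; persist to disk on a background thread."""

    def __init__(self, path):
        self._cache = {}                 # in-memory copy, used for fast reads
        self._pending = queue.Queue()    # writes awaiting persistence
        self._file = open(path, "a")
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def put(self, order_id, record):
        self._cache[order_id] = record   # transaction latency is just a memory write
        self._pending.put((order_id, record))

    def get(self, order_id):
        return self._cache.get(order_id)

    def _flush_loop(self):
        while True:
            order_id, record = self._pending.get()
            self._file.write(f"{order_id}\t{record}\n")   # disk I/O kept off the critical path
            self._file.flush()

store = WriteBehindStore("orders.log")
store.put("ORD-1", {"symbol": "ABC", "qty": 100, "px": 101.25})
print(store.get("ORD-1"))
```

The trade-off is that a crash between the acknowledgement and the flush can lose the pending writes, which is why replication to a second node or site usually accompanies this pattern.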

Another difficulty arises from the need for disaster recovery and continuity. If my data centre or storage facility is not available, I need to be sure I can hit another. Data needs to be replicated elsewhere, from one geographic region to another. This can of course increase the cost and also presents a challenge of how to transfer data in real time to multiple storage sites.

In terms of data analysis, there is a lot more data to review. Algorithms need to analyse it, and database performance can really affect the effectiveness of an algorithm. If I have to query information from a database before making a trading decision, the chance it will negatively impact my trade performance increases tremendously. If your algorithm needs to query information on how it should behave in the marketplace, that query needs to be extremely fast.

Geraghty: The challenges for our customers are really enormous. It isn't just about a platform for data analysis and storage; customers need to build an end-to-end solution, linking high speed data feeds with tick analysis and storage, which then needs to be connected to all the applications in the trade cycle. This presents a huge integration challenge.

On top of that, they've got to cope with the massive data volumes we're seeing in the market today, and while coping with a tidal wave of data, they have to maintain data quality. This is going to be difficult and expensive.

The good news is that there are off-the-shelf storage and analytics platforms proven in the market and increasingly deployed by customers.

Palmer: This is now becoming a source of competitive differentiation. Firms are starting to differentiate themselves based on the quality of their algorithms, and this is increasingly difficult because of the different styles of data and increased volumes.

Bates: I don't think it's a case of the expense of building in-house data analysis and storage solutions; it's just a new science, and it's a case of finding the technology that can cope with the throughput. You don't need to be a JP Morgan or Deutsche Bank to be able to afford this kind of technology now.


What advice would you give to firms currently shopping for a suitable market data platform that will allow them to "stay ahead of the curve" in supporting their automated trading strategies?

Geraghty: In terms of staying ahead of the curve, as automated trading becomes increasingly cross-asset, so too must storage and analytics platforms evolve to support cross-asset, next-generation trading. That sounds straightforward, but providing these high performance tools and meeting the challenges of integrating the data is pretty complex.

Additionally everyone needs to be working from the same information, so it's important to ensure that all the applications in the trade cycle are looking at the same analytics and market data. This means that firms need high performance platforms to distribute analytics and market data to applications across the trade cycle, and often across the globe.

Suutari: To remain ready for the requirements of automated trading strategies, market data systems need to be aware of a number of things. As previously noted, capacity is a key element. Either systems will need to have the ability to accommodate all the data that comes their way, or they will have to have clever tools that will manage the data content, and subsequently throughput. This means planning and testing for anticipated volume peaks, and knowing the impact of breaches in advance of occurrences.

When market data throughput increases, undoubtedly there will be an effect on latency performance. How much data can your system take before it starts to slow down? The astute company will understand the performance profile of their system under various loads and have a plan to address the performance scenarios.

But more than the effect of any individual element of the infrastructure, where performance profiles are easier to map, is the performance of all elements together. The fastest data feed is of no value if there is queuing between it and the market data platform. And an effective market data platform can easily overwhelm an analytics engine or storage facility with an excess of updates. In a latency-based world, the whole should be no more than the sum of the parts. Automated trading is moving data systems into new frontiers.
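One way to see the point: the end-to-end figure is simply the sum of every hop, and a queue anywhere can dominate it. The component figures below are invented purely for illustration:

```python
# Illustrative end-to-end latency budget; every figure here is an assumption.
components_ms = {
    "data feed handler": 2.0,
    "queueing between feed and platform": 5.0,   # the hidden cost of a mismatched pipeline
    "market data platform": 3.0,
    "analytics engine": 4.0,
    "order routing": 1.5,
}

total = sum(components_ms.values())
bottleneck = max(components_ms, key=components_ms.get)
print(f"End-to-end latency: {total:.1f} ms; largest contributor: {bottleneck}")
```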

John Coulter, vice president of marketing and business development, Vhayu Technologies

Not to be ignored is the openness of the market data platforms to the potential for new asset classes and instrument types. This could include the ability of the market data system to represent new data types for transport, or to accommodate them via an API, and to do so without engaging in contortions that could impact the level of complexity or performance.

Coulter: I would encourage firms to look at the best-of-breed approach, but also to look at the ease of integrating these best-of-breed products. There are vendors out there that can do all these things, and also take away the burden of integrating everything. If you can find a single provider to do this, it will save six months on your time to market, as we see that the largest amount of work on this type of project is in the integration.

Palmer: At the highest level, there's a black box and a white box approach. Many black box systems (OMS or EMS) now come with predefined algorithms whose parameters you can change, but you can't easily combine algorithms, or add a new algorithm you have created, for instance to trade across multiple asset classes. If you have an algorithm everybody else has, where is the competitive advantage? With white box trading, you can create and deploy your own unique algorithmic strategies; this is the area we see ourselves sitting in.

Khatchikian: Some of our clients go to multiple market data sources for reliability, for instance Bloomberg and Reuters, and can then determine which is more accurate by observing both feeds in real time. This also ensures a high level of availability to market data because if one source fails, you're still receiving the other, thereby preventing any hiccups in the flow of data.
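A toy version of that dual-feed arrangement: keep the latest quote from each source and serve whichever is freshest, falling back automatically if one feed stalls. The Quote fields and the half-second staleness threshold are illustrative assumptions:

```python
import time
from dataclasses import dataclass

@dataclass
class Quote:
    source: str
    bid: float
    ask: float
    recv_time: float   # local receipt timestamp

class FeedArbiter:
    """Keep the latest quote from each source; serve the freshest one that is not stale."""

    def __init__(self, max_age=0.5):
        self.latest = {}          # source -> Quote
        self.max_age = max_age    # seconds before a quote is considered stale

    def on_quote(self, quote: Quote):
        self.latest[quote.source] = quote

    def best_quote(self):
        now = time.time()
        live = [q for q in self.latest.values() if now - q.recv_time <= self.max_age]
        if not live:
            return None           # both feeds stale: better to stop than act on old data
        return max(live, key=lambda q: q.recv_time)

arbiter = FeedArbiter()
arbiter.on_quote(Quote("feed_a", 101.24, 101.26, time.time()))
arbiter.on_quote(Quote("feed_b", 101.25, 101.26, time.time()))
print(arbiter.best_quote())
```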

Our advice to firms is to be selective when evaluating your market data distribution system and your market data handler, and don't think you have to take both from the same provider. Sometimes firms buy just one system for both areas, but occasionally clients may need a higher level of accuracy in one half of the system. This best-of-breed approach not only gives you market access depth, but also provides a highly scalable solution.

Bates: Firms need to analyse what their market throughput and storage data requirements are, and think about what their future strategies may be, to make sure the systems they buy meet these needs.


How close are we to reaching consensus on the definition of what constitutes "real-time" and how much pressure is the need for achieving "real-time" performance placing on the data management environment?

Khatchikian: For us, real time is really still subjective. With the added requirements of high availability and disaster recovery, it becomes a trade-off. Some clients say, 'I want to make sure the trade goes out the door'; others say, 'I want the better trading performance'. They want synchronous, real time applications over multiple disaster recovery sites, which can be expensive. Sometimes, our customers will do the trade first before they distribute it globally, to save cost.

Palmer: Real time is one of those controversial words. I don't think there is a consensus on what real time is. The effectiveness of high-frequency trading in equities and fixed income relies on the time it takes to turn around a Request for Quote, so in this instance real-time speed is really important and you find yourself talking about milliseconds. However, in government bonds, for example, sometimes the requirements aren't so high. Time is always a tricky thing to treat simply. Really, real time is about low latency.

Coulter: I think real time is different according to the practical use of the data. Trading on computer strategies means you need to break real time down to milliseconds. In a sense, real time is suited to its purpose, or what is satisfactory to the end user.

Kirsti Suutari, global business manager for algorithmic trading in the enterprise business division at Reuters

Suutari: 'Real time' is the term that has been used to describe the vendor offerings of the last several decades. Based on the technology and market structures over time, those that have been labelled 'real time' are those that have been the most advanced offerings available in the market. Indeed for some asset classes, they continue to constitute the best available performance.

For other asset classes, in particular those that are exchange traded, the latency arms race has driven quicker delivery where acceptable performance has fallen from, say, 50 milliseconds down to single digits. This redefines real time to the point that this class of service is now termed low or ultra-low latency. The bottom line here is that for a programmatic trading entity, 'real time' means anything faster than your competition. Ask any given market player, and they may say only partially in jest that they would like the data delivered before it is sent.


Options quote traffic has almost doubled in the last five years and options exchanges have experienced what amounts to a data onslaught. With the volume of data in the options market set to grow even further, what issues should brokers consider when trying to plan an effective data management strategy to ensure they maintain competitive advantage?

Coulter: This is a problem the industry is being whacked over the head with. It's an area that is about to explode. The exchanges in the US are about to go to penny quotes, and when the pilot is over at the end of summer 2007, rates will go from 177,000 messages per second to 450,000 messages per second at peak times. This area is really going to be a challenge. It took us two years to build a product that scales up to one million transactions per second, which, given the rate of data increase, gives us only a two-year buffer zone. It's about selecting products that are one step ahead of the volumes in the market.

Geraghty: Options update rates are going crazy, doubling year-on-year for the last three years. The quote-to-trade ratio has gone from 300 quotes per trade to 3,500 quotes per trade over the same period. It is exploding and it's a huge problem that is going to get worse. Before long OPRA rates will approach 500,000 updates per second, which will require renewed investment in delivery, analytics and storage platforms.
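Some rough arithmetic shows why those rates strain delivery and storage. Assuming, purely for illustration, an average encoded message size of 100 bytes and a 6.5-hour trading day:

```python
# Rough sizing of an options feed at the peak rates quoted above.
updates_per_sec = 500_000          # projected OPRA peak quoted in the discussion
bytes_per_update = 100             # assumed average encoded message size
trading_secs = 6.5 * 3600          # assumed 6.5-hour trading day

peak_bandwidth_mb_s = updates_per_sec * bytes_per_update / 1e6
daily_volume_gb = updates_per_sec * bytes_per_update * trading_secs / 1e9

print(f"Peak feed bandwidth: ~{peak_bandwidth_mb_s:.0f} MB/s")
print(f"Raw data per trading day (at peak rate all day): ~{daily_volume_gb:,.0f} GB")
# Roughly 50 MB/s and over a terabyte a day before compression, which is why
# delivery, analytics and storage platforms all need renewed investment.
```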

Brokers need to consider using vendor solutions rather than building their own platforms. You want to be able to keep up, but building in-house systems and throwing hardware at the problem is expensive and ultimately ineffective. There are proven solutions in the markets that can handle the options problem now.

Khatchikian: The strategy is to be able to distribute market data analysis over multiple resources in the network. If you're getting lots of options data into systems and you need to process and store it, people are opting to move to in-memory databases versus traditional relational databases. With in-memory databases, everything is optimised in memory, so query results come back quicker.

But to make this work you really need that centralised trading platform that can aggregate all this data.

Palmer: As we have already discussed, being able to monitor, process, and act in real-time, then store that data in time series order and apply that 'new science' is key.


As algorithms continue to proliferate, increasing bandwidth and reducing data latency are clearly essential. Do you expect to see increased interest in co-location services, and would this solution be suitable for everyone?

Khatchikian: Co-location is not a suitable option for everyone. A lot of larger firms want to hold everything closer to them, and they also have the resources to spend on these solutions and the housing of these solutions. But smaller firms want lower latency and lower cost solutions. Larger firms are concerned they will lose control over the infrastructure, whereas for smaller hedge funds starting up, co-location is probably the best option because they don't want to spend a lot of time and management setting up a network. Smaller hedge funds want everything in a packaged solution, which can give them the ability to operate and compete with the big boys.

A lot more providers are starting to supply these solutions. We are actually seeing a few larger firms co-locating as they don't want the headache of managing things for themselves. As volumes rise and challenges become more complex, an increasing number of large companies will move to co-location because it is not their core competency.

Coulter: I think we're starting to see increased demand among hedge funds for co-location services, as they don't tend to have the resources that large brokers do. Most are trying to house facilities as close to the exchanges as possible, so latency is reduced to the amount of cable needed to get the order to market. Yet the picture is very unclear when you look at the cross-asset trading aspect. If you're trading FX in London, futures in Frankfurt, options in Chicago, and equities in New York, are you going to co-locate servers in all these places? It might be faster to have centrally located servers with WAN connectivity to all the market centres you need to reach globally.

Suutari: Certainly. Co-location has been important for more intensively competitive firms, and for those attempting to compete for liquidity where the strategies are 'vanilla'. The more competition there is for an order, the more co-location will increase in importance because of its impact in reducing transport latency. The closer I am located, the shorter the distance to convey my order, and the better the chance it will get there first.

Now that competition in algorithmic trading is starting to escalate, any advantage you have to be faster than the next guy is important. Generally, areas to look at for speed are the efficiency of the application, the technology running the application, the interaction of all the moving parts, and how to defy the natural laws of physics as we receive and respond to data at the speed of light. While being sure that you can execute on the opportunity while it exists will always be essential, finding areas where competition is less fierce will continue to be a lucrative frontier. So yes, co-location will grow in importance, but it won't be important for everyone.
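The physics here is easy to quantify. Light in optical fibre travels at roughly 200,000 km per second, so propagation delay alone, before any switching or processing, scales with distance. The distances below are round-number illustrations:

```python
# Propagation delay alone, ignoring switching, serialisation and application time.
SPEED_IN_FIBRE_KM_S = 200_000      # light in optical fibre, roughly two thirds of c

def one_way_delay_ms(distance_km):
    return distance_km / SPEED_IN_FIBRE_KM_S * 1000

for label, km in [("co-located (same data centre)", 0.2),
                  ("cross-town", 20),
                  ("New York to Chicago", 1_200),
                  ("London to New York", 5_600)]:
    print(f"{label:32s} ~{one_way_delay_ms(km) * 2:7.3f} ms round trip")

# Round trips range from microseconds when co-located to tens of milliseconds
# across the Atlantic; no amount of faster software closes that gap.
```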

Bates: There is definitely a strong desire to move as close as you can to the source of data, so co-location is important. Eventually exchanges will offer the ability to host data and possibly even algorithms in the exchange. What people want is the ability to locate themselves somewhere with very low latency and fast data connectivity.

Palmer: An issue that comes up here is the need to process more than one event stream. How do you coordinate multiple event processing nodes in some sort of wide area grid?

As firms use data coming from different locations, for example for cross-asset class algorithms, or algorithms that combine market data and news, where do you co-locate?

It's an advanced topic, but a very interesting one.


Finding the right data storage technology is only part of the challenge that clients face. How much of a hurdle is maintaining data quality throughout the trading lifecycle likely to become and how should clients go about tackling this issue?

Coulter: I think this is probably the biggest part of the equation being overlooked right now. Everyone is looking for low latency, but the faster you go, the more opportunities for mistakes there are. If you're taking in as much raw data as possible, you've got to make sure it is good data. This means involving data cleansing solutions, which adds latency.

As low latency solutions become commoditised, firms have to find ways to mine data to stay ahead of the competition. So the quality of data is ultimately going to be more important than the speed. The key is to optimise the speed and leverage the quality of the data to get a leg up on your competitor.

Palmer: The interesting thing about e-trading systems is that they are not free from errors. There can be a lot of errors as the data flows through, and this becomes a huge problem when you're making low latency decisions. Exchanges will emit a quote and, if there was an error, emit another one later; if you have already acted on the data, what do you do? This is where analysis and cleansing of data in real time becomes important. The question is how do you do that? You may not have to cleanse it, but smooth it. So, if a firm sees a price that's 20% outside the moving average of the last 10 seconds, it might identify it as an error. This requires low latency data scrubbing and is one of the biggest challenges firms are facing today.
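A minimal version of that smoothing rule might look like the following: flag any price more than 20% away from the moving average of the last ten seconds and keep it out of the average. The window length and threshold follow the example above; everything else is an illustrative assumption:

```python
from collections import deque

class TickScrubber:
    """Flag prices more than `threshold` away from the moving average of the last `window_secs`."""

    def __init__(self, window_secs=10.0, threshold=0.20):
        self.window_secs = window_secs
        self.threshold = threshold
        self.window = deque()              # (timestamp, price) pairs inside the window

    def check(self, ts, price):
        # Drop ticks that have fallen out of the window.
        while self.window and ts - self.window[0][0] > self.window_secs:
            self.window.popleft()
        suspect = False
        if self.window:
            avg = sum(p for _, p in self.window) / len(self.window)
            suspect = abs(price - avg) / avg > self.threshold
        if not suspect:
            self.window.append((ts, price))   # only clean ticks feed the average
        return suspect

scrub = TickScrubber()
print(scrub.check(0.0, 100.0))   # False: first tick, nothing to compare against
print(scrub.check(1.0, 101.0))   # False: within 20% of the average
print(scrub.check(2.0, 135.0))   # True: flagged as a likely bad print
```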

The more traditional challenge is storing data properly, which is where we come back to the importance of the time series order. The effect of compliance is starting to be seen in this area. When you store the data in time series order, it's also an audit trail of what you've done. This is becoming increasingly important. Companies need to store everything and record everything, so that they can go back and say 'we did this and it was the best we could do at the time'. This hinges on your ability to store data in time series order.

Khatchikian: As more firms move to global trading platforms, the complexities involved with maintaining data quality throughout the trading lifecycle will obviously increase. Clients therefore should find a combination of software that will handle these complexities and hardware that will address performance issues.

Paul Geraghty, director of customer propositions for the Reuters Tick Capture Engine

Geraghty: Data quality maintenance is a major hurdle. We're working with customers to ensure data quality by consolidating high-performance market data storage and real-time analytics. This allows applications at different stages of the trade lifecycle to use consistent data and analytics; otherwise what looked like a good trade pre-trade may look terrible post-trade. You also need to ensure that reference data (e.g. corporate actions) can be applied to the market data you are analysing. So for data quality you need a storage and analytics capability where you can have consistent market data, reference data and analytics linked to applications throughout the trading cycle.


Conventional database software typically runs too slowly to update trading algorithms. In response some vendors have developed new proprietary database structures. What advantages do these have over more traditional relational databases?

Khatchikian: It really comes down to performance. New databases are faster, and they are better at handling events. Rather than relying on traditional database queries, they are event focused and can provide scalable event processing. This makes them more efficient. Why should an algorithm continuously request or query a database for a condition to be true, when new types of databases can let you know it's true immediately? That's the advantage.
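That contrast, in miniature: rather than the algorithm polling a database for a condition, it registers the condition and is called back the moment an update makes it true. The class and method names below are illustrative, not any particular vendor's API:

```python
class EventStore:
    """Push model: callers register conditions and are notified when an update satisfies them."""

    def __init__(self):
        self.prices = {}
        self.watches = []              # (condition, callback) pairs

    def watch(self, condition, callback):
        self.watches.append((condition, callback))

    def update(self, symbol, price):
        self.prices[symbol] = price
        for condition, callback in self.watches:
            if condition(self.prices):
                callback(symbol, price)   # fires immediately; no polling loop needed

store = EventStore()
store.watch(lambda px: px.get("ABC", 0) > 105.0,
            lambda sym, p: print(f"condition true: {sym} traded at {p}"))

store.update("ABC", 104.5)   # nothing fires
store.update("ABC", 105.2)   # callback fires on this update
```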

Geraghty: The advantage that proprietary databases have, very simply, is speed. Traditional databases can't keep up. They are not designed for, nor are they optimised for, market data. This shows in latency and throughput performance, and in data quality; traditional databases don't understand the difference between a quote and a trade, or what trade flags signify, or which options are related to an underlying security, and so on. They just are not built for the new challenges facing the financial market data space.

For customers looking at traditional relational databases there is a lot of work they will have to do. This work is expensive and delays your time to market for getting your automated trading desk up and running.

Finally, you can't divorce the analytics from storage, and traditional databases aren't in the analytics space. Using traditional databases for storage and another platform for analytics introduces latencies that are unacceptable in the market today, whereas the best proprietary databases bridge that gap by coupling storage and analytics together.

Coulter: This is an area of interest to the market. In-memory databases take data tick by tick, analysing it and storing it simultaneously. Relational databases are just too slow. For event stream processing you also need storage and fast data retrieval, as many real time trading strategies rely on historical data.

Palmer: Vendors have always claimed there is some new paradigm that didn't exist before; in this case, it's true. Relational databases are designed to store 100 to 500 transactions a second. In algorithmic trading, if you do things at the speed it takes a human to press a button (20 milliseconds) you've lost. Algorithms automate processes, which means between 20,000 and 50,000 events per second. The models are very different; relational databases are still used for over 90% of global applications, but there is a place for this new science of event processing.


What capabilities and functionality are high-frequency trading groups, running algorithmic trading engines and writing strategies against real-time data feeds, going to be looking for from next generation market data platforms?

Suutari: Let's consider the term 'market data platform'. It has historically implied simply the provision of market data, so it's been thought of as great for looking at information, but not so much for transacting. To serve high-frequency trading, a market data platform must evolve in functionality, capacity, and performance. It's possible that not everyone in the marketplace will think of market data platforms as suitable for algorithmic trading, but they are evolving into something more than a system for disseminating market data. They are becoming trading and transaction integration platforms.

And this includes other elements. It must accommodate current asset classes and have the flexibility to accommodate new ones. It must enable quick adoption of new data sources and types from streaming to historical and reference data to news. It must offer entitlements for the data provided to all downstream consumers, including humans and applications. It must carry multiple forms of data, such as bespoke data, trades and quotes and orders. It must embrace multiple protocols, of which FIX and SWIFT are primary examples. It must permit automation between the front office and the back office. And ideally, it will also be able to tell you where latency is meeting expectations and where it is not, so you can act accordingly.

Bates: There are two main parts to high-frequency trading strategies. Part one is when to trade. This is about performing analytics on data in real time, checking the raw data against trading rules, then taking action based on that. The second part is how to trade. You can slice the order into the market in chunks, for example, or you can route it to the liquidity pool with the best price. There is a lot of pressure here on the market data system, because of the requirement to take in a stream of data that exceeds 1,000 updates a second, or 10,000 a second. One of the challenges is creating and managing your own streams on top of the raw data. So, you have to be able to take in the data, perform analytics and take actions based upon that.
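A toy rendering of those two halves: a 'when to trade' rule evaluated against each incoming update, and a 'how to trade' step that slices the parent order into child orders routed to the best-priced venue. The threshold, slice size and venue data are arbitrary illustrations:

```python
def when_to_trade(update, avg_price):
    """'When to trade': fire if the price dips a set fraction below its recent average."""
    return update["price"] < 0.995 * avg_price

def how_to_trade(total_qty, slice_qty, venues):
    """'How to trade': slice the parent order and send each child to the best-priced venue."""
    child_orders = []
    remaining = total_qty
    while remaining > 0:
        qty = min(slice_qty, remaining)
        best_venue = min(venues, key=lambda v: v["offer"])   # route to the cheapest offer
        child_orders.append({"venue": best_venue["name"], "qty": qty})
        remaining -= qty
    return child_orders

venues = [{"name": "pool_a", "offer": 101.26}, {"name": "pool_b", "offer": 101.25}]
update = {"symbol": "ABC", "price": 101.20}

if when_to_trade(update, avg_price=101.80):
    print(how_to_trade(total_qty=10_000, slice_qty=2_500, venues=venues))
```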

Palmer: We have seen the first wave of algorithmic trading. Data volumes and rates are key to that, but we're now seeing a lot of mixing of multiple data streams. Some people have different OMS systems according to asset class. A new requirement for next-generation systems is putting multiple asset classes into one algorithmic strategy. Being able to mix different types of event streams in a single platform is a very important next-generation requirement.

Ary Khatchikian, president and CTO of Portware

Khatchikian: Event stream processing is an extremely popular topic right now. It gives a firm the capability to perform more sophisticated analysis on market data, and it makes it easier to take data from multiple sources and aggregate that information.

Geraghty: High frequency trading groups who are writing strategies against real-time data need more than a streaming analysis platform. They also need an optimised market data database to drive their analytics. Next-generation platforms need to combine these two capabilities seamlessly.

Whatever you're doing, you have to make sure it is capable of supporting cross-asset trading. Everything has to be analysed together. This is also important for data quality. It's important to ensure you're working with someone who is aware of the end-to-end challenges, and that the solution you are taking on board has the ability to share the same market data and trade analytics across all applications in the trading lifecycle.

Coulter: The trend right now is being able to trade cross asset from a single platform, running data feeds from a single engine for equities, FX, options, futures, and fixed income. You then need analysis that can look across every market at the same time, and come up with a unique strategy to enable you to beat your competition.


How much of a logistical concern has storage capacity become for securities firms who need to retain all pricing data for compliance purposes, and if this problem is going to get significantly worse, how should the industry as a whole tackle it?

Khatchikian: It has become a big concern frankly. I believe the vendors will come in and try to solve the specific problem before some sort of industry-wide consortium does it. We are currently working with vendors that are trying to solve this issue already.

Suutari: Those that have looked at it are probably hyperventilating! These securities firms are starting to talk about the cost of warehousing their data. Reg NMS and MiFID both require five years of storage commencing from their respective effective dates. This can be an enormous quantity of data, perhaps five terabytes compressed for Reg. NMS markets alone today, and is therefore a requirement beyond the scope of some firms. The storage requirements for MiFID could well be a multiple of that, though these numbers are still a matter of guesswork.

As market data update rates escalate, these sizes will only grow, as will the cost of managing them. It is interesting that neither regulation obliges firms to store their own data, and it is realistic that the industry will expect a central utility for storage and retrieval of data on their behalf.
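Combining the figures quoted above, roughly five terabytes in the first year, volumes doubling annually and a five-year retention obligation, gives a sense of how quickly the archive compounds. This is an illustrative projection, not a forecast:

```python
# How a five-year retention obligation compounds when annual volumes double.
first_year_tb = 5          # compressed volume quoted for Reg NMS markets today
growth = 2.0               # market data volumes roughly doubling annually
years_retained = 5

yearly = [first_year_tb * growth**i for i in range(years_retained)]
print("Per-year volumes (TB):", [round(v) for v in yearly])       # [5, 10, 20, 40, 80]
print("Total held at end of year 5 (TB):", round(sum(yearly)))    # 155 TB, and still growing

# After year five the oldest year rolls off, but the newest year added is the
# largest yet, so the archive keeps growing as long as volumes keep doubling.
```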

Palmer: There's no doubt it's an issue. If you have to store tons and tons of data, there's going to be a best way to do that. Financial services firms have been coping with lots of data from a hardware logistics point of view for a long time. Our clients don't come to us saying, 'I'm not going to do algorithmic trading because we can't store this amount of data'. At the end of the day, a disc with a terabyte of storage on it isn't that expensive, relative to the amount of money these guys make.

Coulter: Everyone is pretty well aware of the need for storage because of the regulations that are coming down. There is a lot of talk about having shared utilities in the US and Europe. The larger firms can afford data storage facilities; it's the second tier of firms that are going to have a problem. Having shared data storage will level the playing field. When you're talking about terabytes of data, it makes sense to house it in a centralised storage facility.


Looking ahead what are the main challenges facing market data platform providers in meeting the needs of their clients?

Palmer: Clearly, the increasing data volumes are a major challenge. Being able to process that data not just from a historical perspective, but from a low latency perspective, is important. Systems have to be able to change quickly, both to adapt algorithms for competitive differentiation and to change rules rapidly in response to changes in regulation. The need is critical.

The biggest challenge isn't about compliance and storage, but figuring out what to store. Our clients are thinking about MiFID and Reg NMS, but the regulations are still vague. A lot of our clients are starting to put compliance checks into their pre-trade operations, but many are still not sure exactly what they should be doing in terms of storage. However, they are putting processes in place.

Khatchikian: The biggest challenge right now is the sheer volume, the need to process a lot more data. Another important area is exchanges, which are getting more stringent on how their market data is used. This is causing headaches for market data providers, who believe it is not how many eyeballs are looking at the data, but what value it adds to the consumer, that is key. This particular issue places constraints on how market data platforms can use their data.

Coulter: It goes back to being able to support cross-asset trading, and to finding people qualified in each area who can support each type of asset class. If you are truly going to bring value to customers for market data management, all market data should be stored in one platform. The challenge to vendors is to hone their areas of specialisation; they need to step up to meet the growth paths of the assets they are working with.

Suutari: Market data platform suppliers will be challenged by the outputs of all the other services they need to accommodate. Largest among them will be to accommodate capacity efficiently enough to continue to offer competitive latency targets, to be flexible in so doing, to provide economies of scale and good total-cost-of-ownership metrics, and to demonstrate the ability to evolve in order to remain relevant as the industry continues to change.
