The Gateway to Algorithmic and Automated Trading

It's decision time

Published in Automated Trader Magazine Issue 10 Q3 2008

Is the problem that the decision-making process itself adds latency to data? Or are data volumes and the proliferation of trading venues outstripping data vendors’ capacity to deliver in real-enough time? William Essex investigates.

Velocity equals distance over time, unless you co-locate to the source of your data. In that case, you've cut out distance and redefined time as how long it takes you to make a decision and act on it. "Nobody's ever talked about how long their algo takes to run," says Jeff Hudson, CEO of Vhayu, describing the evolution of an industry that runs all the way from smoke signals ("really pretty advanced for their time") to adaptive algos capable of thinking for themselves in difficult market conditions. For Hudson, "the market is data", in the sense that the only way a trader can view a market is through the data that market gives out. Given that everything important therefore happens in the trader's head (or CPU), as he (it) reacts to incoming data, this is close to saying that the trading venue is the trader.

Which is a problem. The trader doesn't know everything, and more importantly, doesn't necessarily know what he doesn't know. Perhaps this explains the ongoing increase in data supply and consumption: this is big business not because firms want to know more, but because - wait for it - they know that they don't necessarily know what they do and don't need to know. The credit crunch may be having the effect of persuading some finance directors to challenge old buy-it-all notions of comprehensive data supply, but let's face it: nobody sharpens their competitive edge by cutting their data cost. The commercial imperative is still the same: more data; more efficient data processing.

Jeff Hudson

"Nobody's ever talked about how long their algo takes to run."

To start with a company that developed an entire data business out of "dissatisfaction with the existing consolidated feeds", Essex Radez offers a co-location-based model where "speed is the key ingredient"; in Chicago, the company's data centres are co-located with the CME and in New York, with the NYSE. Mike Eichin, director of operations at Essex Radez, says, "We maintain extensive communications with a number of trading venues so that clients can place their orders." Discussing latency, Eichin makes a case against consolidated feeds in general. "First, you've got to look at your application layer. What kind of latency is your order-management system introducing? Then you need to look at the feeds. Our customers have noticed a quarter-second difference between direct and consolidated feeds in regular markets. Then the next step is geography."
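
Eichin's checklist translates readily into a measurement exercise. As a minimal sketch (the stage names, timestamps and figures below are hypothetical, not drawn from Essex Radez or any vendor), the round trip can be broken into per-stage latencies so that a direct feed can be compared with a consolidated one:

```python
from dataclasses import dataclass

@dataclass
class TickTimestamps:
    """Hypothetical timestamps (in seconds) captured along one message's path."""
    exchange_publish: float   # stamped by the venue
    feed_arrival: float       # when the feed handler receives it
    oms_decision: float       # when the order-management layer has decided
    order_sent: float         # when the order leaves the gateway

def latency_breakdown(t: TickTimestamps) -> dict:
    """Split end-to-end latency into feed, application and dispatch stages."""
    return {
        "feed transit": t.feed_arrival - t.exchange_publish,
        "OMS / app layer": t.oms_decision - t.feed_arrival,
        "order dispatch": t.order_sent - t.oms_decision,
        "end to end": t.order_sent - t.exchange_publish,
    }

# Illustrative numbers only: a direct feed versus a consolidated feed,
# showing roughly the quarter-second gap Eichin's customers report.
direct = TickTimestamps(0.000, 0.004, 0.010, 0.011)
consolidated = TickTimestamps(0.000, 0.254, 0.260, 0.261)

for name, stamps in (("direct", direct), ("consolidated", consolidated)):
    stages = latency_breakdown(stamps)
    print(name, {k: f"{v * 1000:.1f} ms" for k, v in stages.items()})
```

The point of instrumenting each hop, rather than just the end-to-end figure, is that it shows whether the quarter-second is being lost in the feed, the application layer or the order path.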

If you're trading in Chicago, put the data centre in Chicago. Same with New York. And if you want to link to an exchange in Europe … well, yes, exactly. This logic could take us in one of two directions. Either stick with trading what you know, which might be a way of saying: stay in Chicago. Or, depending on your scale of operations, perhaps begin to ask yourself whether you might need to develop … some mechanism for … what's the word? … yes, consolidating the various data feeds on which you're now beginning to base your trading activity. Joking aside, the underlying point here is that if there's no realistic value-add beyond speed, then it follows that any form of data intermediation is a function of necessity rather than desirability.

Perhaps not surprisingly, therefore, data oversupply can be an additional problem. For Phil Slavin, head of European product strategy at Fidessa, the multiple-source availability of data is itself potentially self-defeating. Slavin says, "Data is interesting, because it's a very complex area. People want ultra-low latency, which means getting their systems as close as they can to the feed, yet they want normalisation across several feeds, so if they get close to one exchange, they're not close to the other exchange. Whatever people want, there's always some compromise." The data may be arriving at sub-millisecond speeds, but once it's arrived, it's taking time to get through to the decision point. Again, we're back to what one observer suggested might be termed the "airport paradox": the flight's fast, but getting to the airport is not. Do we continue to focus on shaving off flying hours, or decide to look at "shuttle-bus time"?
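
Slavin's compromise can be put in door-to-door terms. A toy worked example, with every figure invented purely for illustration, compares three hosting choices against two hypothetical venues, adding "flight time" (network transit) to "shuttle-bus time" (local processing):

```python
# Toy "door-to-door" comparison: network transit ("flight time") plus local
# processing ("shuttle-bus time"). Every figure is invented for illustration.
PROCESSING_MS = 0.8  # normalisation and decision logic, the same wherever you host

# One-way network latency, in milliseconds, from each hosting choice
# to two hypothetical venues.
transit_ms = {
    "co-located at venue A": {"venue A": 0.1, "venue B": 6.0},
    "co-located at venue B": {"venue A": 6.0, "venue B": 0.1},
    "midpoint data centre":  {"venue A": 3.0, "venue B": 3.0},
}

for site, hops in transit_ms.items():
    totals = {venue: ms + PROCESSING_MS for venue, ms in hops.items()}
    worst = max(totals.values())
    average = sum(totals.values()) / len(totals)
    print(f"{site:23s}  worst venue: {worst:.1f} ms   average: {average:.1f} ms")
```

Co-locating wins outright at one venue and loses at the other; the midpoint is never the fastest to anywhere, but it bounds the worst case. Whatever people want, there's always some compromise.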

Big day, big crash?

Mark Mahowald

"One of the things about being real fast is that you're more efficient so you use less of the CPU."

When the trading gets tough, and the data flow increases, will your system hold up?

"Everybody talks about latency, and yes, what I really care about is how long before my system acts on the data," says Mark Mahowald, CEO and Founder of low-latency messaging provider 29West. But what Mahowald really, really cares about, he goes on to explain, is that everything continues to work through the interesting times. "Latency is very important, yes, but so is stability. People want a solution that can handle the load. And any solution is only as good as the weakest link in the chain." If it's reasonable to suggest that a big, heavy, eventful day on the markets will trigger a surge in data traffic, then it follows that weaknesses in your system will show up when you least want them.

Signs of a potential problem would include any evidence of data-loss events in the past, and Mahowald makes the point that end-to-end speed in a system needs to be supported by end-to-end monitoring. But there's also a case for speed that goes beyond the straight front-end trading argument. Mahowald says, "One of the nice things about being real fast is that you're more efficient so you use less of the CPU." The faster your system handles each message, the more headroom it has left when the load surges.
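
Mahowald's link between speed and stability can be made concrete with a back-of-the-envelope queueing model. The sketch below uses a textbook M/M/1 approximation (my assumption for illustration, not 29West's methodology): the same surge in message rate barely moves a fast handler, but multiplies the latency of a slow one as it approaches saturation.

```python
def mm1_avg_latency_us(service_us: float, arrival_rate_per_s: float) -> float:
    """Average time in system for an M/M/1 queue, W = 1 / (mu - lambda).

    service_us: average per-message processing time, in microseconds.
    arrival_rate_per_s: incoming messages per second.
    Returns average latency in microseconds, or infinity once saturated.
    """
    mu = 1_000_000.0 / service_us            # service rate, messages per second
    if arrival_rate_per_s >= mu:
        return float("inf")                   # the queue grows without bound
    return 1_000_000.0 / (mu - arrival_rate_per_s)

# A "fast" handler (5 us per message) against a "slow" one (20 us per message),
# on a quiet day and on a heavy day. All figures are illustrative only.
for rate in (20_000, 45_000):                 # messages per second
    for service in (5.0, 20.0):               # microseconds per message
        w = mm1_avg_latency_us(service, rate)
        label = "saturated" if w == float("inf") else f"avg latency {w:.1f} us"
        print(f"rate {rate:>6}/s  service {service:>4.0f} us  {label}")
```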

One answer might be to develop low-latency data manipulation middleware to offer as part of the data-feed package. Discussing this, Slavin says, "We write our own feed-handlers and we run our own ticker plant. We run those feed handlers inside our own data centres around the world, but also we run them on site at a number of large tier one brokers." Slavin's point here, of course, is that the speed of a data feed isn't relevant unless the product at the end arrives in a state to be used. This suggests the wider point for the end-user that comparing data feeds means comparing like with like. Speed is important, so long as you don't forget to examine what you are getting.
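
Fidessa's feed handlers are proprietary, but the idea of normalisation is straightforward to sketch. Assuming two entirely made-up venue message formats, a minimal handler maps both onto a single internal tick record so that everything downstream sees data in a state to be used:

```python
from dataclasses import dataclass

@dataclass
class Tick:
    """Common internal representation produced by every feed handler."""
    venue: str
    symbol: str
    price: float
    size: int
    ts_ns: int  # exchange timestamp, nanoseconds since the epoch

def handle_venue_a(raw: dict) -> Tick:
    """Hypothetical venue A publishes dictionaries with verbose field names."""
    return Tick("A", raw["instrument"], float(raw["last_price"]),
                int(raw["last_qty"]), int(raw["timestamp_ns"]))

def handle_venue_b(raw: str) -> Tick:
    """Hypothetical venue B publishes pipe-delimited text records."""
    symbol, price, qty, ts = raw.split("|")
    return Tick("B", symbol, float(price), int(qty), int(ts))

# One message from each venue, normalised into the one shape downstream code expects.
ticks = [
    handle_venue_a({"instrument": "VOD.L", "last_price": "152.35",
                    "last_qty": "5000", "timestamp_ns": "1215000000000000000"}),
    handle_venue_b("VOD.L|152.40|2500|1215000000000500000"),
]
for tick in ticks:
    print(tick)
```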

Jeff Wootton

"The traditional concept of everyone taking their market data through vendors that consolidate the data is falling by the wayside."

But speed is still attractive even where the small print excludes all cleansing or normalisation. It's also expensive. As Frank Piasecki, president and co-founder of ACTIV Financial, says: "The constant problem has been, how many sources is it economically feasible to collect and bring to the one location?" Piasecki acknowledges that the old models of data aggregation "don't go away", but observes that the current approach is to "bring data to the edge and trade at the edge". It's as if the consolidation process is not so much kept in check as kept on the road. Piasecki says, "There are many more trading venues, and if you're trading globally, which is the case for many of our customers, the cost of aggregation goes up." Solutions have to be lightweight and deployable, says Piasecki. "Your solution has to be engineered to process data economically in hundreds of locations with thousands of servers."

This gives a possible clue to the future, but now we're talking about "normalising" the whole concept of consolidation. Jeff Wootton, VP of Product Strategy at Aleri, says: "What I see happening in the market is that the traditional concept of everyone taking their market data through vendors that consolidate the data is to some extent falling by the wayside." Not that this means such vendors are exiting the business; as Wootton explains, what's happening is that their business model is changing. "Some firms do struggle with the economics of going direct, which is leading to the development of hybrid offerings." The new model typically entails the evolution and delivery of "lightweight consolidation", whereby the consolidation process is kept tightly in check.

Frederic Ponzo

"People are asking for things they don't necessarily reequire."

It's time to bring in Frederic Ponzo, MD of NET2S, whose focus is on the demand side. Ponzo says: "There is a tendency to get the quickest, most comprehensive feeds with all the instruments, all the products you can get your hands on, even though you don't need it. The problem is not that the data is too comprehensive, but that people are asking for things they don't necessarily require. Who needs real-time data, as opposed to summary data?" Not everybody. Perhaps, as Ponzo suggests, there's a case for interrogating that demand as well as the supply; a case for arriving at "lightweight demand" that can be met by lightweight delivery.
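
In implementation terms, Ponzo's distinction between real-time and summary data is a question of conflation. A minimal sketch (the interval and the fields kept are my choices, purely for illustration) collapses a raw tick stream into one summary bar per second:

```python
from collections import defaultdict

def summarise(ticks, interval_s=1.0):
    """Collapse raw (timestamp_s, price, size) ticks into one
    open/high/low/close/volume bar per interval: summary data, not every tick."""
    buckets = defaultdict(list)
    for ts, price, size in ticks:
        buckets[int(ts // interval_s)].append((ts, price, size))
    bars = {}
    for bucket, rows in sorted(buckets.items()):
        rows.sort()                              # order by timestamp within the bucket
        prices = [price for _, price, _ in rows]
        bars[bucket * interval_s] = {
            "open": rows[0][1], "high": max(prices),
            "low": min(prices), "close": rows[-1][1],
            "volume": sum(size for _, _, size in rows),
        }
    return bars

# Illustrative ticks only: five raw updates become two summary bars.
raw = [(0.10, 100.0, 200), (0.40, 100.2, 100), (0.90, 99.9, 300),
       (1.20, 100.1, 150), (1.80, 100.3, 50)]
for start, bar in summarise(raw).items():
    print(f"{start:>4.1f}s  {bar}")
```

The raw stream delivers every update; the summary delivers a handful of values per interval, which for many consumers is all the "lightweight demand" actually requires.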

But really, what we need now is a reasoned, calm, fully thought-through exposition of an inspirational, all-embracing solution to the multi-faceted problem that there's such a vast abundance of data being generated, at multiple sources, for a multiplicity of users in a wide range of locations - and the trend is towards more, deeper data from more sources for use by more people. So if you find such a solution, you will let me know, won't you?

williame@automatedtrader.net