The Gateway to Algorithmic and Automated Trading

Cloud formations - MarketPrizm's tech chief scans the horizon for different ways of doing business

Published in Automated Trader Magazine Issue 28 Q1 2013

Jacob Lindeman became chief technology officer for data and connectivity group MarketPrizm as part of its acquisition by Colt in 2011. He talks about a down-to-earth, business-like mindset for solving problems. But as Automated Trader's Adam Cox learns, his head is also in the clouds.

Jacob Lindeman

Adam: Let's start by you telling us a little about yourself and how you came to MarketPrizm.

Jacob: I was a Fidelity person for many years. Colt acquired MarketPrizm from Instinet, a Nomura company, and owns the majority shareholding. I was brought in as VP architect from Fidelity's capital markets technology team to oversee the technical aspects of due diligence for the acquisition. So I started working on the MarketPrizm space in September 2010, and then signed on as CTO in May 2011.

Structurally, we wanted a design that knits customer-facing business functionality together with the technology, and tries to cross the often great divide between product and sales teams on one side and technology on the other. Both of those are under my organisation. We have a team of developers based in Hong Kong. Some of them have been with MarketPrizm in its prior incarnations - for 12+ years in one case, 18+ years in another - so they're quite seasoned with the problem space, the markets and the technology.

Adam: So you have an interesting perspective, coming in from the business side, to run the technology for a vendor.

Jacob: It was an interesting experience for me to be, if you will, a consumer wanting the MarketPrizm type of service. One of my tasks in my Fidelity role was to migrate the trading products into co-lo data centres. These were the primary electronic trading engines and the ATS. Our team's job was to design, engineer and deploy all of the products and technologies into co-location data centres in New Jersey and New York.

Each customer is quite different in complexion, maturity level and business needs, so it is a layered problem: getting servers deployed, getting racks provisioned, getting data flowing, getting the people who work in these companies access to the equipment, and handling the ongoing maintenance and monitoring. To get one of these setups working, you really have to cover a lot of different bases. MarketPrizm's position is to offer the capability to address those problems soup-to-nuts or on an à la carte basis. If you want a server in Singapore, we will provision the equipment and give you a handoff at a monthly fee. We have the entity in Singapore [and many other locations] and we buy the equipment, so that a company does not have to have an entity in these countries. And yet they can take on the challenge of trading with a lower barrier to entry.

In some countries with difficult legal hurdles, just getting translations of documents is a challenge. The mechanics of doing it are tough. Part of our offering is that we've set up the legal entities and we have the relationships with the vendors, and we're able to do it either in a pure-service model, where the company is staking no capital, or in more of a provisioning mode.

This kind of thinking and this equation hold true in a number of other areas. One is circuits. You know, it's the same market data going everywhere, and lots of companies buy their own dedicated circuits. And the vendors love it because they're selling more and more of the same stuff, serving the same data. The cost point for the customer can be quite high. What we've done is set ourselves up as a vendor of record and redistributor of market data; we mutualise that across multiple customers and can offer a more competitive cost point, and also an easier, quicker time to release because we've already provisioned it.

MarketPrizm's service delivery team

Adam: Someone wants Taipei Stock Exchange data. You will have an agreement with them that, instead of making numerous round-trips with different customers, you will pipe the data once over to London and then redistribute it from there, to whichever customers are in London. Have I got that right?

Jacob: That's correct. The customer could be in a co-lo where two or three customers would like LSE data, for example. We have the LSE data coming into our racks and switches and they can cross-connect to us and receive that data from us in an ultra-low-latency fashion.

When you are managing layer one, the physical infrastructure, there's a whole set of powerful functions that can be had - provisioning multiple 10-gigabit circuits, for example. Because we're at layer one, we can provision a huge amount of bandwidth, like 400 gigabits' worth. We actually put the light waves on the glass for those. And that also means that we can segment the traffic.

A very key differentiator: we acquired and continue to develop market feed normalisation. We've taken all of the exchanges and set it up so that whether you want to talk to the LSE or to Eurex, you use the same software, the same application layer, the same kind of naming conventions - bids and asks and quantities and order-book updates - so that the markets look much more similar to each other.

We've got our team of developers writing all this software, and normalising all these markets.
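
To make that concrete - and this is purely an illustrative sketch, with hypothetical class and field names rather than MarketPrizm's actual schema - a venue-specific feed handler can map each exchange's raw wire format onto one common book-update structure, so consuming applications only ever see the normalised form:

```python
from dataclasses import dataclass
from enum import Enum

class Side(Enum):
    BID = "bid"
    ASK = "ask"

@dataclass(frozen=True)
class BookUpdate:
    """One normalised order-book update, identical across venues."""
    symbol: str      # venue-neutral instrument name
    side: Side
    price: float
    quantity: int
    exchange: str    # originating market, e.g. "LSE" or "EUREX"
    seq: int         # feed sequence number, preserved for ordering

def normalise_lse(raw: dict) -> BookUpdate:
    """Hypothetical adapter for one venue: translate exchange-specific
    field names into the common schema. If the exchange changes a field,
    only this adapter changes - consumers are insulated."""
    return BookUpdate(
        symbol=raw["instrument_id"],
        side=Side.BID if raw["side_flag"] == "B" else Side.ASK,
        price=float(raw["px"]),
        quantity=int(raw["qty"]),
        exchange="LSE",
        seq=int(raw["seq_no"]),
    )
```

The point of the pattern is that an algo engine written against BookUpdate works unchanged whether the update originated at the LSE, Eurex or SGX.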

Adam: Across asset classes, across everything?

Jacob: That's correct. The asset classes today are primarily equities, derivatives and commodities. We have some work we're doing in FX. Fixed income is just such a tough place that we haven't gotten to it yet.

Adam: When did you develop this technique for normalising the bulk of the data you deal with?

Jacob: The origins of our market data expertise and IP go back 12+ years, and we've worked very aggressively to add many, many markets to that normalisation pool and to optimise the latency and processing.

First we did derivatives in Europe and the CME, which we pick up from a point of presence in London, and then moved to focus on Asia, where we've deployed in seven locations for 10 markets. Again, these are in co-los: in Australia the ASX and Chi-X Australia, in Singapore SGX, and of course in Tokyo, where we're the first vendor inside the TSE - the first vendor authorised to resell and deploy customers inside the TSE. Other Japan markets include the OSE, Chi-X Japan and SBI Japannext, and we also have a PoP in KVH's TDC 1 data centre. There's a very strong partnership between Colt, MarketPrizm and KVH.

It's been exciting to see Chicago in particular, and how those buy-side firms are looking to the Asian markets as new testing grounds. They are asking the tough questions: 'Is there an opportunity or not? What are the complexities of doing business on these different exchanges and in these countries? How do you do clearing? Is it properly audited and is it safe to do?'

My experience has been that the Asian markets are more mature than they are perceived to be. The TSE and the OSE have been doing business for quite some time. SBI and Chi-X are working hard to create diversity in the markets. They are also struggling, as is everybody, with how regulations help or don't help, whether they are going to appear or not, and how that informs whether these are good places for buy sides or brokers to invest in. I think there are still quite a few people trying to feel their way through.

Adam: On the data normalisation, what are the practical issues for end users in concrete terms?

Jacob with colleague Paul Scott

Jacob: Concrete term number one: when there are changes to the market, to a great extent we can insulate the customer from those changes.

These markets do change. They're very cranky - difficult, esoteric data. You know, all this data is now going to come in a different field, or it's going to be at a different precision, or it's going to have some different characteristics to it. Most companies are not in the business of developing market data consumption software, and increasingly they don't want to be. They don't want to spend their money maintaining market data feeds; they want to spend their money on finding liquidity and improving the way trades are executed. Our mid-sized shops, for sure, don't have the fortitude or the appetite for it.

Number two is, it takes a certain amount of computing power to take in market data and process it and make sense of it, so that the consuming application and algo engine can then look at the data and make decisions. A big chunk of the workload is now done on MarketPrizm's hardware and that's part of our service. So we take it all in, we make sense of it, we normalise it, and when we send it on, it's ready for consumption. The amount of work that the customer's computer has to do is much reduced.
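
Continuing the hypothetical schema sketched earlier, the offload shows up in how little the customer's application has to do - no wire-format parsing, no venue quirks, just a reaction to a clean update:

```python
def on_update(update: BookUpdate) -> None:
    """Consuming side of the hypothetical normalised feed: the heavy
    decoding has already happened upstream, so the algo engine spends
    its cycles on the decision, not the parsing."""
    if update.side is Side.BID and update.quantity > 10_000:
        print(f"large bid: {update.symbol} {update.quantity} @ {update.price} ({update.exchange})")
```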

Adam: How fast is that taking place? For low latency, I would think that intense data normalisation could be adding a little bit of delay.

Jacob: There are many kinds of strategies that are more or less latency-sensitive at these levels. The underlying processing of a single message is in the one-microsecond kind of range, maybe sub-one mic. But markets don't really behave like that. I see companies advertise these kinds of numbers, and the people who use the technology know there are big bursts of traffic that come in - at the market open, or when a book is getting re-priced - and that the way the network packets are delivered matters.

There are some important characteristics of market data. Updates really have to come in in sequence and they need to be sent out in sequence; that order is very important. If you get a burst of a hundred updates on a stock, they all have to be processed serially, and the latency starts to add up with each message in the burst. The actual latencies - and this is well understood by those in the trade but is often glossed over in marketing information - are such that a particular symbol will behave differently from another symbol, even within the same market.
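
To see how that adds up in a burst (a rough model only: the one-microsecond figure comes from the discussion above, everything else is an illustrative assumption), serial processing means each update waits for all the updates ahead of it:

```python
# Illustrative model of queueing delay inside a burst when updates for
# one symbol must be processed strictly in sequence.
PER_MESSAGE_US = 1.0  # assume ~1 microsecond to normalise one message

def burst_latencies(n_messages: int, per_msg_us: float = PER_MESSAGE_US) -> list[float]:
    """Latency seen by the i-th message if the whole burst arrives
    at once and is worked off serially."""
    return [(i + 1) * per_msg_us for i in range(n_messages)]

latencies = burst_latencies(100)
print(f"first: {latencies[0]:.0f} us, last: {latencies[-1]:.0f} us")
# first: 1 us, last: 100 us
```

So a feed that is 'sub-one mic' per message can still hand the hundredth update of a burst to the strategy roughly a hundred microseconds after the burst began - which is why per-symbol behaviour matters more than a single headline number.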

Adam: We're still talking about a tiny fraction of the time in a typical roundtrip.

Jacob: Right. There are some scenarios - with some arbitrage, or for market makers - where the first one to get the data and take action really does win.

With some of our customers, we provide the data raw as well for use with their own application development.

They potentially take the raw data for those applications that would be in co-lo. We offer a layer two cross-connect so that it's not even routed - it's really fast. And then the same customer might take the normalised feed for much of their other needs.

Adam: In terms of this normalisation process, who else is doing it?

Jacob: Reuters is normalised and Wombat is normalised. Certainly, the market data has been transformed. Some of them have more latency impact than others. I think Wombat tried to take market share from Reuters early on in terms of direct exchange technologies. Activ Financial is another one that does data normalisation. We do have a differentiator in that we do normalisation that is cross-market and cross-asset at low latency.

Increasingly we're using our own data for value-add products. That's been a really cool exercise because we have to consume our own data, working with our other partners, and apply it to the business problem. We have partnerships with Ullink, Orc and Ften, among others; they've written to our normalised content. That's one scenario, because we don't deliver smart order router or OMS technology ourselves - it's too niche and a lot of people have worked on it for years and years. Also, we have a product we've developed which consumes our normalised data from all these markets and makes it available to OneMarketData's OneTick analytics software. They've been writing their feed handlers to consume all of our normalised content, and because it's normalised, the time to market is much improved: they don't have to deal with all the quirks from all the different markets. It's been great to see the value prop in action - we are actually getting to market faster because of it.

Adam: What are some of the things you're working on now?

Jacob: Analytics in the cloud. Not only is it consuming our data format, but the whole deployment model is big data-ish. So there are some very big cloud deployments in Colt's facilities. In partnership with Colt, we are bringing on this service and it's going to be run in the cloud.

The ability to get into an even more flexible, lower-cost footprint for our cloud services is, I think, a big differentiator. NYSE and others have been working on making the cloud available to customers. The value proposition is there.

Adam: Is this something people really looking for low latency wouldn't be interested in, or is it getting to the point where it's fast enough for people who are in that space to want cloud-based solutions?

Jacob: The virtualisation does come with some overhead. That said, with the new generation of servers and updated kernel code, hypervisors are much faster and there have been some significant chip clock improvements. If you took yesterday's stand-alone machine and compared it to today's cloud-services machine, you're actually not too far off. But if you are considering ultra-fast HFT, the comparison that really matters is against today's stand-alone machine, not yesterday's.

There are many functions that are not HFT and not so ultra-sensitive, where latencies in the microseconds to low milliseconds are more than adequate. And in a modern cloud infrastructure you can do much better than that these days.

Adam: Presumably the cost savings are substantial?

Jacob: You're not on a three-year depreciation schedule, for example. You're on a service model. These are some important cost exposure metrics for some customers.

There's a perception that it's not fast enough. There's a reality that for many of these people it is fast enough. Does anybody really know? Has the market really tested it? What's the appetite for the perception risk? Certainly for many use cases outside of those HFT trading engines, companies are very interested in cloud services. Part of our thrust is to push on that from the financial services point of view and see if we can't both test it and move the market.

Adam: The word 'cloud' is one that has a lot of different meanings for people right now, because it has become a marketing term as much as a technological one.

Jacob: Yes, it's now an amalgam of anything that you want to stick under the hood.

Adam: As a technology-oriented company, you must always be watching for what's in the pipeline in terms of broader technological advances and some of the things that other firms provide for you. What are you looking at, what developments and research do you find most interesting in terms of your business?

Jacob: There's a little bit of legacy around FPGAs and GPUs. These are historically more esoteric disciplines, niche things that come and go or change. I think one big advance is that a number of server manufacturers are trying to get more of that kind of functionality embedded on the motherboard, with the associated programming and compiler tools, so that you can write your own FPGA-style code in languages native to the system rather than on add-on boards - not over PCI, not over the network, but much closer to on-chip. That said, PCI-3 has made the speed of access to these add-on card technologies much improved, and the cost-performance footprint is, I think, much improved if not fantastic. The manufacturers have been working on this for quite a few years; they know it's an industry problem and that solving it will be a big game changer, and I think we're starting to see some of that actually come to life.

Also, there's the global nature of things. The WAN that is just thousands and thousands of kilometres long is, I think, going to change the way information has to be handled.

We have some customers who are operational in Australia and in London. They need to reliably get orders and executions and data and business internals - it comes in many flavours, it's not just the market data. Increasingly, this need for reliability and efficiency on a broad, long-distance WAN is creating a new set of pressures. Just setting up a TCP connection between London and Sydney is not reliable enough. It's kind of like what routers have done for underlying networks: there are multiple routers all along the chain to get from London to Sydney, but there isn't equivalent maturity in application-aware infrastructure - a messaging system at a more business-aware level that, like those routers, actually helps ensure communications across the globe are reliable.
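
As a minimal sketch of what business-aware reliability on top of TCP might look like (hypothetical protocol and names, not a description of any Colt or MarketPrizm product): every message carries a sequence number, the sender retains anything unacknowledged, and a dropped long-haul connection is re-established and replayed rather than surfaced as lost data:

```python
import json
import socket
import time

class ReliableSender:
    """Toy application-level reliability layer over TCP. Each business
    message gets a sequence number and is kept until acknowledged, so a
    broken London-Sydney link can be resumed without losing orders."""

    def __init__(self, host: str, port: int):
        self.host, self.port = host, port
        self.next_seq = 1
        self.unacked: dict[int, bytes] = {}  # seq -> encoded message
        self.sock = None

    def connect(self) -> None:
        while True:
            try:
                self.sock = socket.create_connection((self.host, self.port), timeout=5)
                return
            except OSError:
                time.sleep(1)  # back off, then retry the long-haul link

    def send(self, payload: dict) -> None:
        msg = json.dumps({"seq": self.next_seq, "data": payload}).encode() + b"\n"
        self.unacked[self.next_seq] = msg
        self.next_seq += 1
        if self.sock is None:
            self.connect()
        try:
            self.sock.sendall(msg)
        except OSError:
            self.connect()
            self._replay()

    def _replay(self) -> None:
        # After a reconnect, resend everything not yet acknowledged,
        # in sequence order, so the receiver can de-duplicate by seq.
        for seq in sorted(self.unacked):
            self.sock.sendall(self.unacked[seq])

    def on_ack(self, seq: int) -> None:
        # Receiver confirms everything up to and including seq.
        for s in [s for s in self.unacked if s <= seq]:
            del self.unacked[s]
```

A production system would also need receiver-side de-duplication, persistent sequence state and flow control, but the shape is the same: sequence, acknowledge, replay.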

Adam: Coming back to this initial question about being a customer before, are there specific aspects about that which affect how you do your job and run your team?

Jacob: So often there is so much focus on technology - a faster server, a better storage system, a better circuit - that technology folks are often viewed by business teams as completely missing the point. There are things that really do matter in business: 'You know, if only those technology people could understand, life would be much better.' Instead, technology people tend to know only about technology, and may even know only about the particular niche of that technology they are in. It really is about the business and the trading model.

These are traders and quants and people who are really trying to solve things. They're not trying to deploy equipment and they're not trying to deploy technology; those are just necessary evils to get the job done. We focus specifically on the head of trading, the business person. I would say it's a dialogue, a technique of conversing, where we're always focusing on the business problem and, in the same meeting, trying to tie it back to some concrete implementations.