AT: Are there certain triggers that you look for with regards to a technology or a process that would make you think - yes, that's the one for us?
Carl Ververs: There are a few overarching themes throughout financial services, that are especially true in high-speed trading. One thing is that no day is the same. No trading strategy lasts for more than about a quarter, if that. Any algorithm that you have made typically applies to only +/-3% of the market. So you should be in a position to deploy new algorithms, if not at the drop of a hat, at least within the timeframe of a month or so. By 'deploy' I mean release into active trading. To do this, you should also be able to test drive your P&L, your risk exposure et cetera, in a lab environment, with those algorithms, on information that you already have.
There are two challenges now. First of all, you have to be able to change the guts of your system within a very short time frame. You also have to be able to test drive the proposed new brain in a controlled environment that is very lifelike.
Now. Most systems have been cobbled together over years of abuse into a sort of Rube Goldberg system - the equivalent English expression would be a Heath Robinson system. Many trading systems I've seen are like that. That doesn't just impede your ability to move; it completely blocks it. We refer to it as technical debt. Unless you pay that technical debt off, you are not moving. So this creates a situation where you can't change the brains of your trading operation fast enough to keep up with market opportunities.
Secondly, because systems are so closed and so tightly coupled, the notion of test driving things, with lifelike information, is out of the question. That's the situation we have. After about fifteen years of electronic trading, you might think that would have been solved. There have been many attempts, but from what I have seen, they have typically attacked the problem from the wrong angle.
You need your trading systems to be componentised while still fast enough. This means that your algorithms and systems should be pluggable, very much like a games console. Additionally your system needs to have external taps through which you can loop in new components without impacting the core system. It's like audio, where you might have a plug-in to a mixer: you can plug whatever you want into the mixer, and it doesn't affect the mixing technology at all. You need to be able to feed certain pieces of data, say, into your test systems and watch what that does to your volatility calculations or risk exposure, et cetera. That requires open architecture, but unfortunately open architecture is typically slower than you need.
Here's the solution, which dates back to the late nineties. The key is that you apply the right level of coupling at the right points. For instance, to have self-describing market data is probably not a good idea because you just can't keep up with the speed. Slower data, like volatility, risk or real time P&L, is the kind of stuff you don't need by the micro-second. You can have a couple of seconds and that's probably fine. A few minutes is even fine. Most companies still go by a daily P&L or risk exposure. Anything faster than that is already a gain.
That creates an opportunity to apply the right level of encapsulation and coupling at the right points. If you want to have an electronic eye, you site it literally on an exchange machine, or right next to it with a really short wire. Nothing in that is pluggable, of course. However, the further you go away from that cherry-picking notion and more into algorithmic trading, where someone has to think about what to do even if it's just for a few nanoseconds, the coupling needs to decrease. That gets us into the kind of architectures that one should be looking at and it's the same at RTS.
From a very tightly coupled architecture we are revisiting the coupling at the various spots. We have extremely tight coupling at the points where we must, and we start to loosen it where we can. It needs to be like pieces of a digital audio studio in the way you can put in your plug-ins in the order you want and you can set them up and you can route information the way you want. I keep on coming up with this idea of a digital audio workstation like ProTools; the trading system architecture should be just like that. You should be able to plug things in as it suits you and get the information through in the way that serves your business.
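The DAW-style pluggability described here can be sketched minimally: components share one small interface, so they can be chained, reordered or swapped without touching the core. This is an illustrative Python sketch, not code from RTS or any real trading system; every class and field name is invented.

```python
# Hypothetical sketch of the "digital audio workstation" idea applied to a
# trading pipeline. Components implement one interface and the Pipeline
# (the "mixer") is indifferent to what is plugged into it.

class Component:
    """Anything that transforms a stream of events."""
    def process(self, event: dict) -> dict:
        raise NotImplementedError

class MoveTap(Component):
    """Derives the size of the last price move -- a stand-in for a
    volatility calculation tapped off the main flow."""
    def __init__(self):
        self.last_price = None
    def process(self, event):
        prev, self.last_price = self.last_price, event["price"]
        if prev is not None:
            event["move"] = abs(event["price"] - prev)
        return event

class RiskCheck(Component):
    """Flags orders whose notional exceeds a limit."""
    def __init__(self, max_notional: float):
        self.max_notional = max_notional
    def process(self, event):
        event["risk_ok"] = event["price"] * event["qty"] <= self.max_notional
        return event

class Pipeline:
    """The mixer: plug components in, in any order, without changing it."""
    def __init__(self, *components):
        self.components = list(components)
    def process(self, event):
        for c in self.components:
            event = c.process(event)
        return event

pipe = Pipeline(MoveTap(), RiskCheck(max_notional=1_000_000))
out = pipe.process({"price": 101.5, "qty": 100})
```

Reordering or replacing a component is a one-line change to the `Pipeline` constructor call; the core loop never changes.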
AT: Do you think that the concept of things becoming too tightly coupled has been an inevitable by-product of people trying to squeeze speed the whole time?
Carl Ververs: I think it is rooted in a myth that everything needs to be ultra fast. An example where Reuters figured out this wouldn't be necessary is their RMDS. They said, you can have our premium wire, which is super-conflated data, and you can't have any replays. If you miss your tick then too bad, but it is also super fast. Then you have the median service, which is slightly slower but is tick by tick. Then they have the standard service, all within the same system, where things just trickle through at a higher latency. You don't burden the core system with requests or subscriptions to data at a speed that some pieces don't need. That's brilliant and exactly how it should be: the right coupling at the speed required. The myth is that speed is required everywhere in the system.
A concrete example would be a click-trading screen or even a strategy screen. The human eye can barely detect ten changes per second and the human brain can probably only deal with one change interpretation per second. Add to that the notion of actually doing something with the data and we are adding a few more seconds.
For a trader to demand that market data is absolutely real time on the click trader's screen is folly, right? To have your volatility curves demand real time data on a tick by tick basis is folly also. Same thing with P&L. Do you really have to have all that data exactly at the right point? No, it can be a second later.
If you start stratifying this need for data speed as an inverse of the openness, now you get some possibilities. That's exactly what I did at Hull Trading/Goldman Sachs. We left the market data alone as byte-encoded broadcast messages. When it got to things like risk or volatility, we were able to convert the hardcore data from the options trading system into TIBCO Rendezvous messages. They were self-describing, so you could send them out and they could be consumed on every platform that TIBCO Rendezvous supported. Now all of a sudden we had Excel literally plug into volatility data and we had this 3D dancing volatility service. The entire system became open while the overall speed improved, because the core system wasn't overburdened with needless real-time data requests. We could start plugging in whatever we wanted at the periphery.
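The stratification described here can be sketched: the fast path stays as fixed byte-encoded records, while the same data is re-published on a slow path as self-describing messages any consumer can parse. The field layout below is invented for illustration (JSON standing in for TIBCO Rendezvous self-describing messages).

```python
# Sketch: opaque-but-fast binary records on the core path, self-describing
# messages for peripheral consumers such as spreadsheets and dashboards.
import json
import struct

# Fast path: fixed binary layout -- compact and quick, opaque to outsiders.
RECORD = struct.Struct("<8sdd")   # 8-byte symbol, bid, ask

def encode_fast(symbol: str, bid: float, ask: float) -> bytes:
    return RECORD.pack(symbol.encode().ljust(8, b"\0"), bid, ask)

def decode_fast(payload: bytes):
    symbol, bid, ask = RECORD.unpack(payload)
    return symbol.rstrip(b"\0").decode(), bid, ask

# Slow path: the same data re-published self-describing, so any tool can
# consume it without knowing the binary layout.
def to_self_describing(payload: bytes) -> str:
    symbol, bid, ask = decode_fast(payload)
    return json.dumps({"type": "quote", "symbol": symbol,
                       "bid": bid, "ask": ask, "mid": (bid + ask) / 2})

wire = encode_fast("XYZ", 100.0, 100.2)
msg = json.loads(to_self_describing(wire))
```

The core only ever handles the 24-byte binary record; the conversion happens once, at the periphery, for everyone who wants the open form.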
AT: You were actually doing this at Goldman in the 90s?
Carl Ververs: Yes, it was 1999.
AT: So for twelve years or so you have been thinking about this anyway. What has everybody else been doing?
Carl Ververs: That is a very good question. I have seen that people have basically been sitting on their hands. I have worked with a range of trading companies over the years as a consultant, mainly on the methodology side. I have noticed that the developers they have hired are good craftsmen but terrible disciplinarians, terrible managers. That is understandable but if the company doesn't hire managers who understand how to get a tight production chain going, then they are literally flying by the seat of their pants.
AT: By discipline, do you mean just simple things like missing deadlines?
Carl Ververs: Missing deadlines is actually the result of what happens with lack of discipline. Traders say, I want this and I want that, and then the developer doesn't have the nerve to say "I don't understand what kurtosis means." So I mean discipline first around understanding the handling of features, then going from that to actually understanding simple requirements, and then implementing exactly what the request intended, which is often different from what the trader said. That's fine and doesn't mean that anybody's lying; it just means various individuals in the production chain have a different perspective on how to describe certain functionality. That discrepancy, the business/IT dichotomy, can be bridged, but it requires both sides to first acknowledge they don't understand the other side. That often doesn't happen because traders try to talk tech talk and the tech guys talk trader talk. They start saying things like "the delta between my expectation of lunch and the factual …" I mean, really? Must you?
This is the methodology that comes into play. First, agree that we don't understand each other. Therefore, when we are talking about requirements, let's not go nuts about spec-ing everything out in detail. Let's write down, feature by feature, what is expected to happen when this feature is live. It then becomes clear how I, as a trader would test this feature. There is a pretty clear exit criterion. Either it does this, I put in six factors and I get the right theoretical value, or not. It's black and white. That narrows down the room for error already and also ups the level of understanding of what is truly required. Subsequently, when these features are analysed into smaller chunks that you could call Agile stories, then in these you can literally write the required formulas.
"The business/IT dichotomy requires both sides to acknowledge they don't understand each other."
Not assuming anything from either the side of the requester or the implementer is a big part. As is writing down exactly what is expected but then leaving the "how" to the craftspeople. It is exactly the other way around right now. Traders are talking about classes and methods to developers. Then they are somehow forgetting to explain how formulas work, or why it is important to know exactly what your risk exposure is at all times, as opposed to your P&L. That's where the breakdown is.
Here is where Agile comes into play. Agile business analysis pushes people to not assume anything at all. It bridges the business and technology dichotomy by nailing down what the exit criteria are - plus quantifying pieces of work into relative sizes so you can actually see how fast you are going. That goes back to the deadline notion you were talking about before. Let's say that I have twenty features to do something. I know my team can do five features per week. That means my little project is probably going to take four weeks. I can already make a pretty good forecast of when I would be done with this chunk.
Then in week number one my team gets three features done. Typical business reaction would be - lazy developers, I'm paying them all this money and they don't do anything. A better thing to do is that the team should look at why we only did three and not five in the week. They might find out that the job is more complicated than they thought, or they don't have the right information, or whatever. That can be fed back to the sponsors. If that doesn't happen, it's still all about lazy developers.
If the project is more complicated than expected, they may only be able to do three story points per week, so the project would actually take seven weeks. If you know that from the beginning, that's pretty nice. It's better than waiting four weeks and then getting the "we ran into problems" response, when it's too late for the trading manager to make any adjustments. He may have hired someone else already and got them trained on this new product, and everything is ready to go except for the brains. You can save yourself a whole month of exchange fees with a little bit of communication.
Conversely, there may be something I have to have developed in five weeks with twenty points. If I get three points in a week, I know that in five weeks I can only do fifteen. So I can look at which fifteen I must have on day one and which five I can have two weeks later. I can make strategic decisions, as my product unfolds, in terms of implementing what I absolutely need. The concept of on-time/not on-time disappears. You're always on time because you know what you'll get, and when. You have the ability to put in what you need at which points. That's agile project management.
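The forecasting arithmetic in the preceding paragraphs reduces to two small calculations, sketched here: project the finish date from observed velocity, and work out how much scope fits inside a fixed deadline.

```python
# Velocity-based forecasting, as described above: 20 points at 5/week is a
# 4-week forecast; at an observed 3/week it becomes 7 weeks -- known in
# week one, not week four. Conversely, a 5-week deadline at 3/week fits
# only 15 of the 20 points, so the must-haves get chosen up front.
import math

def forecast_weeks(backlog_points: int, velocity_per_week: float) -> int:
    """Weeks needed to burn down the backlog at the observed velocity."""
    return math.ceil(backlog_points / velocity_per_week)

def fits_by_deadline(deadline_weeks: int, velocity_per_week: float) -> int:
    """How many points can realistically be delivered by the deadline."""
    return int(deadline_weeks * velocity_per_week)

forecast_weeks(20, 5)    # planned forecast: 4 weeks
forecast_weeks(20, 3)    # observed velocity of 3 -> 7 weeks
fits_by_deadline(5, 3)   # 15 of 20 points fit -> choose which 15
```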
Now, you see that you have two approaches to use in turning round how things work in trading companies. Let's combine the two. You have componentised pieces that you can plug in, in the right spot, wherever you want. You can go about developing new logic in a very pragmatic, very predictable, way. What do you see happening now? That's where we're taking RTS.
AT: Thinking generally, you are doing things in that much more incremental way … it's almost like comparing a high frequency scalping model with a long term trend following model. You don't have those great dips in your equity curve when you screw up on a long project, as with a long trade?
Carl Ververs: Exactly right. You combine two seemingly opposite requirements into one thing and marry them. From a technology perspective, the componentised approach with the right level of coupling is otherwise referred to as Service-Oriented Architecture (SOA). A lot of people have hijacked and perverted that idea, so that SOA is by definition always some kind of web application server. SOA, like Agile, is a mindset, not a thing.
With SOA, you have the right level of abstraction at the right point, and the right level of coupling at the right point. It doesn't mean one-size-fits-all. It also doesn't mean having one big integration point. It means that you have the right-sized Lego blocks, so to speak. With Lego Duplo they have those enormous blocks made for kids developing dexterity, and the more advanced the Lego set becomes, the smaller the pieces get. It's like that: the right size for the right job. With service-oriented architecture you can splice your functionality and your coupling the way that is necessary at each point. You can also enhance the pluggability of the pieces where you can permit it, rather than just demanding that everything is in byte-encoded messages. That is something I have seen hardly any trading shop doing.
AT: What are the alternatives? Were there ever any credible alternatives to the combination of Agile and intelligent use of SOA? Was there ever anything else you thought could be a promising approach before deciding to go with this?
Carl Ververs: Yes, I have tried stuff with grid and application frameworks. Very early on I did things with Java, when it was still in its infancy. Its reflection capability made for very simple programming. You could have very powerful stuff, like a command pattern. The idea is a standard plug-in model where you have a standard interface for stuff that does something useful. This is not my idea; I got it directly from Digital Audio Workstations. You take these plug-ins that either take MIDI commands or take sound. Then they do something on the output and that gets onto the bus.
I applied that same idea to this application framework where instead of the data being sent around on a messaging bus, the data is a static virtual shared memory, and then there is this grid cloud concept of application agents that can start up pieces of functionality as demanded, and the functionality then wakes up and acts on a piece of data and then goes back to sleep. The scalability of this is absolutely massive because the data is no longer shipped around; it is the algorithms that are shipped around.
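The "ship the algorithm, not the data" idea can be sketched as a toy: market data sits in a shared data space (a plain dict standing in for the virtual shared memory), and small agent functions are dispatched to it, compute in place, and go back to sleep. All names below are illustrative, not from the framework described.

```python
# Toy sketch of agents shipped to static shared data, per the grid/cloud
# concept above. Nothing is copied or sent over a bus; each agent wakes
# up on a slot of shared memory, derives a value in place, and sleeps.

shared = {
    "XYZ": {"prices": [100.0, 100.5, 99.8, 100.2]},
    "ABC": {"prices": [50.0, 50.1, 50.3]},
}

def avg_agent(slot: dict) -> None:
    """Derives the average price in place."""
    prices = slot["prices"]
    slot["avg"] = sum(prices) / len(prices)

def range_agent(slot: dict) -> None:
    """Derives the high-low range in place."""
    slot["range"] = max(slot["prices"]) - min(slot["prices"])

def dispatch(agent, symbols):
    """Ship the agent to each symbol's slot instead of shipping the data."""
    for sym in symbols:
        agent(shared[sym])

dispatch(avg_agent, ["XYZ", "ABC"])
dispatch(range_agent, ["XYZ"])
```

The scalability claim follows from the asymmetry: an agent is a few kilobytes of code, while the data it operates on can be arbitrarily large and never moves.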
AT: You looked at that concept but you dropped it?
Carl Ververs: Yes, because the speed was not viable at the time. We are talking about the beginning of the 21st century and Java wasn't ready yet. I tried this with C++ but the language did not lend itself to doing this effectively, because it doesn't have reflection. I experimented with more of a static plug-in interface, which made it better, but I didn't really have a chance to finish it up. 2001 happened and the funding from Goldman dried up. I have not had a chance since then to flesh this out some more. Private clouds as a continuation of the grid idea, with application frameworks where algorithms live on a range of machines … this is something we are planning to do with RTS server solutions.
AT: Day to day you are presumably busy with all the things you just outlined. At the same time you are trying to keep an eye on what is going on elsewhere to see if something else is coming round the corner. How do you do that?
Automated Trader: Do you feel that diversification by type (e.g. trend, reversal, AI etc) of trading model is a viable way of dealing with reducing model lifespan? Or do you feel that specialising in a particular type of model is a more effective approach?
John Reeve: The BlackCat fund uses momentum and mean-reversion models trading a broad set of futures markets. Each type of model adds useful diversity to the portfolio resulting in a high overall Sharpe ratio. We use models that trade with a time horizon of minutes to weeks but with an average holding time of about eight hours for the whole portfolio. It doesn't help with model life but diversifying as broadly as possible helps with portfolio robustness. In the event a strategy were to fail it would only result in an incremental reduction in trading performance rather than a catastrophic failure of the portfolio. Ultimately, I'd prefer to be a cat with nine lives rather than having all my eggs in one basket.
Automated Trader: Do you feel that diversification by type (e.g. trend, reversal, AI etc) of trading model is a viable way of dealing with reducing model lifespan? Or do you feel that specialising in a particular type of model is a more effective approach?
Miles Kumaresan: It's the first one. I personally am a big fan of diversification at every level, whether it is securities markets, asset class or inefficiency type. Trading many different frequencies, momentum as well as mean-reversion, all form a natural part of this view.
As for AI, I am a big sceptic. Artificial Intelligence was a name coined in the 50s, during a frenzy of over-enthusiasm among respectable computer scientists, for the effort to create synthetic intelligence. There is nothing intelligent about it, synthetic or otherwise, although it is good for playing chess and many other tasks.
There is some value in a different class of methods such as neural networks, fuzzy logic, genetic programming, etc. I believe they can potentially contribute to parts of a larger systematic trading effort. I successfully used them in robotics many years ago, but their role in a trading system is still limited.
Carl Ververs: I read a range of web and paper publications that are not necessarily technology oriented. One of my favourites is actually The Economist. You get such a broad view about anything happening in the world. One can agree or disagree with the economic or political point of view, but in the meantime the breadth of coverage in there is astounding. There are a lot of new ideas from geopolitics to microeconomics to biotechnology, all the way to water cleaning in Kenya, for example. It all gets some attention, and if you read it from the background of your craft then there are always little nuggets that you can think about and apply.
For example, Africa is literally texting itself out of poverty. You don't need the middle man to do banking anymore. You basically do microfinance by transferring balances on your cell phone bill. Well, that's innovation if ever I've seen it. The only thing you need is a cell phone network, which doesn't require electricity in every hut. So why couldn't we apply something like that to system maintenance? Let's say that I am looking at our server and seeing that it is about to run out of memory. Wouldn't that be nice to know, as opposed to hearing it from an irate client calling you after it has happened? Think about how people are monitoring nuclear power plants, or rather should be. It is fascinating how real time monitoring of critical infrastructure happens - and how it links to a story about African farmers.
AT: That's interesting - you don't spend your entire life sitting there with IT Week, as it were; you're trying to do something more lateral.
Do you do this on a formal basis by setting aside a specific time or is it just - "I'll do it when I have a second in the evening with a glass of something?"
Carl Ververs: I have a constant scan out on the news. I get email updates from The Economist and several other sources, and I read another online Dutch paper - I'm Dutch, by the way. I keep an eye on CNN and The New York Times. But what I do is scan the headlines really quickly. The last thing that caught my eye - it's completely unrelated but you'll see how it applies - was an article about how a mother said schools were going to the dogs because they are teaching kids to think in a uniform, non-creative way. Therefore, innovation is already killed at the beginning of our children's lives. I'm reading this and thinking - here's this idea of conformity and measuring people on execution of some kind of prefabricated pattern rather than on the outcome. So that's why innovation gets killed.
So you take that into your daily work with your teams, and tell them: I don't care how you execute, but I do care that we are getting better all the time. So if you want to make up how you do things, then that is desirable. People then get uncomfortable, and I say: well, if you make mistakes, then I know you are innovating. If you don't make mistakes then I am going to give you a bad grade. But that's from that article, right? It has nothing to do with IT, trading or management. It's about child-rearing. But you take out the little nugget and there it is, a transformational message to my team. So as I say, I do a rapid scan of a lot of stuff out there, and when something seems to have a nugget, I dive in or park it and read it later on my way home. If something requires more reading then it might get done on the plane. It's a scan and schedule idea.
AT: There must be, in the most general sense, criteria in the back of your mind that say to you: can I project this so many years ahead, or can I see this working x number of years ahead? For example, when you were looking at grid computing at the beginning of the millennium, it didn't work then but were you mentally shoving it away in the back of a cupboard thinking, I will revisit that in ten years?
Carl Ververs: Absolutely. It was the same thing with InfiniBand back then. It was such a good idea, but you needed such specialised, expensive hardware, and the fibre infrastructure was rare then; the infrastructure just wasn't there yet. It was like introducing the light bulb without introducing electricity. The real genius of Edison was coming out with the whole infrastructure behind it. The light bulb itself was just a parlour trick - and this is the same idea. Early grid computing with InfiniBand was a parlour trick. Similar to FPGAs; when they came out they were way ahead of their time.
You file it away - for instance algorithms on a chip that you can erase when you feel like it. It's interesting; if the chips are fast enough then perhaps we can blaze an FPGA at 2MHz; it's typically much less. Now we are talking; now we have something to work with. You do various layers of coupling, having the FPGAs go directly at the data, then chopping that up in a more palatable fashion for something more generalised, like a graphics processing unit. Then you stick that into the matrix calculations and now we've got something. I am still waiting for FPGAs to get that kind of speed, and then I might pull out the soldering iron.
Another example is adaptive computing. I have been working for my entire career with artificial intelligence and software that writes itself. It is really fun, and scary sometimes, because it actually comes to life and you can't kill it. I created a self-maintaining program that had to be 'ultra highly available' and it was basically a network worm. It would jump from machine to machine until it found the machine with the most resources. Because it was almost alive, I had to shut the network off. That was in 1996 and it was very scary. But that is now filed away as the idea of having virus ideas actually working for you and finding the best resources for what you want to do. That is also an idea that no one is using and I have no idea why.
AT: We've spoken to a few readers recently who've been using AI to gain environmental insight into which times of day and markets were optimal for particular categories of model. What's been your experience of AI?
Carl Ververs: At Goldman we were looking at genetic algorithms - not necessarily genetic algorithms in search of the absolute maximum in a data set, but more at evolving the code itself. You would make changes in the parameters of a piece of functionality, and I was trying to experiment with changing pieces of the functionality itself. You would have pieces of code being written by your engine, and it would test them. Then you would deploy them side by side over a particular environment or market and see which one has the highest P&L. That's why I started looking at Java: you have the compiler built in, so you can cough up code, compile it and run it on the fly. That was another really scary thing because it almost looked like Skynet, and it really wasn't that hard to do. To have a computer write its own software and subsequently judge whether that software is any good is pretty scary.
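The evolutionary loop described here, judging candidates by back-tested P&L, can be sketched in miniature. This toy mutates only a parameter rather than generating code (Python standing in for the on-the-fly Java compilation), and the strategy, price series and numbers are all invented for illustration.

```python
# Toy evolutionary parameter search: mutate a candidate, back-test it on
# the same history, keep whichever scores the higher P&L.
import random

prices = [100, 101, 103, 104, 103, 105, 108, 110, 109]

def pnl(threshold: float) -> float:
    """Toy momentum strategy: buy one unit whenever the last move exceeds
    `threshold`, sell on the next bar; return total P&L over the series."""
    total = 0.0
    for i in range(1, len(prices) - 1):
        if prices[i] - prices[i - 1] > threshold:
            total += prices[i + 1] - prices[i]
    return total

def evolve(generations: int = 30, seed: int = 0) -> float:
    """Hill-climbing stand-in for the genetic loop: mutate, judge, keep."""
    rng = random.Random(seed)
    best = 1.0                                      # starting guess
    for _ in range(generations):
        candidate = best + rng.uniform(-0.5, 0.5)   # mutate
        if pnl(candidate) > pnl(best):              # judge by P&L
            best = candidate                        # survivor
    return best

best_threshold = evolve()
```

The scary step in the original was replacing the parameter mutation with mutation of the strategy code itself, compiled and deployed by the engine; the evaluate-and-select loop stays exactly the same shape.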
AT: You'd be out of a job.
Carl Ververs: Well that's when I decided to go into management! But seriously, this is the idea, if you combine that and instead of the software writing itself entirely, you have a situation where you're doing your thing and you have the AI to evaluate that, and it suggests, maybe you could do it one of these three ways out of the ten ways you could do it … that's an interesting idea.
AT: I assume you would apply that approach to a single discrete software module. Did you also try doing that by using the AI to manage the development of multiple modules that would then have to interrelate - so that if you made changes to one it would have implications for another, and how would you manage that?
Carl Ververs: We didn't get that far. That's a really good idea, and come to think of it, that's how an organism works. That's why I get a lot from cellular biology, applying principles that have been around for three hundred million years; applying these concepts to building systems. I definitely have an eye on that but I haven't taken any steps on it. You would think of computing cells as part of a whole organism.
I have applied that to management principles where the company is actually the organism and that has been extremely effective. I've even got as far as coming up with treatment for corporate depression. There is a concept I have been working with for the last couple of years called corporate behaviour therapy and it has been surprisingly effective. In a nutshell it is getting people past the point of "oh, our organisation isn't good enough and we are too dumb and things will never change." It circles back around to the whole Agile thing and how everyone wants to "be Agile". Every CEO on the golf course says, we have an Agile culture. They do all the work, and they pay for it, but it's not that easy, and $100,000 later, somehow, nothing seems to have changed.
AT: It's so much more than just ticking the boxes and doing what the book says. Some organisations, as they're structured today, might as well forget about it.
Carl Ververs: We're speaking about organisms. The organism itself has got that way through evolution. The reason it's still around is that it's probably pretty robust in how it came to be that way. But also, to change it, you almost have to turn it on itself and defuse its own immune system in order to make any changes at all. Any change is counter to its instinct to defend itself. That's how organisations are. You cart in your new methodology, agile or whatever, and the little pilot project goes swimmingly. Everybody is happy to come to work again because they can really get something done. Then comes Monday morning, back to the grind, and nothing changes. That is literally rooted in people's behaviour. People behave in certain ways because of how they measure each other. If you don't change how they measure each other, then they cannot change their behaviour. It's a huge challenge and it is extremely risky. If you don't change the behaviour then it doesn't matter what methodology you use because it is going to be exactly the same.
This is something I got from Cognitive Behaviour Therapy, often used to combat depression and phobias. If you apply that to people's fear of changing, and this works both on the managerial side and on the technical side, wherever people say SOA - never gonna work or AI - never gonna work or grid computing - never gonna work - this fear of changing anything at all is rooted in the same principles that keep people depressed or that sustain people's anxieties. Once you crack that then you crack the impasse and you've put people on a path where they can actually do new and almost revolutionary stuff in extremely short order.
AT: I can't wait to see what happens next. Carl, thank you very much.