Andy Webb: Could we start with a bit of background? I'd be interested to know how you came to be running Global Advisors, how Global Advisors came about, and what was the original idea behind its formation?
Russell Newton: Danny Masters and I established Global back in 1999. We'd worked together for quite a long period at various institutions, starting with Shell back in the mid-eighties. We were both in the trading room, both involved at that time in physical oil trading. We followed each other round the industry, from Shell to Phibro and eventually to J.P.Morgan, and in the process, we started to refine our ideas about how we wanted to trade, moving from seat-of-the-pants discretionary, physically oriented, fundamental trading, to trying to be a bit more scientific about it as computers got easier to use and more powerful.
By 1999, we thought there was an opportunity because we were seeing a lot of flow coming through the J.P.Morgan desk from big global-macro funds - people like Campbell, Caxton, Tiger, even Soros - but it was relatively unsophisticated in the way they were approaching the commodity markets. Meanwhile, in the commodity space we were seeing lots of pretty chunky participation from what you would consider to be the commodity market professionals - Shell, BP, Trafigura, Glencore - but there was always a certain mystique about what the hedge fund players were doing.
So there was a gap in the middle, between looking at the market from a very fundamental perspective and understanding the flows of physical crude that you were seeing if you were a Shell trader, say, and the macro picture that the big global-macro guys were using to drive their decisions. So we decided to set up Global to try to sit in between those and use money-management disciplines to trade more efficiently and use computers to help us to do that, while knowing a little bit more about the nitty gritty of the commodity markets we were trading than maybe the average global-macro guy took the trouble to know.
So that was the genesis of the idea. We had been lucky at J.P.Morgan that we'd met quite a few of the large money managers, so there was a natural platform for us in terms of raising assets in the early days, which gave us a head start with a couple of reasonably chunky investments from people like Moore Capital and one of the meaningful proprietary oil traders. That was the beginning, and really, the last twelve years have been spent moving further away from a purely discretionary approach, to a blend where we computerise as much as we can of our understanding of the commodity markets we trade, and only really use discretion if we think something's happening that the computers might not recognise as being a serious operational risk to the strategy - like deliverability risk; things that you've had in the past like a Piper Alpha or a Brent Bravo where it's impossible to deliver that commodity, or even Katrina, where it became an issue in base metals and gas.
Andy Webb: When you first started, in '99, would you say that you were still overwhelmingly discretionary in your approach or was there a quantitative element even then?
Russell Newton: Danny is a pilot, so a lot of the analogies we use are flying-related. By '99, we were still flying the plane, but we had a pretty good instrument panel in front of us, telling us what was worth looking at - where markets looked like they were breaking out, where the fundamentals were most interesting - so we could steer in that direction. Typically, we would have, for example, a suite of fundamental models that were trying to evaluate relative S&D-type stuff and boil it down to: is anything in this space (let's say liquid hydrocarbons) very cheap or very expensive? And if so, let's look at that in much more detail. What's the price action like? It's all very well finding that heat cracks are cheap, but they could well get much cheaper in the short term. As a discretionary player, you want the radar screen to give you the heads-up that it's worth looking at this particular trade, but then you want to determine the timing and the sizing yourself.
What really happened from '99 to '03/'04 - when we started rolling out pure quant - was that each element of that pilot was getting replaced by the machine. How do you size your positions, where do you stop out of your positions, how do you enter in the first place? All of those questions. We gradually came up with what we thought were satisfactory answers so that we could leave just this kind of operational risk for what I call the dead man's hand.
Andy Webb: Had you had much exposure to the quantitative side of trading before you did that for yourself?
Russell Newton: Yes and no. I'm a child of the eighties, I suppose - born in '65, fifteen in 1980 when Clive Sinclair was rolling out the ZX80. With my first-ever paycheck, from a summer job, I bought an Acorn Atom, which had a mighty 12KB of RAM. The first thing I tried to do with it was model equities and punt my own account in the equity markets. So I guess from an early and very modest beginning, here I am thirty-two years later still doing basically the same thing.
Danny's the same. His undergraduate degree is Physics, and then he went on to Imperial to do an OR (Operational Research) Master's. So we've both got scientific, computational backgrounds, and although Danny wasn't immediately using that in his trading, he was probably much more numerical than the average Phibro guy by the time he arrived there. So it's been an evolution, and sometimes that's hard to explain to investors, because for most investors there is this very strong divide between discretionary and systematic quant.
Andy Webb: When you started to take a more quantitative approach to trading the fund, what were the tools that you were using? Excel spreadsheets? MATLAB?
Russell Newton: The first guy that we hired was an engineer who had been working for DERA, which became QinetiQ. He liked an engineering language called PV-Wave that's also used in the aerospace industry - a proprietary command-line language - and for a long time we used that. It had a very powerful toolbox of transforms and manipulations that made a lot of sense to us. One way we wanted to articulate, or at least filter, our opportunity set was to find the dominant time frame in each of the markets we trade, to see how noisy or not-noisy they are, and concentrate on the not-noisy ones. For that, you might want to use Fourier or derivative Fourier transforms, and PV-Wave wasn't bad for that.
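[Editor's note: the dominant-time-frame filtering described above can be sketched roughly as below. This is an illustrative reconstruction in Python/NumPy rather than the firm's PV-Wave code; the function name, the periodogram approach, and the signal-to-noise measure are all the editor's assumptions.]

```python
import numpy as np

def dominant_cycle(prices, max_period=120):
    """Estimate the dominant cycle length (in bars) of a price series
    from the periodogram of its log returns, plus a crude signal-to-noise
    ratio: power at the peak versus mean power across valid frequencies.
    Illustrative only - not the firm's actual model."""
    returns = np.diff(np.log(prices))
    n = len(returns)
    power = np.abs(np.fft.rfft(returns - returns.mean())) ** 2
    freqs = np.fft.rfftfreq(n)  # cycles per bar
    # Ignore the zero frequency and cycles longer than max_period bars.
    valid = (freqs > 0) & (1.0 / np.maximum(freqs, 1e-12) <= max_period)
    idx = np.argmax(np.where(valid, power, 0.0))
    period = 1.0 / freqs[idx]
    snr = power[idx] / power[valid].mean()
    return period, snr
```

Markets showing a high SNR at some period would be the "not-noisy" candidates worth a closer look; the rest get filtered out.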
In the same way, we were doing similar things with the fundamentals. We'd take a big data puke of every fundamental piece of information we could find about the liquid hydrocarbon space, and then try to use various toolboxes within PV-Wave to narrow down the landscape to those things that were boring and those things that looked potentially interesting. And so it would come up each week (because the DoE numbers only come out once a week in the US, for example) with a set of charts that were just: this is what this particular time series has done recently - maybe it's heating-oil cracks or Brent/TI - this is its trading range over the recent past, this is whether there's any seasonality in it, and these are the fundamentals that seem to be its drivers. This is how they stand, and it looks like a good fundamental story. Price-wise, it's cheap, and it's starting to break out to the upside. That's something that a discretionary trader is going to get interested in. You get fifteen or twenty charts like that and just plough through them. You could narrow it down to maybe three or four opportunities.
That was really where we started. There were two main angles apart from just the price-based grind of - what does the price action look like and do we think there's actually some signal in here, or is it all just noise? One was the pure fundamental overlay that I've just described, and the other was some work we did, trying to see whether we could come up with a proxy for the speculative position that other players were holding. We felt that the spec that comes out from the US CFTC is an interesting number, but it's lagged, it's only once a week, and it doesn't apply to non-US markets. So we figured that if we could sit and think about all the different ways that we'd seen our colleagues or competitors trade each of the markets that we were interested in, maybe we could build something like a proxy for the spec position in each of those markets and get some interesting insight into whether other people were getting long, or how that was changing with the price action - in the hope that we would eventually stumble across two or three markets, from the thirty or forty that we trade, where there's a big divergence between the spec position that these guys are holding, and the price action.
Andy Webb: That's very interesting. In effect, you were creating your own synthetic COT report for multiple markets, and in that report you were trying to synthesise, based on your experience as fundamental traders on a physical desk yourselves … were you trying to model how the physical guys were trading, or just the speculative guys, or both?
Russell Newton: Anyone that we thought was a market participant. Obviously, the commercial versus non-commercial aspect is relevant, but if you think there are speculators out there looking at fundamental data, physical premia, whatever, then that's a relevant behaviour to include in the proxy. The obvious test is whether the proxy has any real correlation with the real COT numbers when they come out - and actually we got pretty good results on that front. So we thought it was a promising starting point, and in fact it has become one of our models now, in a somewhat more sophisticated form.
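[Editor's note: the spirit of the spec-position proxy can be conveyed with a toy version - simulate a population of trader "agents", each following a simple rule, and aggregate their positions. The moving-average trend-followers and lookbacks below are the editor's illustrative stand-ins, not Global's actual agent set.]

```python
import numpy as np

def spec_position_proxy(prices, lookbacks=(10, 20, 50)):
    """Crude proxy for the aggregate speculative position: each simulated
    'agent' is a trend-follower that is long (+1) when price sits above
    its n-bar moving average and short (-1) otherwise. The proxy is the
    average position across agents; in practice you would validate it
    against the weekly COT figures."""
    prices = np.asarray(prices, dtype=float)
    positions = []
    for n in lookbacks:
        ma = np.convolve(prices, np.ones(n) / n, mode="valid")
        pos = np.sign(prices[n - 1:] - ma)
        # Pad the warm-up period with zeros so all agents align in time.
        positions.append(np.concatenate([np.zeros(n - 1), pos]))
    return np.mean(positions, axis=0)
```

A large divergence between this kind of proxy and the price action is exactly the signal Newton describes looking for across their thirty or forty markets.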
Andy Webb: How would you say that you've moved on from where you were in 2003? You've got an engineer working for you by that stage, you're doing some quite heavy quant stuff - what then happened? At that point, did you feel - this is going well, we'll do more in this direction?
Russell Newton: By the time we'd had a chance to raise some capital for the quant strategy and launch it formally as a fund, and all the other things, we were talking about the middle of 2007. At that point we looked at the two or three years of live trading with the quant, and probably eight years of trading with discretionary, and what we found was that for a unit of return in the discretionary we were using a lot more leverage - it was a much hairier operation in terms of round turns per million, and therefore the intervention of mid-office people cleaning up trades, et cetera - and volatility was higher in the discretionary as well. So the systematic seemed to be doing what we'd hoped, which is extracting the most relevant bits of fundamental data and making us more efficient at generating returns.
So at that point we elected to spend more resource on developing that. The problem there is that you get into a bind with - well, here's the story. Most people would agree that quant is a Darwinian space where you either eat or get eaten in terms of your research process. So I think it was unlikely that anybody out there was going to claim that the best thing to do, once you've developed a model, is just sit with that model in perpetuity.
So, following on from that, you've got to say - okay, I need a process, in terms of these are the models that I've been trading, and these are what I've been selling to investors. How much do I want to keep changing it every month, by adding new models, and how much do I want to say, we are working on research, but for a time, we're going to run with what we have? At that point, remember, we were still raising assets. By mid-2007 we had around £150 million in the quant. The latter is what we decided to do, so that we had a stable story to tell investors while we were raising money, and it didn't feel like we were confusing our pitch every month with something new.
The problem with that is, eventually you build up a head of steam on the research side where you've got a whole bunch of ideas that are not really going anywhere, and really by late '08, early '09, we had quite a lot on our plates that was looking pretty interesting. So, after a phase from probably '07 to early '09 where we didn't really do anything to the strategy, we didn't add any markets, we didn't add any models that hadn't already been there for a couple of years, we started again to look in a number of new directions.
In the fourth quarter we added two or three new models and in the first quarter we'll probably add two or three more. That brings its own problems, because after a period where returns have not been spectacular - which they haven't over the last couple of years - there's a danger that investors are going to see this as a knee-jerk reaction to a period of poor performance. In fact, it's more the case that this work has been going on for a couple of years and it takes time to come to fruition. It's just coincidental that everything seems to be getting to a point where it's both interesting, in terms of low correlation to what we have at the moment, and pretty much ready to roll out.
We've gone from having one quant working with me in '03 and '04, to having six or seven guys now, on a much more formal development and production cycle. It's a much more interesting research and development environment, and it's producing a richer palette of models.
Andy Webb: How do you manage that research process? You've got six or seven quants there, who have probably all got ideas of their own. How do you manage the pipeline? Different people work at different rates, different ideas take longer to research through thoroughly; how do you get a consistent degree of progress? Is there a formal management process?
Russell Newton: There is to some extent. There are two ways that we could start a project. A quant may have a hobby-horse of their own, which they present as something they'd like to follow. We're prepared to let them follow it for a while. Or Danny or I, or one of the other team members, may have something that's been on the back-burner for a while, but which we feel is deserving of attention. So we assign a quant to that when they finish up on something else. Having given a quant a task, there would be a period during which they're working on a proof of concept, just to see if the basic idea has any real merit, and if it does, at the weekly research meeting, they would effectively do a pitch - this is the basic premise, this is what I've done to test it, this is what the results initially look like. That may take a month to three months to get to that point. Really what they're looking for is further funding or support from us to continue. If the result is that it doesn't look at all promising, we'd probably say - have a think about something else. But if they get through that hurdle, they are moving into a more formal development phase, and that could last three or four months. They would come out with a Version 0 beta that would go to the investment committee, and that would go to the traders for testing.
Andy Webb: Do you have any formal criteria for an initial idea? Does a quant have to come up with specific numbers, like, say, a simulation of x months of data, or a Sharpe, Sortino, K Ratio on that performance, or simulated live trading?
Russell Newton: Initially, we'd probably just ask them to think about the in-sample period that they want to start looking at, so they've got some significant out-of-sample period that they can come back to later, that the model's never seen before. Also, with that in-sample period, what kind of results are you seeing across a selection of commodities? We'd be interested, for example, in the disposition of return across the range of commodities. I'd be concerned if I saw all the returns coming from one or two of our thirty or forty markets. That suggests that it may be a feature just of a couple of markets that you're capturing, and that's not really going to generalise very well. So we'd look at all of that; we'd look at how the tail of returns correlated with our own other models' returns during similar periods, looking in particular at shock events. But how much data we would want them to show in that proof-of-concept stage - that's very dependent on what kind of model it is. If it's an intra-day model, we might just look at one year's worth of five-minute data. If it's a daily trading model, it's going to be a different story.
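[Editor's note: the concentration check Newton describes - being wary when most of a model's returns come from one or two markets - is easy to make concrete. The sketch below is the editor's illustration; the metrics and thresholds are assumptions, not Global's actual acceptance criteria.]

```python
import numpy as np

def return_disposition(pnl_by_market):
    """Given a dict of market -> array of daily P&L from a backtest,
    report each market's annualised Sharpe ratio and the share of
    gross profit coming from the top two markets. A high share would
    suggest the model captures a feature of a couple of markets rather
    than something that generalises."""
    sharpes = {m: np.mean(p) / np.std(p) * np.sqrt(252)
               for m, p in pnl_by_market.items()}
    totals = {m: np.sum(p) for m, p in pnl_by_market.items()}
    gross = sum(max(t, 0.0) for t in totals.values())
    top2 = sum(sorted(totals.values(), reverse=True)[:2])
    concentration = top2 / gross if gross > 0 else 0.0
    return sharpes, concentration
```

At the proof-of-concept stage one would also hold back the out-of-sample period entirely, as Newton notes, so the model never sees it until later.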
Andy Webb: When an idea is initially set out, have you got in the back of your mind any gaps in your model portfolio?
Russell Newton: For sure, yes. I have maintained a document for the last two or three years, which is my own pipeline, and of course it just gets longer and longer. With a decent-sized team, you hope you'd clear the decks of some of these ideas, but it doesn't seem to work like that.
In the commodities space, there are certain areas that seem particularly fertile: the way that the term structure behaves is pretty variable in commodities, so that's of interest; fundamental drivers; it's good to have some pure price-based element to the portfolio, because we have seen environments where the fundamentals were really a bit of a head-fake. So we do like to have models that are just going to be able to follow very strong price action. Then you've got different time-frames that you can be looking at, but mostly, it's about additional non-price elements, so more fundamental data, more sophisticated models in terms of the way that they use that fundamental data.
For example, in the grain markets there's reasonable data from the Department of Agriculture in the States on inventories, but there's extremely rich data on crop progress during the season. You might only get inventory data once every week or couple of weeks or month, depending on the time of year, but you get this much richer data set of what kind of year it has been in terms of crop progress - and that's heavily dependent on the weather, obviously. What we do is use those two sets of data together to try to produce some sort of proxy for how the inventories are expected to move - rather than just waiting for the number to come out. That kind of thing, where you're trying to blend sets of fundamental data, is reasonably specific to our space, I think.
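[Editor's note: the blending idea - using the rich weekly crop-progress series to nowcast the sparser inventory series rather than waiting for the number - can be sketched as a simple regression. The linear fit below is the editor's minimal stand-in for whatever mapping the firm actually uses.]

```python
import numpy as np

def inventory_nowcast(crop_progress, inventories, obs_idx):
    """Fit a linear map from weekly crop-progress readings to the sparser
    inventory observations (observed at week indices obs_idx), then use
    it to produce an expected inventory path for every week. Purely
    illustrative of the data-blending idea, not the firm's model."""
    x = np.asarray(crop_progress, dtype=float)
    y = np.asarray(inventories, dtype=float)       # one value per obs_idx
    A = np.column_stack([x[obs_idx], np.ones(len(obs_idx))])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    return slope * x + intercept                   # expected inventory each week
```

The point is that the model reacts to each weekly crop-progress print, instead of sitting idle between inventory reports.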
The other angle is, we have a lot of research going into the risk side of the equation. At one point in the past, when we were discretionary traders - we were operating from the Merc floor - we felt that there was some edge to be gained from understanding the flow when the Nymex was a really significant part of the flows in the energy markets in particular. One thing Danny learned from that experience was - sometimes it doesn't really matter how you got into a position, it's what you do with it afterwards that really counts. Obviously that applies very much to the locals who are making markets, but for us, we felt that it meant it's all about how you size positions, and the risk management in terms of whether you shrink your position into a loss or whether you stop it all out on one point.
That's one of the things that has differentiated us in the past. Even if we've had poor performance in a given period, it has tended to be very attenuated because of the way that we're managing the downside risk. I think it's all about living to fight another day in the markets. So although it doesn't feel all that interesting to spend your time thinking about how you size positions as opposed to building some sexy model that's predictive, it can probably make you almost as much money.
Andy Webb: And, as you say, living to fight another day. Quite so. One thing intrigues me, though. As you pointed out, there's lots of specificity in the data you're dealing with on the fundamental side. Potentially, you've got a data-management nightmare. You're not just having to look at historic price and volume; you've got all the fundamental data coming from multiple different sources. How do you manage all that, and warehouse all that, and make it available to the quants to do their job every day?
Russell Newton: Pretty much from day one, we used a data warehouse based down in Austin, Texas - XMIM from Logical Information Machines, now part of Morningstar. It's very commodity-biased, and although the user interface is a bit clunky, there's a colossal amount of all sorts of data, some of which you would be a little disturbed to find your managers using. For example, they've got all the astrological as well as the astronomical data. If you only want to trade when the moon is full …
It's amazing, though; for a weather-dependent sector like commodities, it has precipitation data that could be relevant to hydro-electric as well as agriculturals, it has temperature data for pretty much every city in the States and around the world, and all sorts of other highly relevant stuff that you could try to tie together. So that was our chosen data provider. The nice thing about it is that it's also quite an open architecture, so although the interface is clunky, obviously real guys never use the interface and we can poke our own data in if they've missed it.
Andy Webb: How much normalising of the data do they do? Have they got consistency across things like reporting units and things like that?
Russell Newton: The way that the database is structured, all of that information is also available to you when you use it. It will tell you in the header for the particular time series, what units of measure they're using, and what the frequency of that dataset is, and so on.
Andy Webb: On day-to-day analysis these days, with your team of quants, what are most of them using?
Russell Newton: Most of the guys are using R, unless they're doing really hard-core stuff that requires a lot of computation, in which case they're probably writing something in C. There are so many libraries now available in R. Most people have used MATLAB, but R is free, and we like open architecture generally. If we can take Linux and R-based libraries and concentrate on spending money where it counts - on quant heads rather than on buying closed-architecture stuff - then we're going to do that.
Andy Webb: With the execution of trades, you've got a trading desk as well, but how much of your execution is now automated?
Russell Newton: Bizarrely for what's now a quant shop, we're still a little leery of completely abdicating responsibility to the box for execution. The reason for that is that, unlike the S&Ps, the liquidity in some of the commodities that we trade can still be a little patchy through the day. So what we really want, and what we have, is an order-management system that allows the trader to drop everything into the market and have it executed electronically, but if he wants to work it over voice because he thinks it's going to scare the market, or if he wants to delay execution a little because he thinks it will work better at 9am than 8am, we'll let that happen. The one thing the traders know is that these are held orders, so they do need to be executed.
Now, as we move more into the high-frequency space, that becomes less possible. We will have to get used to the idea of passing orders directly to the market without any human intervention. The way we're dealing with that initially is to define, based on the traders' advice, windows during which we're comfortable with orders going automatically to each market. We're not comfortable passing a palladium order at 1am, for example.
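[Editor's note: the per-market execution windows could look something like the sketch below. The market names, times, and routing function are all the editor's illustrative assumptions, not Global's configuration.]

```python
from datetime import time

# Illustrative per-market windows during which fully automated routing
# is allowed; outside them, orders queue for a human trader instead.
AUTO_WINDOWS = {
    "palladium": (time(8, 0), time(17, 0)),
    "wti_crude": (time(7, 0), time(21, 0)),
}

def route_automatically(market, now):
    """Return True if an order in `market` may go straight to the
    exchange at wall-clock time `now`, per the traders' windows."""
    window = AUTO_WINDOWS.get(market)
    if window is None:
        return False          # unknown market: always via the desk
    start, end = window
    return start <= now <= end
```

A palladium order at 1am would fail the check and be held for the desk, which is exactly the behaviour Newton describes wanting.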
Andy Webb: With the automation, what are the platforms and tools you're going to be using when you do automate fully?
Russell Newton: At the moment, where we pass orders they're mostly going through TT's API, and the traders seem to like the platform. They can have the middle-office guys build them a spreadsheet interface that gives them better oversight on what's working and so forth, just for visibility. Where we're working now on automation, it's mostly based around passing orders using their APIs in either C or Java. Obviously, longer term, it's nicer to be more flexible and use generic FIX, and the way we're tackling that is to generalise with a layer in our order-management system that lets you just flick a switch to choose the destination and the protocol.
Andy Webb: What about time-frames? What range of time-frames are you operating as regards your trading models?
Russell Newton: For the most part, the fundamental stuff tends to be influencing fairly long cycle changes in market behaviour. So the agent-based approach I mentioned earlier (that came out of the work we did on modelling the commitments of traders) that's generally going to have positions that last twenty, thirty, forty days, and it's not changing much every day, although it will make small position adjustments based on prevailing volatility and whether any of the models are getting cut out within that general framework.
But at the other end of the spectrum we have started to look at much shorter time-frames. We've had a couple of price break-out models that have been running on daily data since inception, and they're ten- to twenty-day time-frame, and what's happening now is, we're looking at iterating those into using five- or ten-minute bar data, so they'll come down to having positions for three to five days as well. That's probably going to get rolled out in the next few weeks.
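[Editor's note: a price break-out model of the kind being iterated from daily bars down to five- or ten-minute bars can be sketched as a classic channel breakout. This is the editor's generic illustration; the actual models are not disclosed in the interview.]

```python
import numpy as np

def channel_breakout_positions(closes, lookback=20):
    """Simple n-bar channel breakout: go long (+1) on a close above the
    prior `lookback`-bar high, short (-1) on a close below the prior
    `lookback`-bar low, otherwise hold the previous position. The logic
    is identical whether the bars are daily or five-minute - only the
    effective holding period changes."""
    closes = np.asarray(closes, dtype=float)
    pos = np.zeros(len(closes))
    for i in range(lookback, len(closes)):
        window = closes[i - lookback:i]
        if closes[i] > window.max():
            pos[i] = 1
        elif closes[i] < window.min():
            pos[i] = -1
        else:
            pos[i] = pos[i - 1]
    return pos
```

Running the same rule on ten-minute bars instead of daily bars compresses a ten- to twenty-day holding period down to the three-to-five-day range Newton mentions.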
In parallel, we've been working on a version of the agent-based approach. While the fundamental data may not change on a day-to-day basis, we have been using it mostly as overlays, to help avoid making bad decisions by getting involved in markets where you're kissing the lead buffalo. That doesn't really change, and if you're looking underneath that, at what are considered technically good markets that you can judge as being fundamentally good as well, then we think that the idea of maybe even trading down to fifteen- or thirty-minute bars, but still being influenced by longer term fundamentals, probably makes sense. That's probably about six weeks from roll-out.
Both of those approaches have remarkably low correlations to any of the longer term stuff that we're doing at the moment.
Andy Webb: Are you considering anything much higher frequency than that?
Russell Newton: We've discussed this internally. We're trading roughly forty commodity markets, and then we're trading some of the spreads between those markets where real commodity guys would tell you that's a real thing that's tradable. Brent/TI is a tradable thing, so there's going to be liquidity on that spread. Front-to-backs in some of the markets are going to be liquid. But if you start to look at what has liquidity down to the millisecond time-frames, your best answer may be half a dozen markets: gold, crude oil, natural gas. You're not getting much diversification.
Andy Webb: What about spreads versus outrights? Do you have a notional percentage, or is it just whatever happens?
Russell Newton: We've been about 15 per cent historically, and as I say, it's mostly things like front-to-backs in the energies, and crack spreads, and some of the inter-commodity spreads. We've started to look at and trade some conceptually linked instruments, so we'll have portfolios of all the precious metals, or all the base metals, and do it on an RV basis which is partly fundamentally driven and partly price-driven. So that has increased the spread/RV component of the portfolio in the last three months, and we'll continue to add to that. We also want to add more agricultural spreads, because there's decent liquidity on some of the July-Decs in some of the big ag markets in the States.
Ironically, there's a little bit of pressure not to add too many markets. Because we run managed accounts, there's pressure to accept relatively small accounts until we're at capacity, and at that point you can get into integer-lot rounding problems if you add too many markets. We're not really sure how a globally diversified systematic fund would manage a $5 million managed account with two hundred potential commodities. We find it extremely difficult to keep the tracking error minimal on such an account with just forty markets.
The percentage going into spreads is fifteen, probably rising towards twenty now, and it'll rise further in 2012.
Andy Webb: Any correlation between time-frames and whether you're trading spreads or outrights?
Russell Newton: Not that I've really noticed. The only constraint would be that, with a few exceptions, spreads can become very expensive to trade on very short time-frames.
Andy Webb: What about the relationship between the trading desk and the quants?
Russell Newton: One of the reasons why we merged the offices into Jersey in '09 - the quants used to be in London and the traders used to be in New York - was that the traders used to get very frustrated. They'd get an interface in the order-management system that was unworkable, for example, and they wouldn't feel that the quants understood their problem. And the quants would say - you broke it.
Now, it's a very, very positive relationship. Everybody's in the same room, the traders sometimes come to the research meetings so that they can influence the pipeline, and they're very happy. They're an integral part of a decision to go with a model, in terms of checking liquidity, checking slippage, making sure that everything reconciles on a day-to-day basis in that final paper-trading phase.
Andy Webb: You're getting ideas not just from the quant side of the fence?
Russell Newton: Absolutely. That's what the quants want. What we don't want is just data-mined solutions. A model has to be inspired by some kind of real-world behaviour or observation or feature of the markets that we're trying to exploit. Not just noticing some weirdness in the data. That's okay, but it seems like a fragile way to run a business.
Andy Webb: I get the impression that you've got, not only a lot of stuff hitting the production space now, but also a pretty good pipeline behind that, for later on next year?
Russell Newton: I would say so, and I think along the way, you tend to find that the late stage of any new roll-out catalyses other ideas. I'm sure that what will happen along the way is that somebody will stumble across some other feature that they want to exploit.
Andy Webb: Do you monitor your models even when you're not running them? Say you've got one that's effectively died, or performance has dropped off to the point where it's no longer viable. Do you shove it in the cupboard and forget about it? Or do you monitor it? I spoke to somebody once who dropped a model years ago, stopped using it, started trading it again recently - and it's doing well.
Russell Newton: Interesting. Historically, once we've done an upgrade or an abandonment, we've tended just to draw a line under it. Mostly, that's because we've been adding new things. Where we've done an upgrade, we know that version 2 is better than version 1, so we're not that interested in tracking version 1.
Andy Webb: Russell, this has been so interesting. Thank you very much.