Is Aspect using both automated and algorithmic trading? And does it combine the two for complete end-to-end automation of the trading process?
At the moment about 90% of our electronic trading is managed by our algorithmic execution model. That represents a major increase over the past two years. We already have the capability to completely automate the execution process right from our "alpha models" (which generate the raw trading signals) through to our execution systems and the back office.
However, while we already have this capability, we still prefer to take advantage of the value that human traders bring to the trading process. We find that this approach is well suited to the characteristics of our core investment strategies. (If we were running high-frequency statistical arbitrage models then our viewpoint would be different.)
I think some people tend to overplay the importance of complete automation; either because they are looking to remove the overhead of running a professional execution desk or because they are selling automation as a means of capturing order flow.
So how do you see automation adding value to Aspect's trading?
Given the medium to long term nature of the investment strategies that we run, we see automation as being of value in three areas:
- Improving workflow on the trading floor
- Enhancing productivity by enabling our traders to focus on high value add trades
- Removing our footprint in the market by keeping our order flow under the radar
Is the general approach to pass routine order flow to algos and leave the more awkward trades for human traders?
I wouldn't say that we just use algorithms to handle routine order flow, because we now also have some quite sophisticated algorithms that are capable of sensing liquidity for us in more challenging markets. However, it would certainly be true to say that we use algorithms to handle orders where a human trader has no expectation of adding value.
When did Aspect first move into automated/algorithmic trading?
Late 2004 - the catalyst was an invitation to participate in EBS Prime. While this was obviously attractive, as it gave us a unique opportunity to access the core liquidity in the FX market, it was only possible to do this via an API, as no manual/keyboard interface was available.
Therefore, in November 2004 we began building a prototype automated trading system for foreign exchange. This not only connected to EBS, but could also be extended to connect numerous other FX liquidity pools. From a trading perspective, these were very disparate liquidity pools that a human trader would have been stretched to cover simultaneously. We went live with the technology in January 2005 and during the remainder of 2005 we went on to extend its connectivity to every major futures exchange in the world. As regards algorithmic trading, we spent most of 2006 designing and deploying our first proprietary algorithmic execution models.
Were you taking existing alpha models that you were already using in FX futures and tweaking them for trading on EBS?
We weren't actually trading FX futures at the time, but FX spot on a variety of liquidity pools. However, most of this FX spot trading was being done manually. The API requirement on EBS meant that we had to change this approach, although I think we always anticipated that we would move into automated trading at some point anyway.
Would you say that your FX trading was automated or algorithmic?
A bit of both, though strictly speaking the automation deliberately isn't end-to-end. As regards what we are doing across the various liquidity pools in FX, I would say there are definitely opportunities to build algorithms that interrogate those pools. These algorithms can alert us when an execution opportunity arises, or our traders can simply set up algorithms that will automatically react to such an opportunity.
Has automated trading allowed Aspect to explore increasing its level of diversification, such as into higher frequency timeframes?
We aren't doing any research at present into higher frequency trading with our alpha models, so the short answer is no - high frequency trading was never an ambition for us as regards automation. However, that is a separate issue from our research into execution algorithms and how they behave. We now look at high frequency data as part of that process, but not as part of our core investment strategies.
"At the moment about 90% of our electronic trading is managed by our algorithmic execution model."
Are your execution algorithms operating within a vendor or proprietary framework/platform?
Proprietary - over the past two years, alongside the development of our algorithms, we have built a fully systematic process for managing and controlling our execution risks. In this business one inevitably has situations where the actual current market position differs from the ideal position that the alpha model has prescribed.
A common way for systematic managers to deal with this problem is to throttle their alpha models to control the workflow. One way of doing this is to snapshot the market at a limited number of points throughout the day and only feed those few data points to the alpha model. However, they may not necessarily have followed the same approach when creating/simulating the model in the first place - thus a significant disconnect between simulated and real environments is created. At the same time, a huge amount of information has been discarded.
By contrast, we have designed our technology so that our core investment strategies can operate in real time and have access to all price/volatility changes throughout the day. This greater granularity has allowed us to create a concept we call "position deltas" across some 150 instruments. These deltas represent the difference between actual and ideal positions and are used by the traders as inputs when configuring execution algorithms. The net result is an efficient execution process whereby we can run a truly real time, 24 hour execution desk with fewer than ten traders, without incurring any backlog of trade tickets.
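The "position delta" idea described above can be sketched very simply. This is an illustrative example only, with invented instrument symbols and function names, not Aspect's implementation: a delta is just the gap between the alpha model's ideal position and the position actually held.

```python
# Hypothetical sketch of the "position delta" concept: for each
# instrument, delta = ideal position (from the alpha model) minus
# the position actually held in the market.

def position_deltas(ideal, actual):
    """Return {instrument: ideal - actual} for every instrument in `ideal`."""
    return {sym: ideal[sym] - actual.get(sym, 0) for sym in ideal}

ideal = {"CL": 1000, "GC": -250, "ES": 400}   # alpha model's prescribed lots
actual = {"CL": 850, "GC": -250, "ES": 475}   # positions currently held

deltas = position_deltas(ideal, actual)
# CL needs +150 lots, GC matches the ideal, ES is 75 lots over.
print(deltas)  # {'CL': 150, 'GC': 0, 'ES': -75}
```

Traders would then use such deltas as inputs when configuring execution algorithms, as the text describes.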
How much flexibility are the traders allowed when executing orders?
All signals generated by the alpha models are published as core position requests on the trading floor. However, we allow the traders a degree of freedom around these core positions, which we refer to as a "risk corridor". For example, if the ideal position in a particular security is 1000 lots, then we may allow the trader to have a position of anywhere between 1300 lots and 700 lots. That permitted corridor may or may not be symmetrical either side of the ideal position.
Intriguingly, you sometimes find that the traders who outperform are those who do less trading and are at the lower boundary of the corridor. In range bound conditions the alpha model may wish to trade more than is actually ideal for the market conditions, so the trader can actually add value by doing less.
This melds well with the algorithmic models, in that a trader might input a comfort zone within the risk corridor of +/- 5% of the ideal position and leave the model to get on with that while they can focus on something requiring more manual attention.
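The corridor logic described above, including the possibility of asymmetric bounds, can be sketched as follows. All names are hypothetical and the bounds are taken from the 700-1300 lot example in the text:

```python
# Illustrative sketch of the "risk corridor": the trader may hold any
# position between a lower and upper bound around the alpha model's
# ideal position; the bounds need not be symmetrical.

def within_corridor(position, ideal, lower_frac, upper_frac):
    """True if `position` lies inside the permitted corridor.

    lower_frac/upper_frac express the bounds as fractions of the ideal
    position, e.g. 0.7 and 1.3 for the 700-1300 lot example.
    `sorted` keeps the check correct for short (negative) ideals too.
    """
    lo, hi = sorted((ideal * lower_frac, ideal * upper_frac))
    return lo <= position <= hi

assert within_corridor(1200, 1000, 0.7, 1.3)       # inside 700-1300
assert not within_corridor(1400, 1000, 0.7, 1.3)   # above the corridor
```

A trader's "comfort zone" of +/- 5% around the ideal position would simply be a narrower pair of fractions (0.95 and 1.05) handed to the execution algorithm.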
"I've always found it rather strange that some people are prepared to delegate execution of quite substantial amounts of money to an algorithm developed by a third party"
So there is a deliberate policy of allowing traders the opportunity to add "execution alpha" within the risk corridors?
Yes, that was very much our intention, but we wanted to do this within a framework where we could have visibility and control of the risks they were taking. The results certainly appear to have justified this approach in that it has definitely allowed the traders to add alpha both manually and in their deployment of execution algorithms.
What were your initial expectations about your development of execution algorithms?
We set out with the mindset that this would very much be a learning process. Furthermore it was one in which we were determined to include the traders. For us to make any real progress it was essential that they were willing participants and did not feel that this was something that would compete with their jobs. The intention was that they should see it as something that they would control and was in their best interests to use. Every week I sit with the traders and go through all the numbers in every single market in order to better understand the reasoning behind the decisions they made. Ultimately we are trying to understand why things did or didn't go well, which may have nothing to do with the trader, but be driven by market conditions.
Overall we want to derive the greatest possible understanding of how the various factors that can affect execution quality interact. We are deliberately using this empirical route; we originally considered addressing this from a market microstructure angle using terabytes of tick data to try and ascertain how order flow would have impacted a set of prices had we been in the market at the time. However, we quickly came to the conclusion that this was not a viable approach.
What effect has your adoption of algorithmic trading had on the type of trader that you hire? Do they have very different skill sets to those you were employing five years ago?
Yes, I think there has been a change, but there is still a blend of experience and expertise on the desk. In addition to the introduction of execution algorithms our core alpha models have changed quite a lot over the past few years as well. If you look back just two or three years then we were doing ten or fifteen times more volume per million dollars of investment in the fund than we are doing now.
Furthermore, a great deal of that volume was being done through brokers and on the floor in open outcry markets. That obviously dictated that we employed the type of trader with the skills and experience to be able to handle that sort of order flow. Now we are generating the same sort of returns as then, but with an 80% lower order turnover and mostly in markets that are now electronic. That, combined with our use of algorithms, obviously dictates a very different trader skill set.
How much leeway do you give your traders as regards whether an order will be traded with an algorithm or not? And how much order flow have they elected to put through algorithms?
It is entirely their call; we don't dictate that. In the first year of operation they have chosen to use algorithms for 90% of our electronic volume. While we don't have any hard and fast targets to increase this to 100%, I think this type of execution model will expand to allow the traders to focus on a small number of high value-add trades.
Ultimately, the focus for us is to use the execution strategies to reduce our footprint in the market. Our order flow is now definitely more difficult to identify and prey upon. An important input into the algorithm design stage is that our counterparties are becoming more sophisticated at exploiting systematic order flows. So delivering an order flow that is invisible is a key requirement for us in building up capacity in the trading models we are running and also protecting the returns generated from them. Minimising predation of our order flow is absolutely critical for us.
Have you therefore always developed your own execution algorithms?
In the very early days we did briefly use some third party algorithms, but now we develop everything in-house because we want to have full control of the algorithms and be sure of how they will behave. I've always found it rather strange that some people are prepared to delegate execution of quite substantial amounts of money to an algorithm developed by a third party - particularly a third party that might have a vested interest in seeing your order flow and influencing the outcome.
Do you notice any trends as regards the markets that your traders do/don't use algorithms for?
It is generally pretty broad based, though last year there was a substantial increase in the amount of algorithmic trading we did in commodities. Our use of algorithms also isn't inextricably linked to the presence of a particular trader on the desk. Although we operate a 24 hour desk, a trader on the day shift can set up algorithms in advance so that they can run during the night should any relevant liquidity emerge.
What methods of transaction cost analysis do you use?
Our main focus is on the difference between our pre-cost and post-cost returns - ultimately that is always the bottom line for us. In 2006 we started trying to place our execution performance into some sort of context by exploring the distribution of possible outcomes during any given day or month in any particular market or group of markets.
Obviously, the fact that our traders can operate within a risk corridor means that they could choose literally thousands of different order execution paths. These could range from inactive to highly active, so we have assembled a framework that allows us to simulate the possible paths.
As part of the measurement of execution performance, we developed a new metric last year that we refer to as net trading cost (NTC), which we feel updates the traditional slippage metric. NTC consists of a combination of the commissions we pay in the market, market impact, and also the real time P & L difference between the positions our models would take in a theoretical environment and the ones we actually hold in the market. We measure that P & L difference every 30 seconds - essentially we are marking the actual and theoretical positions to market every 30 seconds.
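The NTC metric described above can be sketched as follows. This is a hedged illustration, not Aspect's actual calculation: the inputs, function name, and sample numbers are all invented, and the per-interval marks stand in for the 30-second mark-to-market snapshots mentioned in the text.

```python
# Illustrative sketch of a net-trading-cost style metric: commissions
# plus market impact plus the P&L gap between a theoretical book (the
# alpha model's prescribed positions) and the book actually held,
# both marked to market at regular (e.g. 30-second) intervals.

def net_trading_cost(commissions, market_impact, theo_marks, actual_marks):
    """Return commissions + impact + (theoretical P&L - actual P&L).

    theo_marks / actual_marks are sequences of book values sampled at
    each mark; their first and last entries bound the period's P&L.
    """
    theo_pnl = theo_marks[-1] - theo_marks[0]
    actual_pnl = actual_marks[-1] - actual_marks[0]
    return commissions + market_impact + (theo_pnl - actual_pnl)

# Theoretical book gained 10, actual book gained 8, so 2 of cost was
# incurred in execution, on top of 1.0 commissions and 0.5 impact.
cost = net_trading_cost(1.0, 0.5, [100, 105, 110], [100, 103, 108])
print(cost)  # 3.5
```

Summing the gap per interval and differencing the endpoints are equivalent here, since the per-interval P&L differences telescope to the endpoint difference.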
So do you feel that this metric is an important element in your transaction cost analysis?
It has given us the most comprehensive view of the true cost of implementing our strategies. It has also allowed us to evaluate the various execution paths and where our actual execution path ranks in the total distribution. That in itself has proved valuable.
However, I don't think we are likely to add a raft of further metrics on the back of this. I've always been instinctively wary of introducing multiple benchmarks, having seen the effect this has had, for instance, in the equity markets, where you can pretty much find any metric to justify whether an order execution was good or bad. Therefore I think the focus for us in the near term will remain on narrowing the difference between simulated and actual performance.
Has algorithmic trading tangibly benefited performance?
NTC for our largest programme (Aspect Diversified) was in aggregate fractionally over 80 basis points last year. That is an order of magnitude improvement on where we were before we implemented algorithmic trading models. Though we weren't using NTC to measure execution performance in 2003/2004, I would estimate that it was then probably the equivalent of approximately 400 basis points. This improvement obviously has a directly linear impact on the performance we can generate for our investors. We believe we can reduce this even further over the coming months as we learn more about areas such as the relationship between our trading signals and imbalances in market liquidity.
Have you benefited in any particular markets through your use of algorithms?
Obviously, we have seen all round benefits, but I would say commodities have stood out over the past year in terms of reduced trading costs. That has been partly due to the fact that some commodities are trading side by side on the floor and electronically, which has thrown up opportunities for price improvement.
In addition, we built some liquidity-seeking algorithms midway through last year that allow us to trade instantly once our size and price conditions have been met. I think these algorithms have given us a real edge in commodity markets over the last twelve months because we have been able to respond immediately while other participants have been looking at the floor price and the screen price and taking time to decide which price to trade.
Would you say that your work on algorithms has opened up new markets for you that were previously non-viable?
No, but I would say that it has radically reduced our transaction costs in certain historically high cost markets. For us the key question regarding market viability is the size of the core position we can open and hold in a market. If it is not big enough to be worth our while (perhaps because an exchange has low maximum position limits for a particular tradable) then using algorithms won't change that situation.
How do you discriminate between the effect of the trader/algorithm's activities in the risk corridor and other factors such as slippage?
When a trader makes the decision that they wish to close a position delta, that decision point marks the end of us measuring their P & L in terms of the risk corridor. The loss or profit we make from then on is attributed to slippage.
While our simulation environment does contain an assumption about slippage, it doesn't make any specific assumptions about how the traders are going to manage their delta P & L. That is something we measure completely independently, so we know on a minute by minute basis whether the traders are making or losing money on their delta book relative to the baseline book, which assumes that we execute everything every 30 seconds.
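The attribution described above can be sketched as a simple comparison of two mark-to-market series. This is an invented illustration, not Aspect's system: the trader's delta book is measured against a baseline book that is assumed to execute everything at every mark.

```python
# Illustrative sketch: per-interval P&L of the trader's delta book
# minus the per-interval P&L of a baseline book that executes every
# signal at each mark. Positive entries mean the trader added value
# in that interval relative to the baseline.

def delta_book_attribution(trader_marks, baseline_marks):
    """Return the per-interval P&L difference (trader minus baseline)."""
    trader_pnl = [b - a for a, b in zip(trader_marks, trader_marks[1:])]
    base_pnl = [b - a for a, b in zip(baseline_marks, baseline_marks[1:])]
    return [t - b for t, b in zip(trader_pnl, base_pnl)]

# Trader gained 2 then lost 1; baseline gained 1 then gained 2:
# the trader added 1 in the first interval, gave back 3 in the second.
print(delta_book_attribution([100, 102, 101], [100, 101, 103]))  # [1, -3]
```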
Are your traders multi-disciplinary or do they specialise?
They are specialised, though for reasons of operational risk everybody is capable of trading everything. The desk is organised so that we have two people focusing on commodities, two people that alternate on the Asian shift, one person focused on cash equities (we have two cash equity models) and two on fixed-income and foreign exchange.
How do you model your traders' performance?
Slowly - it is an area where it is dangerous to come to quick conclusions, because there are a lot of variables that will contribute to whether a particular trader appears to be performing well or badly. These can be very simple things like whether we were trying to buy into a rising or falling market. Or how was the alpha model trying to build the position? Sometimes it attempts to build a position quickly and sometimes very slowly. What was volatility doing? And so on.
How reactive are the execution corridors you give the traders?
The corridors are set on an instrument-by-instrument basis not on a trader-by-trader basis. The shape of these corridors is not always symmetrical throughout the day. We're certainly looking at exploiting the opportunities of the risk corridor more aggressively this year than last. Last year we started with a fixed corridor that was similar in every market, which was very conservative. We did a lot of research on how wide we could make the corridor and in practice we made it only about a tenth of the width that was theoretically safe. This year we have already become more sophisticated about how we leverage this capability.
How much technology overlap do you have between the development of automated and algorithmic models?
They are completely separate platforms in terms of simulation, as they are intended to do very different things. For example, our alpha strategies are medium- to long-term so we don't need to worry about how we deal with tick data - there is clearly no point building a system that can handle tick data if you don't need it. By contrast that functionality is needed for the development of execution algorithms.
Do you have separate teams of quants working on alpha and execution models?
Yes, but there is quite a lot of interaction between the two groups. We have a team of three people mostly focused on the execution algorithms and around twenty working on alpha models. I think that ratio will probably remain similar going forward because the bulk of our investment in research and development will be on the alpha side.
Given the types of strategies we are running at the moment, the level of investment we have on the algorithmic side is appropriate, but should that change and we move into using higher frequency alpha models then we would revisit that. Given the trading frequency and timeframe of our alpha models I think the balance is about right at the moment.
Both teams sit on the same floor and there is good communication between them and also between them and the trading floor. Therefore an idea that crops up in one area might actually end up being used somewhere else.
So, for example, a weak-form model deemed inadequate for production use as an alpha model might end up as a component in an execution algorithm?
It could do. I wouldn't rule it out, though it hasn't happened yet. From our perspective it is perhaps less likely than for some other managers, as the two disciplines are very separate and we aren't working on short-term trading models from the alpha perspective.
Finally, are there any approaches that have particular appeal at present as regards execution algorithms?
No - we are now at the stage where we have a year's worth of execution data and are examining how the various algorithms behaved to see which ones have performed well and which haven't in the context of the environment at the time of each trade. Therefore it is too early to say which approaches seem more promising than others.