Proprietary algorithms are not enough. Automated trading in general, and high-frequency trading in particular, requires an additional combination of three basic foundation points: very high speed; technical reliability; and an ability to capture, store, retrieve, process and analyse enormous volumes of trade and price data. Any weakness or failure in data feeds, comms, applications or processes will quickly relegate a front office to 'infrequent trader' status.
Such a slow-reacting trader will not just be losing out to faster competition. Whether the driver is enhanced technology or traders' need for speed, or a spiral of both, the trading world itself is accelerating. For example, we have seen how exchanges have responded to the demands of their traders by introducing new levels of automation, low latency connectivity options, and higher volume market data services. In Europe, the new MTFs have been setting the pace, with Chi-X for example offering roundtrip latency of as little as 400 microseconds in co-located data centres. This compares with the 6 milliseconds and above historically delivered by the incumbent European exchanges.
Traders want speed, and the providers have the technology to give it to them. But if the new reality is that we're all going faster (or we're out of the game whether or not we realise it), what does that do to the search for a competitive edge? Answer: it complicates it. You're still looking for an edge, but you're doing so in an environment where anything that might be classed 'uncompetitive' has doubled its potential impact. To use a racing analogy: you're not on bicycles any more; this is Formula One. You may still be looking for the gap through which you can squeeze past the competition. But now, you really, really do need to be sure that a tyre isn't going to burst at a critical moment.
Translate that back into trading language and you get an upgrade to the old principle that a significant part of any competitive edge is your own efficiency in exploiting it. But not just that: the more sophisticated the trading architecture, the wider the range of potential impacts to which it might be vulnerable. Even the smallest knocks can have a disproportionate effect. A misplaced bug on the windscreen, obscuring the driver's vision at a critical moment, could have done for that racing car. But if you'd been on a bicycle, you might not even have noticed it.
If trading desks are moving faster and the trading venues are keeping pace, it follows that the potential 'missing link', to which we should turn our attention, is the back office. The challenge companies face is how to monitor and manage performance information so they can be sure their automation is working effectively. Higher levels of flow are the route to success, but for some institutions, the need to manage and co-ordinate increasing speed and volumes has created new problems with system failures and delays in support.
After all, it is no good generating flows if your back office, compliance or risk management can't keep up or if you have to impose system limitations which restrict flow. And if you can't reliably handle the business you are winning it will quickly affect reputation, client retention and long-term profitability.
That's the problem. What's the solution?
The innovative response to the pressures created by automated trading is to implement automated support. This is not a new idea, nor is it simple; indeed, one of the lessons of its application in other fields is that it must be scalable to both complexity and architecture distribution. It must also be flexible and extensible.
Previous experience with automation tells us that it will exert its own pressure to automate even more functions throughout the trade lifecycle. Companies are thus being forced to re-engineer applications to take advantage of higher-speed processors and to redeploy their trade execution infrastructure to low-latency co-location data centres - with the obvious downside that operator intervention is harder to manage on an ad hoc basis. Such re-engineering is unlikely to be a once-only event.
The challenges of keeping flows going are thus truly formidable when combined with the complexities of co-location and remote hosting. These are, of course, increasing in popularity due to latency reductions and the competitive edge they provide. There is, in effect, a commercial imperative whereby organisations find themselves installing critical systems off site, in a remote location, to which support teams have no physical access - other than via the co-location provider.
Yes, co-location and remote hosting are well proven in reducing latency, and yes, they are great for generating flow and profit. But both present significant - and substantially new - challenges in terms of support, problem resolution and fixing failures. An institution's ability to do all these effectively is restricted by the limitations of the comms, and can, in some instances, be likened to trying to resolve a problem in the comms room through a letterbox.
It's almost as if speed engenders interdependency: the components to be monitored/supported are not just distributed, some of them belong to other parties. But consider also the positive side of this. A support system that, as it were, checks the track as well as the train (or indeed the racing car) can supply a much higher comfort level than your basic workshop mechanic. If, that is, it can be made to operate effectively.
As this account might suggest, much of the work in automated support is based on the monitoring of incoming data and predictive analysis as well as the management of systems. Indeed, one of the major steps currently being undertaken by many tier-one institutions is the implementation and utilisation of a combination of automated and predictive support. This is proving to be a real breakthrough in removing the major headaches of process failures, system breaches and slowed applications.
A key distinction here is between reactive and predictive support. The concept of automated support and predictive analysis has developed out of the need to monitor massive amounts of data, thousands of applications and feeds across multiple geographic locations and do it all in real time whilst proving to the regulators that you are in control of your business. Humans are not best suited to such tasks, which is why so much support activity is reactive only, responding to an event or a breakdown which has already occurred rather than in anticipation of it.
The goal of automated support is to fix or avoid problems before they impact on the business. More to the point, the case for automated support is precisely that ability to fix a problem before it has its impact. Crudely, with automated support, you don't need the impact to tell you that there's a problem.
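To make the reactive/predictive distinction concrete, here is a minimal sketch in Python. All names, thresholds and window sizes are illustrative assumptions, not any vendor's actual implementation: a reactive monitor fires only once a capacity limit is breached, whereas this version extrapolates the recent trend in a metric and warns while there is still headroom.

```python
from collections import deque

CAPACITY = 1000  # hypothetical message-rate limit (messages/sec)

class PredictiveMonitor:
    """Contrast reactive alerting (after a breach) with predictive alerting
    (projecting the recent trend forward before the breach occurs)."""

    def __init__(self, window=5, horizon=3):
        self.samples = deque(maxlen=window)
        self.horizon = horizon  # how many intervals ahead to project

    def observe(self, rate):
        self.samples.append(rate)
        if rate >= CAPACITY:
            return "REACTIVE: capacity already breached"
        if len(self.samples) >= 2:
            # simple linear trend over the observation window
            slope = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
            if rate + slope * self.horizon >= CAPACITY:
                return "PREDICTIVE: breach expected soon"
        return "OK"

monitor = PredictiveMonitor()
for rate in [600, 700, 800, 900]:
    status = monitor.observe(rate)
print(status)  # the rising trend triggers a warning before 1000 is ever hit
```

The point is that the warning arrives while the rate is still at 900: the impact has not yet happened, yet the problem is already visible.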
The regulatory angle
The limited access to systems imposed by co-location and remote hosting can create a serious potential barrier to meeting regulatory requirements. Indeed, one of the requirements of MiFID is that an organisation must be able to demonstrate that it is in control of its infrastructure and applications. In general, tougher system controls are being introduced as part of the new rules designed to enhance firms' liquidity risk management practices. But without the right tools, these are virtually impossible to achieve.
So the question arises in a regulatory context as well: how best to maintain control, keep the flow going and support the bottom line? As a first step, obviously, institutions need the right organisational infrastructure to be able to support their services adequately. But that alone is insufficient, because you also need to be able to see what's actually going on. Given that automation is likely to be more effective than human operators in overcoming those colo/remote barriers, this is another argument for automated support.
Visibility, measurement, monitoring, predictive analysis and, ultimately, artificial intelligence to enable the institution to take the appropriate action when things start to go wrong, are all essential to keeping the business running and profitable.
So how's it done?
One of the most interesting aspects of automated support is the analytical side, having the tools to track and analyse recent history, perhaps going back 2 or 3 days, and, from that, being able to identify the peaks and troughs in capacity requirements. This type of analysis allows an organisation to predict the times of day when capacity may be reached, thereby giving time to add extra processing power, for example, and thus preventing a breakdown or a bottleneck.
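A sketch of that kind of analysis, with purely hypothetical figures and function names: given two or three days of timestamped load observations, group them by hour of day and flag the hours whose historical average approaches a capacity limit, so that extra processing power can be provisioned before the bottleneck occurs.

```python
from collections import defaultdict

def peak_hours(samples, capacity, margin=0.9):
    """Return the hours of day whose average historical load exceeds
    margin * capacity - i.e. the windows where a breach is likely.

    `samples` is a list of (hour_of_day, load) observations gathered
    over the last two or three days; all values here are illustrative."""
    by_hour = defaultdict(list)
    for hour, load in samples:
        by_hour[hour].append(load)
    return sorted(h for h, loads in by_hour.items()
                  if sum(loads) / len(loads) >= margin * capacity)

# Three days of (hour, load) observations around the market open
history = [(7, 400), (8, 950), (9, 980), (10, 600),
           (7, 420), (8, 970), (9, 940), (10, 580),
           (7, 390), (8, 990), (9, 960), (10, 610)]
print(peak_hours(history, capacity=1000))  # → [8, 9]
```

In practice the history would come from the monitoring platform itself, but the principle is the same: recent history, aggregated by time of day, tells you where tomorrow's bottleneck will be.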
In the short term, lacking this ability could mean that fill rates drop and profitability suffers. In the medium term, it not only has the potential to create client retention problems but also increases reputational risk and the possibility of failing to meet regulatory requirements. Another significant benefit of this application of automated support is to allow internal resources to be fully directed towards generating new business and increasing revenues, rather than playing catch-up. Late diagnosis of problems is like failing to find a broken water pipe - even a tiny hole means the water gets dissipated elsewhere.
With some exchanges experiencing relatively frequent problems it is important to have the option of switching to alternative trading venues as necessary. Two things are needed to be able to do this. The first is to have prior warning of a problem. Crucially, this should be combined with the knowledge that the problem is with an exchange (or another trading venue) - that is to say, the problem is external and not within the organisation itself.
The second is to have the right procedures and processes in place to be able to take advantage of the 'early warning system'. For example, if the organisation's price discovery is on an exchange that has gone down, then a switch to another execution venue may not be possible. We may not have seen many market participants dynamically switching venues as yet, but I believe that this will become an increasingly frequent occurrence in the future.
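The decision logic described above can be sketched as follows. The venue names and health-check structure are hypothetical; the point is that a switch away from a failed primary venue is only sensible if a healthy alternative exists and price discovery does not itself depend on the venue that has gone down.

```python
def choose_venue(primary, venues, price_discovery_venue):
    """Pick an execution venue given an early warning.

    `venues` maps venue name -> {"healthy": bool} (illustrative schema).
    Returns the venue to trade on, or None if no safe switch is possible."""
    if venues[primary]["healthy"]:
        return primary
    if primary == price_discovery_venue:
        # price discovery sits on the failed venue: switching may not be possible
        return None
    for name, state in venues.items():
        if name != primary and state["healthy"]:
            return name
    return None

venues = {"ExchangeA": {"healthy": False}, "MTF-B": {"healthy": True}}
print(choose_venue("ExchangeA", venues, price_discovery_venue="MTF-B"))  # → MTF-B
```

A real implementation would of course weigh liquidity, fees and latency as well as health, but even this crude sketch shows why the early warning and the procedures to act on it must exist together.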
The next step is that configuration management systems need to be set up in such a way that they can be easily adapted to new rules, new assets, et cetera, while still giving sufficient manoeuvrability to maintain competitiveness.
Predictive automated support also adds value. Consider this example. Towards the end of last year, one of our clients' monitoring systems alerted them to a disruption in the flow across applications and infrastructure. Their predictive analysis technology warned that a major exchange was experiencing serious problems. They were able to inform their clients in turn about an impending problem with that exchange, giving them a very significant time advantage compared to other institutions.
In this case, monitoring a single application would not have resulted in predicting the problem. The proactive alert was the result of the automated, intelligent monitoring and analysis of three separate applications that resulted in the correct conclusion that the potential problem was an external venue problem and not an internal issue. This level of response is beginning to approach true artificial intelligence and it is highly unlikely that human monitoring would have produced such a significant result.
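The correlation step can be sketched crudely as below. The application and venue names are invented for illustration; the logic mirrors the reasoning in the example: if several independent applications degrade at once and all face the same external venue, while applications on other venues stay healthy, the likely cause is that venue rather than internal infrastructure.

```python
def classify_fault(degraded_apps, venue_of):
    """Infer whether a fault is external or internal from correlated alerts.

    `degraded_apps` is the set of applications currently raising alerts;
    `venue_of` maps each application to the external venue it connects to.
    Returns ("external", venue) or ("internal", None). Illustrative only."""
    venues = {venue_of[app] for app in degraded_apps}
    if len(degraded_apps) >= 2 and len(venues) == 1:
        # multiple independent apps, one shared venue: blame the venue
        return ("external", venues.pop())
    return ("internal", None)

venue_of = {"pricing": "ExchangeX", "order-routing": "ExchangeX",
            "market-data": "ExchangeX", "risk": "ExchangeY"}
# Three independent applications degrade at once, all facing ExchangeX:
print(classify_fault({"pricing", "order-routing", "market-data"}, venue_of))
# → ('external', 'ExchangeX')
```

No single-application monitor could reach that conclusion; it falls out only from looking across the three alerts together, which is precisely the point of the example above.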
Given the tightening of regulations, lower margins and the pressure to produce profits, organisations can't afford glitches and system problems to eat into their profits and, potentially, cause loss of clients, so maintaining control is of supreme importance. Automated support can be put to good use here, helping to find and fix the weakest links in all parts of the organisation's systems and processes.
Tomorrow's technology today?
Technically, the market is moving from adding on more and more applications to fewer applications but with multiple parameters. This move away from existing 'legacy' systems will not necessarily make support any less challenging, but it will change the emphasis. We believe that automated support using artificial intelligence and predictive analysis will be as transformational as the market data revolution of the 90s. Given that the end game is to generate greater flow and higher fill rates based on efficiency, capacity and speed throughout the production cycle, then surely, automated trading requires automated support.
As Deputy CEO, Kevin Covington is responsible for all aspects of ITRS business in Europe, Asia and North America. Kevin has worked in senior management roles in banks and vendors specialised in creating innovative solutions for complex problems. Prior to joining ITRS Kevin was Head of Strategy, Portfolio and Propositions for BT Global Financial Services where he was one of the creators of the concept of proximity hosting. Kevin's early career was in research and development in the defence industry. He holds an MBA in Business Development and Change.