Parallel Perfection

Published in Automated Trader Magazine Issue 23 Q4 2011

For brokers servicing HFT clients and for direct members in the HFT business, life doesn't get any easier. More markets, more competition, more regulation and the most demanding of end users mean pressure to perform is intense - particularly as regards risk. Regulation and self-preservation make risk checks mandatory, but end users dislike the latency they add. Matt Dangerfield, Director of Trading Solutions at Fixnetix, outlines the optimum way to square this circle.

Matt Dangerfield

Managing risk has always been an integral part of the trading process, but recent years have seen radical changes to the risk landscape throw up multiple and rapidly changing demands. Apart from the overarching need to protect the integrity of the global financial marketplace, brokers and direct members face a raft of other risk management demands, including regulation (such as SEC Rule 15c3-5), conflicting order instructions, 'runaways' and the management of restricted/borrow lists.

Apart from the initial costs of compliance/management, these demands raise a number of other issues. For those in the HFT business, latency comes top of the list; when competitive edge is measured in tiny fractions of a second, no trader wants latency caused by risk checks to impact their model's performance. Another issue is ongoing cost; recent events make it inevitable that financial regulation will continue to increase and evolve, which means that those using in-house technology for risk checking will find continuous reinvestment and reengineering are equally inevitable. As a result, any broker or direct member planning its future strategy for managing trading risk needs to consider several key areas.


One of the most striking aspects of the way in which the trading landscape (and especially HFT) has changed recently is how quickly the limits of conventional CPU-based technology have been reached, with pre-trade risk checks a classic example. Performance that was seen as outstanding perhaps only a year ago is now regarded as unacceptable. Today, even the fastest CPU-based risk check solutions will struggle to attain a wire-to-wire time below 20 microseconds. Furthermore, achieving times even close to this level typically requires extensive specialised tweaking, such as OS kernel bypasses or individually customised hardware. Even the leanest code in compiled languages such as C or C++ runs up against the inherent limitations of a conventional software-plus-CPU architecture.

The other major shortcoming of a CPU-based approach to risk checking is that it attempts to apply a serial solution to a parallel problem. Risk checking millions of trades per day often necessitates simultaneously (or near simultaneously) processing a very large number of relatively simple Boolean tasks, which, even when using multiple multi-core CPUs, is an almost perfect example of the wrong problem for the technology. The scalability of a CPU-based solution for this type of problem is also intrinsically poor, making bottlenecks during 'bursty' market conditions an ever-present concern. Furthermore, as transaction volumes continue to rise, the incremental cost/benefit ratio of CPU-based risk checking solutions will continue to decline rapidly.
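The independence of these Boolean tasks is the crux of the argument, and can be illustrated with a minimal sketch. The order fields, limit values and check functions below are hypothetical, invented purely for illustration; the point is that no check depends on any other's result, so in principle all of them can be evaluated at the same time — which is what parallel hardware does natively, and what a serial CPU cannot.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical order and limit set -- illustrative values only.
order = {"symbol": "XYZ", "side": "BUY", "qty": 5_000, "price": 101.25}
limits = {
    "max_order_qty": 10_000,       # fat-finger quantity cap
    "max_notional": 1_000_000.0,   # per-order notional cap
    "restricted": {"ABC", "DEF"},  # restricted/borrow list
}

# Each pre-trade check is an independent Boolean predicate on the order.
checks = [
    lambda o: o["qty"] <= limits["max_order_qty"],
    lambda o: o["qty"] * o["price"] <= limits["max_notional"],
    lambda o: o["symbol"] not in limits["restricted"],
]

def risk_check(o):
    # No predicate reads another's result, so they can all run concurrently;
    # hardware with real parallelism evaluates them in the same clock cycles.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda check: check(o), checks))
    return all(results)

print(risk_check(order))  # order inside every limit -> True
```

On a CPU the thread pool above is mostly overhead for work this small; the sketch exists to show the shape of the problem, not to recommend threading trivial predicates.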

Figure 1: iX-eCute processing timeline

By contrast, a hardware-based solution that uses inherently parallel technology is ideally suited to the type of problem risk checking represents. Low-level hardware access, combined with the ability to process a vast number of simultaneous (or near simultaneous) transactions, means that not only is single-transaction speed far faster, but the process scales up with minimal impact on performance and stability.

For example, Fixnetix's iX-eCute risk assessment system is based upon FPGA technology using Xilinx Virtex-6 chips, with the code written in hardware description languages such as Verilog and VHDL. As a result, it is able to make the best possible use of high-speed message protocols such as NASDAQ's OUCH, delivering OUCH in/out times of 740 nanoseconds for a single transaction, including symbology look-up. Furthermore, the parallelism of the technology means scalability is not an issue; processing one million transactions as opposed to one would typically increase the in/out time by around one ten-thousandth of a nanosecond.
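A back-of-envelope comparison of the two figures quoted so far — a roughly 20-microsecond wire-to-wire time for a fast CPU-based check versus 740 nanoseconds through the FPGA — gives a sense of the scale of the gap. This is a simple worked calculation from the numbers in the text, not a benchmark:

```python
cpu_ns = 20_000   # ~20 microseconds wire-to-wire, fast CPU-based check (figure from text)
fpga_ns = 740     # 740 ns OUCH in/out, including symbology look-up (figure from text)

speedup = cpu_ns / fpga_ns
print(f"Per-transaction speedup: ~{speedup:.0f}x")

# Even treating each device as purely serial (the FPGA is not), the implied
# ceiling on risk-checked transactions per second differs by the same factor:
print(f"CPU ceiling:  {1e9 / cpu_ns:,.0f} transactions/s")
print(f"FPGA ceiling: {1e9 / fpga_ns:,.0f} transactions/s")
```

The real gap is larger still, since the serial-ceiling framing above understates a device whose latency barely moves as transaction counts grow.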


Present performance is clearly important, but so also is future extensibility in order to allow for increasing regulation, transaction flows and additional trading venues. Dealing with this requires both flexible technology and strategy. On the technology side, FPGA-based solutions again score highly: their high density and low power consumption mean even a major step change in transaction processing requirements can be accommodated simply by adding a single FPGA card to an existing chassis. Contrast this with a CPU-based solution, where handling the same change would probably necessitate adding at least one further rack (plus additional network and telco plumbing) to a solution that already requires multiple racks.

iX-eCute exemplifies the compact advantages of an FPGA-based solution outlined above, to the extent that even large brokerage operations can accommodate all their market connectivity requirements (for both HFT and click trading activity) in a single 5U chassis. The entry-level version of the solution contains six FPGA cards, with each card capable of supporting four 10 gigabit Ethernet ports, each of which can service multiple different physical connections, with up to 32 logical sessions per connection. However, within the same single 5U chassis, it is also possible to upgrade to an eight or 16 card solution, thereby delivering a commensurate increase in ports, connections and logical sessions.
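Taking the figures above at face value, the minimum session capacity of each chassis configuration is straightforward to work out. The assumption of one physical connection per port is mine — the text says each port can service multiple connections — so these are conservative lower bounds:

```python
def min_sessions(cards, ports_per_card=4, sessions_per_connection=32,
                 connections_per_port=1):
    # connections_per_port=1 is a deliberately conservative assumption:
    # each 10GbE port can in fact service multiple physical connections.
    return cards * ports_per_card * connections_per_port * sessions_per_connection

for cards in (6, 8, 16):  # entry-level and upgraded chassis configurations
    print(f"{cards} cards -> at least {min_sessions(cards)} logical sessions")
```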

While FPGAs can deliver the requisite technology flexibility, that still leaves the question of strategy. In a major broking firm, the organisational challenges of producing and continually upgrading an FPGA risk checking solution in a timely manner in-house are substantial. Apart from the specialised programming expertise required, internal cost structures and hardware delivery timescales are also likely to be a significant barrier. By contrast, an outsourced solution from a dedicated specialist faces none of these obstacles. Ongoing development and rapid deployment are assured - for example, a fundamental change to iX-eCute's code base necessary to respond to an exchange upgrade typically takes less than three days. Finally, the administrative and legal/contractual overheads of dealing with multiple telcos and other third-party providers no longer apply.


It is important to emphasise that outsourcing risk checking technology does not automatically imply a loss of control. While all the headaches associated with building and maintaining a solution may disappear, it is still perfectly possible for an outsourcing organisation to control its day-to-day trading risk, even at the most granular level. For instance, Fixnetix delivers this via a real-time command-and-control application called iX-Eye, which uses push technology and a lightweight programming language. There is therefore no need to install any software locally; everything can be controlled remotely via a supported browser.

All transactions flowing across iX-eCute can be monitored and controlled using iX-Eye. This includes changes to limits and restricted/borrow lists, as well as cancellation of single/client-specific/all trades. In addition, because iX-Eye connects to various ISV platforms, these changes can also be automatically propagated to any connected execution management systems. Furthermore, iX-Eye can implement any changes across multiple protocols, be they FIX or native.
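The propagation pattern just described — one operator command applied across FIX and native protocol sessions and mirrored to connected EMS platforms — amounts to a command fan-out. The class and method names below are hypothetical, invented for illustration, and are not the iX-Eye API:

```python
class ProtocolAdapter:
    """Hypothetical adapter: turns a limit change into one protocol's wire format."""
    def __init__(self, name):
        self.name = name
        self.applied = []  # record of changes, standing in for sent messages

    def apply_limit(self, symbol, max_qty):
        # A real adapter would encode and transmit a protocol-specific message.
        self.applied.append((symbol, max_qty))

class CommandFanOut:
    """One command is pushed to every registered session or downstream system."""
    def __init__(self, adapters):
        self.adapters = adapters

    def set_limit(self, symbol, max_qty):
        for adapter in self.adapters:  # FIX, native, EMS bridges...
            adapter.apply_limit(symbol, max_qty)

sessions = [ProtocolAdapter("FIX"), ProtocolAdapter("OUCH"), ProtocolAdapter("EMS")]
control = CommandFanOut(sessions)
control.set_limit("XYZ", 10_000)  # one change, propagated everywhere at once
print([session.applied for session in sessions])
```

The design point is that the operator issues the change once and never addresses individual protocols, which is what keeps limits consistent across heterogeneous connections.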


The instinctive reaction in many organisations when contemplating outsourcing is: "We can do it cheaper ourselves". However, this assertion rests on the misconception that only a single static technology product is involved, when nothing could be further from the truth - especially in the case of risk checks. A CPU-based solution will involve frequent hardware upgrades in addition to the upfront cost, and each upgrade carries further costs for larger co-location facilities, telco infrastructure, power consumption and so on. Developing an FPGA solution in-house raises issues around internal technology policy compliance, plus the costs of recruiting and managing a specialised programming team. Then there are all the frictional costs associated with internal sourcing; the cost of procuring the hardware alone by this route may be both prohibitive and unavoidable.

Then there are the cost issues associated with managing multiple third-party vendor contractual relationships (which obviously do not apply with a suitable outsourcing agreement), plus all the costs of ongoing maintenance and development. A further cost is making and maintaining data and trading connectivity to multiple markets and tools, which is also reduced when using a specialised outsourcing provider, thanks to economies of scale. For example, Fixnetix at present provides connectivity to 78 markets, five forms of market data and multiple risk control products. Finally, there are the "risk costs" associated with the in-house development and maintenance of risk checking technology. These are particularly acute in an environment where exchanges are making frequent changes to their technology that are not always fully or punctually documented. Particularly where new venues are being added, in-house developers who may not have access to the exchange's user acceptance testing environment have to code using just documentation and canned data. This often leads to cost overruns: the technology is built and deployed, fails, is rewritten and redeployed, fails again, and so on.

Contrast this with a specialist outsourcing provider, which not only assumes those risks but also minimises them. For example, because Fixnetix is already co-located with all major trading venues, all its developers have access to those venues' user acceptance testing environments, and are therefore always coding against current market reality with live data rather than a hypothetical alternative. Once all the personnel, administrative, hardware and risk costs have been factored in, the right outsourced solution will invariably be far less expensive than the in-house alternative. Combine this with the fact that outsourcing also frees both management and staff to focus on the core business, and the strong value add becomes even more obvious.

Conclusion: the extra edge

While the argument in favour of an outsourced FPGA solution for risk checking may be compelling, in a ferociously competitive market even more is required. Generic coding against an FPGA may outperform CPU-based solutions, but how can an FPGA solution stand out from its peers? The answer lies in having a deeper understanding of the specific FPGA's design architecture and exploiting this to gain a performance edge. Gaining this understanding, as iX-eCute's development team have done, requires frequent and in-depth dialogue with chip engineers and manufacturers. As a result, iX-eCute is always able to use the fastest and most efficient path across the chip, which is one of the factors giving it a 4x speed advantage over other FPGA-based solutions.

iX-eCute's performance edge would be valuable in any environment, but when it comes to one as demanding and unpredictable as risk control it is vital. When multiple factors, such as regulation, trading venue technology and market conditions, can suddenly and unexpectedly change, being behind the curve is not an option.

iX-eCute - the perfect solution to an imperfect problem.