
Games programmers play

Published in Automated Trader Magazine Issue 13 Q2 2009

One day, memories will be made of this. David Yip investigates the use of gaming GPUs to replace today's CPUs for high-speed processing.

Here's an idea. You can use the computing power of a Graphics Processing Unit (GPU) to speed up software applications by fifty times, at low cost. Unlike CPUs, which are fast approaching their physical limitations, GPUs have been advancing year on year, driven by the ever-increasing demands of the gaming industry (faster frame rates, more realistic images, and so on), to the point where they can deliver far more than the graphics computations for which they were designed.

They are also cheap. A computational GPU processor will cost around £1,000 and provide up to 1 teraflop of performance. You can't get that from a traditional CPU without spending serious money. Not surprisingly, there has been a lot of research into how GPUs can replace CPUs, and one significant switchover has been in high-performance computing (HPC, or 'supercomputer') environments. Incorporating GPUs into the HPC system environment delivers a unique opportunity: lower costs and higher performance from a commodity product.

With data that naturally lends itself to GPU processing, finance houses are already investigating GPU processors and, in fact, proactively approaching HPC system integrators such as OCF (www.ocf.co.uk) to run tests.

Early results are positive in terms of system integration, but potential problems arise at the point of putting the GPU to work. Evolving best practice is, not surprisingly, to take a step-by-step approach, as follows.

• Start with a single off-the-shelf GPU and test it rigorously. Move on to a professional computational GPU provided by an integrator such as OCF. For more performance, build a full cluster of GPUs (an HPC system), ideally in partnership with an HPC system integrator.

• Find applications that can use the GPU as an accelerator. A growing number of software vendors have started to write extensions or plug-ins so that their existing applications can take advantage of GPUs where possible. Examples already on the market include MATLAB; the accelerator-library sketch after this list shows the same idea at code level.

• Be prepared to fine-tune applications. GPUs are not designed for general use, so you won't be able to run a Microsoft Excel spreadsheet straight off, for example. Without extensions or plug-ins, most organisations will need to fine-tune or, more likely, re-architect existing software applications to make use of GPUs.

• Consider APIs. Graphics card manufacturers provide free Application Programming Interfaces (APIs) designed to let you integrate software application code with GPUs; the kernel sketch after this list shows what that looks like in practice.

• Remember memory. You don't want to spend your time shuffling data between slow and fast memory: arrange application data in a GPU-friendly layout so that memory accesses coalesce and the data can live in the GPU's faster memory from the outset (see the data-layout sketch after this list).

• Expect to restructure data. In the future, as organisations move from single GPUs to regular use of GPU clusters within HPC systems, they will need to restructure data: a 'coarse grain' MPI-style decomposition spreads the work across the cluster nodes, and a 'fine grain' decomposition then maps each node's share onto the many threads of its GPU (see the cluster sketch after this list).

• Keep it busy. The processor simply will not deliver its best performance if it is left idle for periods of time; the streams sketch after this list shows one common way to overlap data transfers with computation so the GPU is kept fed with work.
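
Accelerator-library sketch. As a hypothetical illustration of the 'GPU as accelerator' point, the fragment below hands a standard matrix multiplication to NVIDIA's cuBLAS library: the application keeps its own structure and writes no GPU code of its own, which is broadly what vendor extensions such as MATLAB's do behind the scenes. The matrix sizes and variable names are purely illustrative, and error checking is omitted for brevity.

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main()
{
    const int n = 512;                                 // n x n matrices
    const size_t bytes = n * n * sizeof(float);
    std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f), C(n * n, 0.0f);

    // Copy the inputs to GPU memory.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, A.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), bytes, cudaMemcpyHostToDevice);

    // The library call does all the GPU work: C = alpha * A * B + beta * C.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(C.data(), dC, bytes, cudaMemcpyDeviceToHost);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}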
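
Kernel sketch. The fragment below is a minimal illustration of how application code talks to a GPU through NVIDIA's freely available CUDA toolkit, one example of the manufacturer APIs mentioned above. The calculation (scaling an array of floats) and all of the names are invented for illustration; a real trading application would do something more useful per element.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Each GPU thread scales one element of the array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                       // one million floats
    const size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes);      // host (CPU) copy
    for (int i = 0; i < n; ++i)
        h_data[i] = 1.0f;

    float *d_data;                               // device (GPU) copy
    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    printf("first element after scaling: %f\n", h_data[0]);

    cudaFree(d_data);
    free(h_data);
    return 0;
}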
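
Data-layout sketch. The 'remember memory' advice is largely about layout. Below, the same per-tick calculation is written twice: with an array-of-structs layout, neighbouring GPU threads read addresses that are spread apart, whereas a struct-of-arrays layout lets neighbouring threads read neighbouring floats, which the hardware coalesces into far fewer memory transactions. The Tick structure and its fields are invented for illustration.

#include <cuda_runtime.h>

struct Tick { float bid; float ask; float volume; };        // array-of-structs (AoS)

struct TickSoA { float *bid; float *ask; float *volume; };  // struct-of-arrays (SoA)

// Uncoalesced: thread i reads ticks[i].bid, so neighbouring threads in a
// warp touch addresses 12 bytes apart.
__global__ void midpriceAoS(const Tick *ticks, float *mid, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        mid[i] = 0.5f * (ticks[i].bid + ticks[i].ask);
}

// Coalesced: neighbouring threads read neighbouring floats from bid[] and
// ask[], so each warp's loads collapse into a few wide memory transactions.
__global__ void midpriceSoA(TickSoA ticks, float *mid, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        mid[i] = 0.5f * (ticks.bid[i] + ticks.ask[i]);
}

int main()
{
    const int n = 1 << 20;
    const int threads = 256, blocks = (n + threads - 1) / threads;

    // Device-side buffers for both layouts (left uninitialised: the point
    // here is the access pattern, not the numerical result).
    Tick *dTicks;  cudaMalloc((void **)&dTicks, n * sizeof(Tick));
    float *dMid;   cudaMalloc((void **)&dMid, n * sizeof(float));
    TickSoA soa;
    cudaMalloc((void **)&soa.bid, n * sizeof(float));
    cudaMalloc((void **)&soa.ask, n * sizeof(float));
    cudaMalloc((void **)&soa.volume, n * sizeof(float));

    midpriceAoS<<<blocks, threads>>>(dTicks, dMid, n);
    midpriceSoA<<<blocks, threads>>>(soa, dMid, n);
    cudaDeviceSynchronize();

    cudaFree(dTicks); cudaFree(dMid);
    cudaFree(soa.bid); cudaFree(soa.ask); cudaFree(soa.volume);
    return 0;
}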
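
Cluster sketch. The coarse-grain versus fine-grain split can be pictured as follows: MPI scatters a contiguous slice of the data set to each cluster node (coarse grain), and within each node a CUDA kernel assigns one element to each GPU thread (fine grain). This is a simplified sketch with a placeholder per-element calculation, no error handling, and the assumption that the data set divides evenly between ranks.

#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdlib>

// Fine grain: one GPU thread per element of this node's slice.
__global__ void process(float *slice, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        slice[i] = slice[i] * slice[i];          // placeholder per-element work
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int total = 1 << 22;                   // whole data set
    const int local = total / nranks;            // coarse-grain slice per node

    float *h_all = NULL;
    if (rank == 0)
        h_all = (float *)calloc(total, sizeof(float));
    float *h_slice = (float *)malloc(local * sizeof(float));

    // Coarse grain: MPI hands each rank (one GPU per rank) its own slice.
    MPI_Scatter(h_all, local, MPI_FLOAT,
                h_slice, local, MPI_FLOAT, 0, MPI_COMM_WORLD);

    float *d_slice;
    cudaMalloc((void **)&d_slice, local * sizeof(float));
    cudaMemcpy(d_slice, h_slice, local * sizeof(float), cudaMemcpyHostToDevice);
    process<<<(local + 255) / 256, 256>>>(d_slice, local);
    cudaMemcpy(h_slice, d_slice, local * sizeof(float), cudaMemcpyDeviceToHost);

    MPI_Gather(h_slice, local, MPI_FLOAT,
               h_all, local, MPI_FLOAT, 0, MPI_COMM_WORLD);

    cudaFree(d_slice);
    free(h_slice);
    if (rank == 0)
        free(h_all);
    MPI_Finalize();
    return 0;
}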
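
Streams sketch. Finally, one common way to keep the processor busy (our illustration, not a technique the article prescribes) is to split the work into chunks and use CUDA streams, so that the copy of one chunk overlaps with the computation on another instead of leaving the GPU idle while data crosses the PCIe bus. Chunk sizes and the placeholder kernel are illustrative.

#include <cuda_runtime.h>

// Placeholder per-element computation.
__global__ void work(float *chunk, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        chunk[i] *= 1.0001f;
}

int main()
{
    const int nChunks = 4;
    const int chunkLen = 1 << 20;
    const size_t chunkBytes = chunkLen * sizeof(float);

    // Pinned host memory is needed for genuinely asynchronous copies.
    float *h_buf;
    cudaMallocHost((void **)&h_buf, nChunks * chunkBytes);

    float *d_buf;
    cudaMalloc((void **)&d_buf, nChunks * chunkBytes);

    cudaStream_t streams[nChunks];
    for (int c = 0; c < nChunks; ++c)
        cudaStreamCreate(&streams[c]);

    // Each stream copies its chunk in, processes it, and copies it back.
    // The driver overlaps transfers in one stream with kernels in another,
    // so the GPU is rarely left idle.
    for (int c = 0; c < nChunks; ++c) {
        float *h_chunk = h_buf + c * chunkLen;
        float *d_chunk = d_buf + c * chunkLen;
        cudaMemcpyAsync(d_chunk, h_chunk, chunkBytes,
                        cudaMemcpyHostToDevice, streams[c]);
        work<<<(chunkLen + 255) / 256, 256, 0, streams[c]>>>(d_chunk, chunkLen);
        cudaMemcpyAsync(h_chunk, d_chunk, chunkBytes,
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < nChunks; ++c)
        cudaStreamDestroy(streams[c]);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}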