OpenCL Potentials: Investment-industry

This is the second post in the series “OpenCL potentials”. I chose this industry because it is the clearest example of a field where you are always too late, even when you were first: if you want to make better analyses, your computations simply must be faster. Before I started StreamHPC I worked for an investment company, where one of my tasks was reverse-engineering a few megabytes of code with the primary purpose of updating its documentation. I then built a proof-of-concept showing that the data processing could be accelerated by a factor of 250-300 using Java tricks alone, without any GPGPU. That was the moment I began to understand that real-time data computation is certainly possible, and that I/O becomes the next bottleneck once computational power no longer is. Although I am more interested in other types of research, this is my background, so let me give an overview of this sector and why it matters.

Computer-assisted human decisions

Two of the most important tasks I know of are comparing (groups of) investments with each other, and predicting how current and potential investments will evolve. Both are highly statistical, but the latter also explains why the news is so important, and why decisions can slip to an emotional level when they are not handled thoroughly. Since newspapers influence investors by being a little less independent than they claim (for example by repeating certain news items and leaving out others), the investment industry keeps reinventing itself to overcome such problems.
To get an idea of what is continuously expected from investors, consider two examples of human behaviour. Do you lazily trust Google and click on the first link every time? And what would you do if most of your friends sold their house and told you it was the last chance before prices drop by 20-30%?

The better the support from neutral computations, the better the decisions that can be made. And the faster the results come in, the better the feel for the matter.

Extremely fast computations

Accelerating the computational part is where OpenCL comes in. The work is full of statistics and matrix operations, which can be sped up considerably. Speed matters, because it means more alternatives can be examined to underwrite the investor’s decisions, and a team of analysts left waiting 30 minutes for results hands an advantage to competitors with faster computations. The latest Matlab, for example, already has GPU acceleration built in, and there is more software available to serve specific needs.

OpenCL is also well suited to continuous report generation. I have brought report generation at an investment company down from 2 hours to under one minute using simple techniques such as caching, shortening calculation paths and simplifying code. OpenCL can reduce the computation time even further; see below for the accelerations achieved.

The door to new innovation

Quite a few software companies target the financial industry exclusively. Why? For my sector (computation acceleration) the financial industry is extremely important. Banks and investment companies have always been a driving force behind potentially ground-breaking innovations, as they have more capital to invest in new (and thus risky) technologies. Once the financial world has accepted an innovation, the rest follows with much more ease: first because the innovation has gained visibility, and second because investors have already seen what it can do for them, which makes it easier to extrapolate its potential to other industries. The nice thing about OpenCL is that it works extremely well for financial data computations. J.P. Morgan recently put a fact sheet online to emphasise the importance of GPUs in financial calculations, and Bloomberg has been using GPUs for a few years already. In automated trading GPUs are much more common, and now it is time for the other financial institutions to come in.

What can OpenCL do?

Here are some algorithms for which accelerations of several factors have been reported. With a server combining 3 or more GPUs and the latest OpenCL-supporting CPUs from AMD (Fusion) and Intel (Sandy/Ivy Bridge), these computations go even faster; a minimal kernel sketch for one of them follows the list:

  • Monte Carlo
  • Binomial Option Pricing
  • Black-Scholes
  • Mersenne-Twister
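
As an illustration of why such algorithms map well onto GPUs, here is a minimal sketch of a Black-Scholes kernel in OpenCL C. It is not taken from any particular product: the parameter names and layout are my own, and it simply prices one European call option per work-item.

```c
/* Minimal sketch (illustrative, not production code): one European call
 * option is priced per work-item using the Black-Scholes closed-form
 * formula. Parameters: S = spot, K = strike, T = maturity,
 * r = risk-free rate, v = volatility. */
__kernel void black_scholes_call(__global const float *S,
                                 __global const float *K,
                                 __global const float *T,
                                 const float r,
                                 const float v,
                                 __global float *price)
{
    size_t i = get_global_id(0);

    float sqrtT = sqrt(T[i]);
    float d1 = (log(S[i] / K[i]) + (r + 0.5f * v * v) * T[i]) / (v * sqrtT);
    float d2 = d1 - v * sqrtT;

    /* Cumulative standard normal: N(x) = 0.5 * (1 + erf(x / sqrt(2))) */
    float Nd1 = 0.5f * (1.0f + erf(d1 * 0.70710678f));
    float Nd2 = 0.5f * (1.0f + erf(d2 * 0.70710678f));

    price[i] = S[i] * Nd1 - K[i] * exp(-r * T[i]) * Nd2;
}
```

Each work-item handles one option independently, so a batch of thousands of options is priced in a single kernel launch.
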
In my opinion the most important work lies in more polished, accelerated algorithms and in integrating OpenCL kernels into existing software. It is, for example, also possible to integrate acceleration techniques into Excel so that heavy computations finish within a minute again.
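
Integrating such a kernel into existing C or C++ code mostly means adding OpenCL host boilerplate around it. The sketch below is only an outline under a few assumptions: the kernel above is stored in a file called black_scholes.cl, a single GPU is present, and error checking and resource releases are omitted for brevity. It shows the typical steps: create a context and queue, build the program, set the kernel arguments, and enqueue the work.

```c
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

#define N 1024  /* number of options priced in one batch */

int main(void)
{
    /* Read the kernel source (file name is illustrative). */
    FILE *f = fopen("black_scholes.cl", "rb");
    fseek(f, 0, SEEK_END); long len = ftell(f); rewind(f);
    char *src = malloc(len + 1);
    fread(src, 1, len, f); src[len] = '\0'; fclose(f);

    /* Boilerplate: platform, device, context, command queue. */
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the program and get the kernel. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, (const char **)&src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "black_scholes_call", NULL);

    /* Input data would normally come from the existing application. */
    float S[N], K[N], T[N], price[N];
    for (int i = 0; i < N; i++) { S[i] = 100.0f; K[i] = 95.0f; T[i] = 1.0f; }
    float r = 0.02f, v = 0.3f;

    cl_mem bS = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(S), S, NULL);
    cl_mem bK = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(K), K, NULL);
    cl_mem bT = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(T), T, NULL);
    cl_mem bP = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(price), NULL, NULL);

    clSetKernelArg(k, 0, sizeof(cl_mem), &bS);
    clSetKernelArg(k, 1, sizeof(cl_mem), &bK);
    clSetKernelArg(k, 2, sizeof(cl_mem), &bT);
    clSetKernelArg(k, 3, sizeof(float), &r);
    clSetKernelArg(k, 4, sizeof(float), &v);
    clSetKernelArg(k, 5, sizeof(cl_mem), &bP);

    /* Run one work-item per option and read the results back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bP, CL_TRUE, 0, sizeof(price), price, 0, NULL, NULL);

    printf("first option price: %f\n", price[0]);
    return 0;
}
```

The setup code only has to be written once; after that, swapping in other kernels (Monte Carlo, binomial trees) is a matter of changing the source file and the arguments.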

Why it works

It is better to simply look at research showing that it works. See the IEEE-published article “Design Exploration of Quadrature Methods in Option Pricing” on the speed-ups that are possible using GPUs. Add to that Intel’s recent addition to the OpenCL family, which enables even greater speed-ups via a hybrid approach. The article discusses CUDA, the other vector programming language, which is comparable to OpenCL except that it only works on NVidia hardware. Below is a table with the speed-ups and power efficiency; accelerations on modern AMD and Intel processors are not mentioned.
You can see that FPGAs have a much higher energy efficiency, but do not beat GPUs in acceleration. GPUs are also cheaper to program than FPGAs, judging by the number of lines of code needed.

Interested?

StreamHPC offers several trainings to give you an idea of what OpenCL can do for you. We also offer integration of OpenCL into your C, C++, Java or C# code.

For more information, give us a call or let us call you.

4 thoughts on “OpenCL Potentials: Investment-industry”

  1. MySchizoBuddy

    The Xilinx FPGA used in this study is quite old. It is a Virtex-4; right now Xilinx has the Virtex-7 on the market (released 2010).

    The Virtex-4 used in the report has 0.15M logic cells, while the Virtex-7 has a huge 0.8M to 2M of them depending on the model. That is an order of magnitude more.

    Would love to see a more up-to-date report, especially for DSP applications.

  2. MySchizoBuddy

    You can use LabVIEW and Matlab to program your FPGAs, which makes your code a lot smaller. Of course you can use LabVIEW/Matlab for GPUs as well.

    • Vincent Hindriksen Post author

      Then you should compare Matlab+FPGA versus OpenCL/CUDA, but I don’t think the speed-up using FPGAs will be higher when they are programmed with Matlab. Moreover, VHDL/Verilog and FPGAs are a specialist’s job (arranging the timings, for example), whereas CUDA/OpenCL can be learnt by any good C/C++ developer. Another very important reason for choosing OpenCL is that I can go to an average computer store to (cheaply) replace defunct hardware.
