Does GPGPU have a bright future?

This post is aimed at programmers. The main question: “should I invest in learning CUDA/OpenCL?”

Using the video processor for parallel processing has actually been possible since the beginning of 2006; you just had to know how to use the OpenGL Shading Language. Not long after that (end of 2006) CUDA was introduced. A lot has happened since then, which resulted in the introduction of OpenCL in the fall of 2008. Yet the actual acceptance of OpenCL is pretty low. Many companies that do use it want to keep it as their own advantage and don’t tell the competition that they just saved hundreds of thousands of Euros/Dollars, because they could replace their compute cluster with a single computer that cost them €10 000,- plus a rewrite of the calculation core of their software. Has it become a secret weapon?

This year a lot of effort will be put into integrating OpenCL into existing programming languages (without the thousands of tweaking options being visible). Think of wizards around pre-built kernels and libraries. Next year everything will be about kernel development (kernels are the programs that do the actual calculations on the graphics processor). The year after that, the peak will be over and nobody will know it is built into their OS or programming language, just like current programmers use security protocols without knowing what they actually are.
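
To give an idea of what such a kernel looks like, here is a minimal sketch (a generic vector-add example of my own, not taken from a specific library). The GPU runs one copy of this function per element, thousands of them in parallel:

    __kernel void vector_add(__global const float* a,
                             __global const float* b,
                             __global float* result)
    {
        /* Each work-item asks for its own index and handles
           exactly one element of the vectors. */
        int i = get_global_id(0);
        result[i] = a[i] + b[i];
    }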

If I want to slide to the next page on a modern mobile phone, I just call a slide function. A lot happens when that function is called, such as building up the next page in a separate part of memory, calling the GPU functions to show the slide, and possibly unloading the previous page. The same goes for OpenCL: I want to calculate an FFT with a specified precision, and I don’t want to care on which device the calculation is done. The advantage of building blocks (like LEGO) is that we keep the focus of development on the end target, while we can tweak the blocks later (if the customer has paid for this extra time). What’s a bright future if nobody knows it?
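
Before getting to those questions, a quick sketch of such a building block (the names fft_forward and fft_precision_t are hypothetical, purely to illustrate the idea): the caller only states what it wants and at which precision; where the calculation runs is the library’s problem.

    #include <stddef.h>

    /* Hypothetical building-block API; these names do not come
       from an existing library. */
    typedef enum { FFT_SINGLE, FFT_DOUBLE } fft_precision_t;

    /* Forward FFT of n complex samples (interleaved re/im). A real
       implementation would dispatch to the best available device;
       this stub only makes the sketch compile. */
    int fft_forward(float* data, size_t n, fft_precision_t precision)
    {
        (void)data; (void)n; (void)precision;
        return 0;
    }

    int main(void)
    {
        float signal[2 * 1024] = {0};
        /* No device selection, no kernels visible to the caller: */
        return fft_forward(signal, 1024, FFT_SINGLE);
    }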

Has it become a secret weapon?

Yes and no. Companies want to brag about their achievements, but they don’t want their competitors to go down the same path, and they don’t want their customers to demand lower prices. AMD and NVIDIA are pushing OpenCL/CUDA, so it won’t stop growing in the market; in fact, this pushing is currently the biggest source of growth in the market. NVIDIA does a good job of marketing its CUDA platform.

What’s a bright future if nobody knows it?

Everything that gains market-wide acceptance has a bright future. It might eventually be replaced by a successor, but acceptance is the key. With acceptance there will always be a demand for (specialised) kernels to be integrated into building blocks.

We also have the new processors with 32+ cores, which actually need to be used; you know the problem with dual-core “support”: software that barely touches the second core.

The mobile market is also growing rapidly. Once it is opened up to OpenCL, there will be a huge growth in demand for accelerated software.

My advice: if high performance is very important for your current or future tasks, invest in learning how to write kernels (in CUDA or OpenCL, whichever you prefer). Use wrapper libraries that make it easy for you; once you’ve learned how to use the OpenCL calls, these wrappers are completely integrated into your favourite programming language.
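
For those who want to see what a wrapper hides: below is a minimal sketch of a plain-C OpenCL host program that runs the vector-add kernel from earlier on the first device the driver reports (error handling and resource cleanup are omitted to keep it short).

    #include <stdio.h>
    #include <CL/cl.h>

    static const char* source =
        "__kernel void vector_add(__global const float* a,\n"
        "                         __global const float* b,\n"
        "                         __global float* result) {\n"
        "    int i = get_global_id(0);\n"
        "    result[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        float a[256], b[256], result[256];
        size_t n = 256, bytes = n * sizeof(float);
        for (size_t i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f; }

        /* Take the first platform/device the driver reports. */
        cl_platform_id platform; cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

        /* Compile the kernel source at runtime. */
        cl_program program = clCreateProgramWithSource(ctx, 1, &source, NULL, NULL);
        clBuildProgram(program, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(program, "vector_add", NULL);

        /* Copy the inputs to the device, reserve space for the output. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   bytes, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   bytes, b, NULL);
        cl_mem dr = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, NULL);
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
        clSetKernelArg(kernel, 2, sizeof(cl_mem), &dr);

        /* One work-item per element; the blocking read waits until done. */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, dr, CL_TRUE, 0, bytes, result, 0, NULL, NULL);
        printf("result[10] = %f\n", result[10]); /* expect 12.0 */
        return 0;
    }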

4 thoughts on “Does GPGPU have a bright future?”

  1. Max

    I don’t think parallel kernels are so easily componentised. If you want the maximum acceleration, you have to completely rethink the solution to the problem. Sure, people will use third-party routines and wizards to hand off a chunk of data to be processed. But if you have a subsequent chunk of data that needs to be processed, you might then think: isn’t it better to do the whole thing as a parallel solution? At that point, standard routines are no longer useful and you have to start writing kernels. Not easy, but then “who dares wins”, as they say.

  2. Vincent Hindriksen

    Max, learning to write GPU kernels is indeed not just a matter of unrolling double loops. AccelerEyes has a nice video blog about this: http://blog.accelereyes.com/blog/2010/02/20/a-case-study-in-cuda-optimization/ (source code and explanation here: http://blog.accelereyes.com/blog/2010/03/04/median-filtering-cuda-tips-and-tricks/ )

    Kernels just happen to be written in C-like code, but it is nothing like normal C code. That’s why I said you should invest your time in kernels (and kernel optimisation).
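
    To illustrate what “nothing like normal C” means (a generic reduction example, not taken from the AccelerEyes posts): the kernel below uses local memory, work-group IDs and barriers, constructs that plain C simply doesn’t have.

        __kernel void partial_sum(__global const float* input,
                                  __global float* output,
                                  __local float* scratch)
        {
            /* Stage one element per work-item into fast local memory. */
            size_t lid = get_local_id(0);
            scratch[lid] = input[get_global_id(0)];
            barrier(CLK_LOCAL_MEM_FENCE); /* sync the whole work-group */

            /* Tree reduction; assumes the work-group size is a power of two. */
            for (size_t offset = get_local_size(0) / 2; offset > 0; offset /= 2) {
                if (lid < offset)
                    scratch[lid] += scratch[lid + offset];
                barrier(CLK_LOCAL_MEM_FENCE);
            }
            if (lid == 0)
                output[get_group_id(0)] = scratch[0];
        }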

  3. alexey

    Does GPGPU have a bright future? Yes, definitely! OpenCL is the right choice. Writing kernels is a piece of cake for developers who are experienced with threads.

  4. Pingback: StreamHPC » Blog Archive » X86 Systems-on-a-Chip and GPGPU
