
This is a message to GPU-programmers only.
It is a simple question with many answers: what are GPU-brains? How is it that your brain can code GPUs while only a few friends and colleagues understand what you are doing? Is it thinking in parallel, focusing on one kernel while keeping the architecture in the back of your head? Is it simple loop-unrolling? Is it a web of thoughts? Or is it just cool, since not many people can do it?


Recently AMD announced their new FirePro GPUs for use in servers: the S9000 and the S7000. They use passive cooling, as server racks are actively cooled already.
If you are looking for the samples in one zip-file, scroll down. The removed OpenCL-PDFs are also available for download.

If you want to see what is coming up in the market of consumer technology (PC, mobile and tablet), then NVIDIA can tell you the most. The company is very flexible, and shows time after time that it really knows which markets it currently operates in and which it can enter. I sometimes strongly disagree with their marketing, but I watch them closely, as they are in the most important markets defining the near future: PCs, mobile/tablet and HPC.

Say you have a device which is extremely good at numerical trigonometry (including integrals, transformations, etc., mainly to support Fourier transforms) by using massive parallelism. You also have an optimised library which takes care of the transfer to the device and the handling of the trigonometric math.
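The host-side pattern such a library hides can be sketched as follows. This is a minimal, hypothetical illustration, not a real vendor API: `device_fft` and its internals are made-up names, and NumPy's FFT stands in for the device-side kernel, so only the transfer-compute-transfer structure is the point here.

```python
import numpy as np

def device_fft(signal):
    """Hypothetical offload-style FFT call: the library described in the
    text would hide the transfer and the device-side trigonometric math
    behind a single function like this."""
    # 1. Transfer the input to the device (simulated by a host-side copy).
    device_buffer = np.asarray(signal, dtype=np.complex128).copy()
    # 2. Run the transform "on the device" (NumPy stands in for the kernel).
    result = np.fft.fft(device_buffer)
    # 3. Transfer the result back to the host.
    return result

# A pure sine wave should concentrate its energy in a single frequency bin.
n = 64
t = np.arange(n)
signal = np.sin(2 * np.pi * 4 * t / n)
spectrum = device_fft(signal)
peak_bin = int(np.argmax(np.abs(spectrum[: n // 2])))
print(peak_bin)  # the 4-cycle sine peaks in bin 4
```

The value of such a library is exactly that the caller never sees steps 1 and 3: whether the transform runs on a GPU, a DSP or the CPU is an implementation detail behind one call.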


There is a lot going on along the path to GPGPU 2.0 – the libraries on top of OpenCL and/or CUDA. Among the many solutions we see, for example, Microsoft with C++ AMP on top of DirectCompute, and NVIDIA (and others) with OpenACC.