Rant: No surprise there’s a shortage of good GPU-developers

Another Monday, yet another graphics API

We could read here that software is critical for HPC – a market where accelerators/GPUs are used a lot. So all we need to do is better support GPU-developers as a whole, right? Unfortunately, something else is happening.

Each big corporation wants its own developers, not to be shared with the competition.

Microsoft was quite early in this with Ballmer’s “developers, developers, developers” meme. Tip of the hat to them for acting on the shortage, a shake of the head for how they acted. With .NET they successfully stole away developers from Java and C/C++, increasing the market share of Windows Server, SQL Server and more.

GPU-vendors want that too. Growing the cake together is too slow for their taste – better to start fighting over it while the cake is still tiny.

Result: GPU-developers are forced to learn many, many languages…

…if they choose not to serve only one hardware-vendor. So the developers are the ones who end up being the victims.

Large companies created their own programming languages and all kinds of APIs that only work on one specific platform or within a tight scope – knowledge and written software cannot be used outside these artificial borders. It is currently so bad that competing “standards” use different terminologies for exactly the same functionality. Sometimes that is due to IP issues and such, but sometimes it is deliberate!

Just imagine that in CPU languages the differences were mostly that standard keywords like “void” were called “blank”, “empty” or “null” instead. That is exactly what happens a lot in GPU languages.
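
To make that concrete, here is a minimal sketch: the same trivial vector addition written as an OpenCL C kernel, with the CUDA spelling of each concept in the comments. The functionality is identical; only the words differ.

    /* OpenCL C version of a trivial vector addition.              */
    /* The CUDA spelling of each concept is given in the comments. */
    __kernel void vector_add(__global const float *a,  /* CUDA: __global__ void, with    */
                             __global const float *b,  /* plain device pointers that     */
                             __global float *c)        /* need no address-space qualifier */
    {
        size_t i = get_global_id(0);   /* CUDA: blockIdx.x * blockDim.x + threadIdx.x */
        c[i] = a[i] + b[i];
    }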

With all those languages, learning takes a lot longer – which in turn delays the training of the new wave of GPU-developers. On top of that, more time is put into fanboyism (“the red/blue/green team is better”) instead of educating each other.

So, what’s the difference between all those languages?

If you compare them to the huge variety among CPU languages – unfortunately, not much.

Here at StreamHPC we know many languages for programming GPUs: OpenCL, CUDA, shader languages, OpenMP, RenderScript, Metal, C++ AMP and more. We can remember them all, because most concepts from one language can be mapped onto the others. We would rather spend our time getting to know the specifics of the different hardware than memorising the different languages.

There is a lot of overlap in the various “unique” languages

GPU-languages can be divided into two groups: host-kernel languages and pragma/magic.

The languages in the first group are so much alike that one can largely be implemented in another – the unique conveniences are either hardware-dependent or easy to implement. The discussion on what is missing is mostly about what has not yet been built in, not about capabilities.
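
As a sketch of how thin that difference is, a handful of hypothetical preprocessor definitions is enough to express a few OpenCL C built-ins in CUDA terms, so a simple kernel body can be shared between the two. Header-based porting layers such as AMD’s HIP take the same approach on a much larger scale.

    /* Hypothetical compatibility header (sketch only, dimension 0 only): */
    /* expresses a few OpenCL C built-ins in CUDA terms.                  */
    #define __kernel          __global__
    #define __global          /* empty: CUDA device pointers need no address-space qualifier */
    #define get_global_id(d)  (blockIdx.x * blockDim.x + threadIdx.x)
    #define get_local_id(d)   (threadIdx.x)
    #define get_group_id(d)   (blockIdx.x)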

The second group mostly serves CPU-programmers and promises to take care of the complexity of GPU-programming – so far they all manage a rather easy-to-parallelise N-body simulation, but not much more. We have been promised compilers that understand half-baked code since the beginning of programming – no difference here.
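
For contrast, a minimal sketch of the pragma approach in plain C: the loop stays ordinary CPU code and a directive asks the compiler to offload it. Whether it actually ends up running on a GPU depends entirely on the compiler’s offload support, not on the code.

    /* Pragma-style offloading (OpenMP 4.x target directives) in plain C. */
    void vector_add(const float *a, const float *b, float *c, int n)
    {
        #pragma omp target teams distribute parallel for \
            map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }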

So what to do?

Do you really want to have this small group of GPU-developers put their time into porting between hardware architectures?

Stop reinventing the wheel to win developers. Build on top of existing languages.

For instance, SPIR-V was created to enable higher-level languages such as domain-specific languages. Many libraries are already out there, ready to be integrated, and the 13 types of GPU-algorithms have been known for years.

Knowing only C, I can program SPARC, ARM, X86 and other CPUs. The same should be possible for all those different GPUs.