2011 Q&A with NVIDIA’s David Kirk on CUDA and OpenCL, analysed two years later


In a Q&A with NVIDIA’s David Kirk at HPCwire more than two years ago, a view of the GPGPU world was laid out. I wanted to write about it back then, as I disagreed with David Kirk a lot, but never published. Most of my opinions still stand, but I’ve now added some backing. Since NVIDIA is still mixing up facts and marketing today, this Q&A is a good way to see what is going on. If NVIDIA had supported OpenCL well, I could have ignored their marketing, but unfortunately they oppose OpenCL.

The Q&A below is copied from HPCwire – no copyright infringement intended, just taking the freedoms of the blogger’s world. If you had never heard of HPCwire, now you have.

June 22, 2011

GPU Challenges: A Q&A with NVIDIA’s David Kirk


At ISC this year, there are plenty of sessions devoted to manycore processors, especially in the role of HPC accelerators. Not surprisingly, a lot of these are centered on the current sweetheart of manycore: GPUs. One of the most well-attended sessions here at ISC’11 was “The GPU Debate” between NVIDIA Fellow David Kirk and LSU professor Thomas Sterling, where the two bantered about the architecture, its evolution as a general-purpose HPC processor, and its roadmap to exascale.

The discussion is still going on – it is still hard to decide for which algorithms a GPU makes a difference, or more precisely: which specific GPU makes the difference.

HPCwire caught up with Kirk and asked him about some of the specific challenges of GPU computing today and how he views the role of integrated CPU-GPU architectures as they come into play.

HPCwire: Is there any thought at NVIDIA to proposing CUDA as an open standard for the GPU/manycore computing community?

David Kirk: There are no plans to turn CUDA into an open standard at this point. Right now, the only processors we see being deployed widely in servers are x86 CPUs and NVIDIA GPUs and these are all supported by CUDA toolkits today. NVIDIA offers developers choice – choice to use CUDA C, CUDA C++, CUDA Fortran, OpenCL, or DirectCompute to program CPU-GPU systems. We chair the OpenCL working group, we have collaborated closely with Microsoft on DirectCompute and continue to do so as they evolve these platforms. But CUDA is our platform for innovation. We recently released CUDA 4.0, which is a huge leap forward in programmer productivity with features like unified virtual addressing and the new Thrust C++ template library. We continue to move CUDA forward at a rapid pace.

It is quite a start of the Q&A to ask about CUDA’s position as a proprietary language. David ignores AMD Radeon GPUs completely, positions CUDA as a language that covers all hardware (x86), and positions NVIDIA GPUs as widely supported by various languages. By ignoring what else is available, CUDA becomes the only option, so it being proprietary is apparently not a problem?

As of now CUDA-FORTRAN is not publicly available yet.

HPCwire: There has been plenty of talk about the problems involved in hanging a GPU processor off of a PCI bus for use as an external accelerator – I/O overhead and the software messiness of having to do explicit data transfers. What do you think are the biggest limitations of the current GPU processors from a hardware point of view, in regard to high performance computing?

Kirk: The PCIe bottleneck concern is hotly debated and we hear about it a lot. We are aware of very few applications that are bottlenecked by transfer speeds. Incidentally, the PCIe bus is often not the slowest bus in the system. Network and disk interfaces are slower, and in many systems the CPU memory path is slower!

Two wrongs make a right? So because other buses are also slow, this is not a problem? He cannot be talking about the overhead in terms of speed (GB/s).

I’m very curious about those systems where the “CPU memory path” is slower than PCIe.
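Whether transfers really are a bottleneck is something you can measure yourself. Below is a minimal OpenCL sketch of my own (not from the Q&A; error handling and cleanup are omitted, and the first GPU of the first platform is assumed) that times a small and a large host-to-device copy, so you can separate per-transfer latency from sustained bandwidth – two numbers that get mixed up further on in this Q&A.

```c
/* Sketch: time a 4 KB and a 256 MB host-to-device copy with event profiling.
   The small copy is dominated by per-transfer latency, the large one by the
   sustained PCIe bandwidth. Error checking and cleanup are left out. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

static double copy_seconds(cl_command_queue q, cl_mem buf,
                           const void *src, size_t bytes)
{
    cl_event ev;
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, bytes, src, 0, NULL, &ev);
    cl_ulong start, end;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof start, &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END, sizeof end, &end, NULL);
    clReleaseEvent(ev);
    return (end - start) * 1e-9;              /* nanoseconds to seconds */
}

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device,
                                              CL_QUEUE_PROFILING_ENABLE, NULL);

    const size_t small = 4 * 1024, large = 256 * 1024 * 1024;
    char *host = (char *)malloc(large);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY, large, NULL, NULL);

    double t_small = copy_seconds(q, buf, host, small);
    double t_large = copy_seconds(q, buf, host, large);
    printf("4 KB copy:   %.1f microseconds (latency-dominated)\n", t_small * 1e6);
    printf("256 MB copy: %.2f GB/s (bandwidth-dominated)\n", (large / 1e9) / t_large);
    return 0;
}
```

If the large copy already reaches a few GB/s and your kernels touch each byte more than once, the PCIe bus is indeed rarely the limiter; if your data is only touched once, it very much is.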

That being said, there are two things that have changed since this concern first surfaced. First, we now have 6 GB of on-board memory and second, our new NVIDIA GPUDirect technology is eliminating the CPU and GPU memory bottlenecks from the path.

GPUDirect decreases the latency (a sort of ping time), not the speed (GB/s) of the path. Instead of clearing this up, he continues making claims based on mixed-up terminology. I find this very irritating, as I have to explain this to my customers, who (understandably) have a hard time distinguishing marketing from technical facts. So dear vendors, please put teaching above selling.

And yes, the PCI bus remains a problem when porting software to GPUs: because of the transfer overhead, the CPU (using AVX and SSE via OpenCL) can be faster than a GPU. That is why the high-performance hybrid CPU-GPU is important for the future, something NVIDIA doesn’t have at the moment.
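To make the AVX/SSE-via-OpenCL point concrete, here is a small sketch of my own: OpenCL C has explicit vector types, and a CPU OpenCL runtime (Intel’s or AMD’s) compiles them onto the host’s SIMD registers, while the same source still compiles fine for a GPU.

```c
/* Illustrative OpenCL C kernel: float8 maps onto 256-bit AVX registers when
   built by a CPU OpenCL runtime, float4 onto SSE; on a GPU the same source
   simply compiles to the GPU's own instruction set. */
__kernel void saxpy8(__global const float8 *x,
                     __global float8 *y,
                     const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];   /* one work-item processes 8 floats at once */
}
```

No data has to cross the PCIe bus in that case, which is exactly why a well-used CPU sometimes beats a discrete GPU on transfer-heavy workloads.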

These enhancements reduce the PCIe bottleneck. Data can directly stream from storage to the GPU memory via GPUDirect and the larger GPU memory enables more data to reside on the GPU without communicating to the CPU. Our future GPU architectures will continue to reduce dependence on and communication with the CPU, thus eventually very significantly limiting the PCIe bottleneck. By the way, Vincent Natoli summarized it nicely in his recent HPCwire article.

The “PCIe bottleneck” is not reduced at all, but was mixed up with the “latency bottleneck”. Even on their own webpage they explain that it reduces latency. In case you were curious: AMD’s version of GPUDirect is SDI-link + DirectGMA.

My thought on the reduced CPU-dependency: NVIDIA wants 100% of your code to be CUDA, so it runs on NVIDIA hardware only. However, this is in line with the plans of other companies who are developing CPU-GPU hybrids.

I personally believe though, that the biggest limitation of GPU computing is the misconception that it’s too hard. Put this into whichever bucket you wish — ease of use of the software, the programmability of the hardware, the performance, per watt, per dollar. However you slice it, there have been many reasons cited as to why not to adopt GPU computing.

We’ll be the first to say that parallel computing is challenging. I personally co-teach the parallel computing course, along with Dr. Wen-mei Hwu, at the University of Illinois at Urbana-Champaign, so I know first-hand what it is like to switch the mindset from a purely serial based model to thinking about problems in a multi-threaded parallel environment.

Finally some agreement: the most difficult part of coding GPUs is the changing of the mind-set, the unlearning of CPU-programming concepts.

But the rewards are significant. Change two percent of your code and in many cases you can see up to a 10X increase in performance. That’s a pretty big bang for your software development buck. And, we live in a parallel computing world now, so serial programming is no longer a viable option.

Well, whoever has programmed GPUs knows that “2%” and “10X” apply to some software only. The results vary a lot – from worse (a slow-down) to much better (100 times).
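For readers who want to judge the “two percent” for themselves, here is my own minimal illustration of the friendly case: a data-parallel loop that maps one-to-one onto a kernel. The kernel really is a small change; the device setup, buffer management, transfers and per-architecture tuning that decide whether you end up at 0.5X or 10X are not counted in those two percent.

```c
/* Before: a plain C loop over n elements. */
void scale(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i];
}

/* After: the same computation as an OpenCL kernel, one work-item per index.
   Host-side context creation, buffers and transfers come on top of this. */
__kernel void scale_kernel(__global float *y,
                           __global const float *x,
                           const float a)
{
    int i = get_global_id(0);
    y[i] = a * x[i];
}
```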

HPCwire: Same question for software side. What are the biggest limitations of the current GPU computing software frameworks?

Kirk: One of the most common concerns I hear from the community is the portability aspect of CUDA and the fact that it only runs on NVIDIA GPUs. As I said before, we remain agnostic on language. Fortran, Python, C, C++, Java, OpenCL, DirectCompute – we support all these languages, either internally or through 3rd parties. If you choose to use NVIDIA GPUs, then we will ensure that you have the widest choice of languages.

Here is where he really got me. He mixes GPU-language wrappers with the actual GPU language! I was baffled. Again: what is more important, getting a short-term sale or informing the potential customer?

With regards to the portability of the hardware platform, PGI has just announced the first version of CUDA x86, that enables CUDA code to be compiled down to x86 CPUs. This facilitates easier-than-ever deployment of CUDA-enabled applications across hybrid GPU/CPU systems and is an important milestone in the increased portability of CUDA. There are also several tools created by universities and 3rd-parties to convert CUDA source code to OpenCL source code, which can be compiled for any platform that supports OpenCL. So, portability is no longer a realistic objection but more of an excuse.

First, CUDA x86 was never benchmarked by external parties (other than PGI and NVIDIA), so claims that it may even be slower than native CPU code have never been refuted. It does have the same potential as OpenCL-on-CPUs to make better use of the CPU’s vector extensions. This is a whole story on its own, but for now remember there is a difference between “it works” and “it performs”.
Second, this is where OpenCL is actually the stronger one: as you can see in the diagram below, OpenCL is the shortest path to all hardware (yellow line).
[Diagram: CUDA or OpenCL?]
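At kernel level the conversion those tools perform is indeed largely mechanical – a sketch of my own for a trivial kernel, with the CUDA original shown in the comment:

```c
/* CUDA original, for comparison:
 *   __global__ void add(float *c, const float *a, const float *b) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       c[i] = a[i] + b[i];
 *   }
 * The OpenCL version differs mainly in keywords and in how the global index
 * is obtained; the host-side API is where the real porting work sits. */
__kernel void add(__global float *c,
                  __global const float *a,
                  __global const float *b)
{
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
```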

Training the millions of software developers who are already in the industry to program in parallel – that is the biggest challenge facing HPC and parallel computing in general. This is where the elegance of the CUDA parallel programming model really helps and the reason why it has caught on so quickly and so widely. CUDA C/C++ is an incredibly powerful language of authorship, and we have found that it is quite easy to learn.

NVIDIA does this through their CUDA Teaching Center Program. By giving away free hardware (which has margins of over 54%, by the way), students are dropped on the market with knowledge of NVIDIA hardware only. The universities only get the hardware when they follow certain guidelines, and I quote: “Include CUDA C/C++ as a substantial portion of the curriculum in any graduate or undergraduate level recurring course on parallel programming.” So OpenCL may only be taught as a smaller part of the CUDA teachings.

I won’t nag too much about him now saying that this challenging parallel programming is quite easy to learn – he probably just refers to learning a small subset of the language.

HPCwire: Do you think the appearance of heterogeneous CPU-GPU processors portends the demise of discrete GPUs – for GPU computing or otherwise? Do you think it will spell the end of “pure” CPUs?

Kirk: A lot of folks believe that integrating CPUs and GPUs together is a panacea. As you well know, this is easy for NVIDIA to do. We have the highest volume integrated CPU-GPU SoC shipping today: our Tegra mobile SoC. But if you scale this to HPC, the challenge is that you have to compromise either on the performance of the CPU or that of the GPU. The silicon area is fixed, so you have to put a medium performance CPU with a medium performance GPU. Not exactly HPC! We find that none of our customers ever ask us for less performance.

Back then it was all Tegra 2 and they had good times, but not anymore in 2013. Later they will come back, and then fall again – this is how things go in the processor market.

As of today, AMD and Intel have 100% of the x86 integrated CPU-GPU SoC market, taking share away from the discrete GPU market at a high rate. The moderate market acceptance of Windows RT was not only a problem for Microsoft, but also for NVIDIA, as they need an ARM-based OS to increase sales of Tegra processors.

For the foreseeable future, there will be a market for a discrete CPU and a discrete GPU – the performance users, whether in HPC or in gaming or CAD workstations, need the best of both. But a swing we already see happening is that applications are leaning more on the GPU for performance than on the CPU — both gaming and HPC. This is because performance scaling on CPUs seems to have reached an end. Laptops are not going beyond dual-core x86 CPUs. Even on HPC, application performance is not scaling beyond 4 cores. They end up choking on memory bandwidth.

Clearly, the personal computer experience is going to be dominated by SoCs with integrated ARM cores and GPUs. This is happening today and will be solidified by support for ARM in Windows Next. But as I said above, we expect that there will be a CPU + GPU market for a very long time to come.

The question is when this will happen, and how long “very long” is. For instance, configurable computing (CPU + FPGA) is also making progress, as are new materials to replace silicon that could take us to 1000 GHz computers.

HPCwire: How will users be able to port codes developed today with CUDA, OpenCL and accelerator-directives to the future shared-memory architectures of CPU-GPU integrated processors envisioned by “Project Denver” AMD Fusion, etc.?

Kirk: The beauty about the CUDA programming model is that it was designed for CPU-GPU based heterogeneous architectures. Whether the CPU and GPU are integrated does not change the programming model. Integration is simply a cost consideration. After all, we have been working on Tegra — ARM + GPU SoCs — for just as long as we have been working on CUDA. Other driver-level APIs like OpenCL treat the GPU as a device that is separate from the CPU (host) and this means that OpenCL as defined today has to be extended to support an integrated CPU-GPU device. This means that applications written with the CUDA toolkits will just work on our integrated CPU-GPU devices.

I totally agree that CPU-GPU hybrids are the future. Just look at what we have now: AMD APUs, Intel Ivy Bridge, and various ARM SoCs from ARM, Imagination, Freescale/Vivante, Qualcomm, NVIDIA and several others.
It is clear that OpenCL is CUDA’s main competitor, else he would not spread FUD. CUDA has the same host-device model as OpenCL, so it is really very strange that he FUDs this way. The phrase “just works” is very dangerous – especially as NVIDIA has attacked OpenCL for not being performance-portable. When going from the Fermi architecture to Tesla they suddenly went quiet on this, as different optimisations were needed to get the maximum performance out of Tesla.
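As a sketch of my own of why the “has to be extended” claim is FUD: in OpenCL the programming model is identical for a discrete GPU, an integrated GPU and the host CPU itself – only the device query changes, and the same kernels run on all of them (error handling omitted).

```c
#include <CL/cl.h>

/* Pick the first device of the requested type on the first platform. */
static cl_device_id pick_device(cl_device_type type)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, type, 1, &device, NULL);
    return device;
}

/* Discrete or integrated GPU:             pick_device(CL_DEVICE_TYPE_GPU) */
/* The host CPU, running the same kernels: pick_device(CL_DEVICE_TYPE_CPU) */
```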

Why these comments on an old Q&A?

Here in the Netherlands, around ’98, computer shops noticed that the advice PC buyers got was: “Get the PC with the most memory”, as more memory gave a smoother experience with less swapping. So they renamed “hard-disk space” to “external memory” and “RAM” to “internal memory”. Only two years after the sale did it become clear that more internal memory was badly needed to run the latest programs, even though techies had tried to explain that back then. The same applies here: two years later it is easier to see what is marketing and what is useful information.

When using a standard that can be implemented by various vendors, the marketing is easily caught. While most of the answers in this Q&A were disputable to say the least, some of them are still used in current conversations – mostly behind closed doors (!). The result is that I have discussions with people who have been convinced that NVIDIA Tesla is the only choice for solving their problem, based on false premises like the ones in this Q&A – and well, sometimes it actually is the best solution (certain algorithms, and when ECC is needed) and other times it isn’t.

What do you think? Is everything allowed in war, love… and marketing?

 
