CUDA Compute Capability 6.1 Features in OpenCL 2.0

On the CUDA page of Wikipedia there is a table with compute capabilities, as shown below. While double-checking support for AMD Fiji GPUs (like the Radeon R9 Nano and FirePro S9300 X2) I got curious how much of that support is still missing in OpenCL. For Fiji itself it looks like all features are supported. For how OpenCL 2.0 compares, read on.

[Figure: CUDA features per Compute Capability, from the Wikipedia CUDA page]

Feature overview

The table below does not discuss performance, which is of course also a factor.

CUDA (compute capability 3.5 or higher) | OpenCL 2.0
----------------------------------------|-----------
Integer atomic functions operating on 32-bit words in global memory | Yes
atomicExch() operating on 32-bit floating-point values in global memory | Function: atomic_xchg()
Integer atomic functions operating on 32-bit words in shared memory | Yes
atomicExch() operating on 32-bit floating-point values in shared memory | Function: atomic_xchg()
Integer atomic functions operating on 64-bit words in global memory | Extensions: cl_khr_int64_base_atomics and cl_khr_int64_extended_atomics
Double-precision floating-point operations | Supported if the device info CL_DEVICE_DOUBLE_FP_CONFIG is not empty; for backwards compatibility the extension cl_khr_fp64 is still available. A host-side check is sketched below.
Atomic functions operating on 64-bit integer values in shared memory | Extensions: cl_khr_int64_base_atomics and cl_khr_int64_extended_atomics
Floating-point atomic addition operating on 32-bit words in global and shared memory | N/A – see this post for a hack, sketched below.
Warp vote functions | Covered by the new work-group functions – see this post by Intel and the sketches below.
__ballot() | Hack: work_group_reduce_add() of (predicate << get_local_id(0)), for work-groups of up to 32 items.
__threadfence_system() | Hack: needs a sync from the host.
__syncthreads_count() | Hack: work_group_reduce_add() + work_group_barrier()
__syncthreads_and() | Hack: work_group_all() + work_group_barrier()
__syncthreads_or() | Hack: work_group_any() + work_group_barrier()
Surface functions | Images
3D grid of thread blocks | 3-dimensional work-groups
Warp shuffle functions | N/A – see the notes below
Funnel shift | N/A – hack: shift one integer left by N bits and the other right by (32 − N) bits, then combine them with a bitwise OR (sketched below).
Dynamic parallelism | Nested parallelism (device-side enqueue)
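For the double-precision row, the check is a single device-info query on the host. A minimal sketch, assuming you already have a valid cl_device_id:

```c
#include <CL/cl.h>

/* Returns non-zero when the device supports double precision: in
   OpenCL 2.0 an empty (zero) CL_DEVICE_DOUBLE_FP_CONFIG means no fp64. */
int device_has_fp64(cl_device_id device)
{
    cl_device_fp_config cfg = 0;
    clGetDeviceInfo(device, CL_DEVICE_DOUBLE_FP_CONFIG,
                    sizeof(cfg), &cfg, NULL);
    return cfg != 0;
}
```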
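The floating-point atomic-add hack from the table is typically a compare-and-swap loop in OpenCL C. A minimal sketch for global memory (the function name is ours, not part of any standard):

```c
/* Emulated 32-bit float atomic add: reinterpret the float's bits as a
   uint and retry with atomic_cmpxchg() until no other work-item
   modified the location in between. */
inline void atomic_add_f32(volatile __global float *addr, float val)
{
    union { uint u; float f; } expected, desired;
    do {
        expected.f = *addr;
        desired.f  = expected.f + val;
    } while (atomic_cmpxchg((volatile __global uint *)addr,
                            expected.u, desired.u) != expected.u);
}
```

The same loop works on a __local pointer for shared memory.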
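The vote-function rows translate into one-liners around the OpenCL 2.0 work-group functions. Minimal sketches with illustrative names; they must be called by all work-items of the work-group, and the ballot variant assumes at most 32 work-items so every lane owns one bit of a uint:

```c
/* ~ CUDA __ballot(): summing distinct per-lane bits equals OR-ing them. */
uint ballot(int predicate)
{
    return work_group_reduce_add((predicate ? 1u : 0u) << get_local_id(0));
}

/* ~ CUDA __syncthreads_count(): how many work-items have a true predicate. */
int syncthreads_count(int predicate)
{
    int n = work_group_reduce_add(predicate ? 1 : 0);
    work_group_barrier(CLK_LOCAL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE);
    return n;
}

/* ~ CUDA __syncthreads_and(); swap in work_group_any() for __syncthreads_or(). */
int syncthreads_and(int predicate)
{
    int all = work_group_all(predicate);
    work_group_barrier(CLK_LOCAL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE);
    return all;
}
```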
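The funnel-shift hack from the table, spelled out: take the top 32 bits of the 64-bit value hi:lo shifted left by n. A sketch that mirrors CUDA's __funnelshift_l(), which masks the shift amount to 0..31:

```c
uint funnelshift_l(uint lo, uint hi, uint n)
{
    n &= 31;                              /* shift amount 0..31, as in CUDA */
    if (n == 0) return hi;                /* avoids the undefined lo >> 32  */
    return (hi << n) | (lo >> (32 - n));  /* the two halves do not overlap  */
}
```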

So you see that OpenCL almost covers what CUDA offers – the most notable omission is the warp shuffle, whereas the other missing functions can be implemented in two steps. A local-memory emulation of shuffle is sketched below.
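That shuffle can be emulated through local memory (LDS), at the cost of two barriers: each work-item publishes its value, then reads the value of the lane it wants. A minimal sketch with illustrative names:

```c
float shuffle_f32(float value, uint src_lane, __local float *scratch)
{
    uint lid = get_local_id(0);
    scratch[lid] = value;                     /* publish own value         */
    work_group_barrier(CLK_LOCAL_MEM_FENCE);
    float result = scratch[src_lane];         /* read the source lane      */
    work_group_barrier(CLK_LOCAL_MEM_FENCE);  /* before scratch is reused  */
    return result;
}
```

This is the LDS route; hardware swizzles (like AMD's ds_swizzle, mentioned in the comments below) would avoid the local-memory round-trip.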

If you want to know what is new in OpenCL 2.0 (including features that do not exist in CUDA, such as pipes), see this blog post.

Comments

  1. Jan Willem Penterman

    HCC has intrinsics for ds_swizzle (and more intra-workgroup ops), hope to see them soon in OpenCL.

    Does your extension implement shuffle through LDS or limited (max 4 lanes away) via ds_swizzle?

    • StreamHPC

      It’s not implemented in HCC. We’ll share more info on the blog later.
