Learn about the PRNG library we developed for AMD: rocRAND – includes benchmarks

Reading Time: 3 minutes

As CUDA maintained its dominance over OpenCL, AMD introduced HIP – a programming language that closely resembles CUDA. Porting code to AMD hardware no longer takes months: more and more CUDA software converts to HIP without problems. Even really large and complex code bases take a few weeks at most, and we found that fixing the issues uncovered during porting often made the original CUDA code run faster too.

The remaining problem is that each CUDA library needs a HIP equivalent before all CUDA software can be ported.

This is where we come in. We helped AMD build a high-performance Pseudo-Random Number Generator (PRNG) library, called rocRAND. Random number generation is important in many fields, from finance (Monte Carlo simulations) to cryptography, and from procedural generation in games to providing white noise. For some applications any stream of numbers is good enough, but in large simulations the PRNG can become the performance bottleneck.
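To illustrate why simulations consume so many random numbers, here is a toy Monte Carlo estimate of π in plain Python (a CPU-only sketch for illustration, not using rocRAND): the accuracy improves only with the square root of the sample count, so realistic simulations need billions of samples – and a fast generator.

```python
import random

def estimate_pi(n_samples, seed=42):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14, but only slowly converging
```

With 100,000 samples the estimate is typically only correct to two decimal places; each extra digit of accuracy costs roughly 100× more samples, which is exactly where a fast GPU generator pays off.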

The library provides the most widely used PRNGs and a QRNG (quasi-random number generator), selected based on what we found on GitHub. Several of these you can also find in cuRAND:

  • MRG32k3a
  • Mersenne Twister for Graphics Processors (MTGP32)
  • Philox (4×32, 10 rounds)
  • Sobol32

If you’re familiar with PRNGs, you’ll see that each of the most important families of generators is represented. This makes it easy to port software that uses cuRAND. But that’s not all.
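To give an idea of what one of these generators looks like internally, here is a short Python sketch of the Philox 4×32 round structure with 10 rounds, using the multiplier and Weyl constants from the Random123 paper. This is an illustrative reimplementation, not the rocRAND source code:

```python
M0, M1 = 0xD2511F53, 0xCD9E8D57   # per-round multipliers
W0, W1 = 0x9E3779B9, 0xBB67AE85   # Weyl increments for the key schedule
MASK = 0xFFFFFFFF                 # keep everything 32-bit

def mulhilo(a, b):
    """Split the 64-bit product of two 32-bit words into (hi, lo)."""
    p = a * b
    return (p >> 32) & MASK, p & MASK

def philox4x32_10(counter, key):
    """Counter-based PRNG: a 4x32-bit counter and 2x32-bit key
    are scrambled through 10 rounds into 4 random 32-bit words."""
    x0, x1, x2, x3 = counter
    k0, k1 = key
    for _ in range(10):
        hi0, lo0 = mulhilo(M0, x0)
        hi1, lo1 = mulhilo(M1, x2)
        x0, x1, x2, x3 = hi1 ^ x1 ^ k0, lo1, hi0 ^ x3 ^ k1, lo0
        k0, k1 = (k0 + W0) & MASK, (k1 + W1) & MASK  # bump the key
    return x0, x1, x2, x3
```

Because the output depends only on the counter and key, every GPU thread can jump straight to its own counter value with no shared state between threads – which is what makes counter-based generators like Philox so well suited to massively parallel hardware.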

rocRAND is faster than cuRAND in most cases

rocRAND also works on Nvidia hardware, and in most cases it’s faster than cuRAND.

Here we compare the generation of normally distributed floats by rocRAND on the AMD Radeon R9 Nano, rocRAND on the Nvidia GTX 1080, and cuRAND on the GTX 1080. Professional-grade GPUs, like the AMD MI25, are much faster – but the point here is to show that the library written for AMD GPUs outperforms Nvidia’s own library.


This is before the optimization phase, on the AMD R9 Nano and Nvidia GTX 1080 – rocRAND on par with cuRAND.

This is after the optimizations, where the AMD GPU gets the upper hand thanks to its higher-bandwidth memory:

As you can see, it’s preferable to use rocRAND even for Nvidia-only projects.

Doing your own benchmarks

On the rocRAND GitHub page you’ll find instructions for benchmarking the library on your own hardware. Be aware that the library has been tuned for all recent AMD GPUs and Nvidia GTX GPUs, not for Tesla GPUs. Also note that the code does not run on CPUs or Intel GPUs.

More on random numbers on our blog

Want to know more about random numbers? We have written about the subject before:

Random Numbers in Parallel Computing: Generation and Reproducibility (Part 1)

Random Numbers in Parallel Computing: Generation and Reproducibility (Part 2)

Porting code that uses random numbers

Need a tailored RNG?

Once you know the exact constraints of your project, we can:

  • further tune the library to be even faster, or
  • add special characteristics (e.g. less cyclic behaviour), or
  • port other PRNGs to the GPU.

We did not put such specialisations in the official code, as we could then no longer guarantee correct output for general use. If you need an RNG tailored to your specific needs, we are the team that can build it.

Get in touch with the GPU Library Specialists today.

