OpenCL – the battle, part II


Part II: the software companies

It is very clear what's at stake for the hardware companies, and we've also discussed the operating systems. But what should the software companies do? For companies that make, for example, encoding software or databases, it is very simple: support OpenCL or be years behind (something marketing can't fix). For most other software there is a dependency on the programming language, since OpenCL is a very specialised way of programming that is (most of the time) too different from in-house knowledge and can therefore be too expensive to adopt.

This article is somewhat brief, since most of the material will be discussed in more depth in articles to be released later.

Video-encoding and rendering

Why did we easily get 60 frames per second in games, while rendering an image of our own house took minutes? It felt like there was a gap between two worlds that needed to be closed. OpenGL/DirectX did a lot (also see our next article about OpenGL, OpenCL and DirectX), but could not help us outside games. Apple did a lot for the desktop by integrating hardware acceleration (later copied by Linux and Windows), but somehow GPU-processed results were not regarded as professional and were seen more as intermediate results (a preview of what the final image would look like).

Elemental Technologies was first with its H.264/AVC encoder; Nero and nVidia joined forces somewhat later. Both products are based on CUDA, not OpenCL. Since rendering is close to what we already expect a GPU to do, we think this market will very soon see the same products again, this time based on OpenCL.

A few months ago nVidia released its GPU-based ray-tracing engine, OptiX. On YouTube you can find a demo of VRay's accelerated ray-tracing engine.

We expect a lot of news from the graphics world, since they already know how to program with shaders. A lot of artists will love the free speed-up, but it is no breaking news that this is possible.

Programming languages

OpenCL has official C and C++ bindings, and thereby Objective-C (used, for example, on the iPhone) has native support as well.
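To give a feel for what that official binding looks like, here is a minimal sketch in plain C that enumerates the available OpenCL platforms and devices. It assumes the OpenCL 1.0 headers and a vendor driver are installed; error handling is kept to a minimum.

```c
/* Minimal OpenCL host-code sketch: list platforms and device counts.
 * Build with e.g.:  gcc list_devices.c -o list_devices -lOpenCL
 * (On Mac OS X use -framework OpenCL and #include <OpenCL/opencl.h>.) */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Ask for up to 8 platforms (one per installed OpenCL vendor stack). */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS
            || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256];
        cl_uint num_devices = 0;

        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        /* Only count the devices here; a real program would now create
         * a context and command-queue on one of them. */
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                       0, NULL, &num_devices);
        printf("Platform %u: %s (%u devices)\n", i, name, num_devices);
    }
    return 0;
}
```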

As we described last week, we think Oracle/Sun is now taking OpenCL more seriously. Several wrappers exist for Java, but native support is missing; we would suggest writing the OpenCL part in C or C++ when using Java, even if this breaks the beauty of the multi-platform language.
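To illustrate what such a split could look like, below is a rough JNI sketch: the Java side declares a native method and the OpenCL work happens in a C library. The class name GpuKernel and the method runOnGpu are invented for this example, and the actual OpenCL calls are left out.

```c
/* Hypothetical JNI glue: Java calls into a native C library that
 * does the OpenCL work. The matching Java class would look like:
 *
 *   public class GpuKernel {
 *       static { System.loadLibrary("gpukernel"); }
 *       public static native int runOnGpu(float[] data);
 *   }
 */
#include <jni.h>

JNIEXPORT jint JNICALL
Java_GpuKernel_runOnGpu(JNIEnv *env, jclass cls, jfloatArray data)
{
    jsize n = (*env)->GetArrayLength(env, data);
    jfloat *buf = (*env)->GetFloatArrayElements(env, data, NULL);
    if (buf == NULL)
        return -1;  /* JVM could not pin or copy the array */

    /* ... set up an OpenCL context here, copy buf into a cl_mem
     * buffer, enqueue a kernel and read the results back ... */

    (*env)->ReleaseFloatArrayElements(env, data, buf, 0);
    return n;
}
```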

It is very clear Microsoft had the better view here, being an early adopter with Visual Studio integration through profilers (created by AMD and nVidia). You already see higher-level implementations: the C# toolkit OpenTK, for example, includes support that goes beyond the default DLL bindings. Here too, programming the performance-critical parts in native C would be best.

Python is famous for its endless wrappers around anything, so an OpenCL binding was to be expected. Python has always been a safe choice for scientific programming, thanks to its enthusiastic community.

Bindings for OpenCL in languages like PHP and Perl are completely absent. Most of the time this is not a problem, as C libraries can easily be called from them.

RapidMind had a product which provided higher-level programming on the GPU, but since its acquisition by Intel we do not see the product any more. So we can conclude that, for better support, we just have to wait for native bindings in languages other than C, C++ and Objective-C.

Databases

We will cover databases later, when the projects are more mature. In short, it is currently being investigated how GPUs can do what Sun's UltraSPARCs already did. Since the memory bandwidth is only great when using the on-board memory, this is not as promising as it looks. Index searches can be sped up, but these are not the real bottleneck in database performance. We do think it is very important to invest in OpenCL research in this competitive market.

Operating Systems

Apple has had good GPU support in OS X for years and therefore a good understanding of graphics cards. Apple started the OpenCL project and has already updated several of OS X's core libraries to use OpenCL.

Microsoft has built DirectCompute into DirectX 11. This is essentially the same technology put in a Microsoft jacket, as we've seen the company do many times before. Coming up is an article which discusses the differences.

Linux (desktop and HPC) does not have great support for OpenCL, but it works well enough that most large OpenCL/CUDA-upgraded clusters run on it. Due to the flexibility of the OS and the strong competition between nVidia and AMD, a lot of research is being done. Nevertheless there are no core libraries in Linux which use OpenCL yet. We expect, for example, visualisation libraries to support OpenCL this year.

Mathematical software

Matlab, Octave, Mathematica, R and Maple would all gain a big advantage by using the GPU. Matlab has the most support, through external libraries: CUDA, Jacket, gpuMat, etc. Wolfram will soon release a CUDA version of Mathematica. R is still discussing it, and Octave has a few partial or abandoned implementations of some libraries; since there is a lot of money to be made by selling these products, full open-source implementations can only be expected to follow later. Maple refers to its external call routines, so we still have to wait a while for GPU support.

Conclusion

This short overview gives an idea of where to expect OpenCL-powered solutions. When we find more markets in the coming weeks, we'll update this post.

Clear winners cannot be pinpointed yet, since the door has only just opened. Maybe Nero, since it will now sell more of its encoding products to owners of nVidia GPUs.

SUN jumping on the OpenCL-train?


Edit (27 May 2010): so far Oracle/SUN has not shown anything that would validate this rumour, and the job posting is no longer online. Follow us on Twitter to be the first to know if Oracle/SUN will get better GPGPU support for Solaris and/or Java and/or its hardware.

Job Description: The Power to change your world begins with your work at Sun!

This is a software staff engineering position requiring the ability to design, test, implement and maintain innovative and advanced graphics software. The person in this role is expected to identify areas for improvement and modification of Sun's platform products and contribute to Sun's overall product strategy. This person will work closely with others within the team and, as required, across teams to accomplish project objectives. May assume a leadership role in projects, including such activities as leading projects, participating in product planning, technology evaluation and related activities. May use technical leadership and influence to negotiate product design features or applications, both internally and with open source groups as needed.

Requirements:
* Excellent problem solving, critical thinking, and communication skills
* Excellent knowledge of the C/C++ and Java programming languages
* Thorough working knowledge of 3D graphics, GPU architecture, and 3D APIs, such as OpenGL & Direct3D
* Thorough working knowledge of shader-level languages such as GLSL, HLSL, and/or Cg
* Experience designing cross-platform, public APIs for developers (Windows/MacOS/Linux)
* Experience with multi-threaded programming and debugging techniques
* Experience with operating-systems-level engineering
* Experience with performance profiling, analysis, and optimization

Education and Experience: University degree in computer science or engineering plus 5 years direct experience.

In other words: a specialist in everything around graphics cards and Java, for a completely new area. Since OpenGL is already well-known territory, for which such a specialist is not needed, there is a very good chance this position targets OpenCL. A good choice.

Sun has every reason to jump on the train with Java, since Microsoft is already integrating loads of OpenCL tools into its Visual Studio product (created by AMD and nVidia). Java still has more than three times the market share of C#, but with this late jump the gap will close in Microsoft's favour. Remember that C# can easily call C functions (the language the OpenCL API is written in); Java has a far more difficult task when it comes to calling C functions without hazards, which is sort of implemented here and here. If Java does not get OpenCL support, C, C++ and C# will punch a hole in Java's market share.

Besides Java, Sun's super-multi-threaded SPARC servers will also be in trouble, since the graphics cards of nVidia and AMD are now serious competitors. There is no official OpenCL support for SPARC processors, while AMD has included x86 support and IBM added PowerPC support (also working on Cell) a few months ago.

Then we have the databases Oracle and MySQL; Oracle depends a lot on Java. While we see experiments speeding up competitor PostgreSQL with GPU power, Oracle might become the slow turtle in a GPU-ruled database world. MySQL has the same development speed and also “bleeding edge” releases, but Oracle might slow down its official support. Expect Microsoft to have SQL Server fully loaded in its next major release.

Even if Oracle/Sun jumped on the train today, expect no OpenCL products from Oracle/Sun before Q2 2011.

OpenCL – the battle, part I


Part I: the hardware companies and operating systems

(Part II will be about programming languages and software companies, part III about the gaming industry.)

OpenCL is new, but already the de facto standard for stream computing; how it got there so fast is somewhat remarkable. A few years ago there were many companies and research groups that saw the power of using the GPU.

And the fight is really not over, since we are talking about a big shift in the supercomputing industry. Just think of IBM's BlueGene, which will lose a lot of market share to nVidia and AMD. Or Intel, which has not acquired a GPU maker as AMD did. Who had expected the market to change this rigorously? If we're honest, we could have seen it coming (looking at the turbulence around PhysX and Havok), but “normally” such new techniques are introduced slowly.

The fight is about market share. For operating systems it matters because users want their movies encoded in 20 minutes, just like their neighbour. For HPC it matters because clusters can now be upgraded for a far lower price than the old-fashioned way allowed; here the battle is mostly between Linux HPC and Windows HPC (which still has a very small market share), but database engines which rely on high-performance hardware and software are also in play.
The most is to be gained in the processor market. The extremely large consumer market has been declining since 2004, since most users do not need more than a netbook and have bought a separate gaming computer for the more demanding games. We no longer see only Intel and AMD, but also IBM's powerful Cell and Power processors, very power-efficient ARM processors, etc. Now that OpenCL makes it more interesting to buy an average processor plus a good graphics card, Intel (and AMD) have no choice but to take up the battle with nVidia.

Background: Why Apple made OpenCL

Short answer: pure frustration. All those different implementations would either get a share of the market or fight to be named the standard; Apple wanted to bet on the right horse and therefore took the lead in creating an open standard. Money would be made by updating software and selling more hardware. For that reason Apple's close partners Intel and nVidia were easily motivated to help develop the standard. Currently Apple's only (public) reasons for giving away such an expensive and specialised project are publicity and staying ahead of the competition. Since it will not be a core business for Apple, it does not need to stay in the lead, but which companies do?

Acquisitions, acquisitions, acquisitions

No time to lose for the big companies, so they must get the knowledge in-house as soon as possible. Below are some examples.

  • Microsoft: Interactive Supercomputing (22-Sept-2009): made Star-P, software which let users perform scientific, engineering or analytical computations on array- or matrix-based data using parallel architectures such as multi-core workstations, multi-processor systems, distributed-memory clusters or utility/cloud-based environments. This is completely in the field of OpenCL, which Microsoft needs in order to strengthen products such as SQL Server and Windows HPC, as Apple already did with its own software.
  • nVidia: Ageia Technologies (22-Feb-2008): made specialised PC cards and software for calculating complicated physics in games. They made the first commercial product aimed at the masses (gamers). PhysX code could be integrated into nVidia drivers to be used with modern nVidia GPUs.
  • AMD: ATI (24-July-2006): graphics-chip specialist. Although the price was too high, it saved AMD from being bought out by Intel and could even have kept it ahead (had it kept up the pace).
  • Intel: Havok (17-Sept-2007): builds game tools, such as a physics engine; after Ageia was captured, the only good company left to buy (AMD was too late, having spent all its money on ATI). Wind River (4-June-2009): provides embedded systems, development tools for embedded systems, middleware and other types of software; also read this interesting article. Cilk (31-July-2009): offers parallel extensions that are tightly tied into a compiler. RapidMind (19-Aug-2009): created the high-level language Sh, which had an OpenCL backend. Intel has a lead in CPU compilers, which it wants to broaden to multi-core and GPU compilers; it discovered it was in the group of “old-fashioned compiler builders” and had a lot to learn in a short time.

If you know more acquisitions of interest, please let us know.

Winners

Apple, Intel and nVidia are the winners for 2009 and 2010. They currently have the most knowledge in house and have their marketing machines running. nVidia has the best insight into new markets.

Microsoft and the game developers come second; they took the first train by joining the OpenCL consortium and taking it very seriously. At the end of 2010 Microsoft will be at Apple's level of expertise, and then we will see who has the best novelties. The game developers, most of whom already have experience with physics calculations, all got a second chance after having misjudged the physics engines. More on gaming in part III.

AMD is actually currently a big loser, since it does not seem to take it all seriously enough. But AMD can afford to be late, since OpenCL makes switching easy. We hope for the best for AMD, since it has the technology for both CPU and GPU, and many years of experience in both fields. More on the competition between marketing monster nVidia and silent AMD in a blog item next week.

Another possible loser is Linux, which has a lot to lose in the HPC market; BSD-based Apple and Windows HPC can actually win market share now. Expect most code contributions to the community from hardware manufacturers Intel, AMD and nVidia, but also from universities, which do a lot of research on the ever-flexible Linux. In the end it all depends on OpenCL adoption by (Linux-specific) programming languages, which will be discussed in part II.

ARM is a member of the OpenCL working group but does not seem to invest in it; it seems to target another growing market: low-power mobile devices. We will write later about OpenCL and the mobile market, and about why ARM can currently afford to be relaxed about OpenCL.

We hope you now have more insight into this new market; please contact us for more specific information and feel free to leave your comments. Stay tuned for parts II and III, which will be released over the next few weeks.

Starting up


The first steps have been made to create this company. The target is clear: serve customers in the more specialised fields (beyond the usual office applications) where mathematics and metrics are of importance. Most of our customers will have expertise in a specialised field such as mathematics, physics, chemistry or economics, and need extra computing power to use or sell their modelling software.

The hardest target in the current market is finding those customers; since we can provide supercomputers and supercomputing techniques at a tenth of the price, it will be easier for us than for our colleagues in the more expensive segment. Nevertheless the field is, and will remain, a specialised one.

The target date for opening to customers is March 2010. Hard work, but that is what we are made of.