Makes heterogeneous CPU/GPU computing installations easier to use

Nov 14, 2011 16:12 GMT

NVIDIA isn't lying low in supercomputing today; determined to get its GPU computing modules recognized as widely as possible, the company has partnered with Cray, PGI and CAPS.

More specifically, NVIDIA, Cray Inc., the Portland Group (PGI), and CAPS Enterprise have formed a consortium to promote OpenACC.

If the name doesn't ring a bell, that's simply because the standard didn't exist until now.

OpenACC is a new standard meant to make it easier for programmers to use heterogeneous CPU/GPU computing systems.

It works by letting programmers add “directives” to their code, hints that tell the compiler which regions can be accelerated.

This spares programmers from having to restructure or modify the code itself.

In other words, the detailed work of mapping the computation to the GPU compute accelerator is done by the compiler.
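To illustrate, here is a minimal sketch (not taken from the announcement) of what an OpenACC-annotated loop can look like in C; the saxpy routine, array sizes and data clauses are hypothetical, and the only addition to otherwise ordinary code is the directive line:

#include <stdlib.h>

/* Hypothetical SAXPY-style loop: the single pragma asks an
   OpenACC-capable compiler to offload the loop to an accelerator,
   copying x in and y both ways; the loop body itself is unchanged. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc kernels copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);   /* runs on the GPU when one is available */

    free(x);
    free(y);
    return 0;
}

A compiler that understands OpenACC, such as PGI's, generates the GPU mapping from the directive, while a compiler that does not simply ignores the pragma and builds an ordinary CPU version, which is what makes the approach portable.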

Chemistry, biology, data analytics, intelligence, and weather and climate modeling, among other fields, stand to benefit from this.

OpenACC becomes even more appealing when one considers that the directives provide a common code base compatible across multiple platforms and multiple vendors.

So far, developers have reported application performance increases of two to ten times within just two weeks of starting to use the directive-enabled compilers.

“OpenACC represents a major development for the scientific community,” said Jeffrey Vetter, joint professor in the Computational Science and Engineering School of the College of Computing at the Georgia Institute of Technology.

“Programming models for open science by definition need to be flexible, open and portable across multiple platforms; OpenACC is well designed to fill this need. It provides a valuable new tool to empower the vast numbers of domain scientists who could benefit from application acceleration, but who may not have the funding or expertise to port their code to emerging architectures.”