Sep 22, 2010 08:40 GMT

For a long time, NVIDIA's video boards have been the only hardware on which CUDA applications could run at their full potential, but this may no longer be the case once the CUDA x86 compiler arrives.

The main purpose of the NVIDIA CUDA architecture was to allow computationally intensive kernels to be offloaded from the CPU to the GPU.

This allowed them to take advantage of the massively parallel processing capabilities of said graphics processing units.
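
As a rough illustration (not taken from NVIDIA's or PGI's materials), the snippet below shows the kind of CUDA C kernel the article is talking about: the compute-intensive loop is written once as a kernel and then launched on the GPU, where thousands of threads each handle one element. All names and sizes here are arbitrary.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Kernel: each GPU thread computes one element of the result. */
    __global__ void vector_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                  /* arbitrary problem size */
        size_t bytes = n * sizeof(float);

        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        float *d_a, *d_b, *d_c;
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        /* The heavy loop is offloaded to the GPU: 256 threads per block,
           enough blocks to cover all n elements. */
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", h_c[0]);          /* expect 3.000000 */

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }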

On the other hand, this also means that CUDA applications are not easy to run on systems without NVIDIA GPUs.

To remedy this, the Santa Clara, California-based outfit, working with compiler maker The Portland Group (PGI), is preparing the CUDA x86 compiler, which should let said apps use x86 CPUs instead.

Basically, the applications will use the multiple cores of AMD and Intel CPUs, as well as their streaming SIMD (Single Instruction, Multiple Data) capabilities.
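
Neither company has detailed here exactly how the x86 back end maps the CUDA model onto a CPU, but a common way to picture it is that each thread block becomes a loop running on one core, with the per-thread arithmetic vectorized across SIMD lanes. The plain-C sketch below is only such a conceptual equivalent of the vector_add() kernel above, not PGI's actual output.

    /* Conceptual CPU equivalent of the vector_add() kernel above.  A CUDA
       x86 compiler could, in principle, lower the kernel into a loop like
       this, split the iteration range across CPU cores, and rely on SSE
       auto-vectorization for the inner arithmetic.  Illustration only. */
    void vector_add_cpu(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }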

"CUDA C for x86 is a perfect complement to CUDA Fortran and PGI's optimizing parallel Fortran and C compilers for multi-core x86," said Douglas Miles, director, The Portland Group.

"It's another important element in our on-going strategy of providing HPC programmers with development tools that give PGI users a full range of options for optimizing compute-intensive applications, while allowing them to leverage the latest technical innovations from AMD, Intel and Nvidia,” he added.

No exact release date was given for this solution, but its maker does intend to demonstrate it at the SC10 Supercomputing conference in November.

"In less than three years, CUDA has become the most widely used massively parallel programming model," said Sanford Russell, general manager of GPU Computing software at NVIDIA.

"With the CUDA for x86 CPU compiler, PGI is responding to the need of developers who want to use a single parallel programming model to target many core GPUs and multi-core CPUs," he concluded.