NVIDIA Releases CUDA Toolkit 3.1

Improves performance by 30% thanks to NVIDIA GPUDirect

Just a short while ago, NVIDIA made a significant step forward by unleashing Parallel Nsight, a solution that opens up the GPU computing universe to Visual Studio developers. The Santa Clara, California-based company doesn't seem to think that just one step is enough, though, as it has also brought out a new version of the CUDA Toolkit. Now available for download, the CUDA Toolkit 3.1 is ready to continue playing its part as a globally adopted software development kit.

Ever since 2007, the CUDA Toolkit has been used by many developers to create a wide variety of GPU computing applications. It has been downloaded over 600,000 times in the past two years, and more than 100,000 developers are making the most of CUDA C/C++ programming. OpenCL and DirectCompute are also supported.

The high interest in this software development kit stems from the fact that GPU computing spans the entire IT sector, from the HPC (high-performance computing) field to the consumer market. The other major asset of the CUDA Toolkit is that it supports not just Windows, but also Mac OS and Linux. Not only that, but it can even be used from Java, Python and .NET applications.

The CUDA Toolkit supports all CUDA-capable GPUs, DirectX 11-class parts as well as DirectX 10 ones. The latest version of the kit, 3.1, adds Fermi support for 16-way concurrency, which allows up to 16 kernels to execute on the GPU at the same time, as well as several other features. The list includes start/stop profiling, support for function pointers and recursion, runtime/driver interoperability and a 64-bit runtime for Mac OS.
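In practice, the 16-way concurrency is exposed through CUDA streams: kernels launched into different streams may overlap on a Fermi GPU. A minimal sketch of the pattern might look like the following (the kernel and buffer names are illustrative, not part of the toolkit):

```cuda
#include <cstdio>

// Trivial kernel used only to give each stream some work to do.
__global__ void scale_kernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int kStreams = 16;   // Fermi can run up to 16 kernels concurrently
    const int n = 1 << 20;
    cudaStream_t streams[kStreams];
    float *buffers[kStreams];

    for (int s = 0; s < kStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buffers[s], n * sizeof(float));
    }

    // Kernels placed in distinct streams are allowed to overlap on Fermi;
    // on earlier GPUs they would simply run back to back.
    for (int s = 0; s < kStreams; ++s)
        scale_kernel<<<(n + 255) / 256, 256, 0, streams[s]>>>(buffers[s], n);

    cudaThreadSynchronize();   // wait for all streams to finish

    for (int s = 0; s < kStreams; ++s) {
        cudaFree(buffers[s]);
        cudaStreamDestroy(streams[s]);
    }
    return 0;
}
```

Whether the kernels actually overlap depends on how many resources each one consumes; small kernels that leave multiprocessors idle benefit the most.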

Nevertheless, the most important addition is the GPUDirect feature, which speeds up data transfers considerably. GPUDirect does two things. First, it accelerates MPI communication of CUDA memory over InfiniBand (supported by QLogic and Mellanox). Second, it gives third-party devices direct access to CUDA memory itself. All in all, NVIDIA GPUDirect can push performance up by as much as 30 percent and is supported by both Tesla and Quadro products.
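To see where the speedup comes from, consider the data path of a typical MPI send of GPU data. Without GPUDirect, data copied from the device into a CUDA-pinned host buffer must be copied again into a buffer registered with the InfiniBand driver before it can go on the wire; with GPUDirect, the CUDA and InfiniBand drivers can share one pinned buffer, eliminating that extra host-side copy. The following sketch shows the shared-buffer path under that assumption (the function name is hypothetical):

```cuda
#include <mpi.h>

// Sketch only: sends n floats of GPU data to MPI rank 'dest'.
// 'h_pinned' is a host buffer allocated with cudaMallocHost(); with
// GPUDirect, the InfiniBand stack can transmit from this same buffer,
// so no second host-to-host staging copy is needed.
void send_gpu_buffer(const float *d_data, float *h_pinned,
                     int n, int dest) {
    cudaMemcpy(h_pinned, d_data, n * sizeof(float),
               cudaMemcpyDeviceToHost);
    MPI_Send(h_pinned, n, MPI_FLOAT, dest, 0, MPI_COMM_WORLD);
}
```

The claimed 30 percent gain comes from removing the redundant memcpy and the associated CPU overhead on every message, which matters most for communication-heavy MPI applications.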

The GPU maker did not spend all its resources on just perfecting its platform, of course. In addition to the actual development efforts, the company made a point of popularizing it and even setting up learning programs. At present, over 350 universities, such as Cambridge, Harvard, the National Taiwan University, the Tokyo Institute of Technology and the Chinese Academy of Sciences, teach GPU computing on the CUDA architecture. Now, the outfit is expanding its efforts by launching the CUDA Certification Program. As for the CUDA Toolkit 3.1, it has already been made available on the company's website.
