It seems that CUDA is suddenly undesirable, unless Nvidia can double its performance

Jun 27, 2012 10:01 GMT

Intel’s Xeon Phi seemed like a doomed architecture back when Intel was attempting to compete in graphics with the likes of ATI and Nvidia. The company even scrapped the Larrabee project, but all that work was not thrown away.

Intel redirected its many-core architecture towards high-performance computing.

We believe that much of the work was done on the software side, as Intel’s main purpose was to make software integration much easier for HPC users.

The idea is a good one and the result is practical, although we’d rather have anything but x86 inside.

For now, if Intel’s x86-based Xeon Phi delivers the better results, it deserves all the credit.

Nvidia’s main problem is that CUDA takes a whole lot of work to program for, and that its new Kepler architecture comes up short where raw computing power is concerned.

Therefore, a science center must pay for thousands of man-hours to port an application’s source code from x86 to CUDA just to take advantage of Nvidia’s Tesla. This is the added cost of choosing an Nvidia Tesla accelerator card for your server or supercomputer.

Not only does the center have to pay for the extra man-hours of coding, but the deployment and first productive use of the server are also delayed by weeks or even months.

Intel brags that porting your code to its MIC accelerators will take no more than a few days.
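
To put the two claims side by side, here is a minimal sketch of our own (not code from either vendor): a trivial SAXPY-style loop as it sits in a typical x86 code base, followed by roughly what the same loop becomes once ported to CUDA. The function names and launch parameters are purely illustrative; the point is the device-memory allocation, host-to-device copies and kernel-launch boilerplate that the rewrite drags in, multiplied across an entire scientific application.

#include <cuda_runtime.h>
#include <cstdio>

// The loop the science center already has: plain C with an OpenMP
// directive, which an x86 compiler (and, per Intel's pitch, the Xeon Phi
// toolchain) can parallelize with little more than a recompile.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// The CUDA port of the same loop: the loop body becomes a __global__
// kernel indexed by block and thread IDs...
__global__ void saxpy_kernel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// ...and the caller now owns device allocation, data transfers, launch
// geometry and error handling.
void saxpy_cuda(int n, float a, const float *host_x, float *host_y) {
    float *dev_x = nullptr, *dev_y = nullptr;
    size_t bytes = (size_t)n * sizeof(float);

    cudaMalloc((void **)&dev_x, bytes);
    cudaMalloc((void **)&dev_y, bytes);
    cudaMemcpy(dev_x, host_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_y, host_y, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy_kernel<<<blocks, threads>>>(n, a, dev_x, dev_y);

    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("saxpy_kernel launch failed: %s\n", cudaGetErrorString(err));

    cudaMemcpy(host_y, dev_y, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev_x);
    cudaFree(dev_y);
}

A handful of lines of arithmetic turns into dozens of lines of plumbing, and a real application contains many such loops. That is where the thousands of man-hours come from, and why Intel’s claim of a few days’ porting effort resonates with HPC buyers.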

With the performance of the current Xeon Phi almost equal to that of Nvidia’s Kepler-based Tesla, a server owner will think twice before sticking Tesla cards inside, given the additional funds needed for all the software porting and optimization work.

So what is left for Nvidia to do? If only Kepler were faster.

There is one faster card where double-precision (FP64) performance is concerned, and that is one built on AMD’s Tahiti GPU.

Find out more in our second part.

Photo Gallery (2 Images): Intel's Xeon Phi logo · Intel's Xeon Phi accelerator card