The company officially acknowledges it has a lot more to learn about GPU compute

Sep 2, 2012 23:21 GMT  ·  By

Intel’s work on the Larrabee project and the succeeding Xeon Phi accelerator cards has cost the company billions of dollars and lasted almost a decade. Now, the architecture is finally ready to fight for a place in the GPU compute market.

During this year’s Hot Chips conference in Cupertino, California, the world’s largest semiconductor company shared the final details on the Xeon Phi cards, and it seems it achieved a lot for a first-generation product.

On the other hand, not all is rosy for Intel and its Xeon Phi project.

Intel’s problem is best described by its own presentation slides: the company is comparing its new and very advanced architecture against last year’s technology from AMD and Nvidia, and still can’t show any impressive results.

It is widely known that AMD’s VLIW architecture was quite handicapped when it came to GPU compute work, while Nvidia’s Tesla M2090 accelerators, although considerably better, were launched a year ago.

Even so, these are the only examples Intel has found in the famous Top500 list of supercomputers that could allow it to present its new Xeon Phi cards in an acceptable light.

Intel’s performance-per-watt improvement measures just 0.7% compared with a server using VLIW AMD cards, and 9% compared with a server using Nvidia’s Tesla M2090 cards.

We’re quite surprised that AMD has the better solution in GPU compute performance per watt, especially given that we’re talking about a VLIW design. Overall, though, Intel’s Xeon Phi showing is quite disappointing.

Despite Intel having a much better manufacturing technology, with Xeon Phi chips built on a 22nm process, plus some of the world’s best software optimization and compiler support, its results are merely comparable with AMD’s and Nvidia’s year-old technology.

Photo Gallery (2 Images)

Intel's Xeon Phi Processor
Intel's Xeon Phi Hot Chips 2012 Presentation