The online services provider chooses GPU computing for its Elastic Compute Cloud

Oct 4, 2012 17:21 GMT

In spite of all the fuss Intel raised over the Xeon Phi and how much more convenient it supposedly is than GPU computing modules, Amazon decided to use GPU modules anyway.

A recent development suggests that Amazon may soon order Tesla K20 modules from NVIDIA for the “HPC on Amazon EC2” Elastic Compute Cloud service.

The reason for this assumption is simple: according to VR-Zone, the provider of online services has already ordered a batch of Tesla K10 accelerators.

And by “batch” we don't mean just a small bunch, but a full 10,000 of them, all eager to show what single-precision compute power is all about.

Whether or not it was a coincidence is beside the point: Amazon needed single-precision performance, not double-precision data processing.

The Tesla K10, a GPGPU adapter with two GK104 graphics processors and 8 GB of GDDR5 memory, can deliver exactly that.
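To make the single-precision point concrete, here is a minimal, illustrative CUDA sketch of the kind of float-based workload a dual-GK104 board is built for: a SAXPY kernel that does all of its arithmetic in 32-bit floats. This is not from the article and says nothing about Amazon's actual workloads; it is only a generic example, and every name in it (saxpy, hx, dy, and so on) is made up for illustration.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Single-precision SAXPY: y = a*x + y, computed entirely in 32-bit floats.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Host buffers
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Device buffers
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // One thread per element, 256 threads per block
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f\n", hy[0]);           // expect 4.0

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

Swapping the float types and literals for double would turn this into the double-precision style of workload that the K10 is comparatively weak at and that the K20 targets.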

Obviously, this is a big win for NVIDIA, and for all other promoters of GPU computing for that matter (AMD, Qualcomm and Vivante).

The sum NVIDIA received wasn't as high as some may think, though. In fact, Amazon played its cards well and negotiated so stubbornly that it got a bigger discount than one might have expected.

The retail price of the Tesla K10 is $4,999 (4,999 Euro), but large-volume orders, like those for supercomputers, come with a per-unit discount. That would normally mean $2,499 - $2,999 per board (2,499 - 2,999 Euro).

Amazon managed to squeeze the price down to $1,499 - $1,799 per card, which boils down to 15-18 million US dollars for the entire 10,000-unit order.
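As a quick check, the order total follows directly from those per-card figures:

\[
10{,}000 \times \$1{,}499 = \$14{,}990{,}000 \approx \$15\text{ million}, \qquad
10{,}000 \times \$1{,}799 = \$17{,}990{,}000 \approx \$18\text{ million}
\]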

Amazon may order some Tesla K20 adapters in the future, as it still has a specific set of users that require double-precision performance. Depending on how much longer the 28nm supply shortage at TSMC lasts, Tesla shipments could leave consumer-oriented video boards in short supply.