Exascale computers required to find solutions to global problems

Nov 17, 2009 09:18 GMT
This is the Kraken Cray XT5 supercomputer, located in the Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA

China recently unveiled a military supercomputer that, according to Chinese officials, would have ranked fourth in the most recent TOP 500 list, suggesting that supercomputing technology may be advancing faster than expected. At this year's supercomputing conference in Portland, Oregon, the 22nd annual edition of the event, the main topic will be how to build exascale systems capable of tackling the problems the planet faces today.

Supercomputers are machines generally used for high-level, three-dimensional simulations and visualizations, not unlike those in 3D computer games. They run endless 'what-if' scenarios to test ideas and the consequences those ideas would have if put into practice, standing in for the live testing of new types of fuels, architectures, chemical processes and more. What the 11,000 people attending the supercomputing conference have to work out is how to build systems good enough to find viable ways of mitigating climate change, creating biochemical fuels from weeds and developing long-lasting car batteries, among other things.

Current supercomputers aren't even close to exascale. The current leader, the Cray XT5 system known as Jaguar at the Oak Ridge National Laboratory, tops out at 2.3 petaflops, using 224,256 CPU cores built from six-core AMD Opteron chips. Nevertheless, according to Buddy Bland, project director at the Oak Ridge Leadership Computing Facility, which houses Jaguar, the U.S. Department of Energy doesn't consider it even remotely powerful enough and is already putting together plans for a machine 1,000 times more powerful.

The goal is a system able to run high-resolution climate models and to simulate bio-energy production, smart-grid development and fusion energy designs. The current thinking calls for roughly 100 million CPU cores working in unison, an ambitious figure by any measure.
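A rough back-of-envelope calculation, sketched below in Python and assuming per-core performance stays close to Jaguar's (an assumption made purely for illustration, not a statement about any planned design), shows where that 100-million-core figure comes from:

```python
# Back-of-envelope sketch: how many Jaguar-class cores an exaflop machine
# would need, assuming per-core performance does not change.
# The input figures are the ones quoted in this article.

jaguar_flops = 2.3e15      # Jaguar: 2.3 petaflops
jaguar_cores = 224_256     # six-core AMD Opteron chips

flops_per_core = jaguar_flops / jaguar_cores   # roughly 10 gigaflops per core

exaflop = 1.0e18                               # 1,000 petaflops
cores_needed = exaflop / flops_per_core

print(f"Per-core performance: {flops_per_core / 1e9:.1f} gigaflops")
print(f"Cores needed for one exaflop: {cores_needed / 1e6:.0f} million")
# Works out to about 10 gigaflops per core and roughly 98 million cores,
# in line with the 100-million-core figure cited above.
```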

Jaguar uses seven megawatts of power, or seven million watts. An exascale system built from CPU processing cores alone might draw two gigawatts, or two billion watts, says Dave Turek, IBM's vice president of deep computing. "That's roughly the size of [a] medium-sized nuclear power plant. That's an untenable proposition for the future," he adds.
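To put Turek's estimate in perspective, a naive linear extrapolation of Jaguar's power draw (again only a sketch, using the article's figures and ignoring the efficiency gains any real design would need) lands in the same ballpark:

```python
# Naive linear extrapolation of power draw from Jaguar to an exaflop system,
# assuming energy efficiency (flops per watt) does not improve at all.
# Figures come from the article; actual designs would have to do far better.

jaguar_flops = 2.3e15          # 2.3 petaflops
jaguar_power_watts = 7.0e6     # seven megawatts

flops_per_watt = jaguar_flops / jaguar_power_watts   # ~330 megaflops per watt

exaflop = 1.0e18
power_needed_watts = exaflop / flops_per_watt

print(f"Efficiency: {flops_per_watt / 1e6:.0f} megaflops per watt")
print(f"Power at exascale: {power_needed_watts / 1e9:.1f} gigawatts")
# Comes out to roughly 3 gigawatts with no efficiency improvements, which is
# why a CPU-only exascale design drawing on the order of two gigawatts is
# called untenable.
```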

Looking at history, progress in supercomputing has not been very quick. The first teraflop-capable system, able to perform one trillion calculations per second, was ASCI Red, built at Sandia National Laboratories back in 1997. As impressive a breakthrough as that was at a time when the technology was arguably still in its infancy, no comparable leap followed immediately. The one-petaflop threshold, 1,000 trillion (one quadrillion) sustained floating-point operations per second, was only crossed in 2008 by IBM's Roadrunner at the Los Alamos National Laboratory.

Considering that those two breakthroughs came more than a decade apart, and that the goal this time is 1,000 petaflops, it is questionable whether such a supercomputer can be built by 2018. Moreover, 2018 is eight years away, and in that time the world's current problems will probably grow or be replaced by new ones that even those machines may struggle to solve. All we can do is wait and see, hoping that the 1,000-petaflop dream comes true as soon as possible. Humanity will just have to be careful not to blow up the planet in the meantime.