International treaties make such modeling efforts indispensable

Jun 6, 2012 08:04 GMT
Employees at Lawrence Livermore National Laboratory work on a high-performance computer

Investigators from Purdue University and the National Nuclear Security Administration (NNSA) are currently developing a series of simulations depicting what happens during a nuclear explosion at unprecedented resolution.

The performance of nuclear weapons can be simulated in precise molecular detail within these new models, the team explains. The NNSA is a semi-autonomous agency within the US Department of Energy (DOE), responsible for nuclear security-related activities.

International treaties signed between various nations forbid signatories from testing nuclear weapons, so supercomputer simulations are the only way to gain more data on what would happen during an explosion. This information is critical for national defense, US officials believe.

But using supercomputers for this application raises a series of problems related to the accuracy and reliability of the simulations themselves, explains Saurabh Bagchi, an associate professor in Purdue's School of Electrical and Computer Engineering.

In the new study, the experts were able to conduct ultra-precise simulations of molecular-scale interactions occurring during nuclear explosions by eliminating a series of bottlenecks that prevent supercomputers from handling such complex processes.

“Due to natural faults in the execution environment there is a high likelihood that some processing element will have an error during the application's execution, resulting in corrupted memory or failed communication between machines,” Bagchi explains.
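To see why such errors are practically unavoidable at this scale, consider a rough back-of-the-envelope estimate: even if each individual process has only a tiny, independent chance of hitting a fault during a run, the chance that at least one of hundreds of thousands of processes fails approaches certainty. The per-process fault probability in the sketch below is an assumed, illustrative value, not a figure from the study.

```python
# Rough illustration of why faults become near-certain at scale.
# The per-process fault probability p is an assumed, illustrative value;
# the article does not give actual failure rates.

def prob_any_fault(num_processes: int, p_single: float) -> float:
    """Probability that at least one of num_processes hits a fault,
    assuming independent faults with per-process probability p_single."""
    return 1.0 - (1.0 - p_single) ** num_processes

if __name__ == "__main__":
    p = 1e-5  # assumed chance that a single process fails during one run
    for n in (1_000, 100_000, 1_000_000):
        print(f"{n:>9} processes -> P(at least one fault) = {prob_any_fault(n, p):.3f}")
```

With these assumed numbers, a thousand-process job almost never sees a fault, while a million-process job almost always does, which is why fault tolerance becomes central at supercomputer scale.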

“There are bottlenecks in terms of communication and computation,” he goes on to say. By applying a technique called clustering, the researchers were able to group a large number of processes into a smaller number of equivalence classes, each containing processes with similar traits.
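The article does not describe how the clustering is implemented; the sketch below only illustrates the general idea of collapsing many processes into a few equivalence classes by grouping those whose observed behavior looks similar. The per-process "signature" metrics and the bucketing rule are assumptions made purely for this example.

```python
# Illustrative sketch of grouping processes into equivalence classes.
# The per-process "signature" (here: message count and compute time)
# and the coarse bucketing rule are assumptions for illustration only.

from collections import defaultdict

def equivalence_classes(signatures, bucket=10.0):
    """Group process signatures into classes by rounding each metric into
    coarse buckets; processes sharing a bucket are treated as equivalent
    and can be monitored or analyzed together instead of individually."""
    classes = defaultdict(list)
    for pid, (msg_count, cpu_time) in signatures.items():
        key = (round(msg_count / bucket), round(cpu_time / bucket))
        classes[key].append(pid)
    return classes

if __name__ == "__main__":
    # Hypothetical per-process metrics: {process_id: (messages_sent, cpu_seconds)}
    sigs = {0: (98, 41.2), 1: (103, 39.8), 2: (1020, 388.0), 3: (1018, 391.0)}
    for key, members in equivalence_classes(sigs).items():
        print(f"class {key}: processes {members}")
```

Reasoning about a handful of classes rather than every process individually is what keeps the bookkeeping for communication and fault handling from itself becoming a bottleneck.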

Bagchi says this is what ultimately enabled the team to run the complex simulations it intended to on a supercomputer. The new research was funded by the US National Science Foundation.

A paper detailing the work will be presented at the Annual IEEE/IFIP International Conference on Dependable Systems and Networks, held June 25-28 in Boston.