Shaping exascale computing
so it’ll go easy on the juice
Posted November 14, 2012
Simulation of electronic charge density of a lithium oxide nanoparticle.
Powering today’s petaflops computers – capable of quadrillions of calculations per second – costs $5 million to $10 million a year. Built with current technology by linking a million teraflops processors (each capable of trillions of calculations per second), an exascale system a thousand times more powerful would draw more than a gigawatt of electricity and cost at least $1 billion a year, says William Harrod, Research Division Director of the Department of Energy’s Office of Science Advanced Scientific Computing Research (ASCR) program.
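A back-of-envelope calculation shows where the billion-dollar figure comes from. The sketch below assumes an average industrial electricity rate of roughly $0.10 per kilowatt-hour – an illustrative figure, not one quoted in the article:

```python
# Rough check of the exascale electricity-cost estimate.
# Assumption (not from the article): ~$0.10/kWh average rate.
power_mw = 1000            # > 1 gigawatt, per Harrod's projection
rate_per_kwh = 0.10        # assumed average price, dollars per kWh
hours_per_year = 24 * 365  # 8,760 hours

# 1 MW = 1,000 kW, so cost = kW * $/kWh * hours
annual_cost = power_mw * 1000 * rate_per_kwh * hours_per_year
print(f"Annual electricity cost: ${annual_cost / 1e9:.2f} billion")
```

At that assumed rate, a sustained gigawatt comes to nearly $0.9 billion a year in electricity alone, consistent with the "at least $1 billion" figure once the draw exceeds one gigawatt.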
Those costs are a nonstarter, says Thuc Hoang, manager of high-performance computing (HPC) development and operations for the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing program. “There’s no way we can increase our electricity payment by 50 times or 100 times.”
Even optimistic projections from current R&D leave a factor-of-five gap between expected power efficiency and what’s necessary for an exascale system to run on around 20 megawatts, DOE’s target. It will take improvements in nearly every part of the computing stack – hardware, system software, communications, storage, algorithms, equipment cooling and more – to even come close.
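The size of that gap follows from simple arithmetic on the figures above; the derived numbers below are illustrative calculations, not quoted values:

```python
# Arithmetic behind the efficiency gap (inputs from the article;
# derived figures are simple multiplication, not quoted values).
target_mw = 20           # DOE's exascale power target
gap_factor = 5           # optimistic projected shortfall
naive_scaling_mw = 1000  # ~1 GW from scaling today's technology

projected_mw = target_mw * gap_factor  # best-case projection
print(f"Optimistic projection: ~{projected_mw} MW "
      f"({projected_mw // target_mw}x the target)")
print(f"Naive scaling of current parts: ~{naive_scaling_mw} MW "
      f"({naive_scaling_mw // target_mw}x the target)")
```

In other words, even the best-case research path lands around 100 megawatts – five times the target – while simply scaling today’s technology would miss it by a factor of roughly fifty.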
“Computing is now at a critical crossroads,” according to Harrod, who is delivering a keynote talk on the topic this week in Salt Lake City at SC12, the annual supercomputing conference. “We can no longer proceed down the path of steady but incremental progress to which we have become accustomed. Thus, exascale computing is not simply an effort to provide the next level of computational power by creative scaling up of current petascale computing systems. New architectures will be required.”