Designers dig out drawing board
in quest for exascale computing
Posted May 14, 2012
The architects of tomorrow’s exascale computers are designing systems that borrow from and contribute to an unlikely source: the consumer electronics industry. The result will be systems exquisitely designed to meet the needs of scientists studying complex systems such as jet engine efficiency, large-scale weather patterns and earthquake models.
These machines won’t just be about a thousand times faster than today’s fastest high-performance computing systems. They’ll also require so much energy to operate that computer architects must completely rethink how these systems will be built. They’re breaking down and redesigning everything from microprocessor interconnects to memory and storage to conserve energy and increase efficiency. In many areas, architects are going back to the drawing board and asking what scientific questions the systems will be used to answer, and then optimizing the machine design to perform those operations.
“Once you choose the specific (scientific) performance metric that you want to improve, then you can start to make huge strides in determining the right hardware and software design tradeoffs to improve energy efficiency and effectiveness,” says John Shalf, who leads research into energy-efficient processors for scientific applications at Lawrence Berkeley National Laboratory. “Unlike the HPC community, which has tended to focus on flops, an energy-sensitive exascale design must use the application as the thing to measure performance.”
Unlike previous generations of large-scale computers, none of the improved performance is expected to come from faster processors, says Kathy Yelick, associate laboratory director for computing sciences at Berkeley Lab. Instead, exascale computing will extend the concept of massive parallelism. Systems will consist of tens of thousands of nodes running in parallel, with each node containing tens of thousands of processor cores.
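To see how that scale of parallelism reaches an exaflop, a back-of-the-envelope sketch helps. The node and core counts below are illustrative numbers chosen to match the article's "tens of thousands" phrasing, and the per-core speed is an assumed figure for a low-power core, not a value from the article:

```python
# Illustrative arithmetic: nodes x cores-per-node x per-core speed.
nodes = 50_000             # "tens of thousands of nodes" (assumed figure)
cores_per_node = 20_000    # "tens of thousands of processor cores" per node (assumed)
total_cores = nodes * cores_per_node

flops_per_core = 1e9       # assume ~1 GFLOP/s per energy-efficient core
peak_flops = total_cores * flops_per_core

print(f"{total_cores:,} cores in total")        # 1,000,000,000 cores in total
print(f"~{peak_flops:.1e} FLOP/s peak")         # ~1.0e+18 FLOP/s peak, i.e. one exaflop
```

Under these assumed numbers, a billion cores each doing a billion floating-point operations per second lands at 10^18 FLOP/s, which is why the performance gain must come from parallelism rather than faster individual processors.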