Message passing evolves to meet data-hungry applications
Posted June 14, 2011
Computing’s rapid evolution demands equally fast changes in software. Users want programs that run on any machine, from a laptop to a massively parallel supercomputer. But they don’t want to have to be computer scientists to make it work.
To do that, says Argonne National Laboratory’s Ewing “Rusty” Lusk, you need a standard so that the same parallel programs can run on a wide range of computers. The Message Passing Interface (MPI) standard aims for that goal, but it’s a moving target. MPI must adapt to the times, and the times are always changing.
MPI is necessary because a distributed-memory system – one in which a computer has multiple processors, each with its own local memory – requires a network to share data among processors. Message passing moves data between the local memories of different processors.
Multiple MPI implementations exist. One of them is MPICH, where “CH” stands for chameleon – MPI that adapts to its surroundings, which here means a computer and its operating system. (See sidebar, “Changing a chameleon’s color.”) Its latest iteration, MPICH2, is used by some of the top HPC companies.
For example, Microsoft’s Windows HPC Server includes an MPI implementation called Microsoft MPI that is based on Argonne’s MPICH2. Cray, the supercomputing company, also uses MPICH2 for its latest XE6 systems, “and (it) is scaling to over 220,000 cores on the Cray XT5 at Oak Ridge National Laboratory,” says Barry Bolding, Cray products division vice president.
Lusk, who leads the MPI standard implementation team at Argonne, notes that “MPICH2’s real impact is that it has been taken up by the parallel computer vendors and incorporated into their system software. Thus, anyone who runs a parallel program on IBM, Cray, Intel or other parallel computers is using MPICH2, even though they might not know it.”