Fusion Energy
September 2009

Edgy energy

Huge simulations are illuminating one of the mysteries of fusion energy: how conditions on the edge of a super-hot plasma cloud influence events in the cloud core.

In this poloidal cross-section of a tokamak simulation, turbulence is generated in two places: at the edge and at the center. The center turbulence rotates rapidly and stays confined, but the edge turbulence propagates inward, colliding with the center turbulence and filling the entire volume. Researchers believe this is a key factor in forming H-mode. See the sidebar for a related video.

For almost 30 years, scientists have been unsure how the tail wags the dog in fusion energy reactors.

That is, they’ve struggled to explain why conditions on the edge of a superheated plasma cloud have such a big impact on events in the cloud’s core.

Now researchers working under Department of Energy (DOE) programs have significant clues, thanks to simulations on one of the world’s most powerful computers.

A team of investigators led by C.S. Chang of New York University’s Courant Institute of Mathematical Sciences devised and ran the models. The simulations are among the first to portray high-confinement mode, or H-mode, a phenomenon that’s critical to fusion’s viability as an abundant, clean power source.

Now the group is working on even more accurate models. They’re backed with 20 million computer processor hours from the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. What they learn could have implications for ITER, the largest magnetic fusion experiment to date. ITER, under construction in France, is designed to be the first fusion reactor to produce 10 times more energy than it consumes.

Fusion harnesses the physics that fuels the sun. In magnetic fusion, one of the technology’s most promising forms, hydrogen isotopes are heated and electrified to temperatures hotter than the sun’s interior in a torus – a ring-shaped vacuum chamber called a tokamak. Powerful magnetic fields contain the plasma in the torus’ center, keeping it from touching the walls.

The high temperature and pressure strip electrons from the hydrogen atoms. The resulting ions spin through the circular cavity, fusing and releasing energy.
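The numbers behind that energy release aren’t spelled out here, but the reaction ITER is designed around is deuterium-tritium fusion, whose energy budget is standard textbook physics rather than a result from this research:

```latex
% Deuterium-tritium fusion, the reaction ITER targets (standard values,
% not taken from this article): 17.6 MeV released per fusion event,
% most of it carried off by the neutron.
{}^{2}_{1}\mathrm{D} \;+\; {}^{3}_{1}\mathrm{T} \;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\,(3.5\,\mathrm{MeV}) \;+\; n\,(14.1\,\mathrm{MeV})
```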

To help ITER and the next fusion reactors succeed, scientists must understand what drives H-mode. Without it, heating the plasma core to temperatures necessary for economical fusion energy will be nearly impossible.

German fusion scientists discovered H-mode in the early 1980s. They found that with enough core heating the plasma jumps into a different state – hot and turbulent in the middle, with a thin outer layer that’s cooler and almost free of turbulence. The increase in core temperature and density dramatically increases the chances for fusion.

“Right at the edge, the plasma temperature gradient forms a pedestal,” Chang says – an abrupt, step-like jump in the curve tracking temperature from the plasma edge to the interior. “They found out that when plasma gets above some critical power, that’s what it wants to do. We call it self-organization – plasma self-organizes into that state.”
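To picture the pedestal, it helps to know how experimentalists usually quantify it – a convention from the fusion literature rather than from this article. Edge profiles are commonly fit with a tanh-shaped step:

```latex
% Common tanh-style pedestal fit (a convention from the experimental
% fusion literature, not from this article): temperature steps up from
% the edge value T_sep by a pedestal height T_ped over a narrow width
% Delta centered at r_mid.
T(r) \;\approx\; T_{\mathrm{sep}} \;+\;
\frac{T_{\mathrm{ped}}}{2}\left[\,1 - \tanh\!\left(\frac{r - r_{\mathrm{mid}}}{\Delta}\right)\right]
```

The jump Chang describes is this step: a large temperature rise packed into a width Δ that is only a small fraction of the plasma radius.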

H-mode opened horizons for fusion, but scientists still are unsure how it works. “Without understanding that, we just extrapolate that ITER will behave the same way,” Chang says. “Our mission is to understand that by large-scale simulation.”

To take on the challenge, the offices of Fusion Energy Sciences (FES) and Advanced Scientific Computing Research (ASCR) in DOE’s Office of Science launched the Center for Plasma Edge Simulation (CPES). Chang is principal investigator for CPES, a collaboration involving a dozen universities and national laboratories under DOE’s Scientific Discovery through Advanced Computing (SciDAC) program.

Center researchers have toiled since 2005 to build computer codes that describe how plasma edges behave in existing and next-generation reactors. Now they’re getting a chance to run those codes on powerful computers, including Jaguar, the Cray XT at Oak Ridge National Laboratory (ORNL). As of summer 2009, Jaguar was rated the fastest computer for unclassified research, with a peak speed of 1.64 quadrillion calculations per second, or 1.64 petaflops. A quadrillion is a 1 followed by 15 zeroes.

The unprecedented simulation is giving researchers a clearer picture of how the tail wags the dog – how H-mode increases core temperature and pressure in mere milliseconds, orders of magnitude faster than energy transport alone could account for.

‘If you want to know how the tail is wagging the dog, you also have to calculate the dog’s wagging.’

“We found it’s not the actual transport of energy,” Chang says. “It’s the turbulence at the edge making nonlocal connections with the turbulence at the core.”

The simulation indicates turbulence starts both at the edge and in the core, where it’s generated by heating. Core turbulence is self-contained and doesn’t propagate outward. Edge-generated turbulence, meanwhile, moves inward – but not outward, so the quiescent outer layer is maintained. The inward-moving turbulence stops when it meets the strongly sheared core turbulence.

As edge turbulence moves into the core it interacts with and drastically alters local turbulence. This nonlocal, nonlinear turbulence interaction boosts the core temperature and maintains its slope, Chang says.

The simulation also showed that large amounts of energy must be poured into the plasma core to maintain H-mode. “Energy flows from the core to the edge,” Chang says. “This energy is fueling this edge pedestal phenomenon and the pedestal is making a special kind of turbulence which is connected to the core turbulence. It’s all self-organization.”

It took close collaboration between physicists, applied mathematicians and computer scientists to get these important results, Chang says. The simulation code is all new, developed under SciDAC in just three years.

The main code is XGC1 (for X-point included Gyrokinetic Code), the first to track a sample of individual particles in the reactor while simulating the entire tokamak, from wall to wall, with realistic geometry and in real time. That means the model includes important features like a magnetic field designed to scrape off escaping edge plasma and divert it to a chamber so it doesn’t damage the main reactor walls.

Other codes (including XGC0, an XGC1 predecessor) cannot simultaneously model small-scale turbulence and the large-scale shift in background temperature. Instead, researchers usually assume the background temperature is fixed. Such “delta-f” models are easier to compute but capture less of the physics.
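In symbols – standard gyrokinetics notation the article doesn’t write out – the delta-f shortcut splits the particle distribution into a fixed background plus a small perturbation, and evolves only the perturbation:

```latex
% Delta-f decomposition (standard gyrokinetics notation, not written
% out in the article): f_0 is a fixed background distribution and only
% the small perturbation delta-f is evolved in time.
f(\mathbf{x},\mathbf{v},t) \;=\; f_0(\mathbf{x},\mathbf{v}) \;+\; \delta f(\mathbf{x},\mathbf{v},t),
\qquad |\delta f| \ll f_0
% A full-f code like XGC1 evolves f itself, so the background profiles
% and the turbulence are free to change together.
```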

“We decided that’s not really the way to go,” Chang says. “If you want to know how the tail is wagging the dog, you also have to calculate the dog’s wagging.”

XGC1 is a full-distribution-function (full-f), particle-in-cell code. Full-f means it doesn’t impose a separation of scales between the background temperature and the turbulent plasma dynamics. It’s also a “first-principles” code – one that operates on the fundamental laws of physics, making few assumptions.

The code uses the generalized Vlasov equation to describe plasma dynamics. The equation tracks the position, velocity and other properties of electrons and ions in six dimensions – three of position and three of velocity. That’s still too many for even the most powerful computer to handle, Chang says. So the researchers reduced the equation to five dimensions by using charged rings to approximate the particles’ gyroscopic motions.
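Written out – the article gives no equations, so this is the standard schematic form – the six-dimensional kinetic equation and its five-dimensional gyrokinetic reduction look like this:

```latex
% Collisionless Vlasov equation (standard form; the article itself
% presents no equations): f evolves in 3 position + 3 velocity dimensions.
\frac{\partial f}{\partial t}
  + \mathbf{v}\cdot\nabla f
  + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = 0
% Gyro-averaging replaces each particle with a charged ring, trading
% (x, v) in 6D for the gyrocenter position R, parallel velocity v_par
% and magnetic moment mu -- five dimensions, since mu is conserved:
f = f(\mathbf{R},\, v_{\parallel},\, \mu,\; t)
```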

“Particle in cell” means XGC1 is capable of tracking individual ions and electrons in the plasma cloud. But because tracking every particle is computationally impossible, the model portrays a representative sample of as many as 14 billion marker particles. That’s still an enormous number and, along with the demands of modeling realistic geometry in a full-f, first-principles code, makes the calculation tremendously demanding.
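To make the particle-in-cell idea concrete, here is a minimal one-dimensional electrostatic sketch in C. It is ours, not XGC1’s – the real code works in five-dimensional gyrokinetic coordinates with realistic tokamak geometry – but the deposit/solve/push cycle is the same basic pattern:

```c
/* Minimal 1D electrostatic particle-in-cell sketch (illustrative only;
 * XGC1 itself is far more elaborate: 5D gyrokinetics, tokamak geometry).
 * Marker particles deposit charge on a grid, a field solve yields E,
 * and each particle is pushed by the interpolated field. */
#include <stdio.h>
#include <stdlib.h>

#define NP 100000   /* marker particles (XGC1 runs used up to ~14 billion) */
#define NG 128      /* grid cells */
#define L  1.0      /* domain length */
#define DT 1e-3     /* time step */

int main(void) {
    static double x[NP], v[NP];   /* particle positions and velocities */
    double rho[NG], efield[NG];   /* grid charge density, electric field */

    for (int p = 0; p < NP; p++) {   /* start with a uniform, cold plasma */
        x[p] = L * rand() / (double)RAND_MAX;
        v[p] = 0.0;
    }

    for (int step = 0; step < 100; step++) {
        /* 1. Deposit: accumulate particle charge onto the grid. */
        for (int g = 0; g < NG; g++) rho[g] = 0.0;
        for (int p = 0; p < NP; p++)
            rho[(int)(x[p] / L * NG) % NG] += 1.0 / NP;

        /* 2. Field solve: a crude finite difference stands in here for
              the real Poisson solve. */
        for (int g = 0; g < NG; g++)
            efield[g] = rho[(g + 1) % NG] - rho[(g + NG - 1) % NG];

        /* 3. Push: advance each particle in the interpolated field. */
        for (int p = 0; p < NP; p++) {
            v[p] += DT * efield[(int)(x[p] / L * NG) % NG];
            x[p] += DT * v[p];
            while (x[p] < 0)  x[p] += L;   /* periodic boundaries */
            while (x[p] >= L) x[p] -= L;
        }
    }
    printf("done: %d particles, %d cells\n", NP, NG);
    return 0;
}
```

Each of the three stages parallelizes over particles or grid cells, which is one reason PIC codes map well onto machines like Jaguar.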

Chang compares XGC1 to a sport-utility vehicle capable of modeling the tough terrain of the tokamak interior. “The only problem is, it’s a gas guzzler” that demands enormous amounts of computer time.

Enter Jaguar. Using time allocated by ASCR, Chang and his colleagues have routinely run XGC1 on 15,000 and 30,000 of the Cray’s processor cores. In a test on 149,760 cores (nearly all those available for computing) the code scaled perfectly – meaning the speed at which it ran increased in direct proportion to the number of processing cores used. It’s a significant accomplishment, says ORNL computer scientist Patrick Worley, who led the effort to tune XGC1 for best performance.
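In the usual shorthand – a standard definition rather than one from the article – perfect scaling means parallel efficiency holds at one as cores are added:

```latex
% Strong-scaling parallel efficiency (standard definition, not from the
% article): T(p) is runtime on p cores, p_0 a baseline core count.
E(p) \;=\; \frac{p_0\, T(p_0)}{p\, T(p)}
% Perfect scaling, as reported for XGC1 on 149,760 cores, means
% E(p) stays near 1: doubling the cores halves the runtime.
```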

Some of that tuning focused on minimizing the “overhead” that slows performance when running on almost 150,000 processor cores: time spent sending information between processors, waiting for “slow” processors that take longer to complete their assigned tasks than others, and performing tasks redundantly rather than dividing work between processors.

In particular, researchers addressed performance variability that arises when using explicit message-passing between huge numbers of processors. They reduced the number of processors that send and receive messages via the MPI (Message Passing Interface) library. Instead, the computer scientists used the OpenMP runtime system to divide work between processor cores in the same compute node. Cores within a node share a common memory space. OpenMP enables them to read and write into the shared memory space, eliminating the need to send and receive messages using MPI.
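A minimal sketch of that hybrid pattern in C – illustrative of the structure, not XGC1’s actual code; the variable names and the toy reduction are ours:

```c
/* Minimal hybrid MPI + OpenMP sketch (illustrative only, not XGC1 code):
 * one MPI rank per node exchanges messages, while OpenMP threads inside
 * the node split the local work through shared memory, so cores within
 * a node never need MPI to talk to each other.
 * Compile with, e.g., mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NLOCAL 1000000   /* particles owned by this rank (made-up size) */

int main(int argc, char **argv) {
    int provided, rank, nranks;
    /* Ask for an MPI library that tolerates threaded callers. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    static double energy[NLOCAL];
    for (int i = 0; i < NLOCAL; i++) energy[i] = 1.0;

    double local = 0.0, total = 0.0;
    /* OpenMP: cores in the node split the loop via shared memory --
     * no messages needed for intra-node work. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < NLOCAL; i++)
        local += energy[i];

    /* MPI: only one message-passing call per rank crosses the network. */
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total energy across %d ranks: %g\n", nranks, total);
    MPI_Finalize();
    return 0;
}
```

The design point is that only one collective call per rank touches the network; everything inside a node happens through shared memory.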

“It wasn’t easy to adopt,” Worley says, but “it’s been a win. Running the code on 150,000 processors with MPI alone would have taken significantly longer.”

Other researchers, like ORNL’s Ed D’Azevedo, focused on improving implementations of fundamental algorithms underlying the code. Seung-Hoe Ku of the Courant Institute did much of the hands-on code development and worked closely with applied mathematicians like Mark Adams of Columbia University. Adams helped improve the code’s data locality – reorganizing information in memory to make the code run more efficiently. He also helped rework parts of the code to run in parallel.
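Data-locality work of this kind usually means reorganizing memory layout so loops walk contiguous addresses. A generic illustration in C – our example, not the actual XGC1 change:

```c
/* Generic data-locality illustration (our example, not XGC1's code).
 * Looping over one field of an array-of-structs strides through memory;
 * a struct-of-arrays layout makes the same loop walk contiguous memory,
 * which caches and vector units handle far better. */
#include <stddef.h>

/* Array-of-structs: each particle's fields sit together, so a loop
 * that reads only .weight skips over the rest of every struct. */
struct particle { double x, y, z, vpar, mu, weight; };

double sum_weights_aos(const struct particle *p, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += p[i].weight;     /* stride: sizeof(struct particle) */
    return s;
}

/* Struct-of-arrays: each field is its own contiguous array. */
struct particles { double *x, *y, *z, *vpar, *mu, *weight; };

double sum_weights_soa(const struct particles *p, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += p->weight[i];    /* stride: sizeof(double) -- cache friendly */
    return s;
}
```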

Meanwhile, ORNL’s Scott Klasky and other researchers devised data management strategies that kept the program from bogging down as trillions of bytes of data flowed in and out.

There was at least one major obstacle, Chang says. Errors in the code portraying the large-scale background temperature gradient were propagating, causing the simulation to produce nonsensical results. “It destroys the whole thing when you don’t do the large-scale right. Any small mistake there can destroy the small-scale turbulence” and even make the code crash.

“We were kind of struggling for a couple months about two years ago,” but now the code accurately computes the different scales with few problems. “We succeeded thanks to this great collaboration between computer science, applied math and physics. Without that it would not have been possible.”

Although the previous 150,000-processor run was only a test, Worley says, Chang “is very good at defining demonstration runs that get science done too. There are many things that came out of that experiment – in fact there are many things still coming out of it.”

The team began a full production run on 149,760 Jaguar cores this June. The plan, Chang says, is to understand the electrostatic turbulence dynamics in the whole-volume ITER tokamak with realistic geometry. The entire simulation was expected to take five days running around the clock.

As good as the team’s simulation is, it still depicts just some of what fuels fusion. Several factors drive microturbulence, for instance, but the present model includes only the ion temperature gradient – the variation in ion temperature across different parts of the plasma cloud. Portraying the effects of other turbulence drivers will make the simulation more accurate – but will also demand better algorithms and more computing power.
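The drive the model does include has a standard measure in the fusion literature (again, notation from that literature rather than from this article): the steepness of the ion temperature profile relative to the density profile, with turbulence tending to switch on when the ratio passes a threshold of order one:

```latex
% Ion-temperature-gradient (ITG) drive parameter (standard notation,
% not from the article): L_n and L_Ti are the density and ion-temperature
% gradient scale lengths; ITG turbulence tends to grow once eta_i
% exceeds a critical value of order one.
\eta_i \;\equiv\; \frac{L_n}{L_{T_i}}
  \;=\; \frac{\mathrm{d}\ln T_i/\mathrm{d}r}{\mathrm{d}\ln n/\mathrm{d}r},
\qquad
L_n = -\left(\frac{\mathrm{d}\ln n}{\mathrm{d}r}\right)^{-1},\quad
L_{T_i} = -\left(\frac{\mathrm{d}\ln T_i}{\mathrm{d}r}\right)^{-1}
```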

“The next major step is to add kinetic electron effects to the turbulence,” Chang says, noting that electron simulation demands more computer time than ion simulation. “Without the INCITE award, we are dead.”

The team’s results already are providing insights for engineers and scientists working on ITER and other fusion research. With more physics in the model, Chang and colleagues “can predict what will happen in ITER if they build it this way or that way or change the operating conditions,” such as core containment, Chang says.

“We hope that this kind of first-principles simulation will give them the right start.”