Machine-learning technique could improve fusion energy outputs

By Neal Singer

Photography By Randy Montoya

Thursday, October 08, 2020

Building a better reactor by modifying its walls

COOL FUSION — Sandia machine learning and fusion researcher Aidan Thompson considers the future from the shelter of the Sandia “found art” piece titled Starburst. From 1980 to 1986, the structure was the power flow lines and target chamber of PBFA 1, Sandia’s earliest major fusion attempt.

Machine-learning techniques, best known for teaching self-driving cars to stop at red lights, may soon help researchers around the world improve their control over the most complicated reaction known to science: nuclear fusion.

In a typical fusion reaction, hydrogen atoms are heated until they form a gaseous cloud called a plasma, which releases energy as the particles collide and fuse. Getting these reactions under better control could allow the fusion power plants of the future to generate huge amounts of environmentally clean energy.

“The connection between machine learning and fusion energy is not obvious,” said Sandia researcher Aidan Thompson, principal investigator for a $2.2 million, three-year DOE Office of Science award to make that connection. “Simply put, we have pioneered machine-learning’s use to improve simulations of the reactor’s wall material as it interacts with the plasma. This has been beyond the scope of atomic-scale simulations of the past.”

The results are expected to suggest procedural or structural modifications that improve nuclear energy output, he said.

Modeling nuclear fusion

Machine learning is powerful because it uses mathematical and statistical means to infer general rules from examples, rather than analyzing every piece of data in a category. For example, only a small number of dog photos are needed to teach a recognition system the concept of “dogginess” — in other words, “this is a dog” — rather than scanning every dog photo in existence.

Sandia’s machine-learning approach to nuclear fusion works the same way, but on a far more complicated problem.

“It is not a trivial problem to physically observe what is going on within a reactor’s walls as these structures are internally bombarded with hydrogen, helium, deuterium and tritium as parts of a super-heated plasma,” Aidan said.

He described components of the circling plasma striking and altering the composition of the retaining walls, and heavy atoms dislodging from the struck walls and altering the plasma. Reactions take place in nanoseconds, at temperatures as hot as the sun. Trying to modify components using trial and error to improve outcomes is extraordinarily laborious.

Machine-learning algorithms, on the other hand, use computer-generated data without direct measurements from experiments and can yield information that eventually could be used to make plasma interactions with containment-wall material less damaging, and thus improve the overall energy output of fusion reactors.

“There is no other way of getting this information,” Aidan said.

A few atoms predict energy of many

Aidan’s team expects that by using large datasets of quantum-mechanics calculations under extreme conditions as training data, they can build a machine-learning model that predicts the energy of any configuration of atoms.

This model, called a machine-learning interatomic potential, or MLIAP, can be inserted into huge classical molecular dynamics codes such as Sandia’s award-winning LAMMPS, or Large-scale Atomic/Molecular Massively Parallel Simulator, software. In this way, by interrogating only a relatively small number of atoms, they can extend the accuracy of quantum mechanics to the scale of millions of atoms needed to simulate the behavior of fusion energy materials.
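The workflow described here, training a fast surrogate on expensive quantum-mechanics energies and then using it to predict the energy of new atomic configurations, can be sketched in miniature. The toy below is not the team’s actual code: the neighbor-count descriptors and the pairwise stand-in “reference” energy are illustrative assumptions, simplified stand-ins for the much richer descriptors and true quantum calculations a real MLIAP uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_config(n=6):
    """Random atomic positions, rejecting unphysically close pairs."""
    while True:
        p = rng.uniform(0.0, 4.0, size=(n, 3))
        d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
        if d[np.triu_indices(n, 1)].min() > 0.9:
            return p

def descriptors(positions, cutoff=3.0):
    """Simple per-atom descriptors: neighbor counts in radial shells.
    (Real MLIAPs use far richer descriptors of the local environment.)"""
    bins = np.linspace(0.5, cutoff, 4)            # three radial shells
    feats = np.zeros((len(positions), len(bins) - 1))
    for i, pos in enumerate(positions):
        d = np.linalg.norm(positions - pos, axis=1)
        d = d[(d > 1e-9) & (d < cutoff)]          # drop self, apply cutoff
        feats[i], _ = np.histogram(d, bins=bins)
    return feats

def toy_reference_energy(positions):
    """Stand-in for an expensive quantum-mechanics calculation:
    a simple Lennard-Jones-like pairwise energy."""
    e, n = 0.0, len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return e

# Training set: small configurations labeled with "quantum" energies.
configs = [random_config() for _ in range(200)]
X = np.array([descriptors(c).sum(axis=0) for c in configs])
y = np.array([toy_reference_energy(c) for c in configs])

# Linear MLIAP: total energy as a sum of per-atom descriptor contributions.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted potential estimates energies of new configurations far
# faster than the reference calculation, so it can scale to many atoms.
pred = descriptors(random_config()).sum(axis=0) @ coef
print(f"predicted energy: {pred:.3f}")
```

The key design point this illustrates is the division of labor: the quantum-level calculation is run only on small training configurations, while the cheap fitted model is evaluated inside the large molecular dynamics simulation.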

“So, why is what we are doing machine learning and not just bookkeeping lots of data? The short answer is, we generate equations from an infinite set of possible variables to build models that are grounded in physics but contain hundreds or thousands of parameters that keep us within range of our target,” Aidan said, defending the reality of the machine-learning process.

One catch is that the accuracy of the MLIAP model depends on the overlap between the training data and the actual atomic environments encountered by the application, Aidan said.

These environments can vary widely, requiring new training data and adjustments to the machine-learning model. Recognizing and correcting for gaps in that overlap is part of the work of the next few years.
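One illustrative way to recognize when a model is being pushed beyond its training data is to compare the descriptors of new atomic environments against the training distribution. The sketch below uses a simple per-dimension z-score check; this is an assumption made for illustration, not the team’s actual validation method.

```python
import numpy as np

def coverage_flags(train_features, new_features, k=2.0):
    """Flag feature vectors that lie outside the training distribution.
    Anything more than k standard deviations from the training mean in
    any dimension is treated as 'out of coverage'. (Real workflows use
    more sophisticated uncertainty-quantification methods.)"""
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-12
    z = np.abs((new_features - mu) / sigma)
    return (z > k).any(axis=1)   # True = extrapolating beyond training data

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(500, 4))   # environments seen in training
inside = rng.normal(0.0, 1.0, size=(5, 4))    # similar environments
outside = rng.normal(6.0, 1.0, size=(5, 4))   # a very different regime

print(coverage_flags(train, inside))    # few or no flags: model usable
print(coverage_flags(train, outside))   # flagged: new training data needed
```

When environments are flagged, the remedy is the one described above: generate new quantum-mechanics training data for that regime and refit the model.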

“Our model at first will be used to interpret small experiments,” Aidan said. “Conversely, that experimental data will be used to validate our model, which can then be used to make predictions about what is happening in a full-scale fusion reactor.”

The goal is to give fusion researchers access to the Sandia machine-learning models for building better fusion reactors within approximately three years, he said.

Team members include researchers from Los Alamos National Laboratory and the University of Tennessee at Knoxville, as well as Sandia researchers Habib Najm, Robert Kolasinski, Mitchell Wood, Julien Tranchida, Khachik Sargsyan, and Mary Alice Cusentino.