This report describes the high-level accomplishments of the Plasma Science and Engineering Grand Challenge LDRD at Sandia National Laboratories. The Laboratory needs to demonstrate predictive capabilities for modeling plasma phenomena in order to accelerate engineering development in several mission areas. The purpose of this Grand Challenge LDRD was to advance the fundamental models, methods, and algorithms, along with the supporting electrode science foundation, to enable a revolutionary shift toward predictive plasma engineering design principles. The project integrated the SNL knowledge base in computer science, plasma physics, materials science, applied mathematics, and relevant application engineering to establish new cross-laboratory collaborations on these topics. As an initial exemplar, the project focused on improving the multi-scale modeling capabilities used to predict electrical power delivery on large-scale pulsed power accelerators. Specifically, the LDRD was structured into three primary research thrusts that, when integrated, enable complex simulations of these devices: (1) the exploration of multi-scale models describing the desorption of contaminants from pulsed power electrodes, (2) the development of improved algorithms and code technologies to treat the multi-physics phenomena required to predict device performance, and (3) the creation of a rigorous verification and validation infrastructure to evaluate the codes and models across a range of challenge problems. These components were integrated into initial demonstrations of the largest simulations of multi-level vacuum power flow completed to date, executed on the leading high-performance computing machines available in the NNSA complex today. These preliminary studies indicate that relevant pulsed power engineering design simulations can now be completed on timescales of order several days, a significant improvement over pre-LDRD levels of performance.
One of the most severe obstacles to increasing the longevity of tungsten-based plasma-facing components, such as divertor tiles, is surface deterioration driven by sub-surface helium bubble formation and rupture. Supported by experimental observations at PISCES, this work uses molecular dynamics simulations to identify the microscopic mechanisms underlying the suppression of helium bubble formation by the introduction of plasma-borne beryllium. Simulations of the initial surface material (crystalline W), early-time Be exposure (amorphous W-Be), and final WBe2 intermetallic surfaces were used to highlight the effect of Be. Significant differences in He retention, depth distribution, and cluster size were observed in the cases with beryllium present. Helium resided much closer to the surface in the Be-containing cases, with nearly 80% of the total helium inventory located within the first 2 nm. Moreover, coarsening of the He depth profile by bubble formation is suppressed, owing to a one-hundred-fold decrease in He mobility in WBe2 relative to crystalline W. This is further evidenced by the drastic reduction in He cluster sizes, even though both the amorphous W-Be and WBe2 intermetallic phases were observed to retain nearly twice as much He during cumulative implantation studies.
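As a concrete illustration of the depth-distribution analysis described above, the following Python sketch builds a He depth histogram and computes the fraction of the helium inventory within 2 nm of the surface from a set of He z-coordinates. The coordinates, surface position, and bin width are illustrative assumptions; in practice the positions would be read from the MD trajectories rather than generated synthetically.

```python
# A minimal sketch of a He depth-distribution analysis for an MD snapshot:
# histogram the He depths below the surface and report the fraction of the
# inventory within 2 nm. All inputs here are synthetic placeholders; a real
# analysis would parse He coordinates from a trajectory/dump file.
import numpy as np

rng = np.random.default_rng(2)
surface_z = 0.0                                             # surface plane (Å), assumed
he_z = surface_z - rng.exponential(scale=12.0, size=5000)   # placeholder He z-coords (Å)

depth = surface_z - he_z                                    # depth below surface (Å)
hist, edges = np.histogram(depth, bins=np.arange(0.0, 100.0, 2.0))  # 2 Å bins

frac_within_2nm = np.mean(depth <= 20.0)                    # 2 nm = 20 Å
print(f"He within 2 nm of surface: {100 * frac_within_2nm:.1f}%")
```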
We describe recent efforts to improve our predictive modeling of rate-dependent behavior at, or near, a phase transition using molecular dynamics simulations. Cadmium sulfide (CdS) is a well-studied material that undergoes a solid-solid phase transition from the wurtzite to the rock salt structure between 3 and 9 GPa. Atomistic simulations are used to investigate the dominant transition mechanisms as a function of orientation, size, and rate. We found that the final rock salt orientations were determined relative to the initial wurtzite orientation, and that they differed between the two initial orientations and the two pressure regimes studied. The CdS solid-solid phase transition is studied both for a bulk single crystal and for polymer-encapsulated spherical nanoparticles of various sizes.
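One simple way to monitor such a transition in an atomistic trajectory is to track the mean first-shell coordination number, which rises from 4 in wurtzite to 6 in rock salt. The sketch below is a minimal, non-periodic illustration of that bookkeeping using placeholder coordinates and an assumed 3.0 Å Cd-S cutoff; it is not the analysis used in the study.

```python
# A minimal sketch of a coordination-number check for detecting a
# wurtzite -> rock salt transition. Positions and the cutoff are placeholder
# assumptions; a real analysis would restrict to Cd-S pairs and apply
# periodic boundary conditions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
positions = rng.uniform(0.0, 30.0, size=(500, 3))   # placeholder atomic coordinates (Å)

tree = cKDTree(positions)
cutoff = 3.0                                         # assumed first-shell cutoff (Å)
pairs = tree.query_pairs(r=cutoff)                   # set of (i, j) neighbor pairs

coord = np.zeros(len(positions), dtype=int)
for i, j in pairs:
    coord[i] += 1
    coord[j] += 1

print(f"mean coordination number: {coord.mean():.2f}")
```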
We present a scale-bridging approach based on a multi-fidelity (MF) machine-learning (ML) framework leveraging Gaussian processes (GP) to fuse atomistic computational model predictions across multiple levels of fidelity. Through the posterior variance of the MFGP, the framework naturally enables uncertainty quantification, providing estimates of confidence in the predictions. Density functional theory is used as the high-fidelity prediction, while an ML interatomic potential provides the low-fidelity prediction. Practical materials-design efficiency is demonstrated by reproducing the ternary composition dependence of a quantity of interest (bulk modulus) across the full aluminum–niobium–titanium ternary random alloy composition space. The MFGP is then coupled to a Bayesian optimization procedure, and the computational efficiency of this approach is demonstrated by performing an on-the-fly search for the global optimum of the bulk modulus in the ternary composition space. The framework presented in this manuscript is the first application of MFGP to atomistic materials simulations that fuses predictions between density functional theory and classical interatomic potential calculations.
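For readers unfamiliar with multi-fidelity Gaussian processes, the sketch below shows the general idea with a two-level autoregressive (Kennedy-O'Hagan style) scheme in scikit-learn: one GP is trained on plentiful low-fidelity data, a second GP models the discrepancy of sparse high-fidelity data from the scaled low-fidelity prediction, and the two are fused with a combined variance for uncertainty quantification. The toy one-dimensional input, the stand-in responses, and the fixed scaling factor rho are assumptions for illustration only; they are not the framework or data used in this work.

```python
# A minimal two-level multi-fidelity GP sketch (autoregressive scheme) using
# scikit-learn. Data, kernels, and the scaling factor rho are illustrative
# assumptions, not the project's actual implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy inputs and two fidelity levels of a property (e.g. bulk modulus).
X_lo = rng.uniform(0.0, 1.0, size=(40, 1))          # cheap, plentiful (ML potential)
X_hi = X_lo[::5]                                    # expensive, sparse (DFT), nested design
f_lo = lambda x: np.sin(8.0 * x).ravel()            # stand-in low-fidelity response
f_hi = lambda x: 1.2 * f_lo(x) + 0.3 * x.ravel()    # stand-in high-fidelity response
y_lo, y_hi = f_lo(X_lo), f_hi(X_hi)

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-6)

# Level 1: GP on the low-fidelity data.
gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_lo, y_lo)

# Level 2: GP on the discrepancy between high fidelity and scaled low fidelity.
rho = 1.2                                           # assumed scaling factor (normally estimated)
delta = y_hi - rho * gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_hi, delta)

# Fused prediction with a (conservative) combined variance for UQ.
X_test = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mu_lo, sd_lo = gp_lo.predict(X_test, return_std=True)
mu_d, sd_d = gp_delta.predict(X_test, return_std=True)
mu_mf = rho * mu_lo + mu_d
sd_mf = np.sqrt((rho * sd_lo) ** 2 + sd_d ** 2)
print(mu_mf[:3], sd_mf[:3])
```

In a Bayesian optimization loop, the fused mean and standard deviation would feed an acquisition function that selects the next composition at which to run a new (preferably low-fidelity) calculation.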
Machine learning of the quantitative relationship between local environment descriptors and the potential energy surface of a system of atoms has emerged as a new frontier in the development of interatomic potentials (IAPs). Here, we present a comprehensive evaluation of machine-learning IAPs (ML-IAPs) based on four local environment descriptors, namely atom-centered symmetry functions (ACSF), the smooth overlap of atomic positions (SOAP), the spectral neighbor analysis potential (SNAP) bispectrum components, and moment tensors, using a diverse data set generated with high-throughput density functional theory (DFT) calculations. The data set, comprising bcc (Li, Mo) and fcc (Cu, Ni) metals and diamond group IV semiconductors (Si, Ge), is chosen to span a range of crystal structures and bonding. All descriptors studied show excellent performance in predicting energies and forces, far surpassing that of classical IAPs, as well as in predicting properties such as elastic constants and phonon dispersion curves. We observe a general trade-off between accuracy and the degrees of freedom of each model and, consequently, computational cost. We discuss these trade-offs in the context of model selection for molecular dynamics and other applications.
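To make the descriptor-to-energy mapping concrete, the following sketch shows the linear-regression step behind a SNAP-style ML-IAP, in which the total energy of each training structure is modeled as the sum of per-atom descriptor vectors dotted with fitted coefficients. The random descriptor arrays and energies are placeholders for bispectrum (or ACSF/SOAP/moment-tensor) components and DFT reference data; the shapes and values are illustrative assumptions rather than the evaluation protocol of this work.

```python
# A minimal sketch of fitting a linear (SNAP-style) ML-IAP: per-structure
# energies are regressed on per-atom descriptors summed over atoms. The
# descriptors here are random placeholders; in practice they would come from
# a descriptor library applied to DFT training structures.
import numpy as np

rng = np.random.default_rng(1)
n_structs, n_atoms, n_desc = 200, 54, 30

# B[i, j, :] is the descriptor vector of atom j in structure i; E_ref are the
# reference (DFT) total energies, here generated from a synthetic model.
B = rng.normal(size=(n_structs, n_atoms, n_desc))
beta_true = rng.normal(size=n_desc)
E_ref = B.sum(axis=1) @ beta_true + 0.01 * rng.normal(size=n_structs)

# Design matrix: descriptors summed over atoms, so E_i ≈ sum_j beta · B_ij.
A = B.sum(axis=1)
beta, *_ = np.linalg.lstsq(A, E_ref, rcond=None)

rmse = np.sqrt(np.mean((A @ beta - E_ref) ** 2)) / n_atoms
print(f"per-atom energy RMSE: {rmse:.2e}")
```

Nonlinear ML-IAPs (e.g. neural-network or kernel models) replace this linear fit with a more flexible regressor over the same kinds of descriptors, which is the accuracy-versus-cost trade-off discussed above.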