Atomic cluster expansion (ACE) methods provide a systematic way to describe local particle environments at arbitrary body order. For practical applications, the basis of cluster functions is often required to be symmetrized with respect to rotations and permutations. Existing methodologies yield sets of symmetrized functions that are over-complete and therefore require an additional numerical procedure, such as singular value decomposition (SVD), to eliminate redundant functions. In this work, it is shown that analytical linear relationships among subsets of cluster functions may be derived using recursion and permutation properties of generalized Wigner symbols. From these relationships, subsets (blocks) of cluster functions can be selected such that, within each block, the functions are guaranteed to be linearly independent. It is conjectured that this block-wise independent set of permutation-adapted rotation and permutation invariant (PA-RPI) functions forms a complete, independent basis for ACE. Along with the first analytical proofs of block-wise linear dependence of ACE cluster functions and other theoretical arguments, numerical results are offered to support this conjecture. The utility of the method is demonstrated in the development of an ACE interatomic potential for tantalum. Using the new basis functions in combination with Bayesian compressive sensing sparse regression, some high-degree descriptors are observed to persist and help achieve high-accuracy models.
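As a rough, hedged sketch in generic ACE notation (the paper's own conventions and symbols may differ), the rotation-invariant cluster functions in question are built by coupling products of one-particle basis functions through generalized Wigner (Clebsch-Gordan) symbols:

\[
A_{i,nlm} = \sum_{j \neq i} R_{nl}(r_{ji})\, Y_l^m(\hat{\mathbf{r}}_{ji}), \qquad
B_{i,\boldsymbol{n}\boldsymbol{l}L} = \sum_{\boldsymbol{m}} C^{L}_{\boldsymbol{l}\boldsymbol{m}} \prod_{t=1}^{N} A_{i,n_t l_t m_t},
\]

where $R_{nl}$ are radial functions, $Y_l^m$ are spherical harmonics, and $C^{L}_{\boldsymbol{l}\boldsymbol{m}}$ are generalized Wigner coupling coefficients labeled by intermediate angular momenta $L$. Redundancy arises because different coupling trees and permutations of the $(n_t, l_t)$ labels can yield linearly dependent $B$ functions; this is the dependence that the block-wise construction identifies and removes analytically rather than numerically.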
The properties of electrons in matter are of fundamental importance. They give rise to virtually all material properties and determine the physics at play in objects ranging from semiconductor devices to the interior of giant gas planets. Modeling and simulation of such diverse applications rely primarily on density functional theory (DFT), which has become the principal method for predicting the electronic structure of matter. While DFT calculations have proven to be very useful, their computational scaling limits them to small systems. We have developed a machine learning framework for predicting the electronic structure on any length scale. It shows speedups of up to three orders of magnitude on systems where DFT is tractable and, more importantly, enables predictions on scales where DFT calculations are infeasible. Our work demonstrates how machine learning circumvents a long-standing computational bottleneck and advances materials science to frontiers intractable with any current solution.
A series of MD and DFT simulations were performed to investigate hydrogen self-clustering and retention in tungsten. Using a newly developed machine-learned interatomic potential, spontaneous formation of hydrogen platelets was observed after implanting low-energy hydrogen into tungsten at high fluxes and temperatures. The platelets formed along low Miller index orientations, occupying neighboring tetrahedral and octahedral sites, and could grow to over 50 atoms in size. High temperatures above 600 K and high hydrogen concentrations were needed to observe significant platelet formation. A critical platelet size of six hydrogen atoms was needed for long-term stability; platelets smaller than this were found to be thermally unstable within a few nanoseconds. To verify these observations, characteristic platelets from the MD simulations were simulated using large-scale DFT. DFT corroborated the MD results in that platelets containing five or more hydrogen atoms were also found to be dynamically stable. The LDOS from the DFT-simulated platelets indicated that hydrogen atoms, particularly at the periphery of the platelet, are at least as stable as hydrogen atoms in bulk tungsten. In addition, electrons were found to be localized around hydrogen atoms within the platelet, and hydrogen atoms up to 4.2 Å apart within the platelet were found to share charge, suggesting that hydrogen atoms interact over longer distances than previously suggested. These results reveal a self-clustering mechanism for hydrogen within tungsten in the absence of radiation-induced or microstructural defects that could be a precursor to blistering and potentially explain the experimentally observed high hydrogen retention, particularly in the near-surface region.
SPPARKS is an open-source parallel simulation code for developing and running various kinds of on-lattice Monte Carlo models at the atomic or meso scales. It can be used to study the properties of solid-state materials as well as model their dynamic evolution during processing. The modular nature of the code allows new models and diagnostic computations to be added without modification to its core functionality, including its parallel algorithms. A variety of models for microstructural evolution (grain growth), solid-state diffusion, thin film deposition, and additive manufacturing (AM) processes are included in the code. SPPARKS can also be used to implement grid-based algorithms such as phase field or cellular automata models, to run either in tandem with a Monte Carlo method or independently. For very large systems such as AM applications, the Stitch I/O library is included, which enables only a small portion of a huge system to be resident in memory. In this paper we describe SPPARKS and its parallel algorithms and performance, explain how new Monte Carlo models can be added, and highlight a variety of applications which have been developed within the code.
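To illustrate the class of on-lattice Monte Carlo models that SPPARKS is built around (this is not SPPARKS's own input syntax or internals), the following is a minimal serial Python sketch of a Metropolis Potts grain-growth model; the lattice size, number of spin states, and temperature below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(56789)

Q = 100          # number of Potts spin states (grain labels)
N = 64           # lattice dimension (N x N square lattice)
kT = 0.25        # temperature in units of the bond energy

spins = rng.integers(1, Q + 1, size=(N, N))

def site_energy(s, i, j):
    """Number of unlike nearest neighbors (periodic boundaries)."""
    val = s[i, j]
    nbrs = [s[(i - 1) % N, j], s[(i + 1) % N, j],
            s[i, (j - 1) % N], s[i, (j + 1) % N]]
    return sum(1 for n in nbrs if n != val)

def metropolis_sweep(s):
    """One Monte Carlo sweep: attempt a spin flip at every site."""
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        old = s[i, j]
        new = int(rng.integers(1, Q + 1))
        e_old = site_energy(s, i, j)
        s[i, j] = new
        e_new = site_energy(s, i, j)
        dE = e_new - e_old
        # Metropolis acceptance: always accept downhill, Boltzmann factor uphill
        if dE > 0 and rng.random() >= np.exp(-dE / kT):
            s[i, j] = old   # reject the flip, restore the old spin

for sweep in range(100):
    metropolis_sweep(spins)

print("distinct grains remaining:", len(np.unique(spins)))
```

SPPARKS parallelizes this kind of site-by-site update through spatial decomposition and sectoring, and its kinetic Monte Carlo solvers can replace the rejection step above with rejection-free event selection.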
This paper describes the implementation of the stress-fluctuation technique in the LAMMPS code to compute the anisotropic elastic constant tensor of materials at finite temperature. The implementation provides both the analytical fluctuation expressions and a generic numerical-derivative method. The former makes the extension to new potentials straightforward, as it requires writing code only for the second derivatives of each energy term with respect to distance, angle, etc. The latter provides a generic interface for computing an accurate approximation of the elastic constants for any potential already implemented in LAMMPS. We show how both methods compare with the direct deformation computation in several test cases and discuss the implementation advantages and limitations.
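For context, a hedged sketch of the standard stress-fluctuation formalism (in the common Ray-Rahman form; the precise expressions and conventions implemented in LAMMPS may differ) writes the isothermal elastic constants as a Born term plus fluctuation and kinetic contributions:

\[
C_{ijkl} = \left\langle C^{B}_{ijkl} \right\rangle
- \frac{V}{k_B T}\left( \langle \sigma_{ij}\sigma_{kl} \rangle - \langle \sigma_{ij} \rangle \langle \sigma_{kl} \rangle \right)
+ \frac{N k_B T}{V}\left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right),
\]

where $C^{B}$ is the Born matrix of second derivatives of the potential energy with respect to strain, $\sigma$ is the instantaneous virial stress, $V$ the volume, $N$ the number of particles, and $T$ the temperature. The analytical route evaluates $C^{B}$ from the second derivatives of each energy term, while the generic numerical-derivative route approximates these second derivatives by finite differences.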
Here we present a classical molecular-spin dynamics (MSD) methodology that enables accurate computations of the temperature dependence of the magnetocrystalline anisotropy as well as the magnetoelastic properties of magnetic materials. The nonmagnetic interactions are accounted for by a spectral neighbor analysis potential (SNAP) machine-learned interatomic potential, whereas the magnetoelastic contributions are described by a combination of an extended Heisenberg Hamiltonian and a Néel pair interaction model, which represent the exchange interaction and spin-orbit-coupling effects, respectively. All magnetoelastic potential components are parameterized using a combination of first-principles and experimental data. Our framework is applied to the α phase of iron. Initial testing of our MSD model is done using a 0 K parametrization of the Néel interaction model. After this, we examine how individual Néel parameters impact the $B_1$ and $B_2$ magnetostrictive coefficients using a moment-independent δ sensitivity analysis. The results from this study are then used to initialize a genetic algorithm optimization that explores the Néel parameter space and minimizes the error in the $B_1$ and $B_2$ magnetostrictive coefficients over the range 0–1200 K. Our results show that while both the 0 K and the genetic-algorithm-optimized parametrizations provide good agreement with experiment for $B_1$ and $B_2$, only the genetic-algorithm-optimized results capture the second peak in the $B_1$ magnetostrictive coefficient, which occurs near 800 K.
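As a hedged illustration of the magnetic part of such a model (generic forms of the Heisenberg exchange and the leading Néel pair-anisotropy term; the parameterization used in the work may include additional terms), the spin Hamiltonian can be sketched as

\[
\mathcal{H}_{\mathrm{mag}} = -\sum_{i<j} J(r_{ij})\, \mathbf{s}_i \cdot \mathbf{s}_j
\;-\; \sum_{i<j} g_1(r_{ij}) \left[ (\mathbf{e}_{ij} \cdot \mathbf{s}_i)(\mathbf{e}_{ij} \cdot \mathbf{s}_j) - \tfrac{1}{3}\, \mathbf{s}_i \cdot \mathbf{s}_j \right],
\]

where $\mathbf{s}_i$ are classical unit spins, $\mathbf{e}_{ij}$ is the unit vector joining atoms $i$ and $j$, $J(r)$ is the distance-dependent exchange, and $g_1(r)$ is the leading Néel (pseudo-dipolar) coefficient. Because $J$ and $g_1$ depend on the interatomic distances, the spin system couples to the lattice, which is the origin of the magnetostrictive coefficients studied here.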
Tungsten (W) is a material of choice for the divertor due to its high melting temperature, thermal conductivity, and sputtering threshold. However, W has a very high brittle-to-ductile transition temperature, and at fusion reactor temperatures (≥1000 K) it may undergo recrystallization and grain growth. Dispersion-strengthening W with zirconium carbide (ZrC) can improve ductility and limit grain growth, but the effects of the dispersoids on microstructural evolution and thermomechanical properties at high temperatures remain largely unknown. We present a machine-learned Spectral Neighbor Analysis Potential (SNAP) for W-ZrC that can now be used to study these materials. In order to construct a potential suitable for large-scale atomistic simulations at fusion reactor temperatures, it is necessary to train on ab initio data generated for a diverse set of structures, chemical environments, and temperatures. The accuracy and stability of the potential were further improved using objective functions for both material properties and high-temperature stability. The optimized potential was validated against lattice parameters, surface energies, bulk moduli, and thermal expansion. Tensile tests of W/ZrC bicrystals show that although the W(110)-ZrC(111) C-terminated bicrystal has the highest ultimate tensile strength (UTS) at room temperature, the observed strength decreases with increasing temperature. At 2500 K, the terminating C layer diffuses into the W, resulting in a weaker W-Zr interface. Meanwhile, the W(110)-ZrC(111) Zr-terminated bicrystal has the highest UTS at 2500 K.
Advances in machine learning (ML) have enabled the development of interatomic potentials that promise the accuracy of first-principles methods and the low cost and parallel efficiency of empirical potentials. However, ML-based potentials struggle to achieve transferability, i.e., to provide consistent accuracy across configurations that differ from those used during training. In order to realize the promise of ML-based potentials, systematic and scalable approaches for generating diverse training sets need to be developed. This work creates a diverse training set for tungsten in an automated manner using an entropy-optimization approach. Subsequently, multiple polynomial and neural network potentials are trained on the entropy-optimized dataset. A corresponding set of potentials is trained on an expert-curated dataset for tungsten for comparison. The models trained on the entropy-optimized data exhibited superior transferability compared to the expert-curated models. Furthermore, the models trained on the expert-curated set exhibited a significant decrease in performance when evaluated on out-of-sample configurations.
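A minimal sketch of one way such an entropy-driven selection could be implemented (a greedy search that maximizes a nearest-neighbor entropy estimate over per-configuration descriptor vectors; this is an illustration under stated assumptions, not the authors' algorithm, and the descriptor array is assumed to be given):

```python
import numpy as np

def knn_entropy(X, k=1):
    """Kozachenko-Leonenko style entropy estimate (up to constants)
    from k-nearest-neighbor distances in descriptor space."""
    n, d = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    r_k = np.sort(dists, axis=1)[:, k - 1]        # distance to k-th neighbor
    return d * np.mean(np.log(r_k + 1e-12))       # larger = more spread out

def greedy_entropy_selection(X, n_select, rng=None):
    """Greedily pick configurations whose descriptors maximize the
    entropy estimate of the selected subset."""
    rng = rng if rng is not None else np.random.default_rng(0)
    selected = [int(rng.integers(len(X)))]        # random seed configuration
    while len(selected) < n_select:
        best, best_h = None, -np.inf
        for cand in range(len(X)):
            if cand in selected:
                continue
            h = knn_entropy(X[selected + [cand]])
            if h > best_h:
                best, best_h = cand, h
        selected.append(best)
    return selected

# Example: 500 candidate configurations with 10-dimensional descriptors
X = np.random.default_rng(1).normal(size=(500, 10))
picks = greedy_entropy_selection(X, n_select=25)
print("selected configuration indices:", picks)
```

In practice the descriptor vectors would come from the same featurization used by the potentials (for example, per-configuration averaged fingerprints), and the greedy search could be replaced by any other optimizer of the entropy objective.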
The long-standing problem of predicting the electronic structure of matter on ultra-large scales (beyond 100,000 atoms) is solved with machine learning.