Fully characterizing high energy density (HED) phenomena using pulsed power facilities (e.g., the Z machine) and coherent light sources is possible only with complementary numerical modeling for design, diagnostic development, and data interpretation. The exercise of creating numerical tests that match experimental conditions builds insight that is crucial for developing a strong fundamental understanding of the physics behind HED phenomena and for designing next-generation pulsed power facilities. The persistence of electron correlation in HED materials, arising from Coulomb interactions and the Pauli exclusion principle, is one of the greatest challenges for accurate numerical modeling and has hitherto impeded our ability to model HED phenomena across multiple length and time scales at sufficient accuracy. A ferromagnetic material like iron is an exemplar: although familiar and widely used, we lack a simulation capability to characterize the interplay of structural and magnetic effects that governs material strength, kinetics of phase transitions, and other transport properties. Herein we construct and demonstrate a Molecular-Spin Dynamics (MSD) simulation capability for iron from ambient to Earth-core conditions; all software advances are open source and presently available for broad usage. These methods are multi-scale in nature: direct comparisons between high-fidelity density functional theory (DFT) and linear-scaling MSD simulations are made throughout this work, with advancements to MSD that allow electronic-structure changes to be reflected in the classical dynamics. Main takeaways for the project include insight into the role of magnetic spins in mechanical properties and thermal conductivity, development of accurate interatomic potentials paired with spin Hamiltonians, and characterization of the high-pressure melt boundary that is of critical importance to planetary modeling efforts.
This report documents the results of an FY22 ASC V&V level 2 milestone demonstrating new algorithms for multifidelity uncertainty quantification. Part I of the report describes the algorithms, studies their performance on a simple model problem, and then deploys the methods to a thermal battery example from the open literature. Part II (restricted distribution) applies the multifidelity UQ methods to specific thermal batteries of interest to the NNSA/ASC program.
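The multifidelity idea in its simplest form is a two-fidelity control-variate Monte Carlo estimator: correct the mean from a few expensive model runs using many cheap surrogate runs. The sketch below is a generic illustration under assumed toy models (`f_hi`, `f_lo` are stand-ins), not the milestone's algorithms.

```python
import numpy as np

# Two-fidelity control-variate Monte Carlo sketch. f_hi is the
# "expensive" model, f_lo a cheap correlated surrogate; both are
# illustrative stand-ins, not the milestone's models.
rng = np.random.default_rng(0)

f_hi = np.exp               # "high-fidelity" model
f_lo = lambda x: 1.0 + x    # "low-fidelity" model: crude linearization

n_hi, n_lo = 1_000, 100_000
x_hi = rng.random(n_hi)     # paired samples evaluated on both models
x_lo = rng.random(n_lo)     # extra samples evaluated only on f_lo

y_hi, y_lo = f_hi(x_hi), f_lo(x_hi)
# Control-variate weight estimated from the paired samples.
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)
# Correct the plain MC mean using the better-resolved low-fidelity mean.
estimate = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo.mean())
```

Because the low-fidelity model absorbs most of the variance, the corrected estimate is far more accurate than plain Monte Carlo with `n_hi` samples alone.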
Making reliable predictions in the presence of uncertainty is critical to high-consequence modeling and simulation activities, such as those encountered at Sandia National Laboratories. Surrogate or reduced-order models are often used to mitigate the expense of performing quality uncertainty analyses with high-fidelity, physics-based codes. However, phenomenological surrogate models do not always adhere to important physics and system properties. This project develops surrogate models that integrate physical theory with experimental data through a maximally-informative framework that accounts for the many uncertainties present in computational modeling problems. Correlations between relevant outputs are preserved through the use of multi-output or co-predictive surrogate models; known physical properties (specifically, monotonicity) are also preserved; and unknown physics and phenomena are detected using a causal analysis. By endowing surrogate models with key properties of the physical system being studied, their predictive power is arguably enhanced, allowing for reliable simulations and analyses at a reduced computational cost.
Optimal mitigation planning for highly disruptive contingencies to a transmission-level power system requires optimization with dynamic power system constraints, due to the key role of dynamics in system stability to major perturbations. We formulate a generalized disjunctive program to determine optimal grid component hardening choices for protecting against major failures, with differential algebraic constraints representing system dynamics (specifically, differential equations representing generator and load behavior and algebraic equations representing instantaneous power balance over the transmission system). We optionally allow stochastic optimal pre-positioning across all considered failure scenarios, and optimal emergency control within each scenario. This novel formulation allows, for the first time, analyzing the resilience interdependencies of mitigation planning, preventive control, and emergency control. Using all three strategies in concert is particularly effective at maintaining robust power system operation under severe contingencies, as we demonstrate on the Western System Coordinating Council (WSCC) 9-bus test system using synthetic multi-device outage scenarios. Towards integrating our modeling framework with real threats and more realistic power systems, we explore applying hybrid dynamics to power systems. Our work is applied to basic RL circuits with the ultimate goal of using the methodology to model protective tripping schemes in the grid. Finally, we survey mitigation techniques for high-altitude electromagnetic pulse (HEMP) threats and describe a GIS application developed to create threat scenarios in a grid with geographic detail.
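The hybrid-dynamics setting mentioned above (continuous circuit dynamics plus a discrete protective trip) can be sketched on the RL circuit in a few lines. All parameter values, including the trip threshold, are illustrative assumptions, not the report's formulation.

```python
import numpy as np

# Minimal hybrid-dynamics sketch: an RL circuit whose breaker trips
# (opens) when the current exceeds a threshold. Parameter values are
# illustrative only.
R, L, V = 1.0, 0.1, 10.0   # ohms, henries, volts
i_trip = 8.0               # hypothetical protective trip threshold (amps)

dt, t_end = 1e-4, 1.0
i, closed = 0.0, True      # hybrid state: continuous current + breaker mode
for _ in range(int(t_end / dt)):
    if closed and i > i_trip:
        closed = False     # discrete transition: breaker opens
    # continuous dynamics depend on the discrete mode
    di = (V - R * i) / L if closed else -R * i / L
    i += dt * di           # forward Euler step
```

With the breaker closed, the current rises toward V/R = 10 A, crosses the 8 A threshold, trips the breaker, and then decays; the trajectory is governed by different vector fields in the two modes, which is the essential structure of protective tripping schemes.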
Modeling real-world phenomena to any degree of accuracy is a challenge that the scientific research community has navigated since its foundation. Lack of information and limited computational and observational resources necessitate modeling assumptions which, when invalid, lead to model-form error (MFE). The work reported herein explored a novel method to represent model-form uncertainty (MFU) that combines Bayesian statistics with the emerging field of universal differential equations (UDEs). The fundamental principle behind UDEs is simple: use known equational forms that govern a dynamical system when you have them; then incorporate data-driven approaches – in this case neural networks (NNs) – embedded within the governing equations to learn the interacting terms that were underrepresented. Utilizing epidemiology as our motivating exemplar, this report will highlight the challenges of modeling novel infectious diseases while introducing ways to incorporate NN approximations to MFE. Prior to embarking on a Bayesian calibration, we first explored methods to augment the standard (non-Bayesian) UDE training procedure to account for uncertainty and increase robustness of training. In addition, it is often the case that uncertainty in observations is significant; this may be due to randomness or lack of precision in the measurement process. This uncertainty typically manifests as “noisy” observations which deviate from a true underlying signal. To account for such variability, the NN approximation to MFE is endowed with a probabilistic representation and is updated using available observational data in a Bayesian framework. By representing the MFU explicitly and deploying an embedded, data-driven model, this approach enables an agile, expressive, and interpretable method for representing MFU. 
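The UDE structure described above can be sketched on a simple SIR epidemic model: keep the known compartmental equations and embed a small network in the interaction term. In the sketch below a tiny fixed-weight MLP stands in for the network that would actually be trained and calibrated; all values are illustrative.

```python
import numpy as np

# UDE sketch on an SIR model: the known infection term beta*S*I is
# augmented by a neural-network correction. The tiny fixed-weight MLP
# here is an untrained stand-in for the calibrated network.
rng = np.random.default_rng(1)
W1, b1 = 0.1 * rng.standard_normal((4, 2)), np.zeros(4)
W2 = 0.01 * rng.standard_normal(4)

def nn(s, i):
    """Tiny MLP correction term (untrained; illustrative only)."""
    h = np.tanh(W1 @ np.array([s, i]) + b1)
    return W2 @ h

beta, gamma = 0.3, 0.1
s, i, r = 0.99, 0.01, 0.0
dt = 0.1
for _ in range(1000):
    flow = beta * s * i + nn(s, i)   # known physics + learned term
    flow = max(flow, 0.0)            # keep the S -> I flow physical
    ds, di, dr = -flow, flow - gamma * i, gamma * i
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr
```

Because the correction only modifies a flow between compartments, the total population remains conserved regardless of what the network learns, an example of retaining known structure while learning the unknown part.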
In this report we will provide evidence that Bayesian UDEs show promise as a novel framework for science-based, data-driven MFU representation, while emphasizing that significant advances must be made in the calibration of Bayesian NNs to ensure a robust calibration procedure.
Predictive design of REHEDS experiments with radiation-hydrodynamic simulations requires knowledge of material properties (e.g. equations of state (EOS), transport coefficients, and radiation physics). Interpreting experimental results requires accurate models of diagnostic observables (e.g. detailed emission, absorption, and scattering spectra). In conditions of Local Thermodynamic Equilibrium (LTE), these material properties and observables can be pre-computed with relatively high accuracy and subsequently tabulated on simple temperature-density grids for fast look-up by simulations. When radiation and electron temperatures fall out of equilibrium, however, non-LTE effects can profoundly change material properties and diagnostic signatures. Accurately and efficiently incorporating these non-LTE effects has been a longstanding challenge for simulations. At present, most simulations include non-LTE effects by invoking highly simplified inline models. These inline non-LTE models are both much slower than table look-up and significantly less accurate than the detailed models used to populate LTE tables and diagnose experimental data through post-processing or inversion. Because inline non-LTE models are slow, designers avoid them whenever possible, which leads to known inaccuracies from using tabular LTE. Because inline models are simple, they are inconsistent with tabular data from detailed models, leading to ill-known inaccuracies, and they cannot generate detailed synthetic diagnostics suitable for direct comparisons with experimental data. This project addresses the challenge of generating and utilizing efficient, accurate, and consistent non-equilibrium material data along three complementary but relatively independent research lines. 
First, we have developed a relatively fast and accurate non-LTE average-atom model based on density functional theory (DFT) that provides a complete set of EOS, transport, and radiative data, and have rigorously tested it against more sophisticated first-principles multi-atom DFT models, including time-dependent DFT. Next, we have developed a tabular scheme and interpolation methods that compactly capture non-LTE effects for use in simulations and have implemented these tables in the GORGON magneto-hydrodynamic (MHD) code. Finally, we have developed post-processing tools that use detailed tabulated non-LTE data to directly predict experimental observables from simulation output.
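The fast table look-up that motivates this tabulation work amounts to multilinear interpolation on a temperature-density grid. The sketch below shows a generic bilinear (T, ρ) look-up with a synthetic table; it is not the project's actual tabulation or interpolation scheme.

```python
import numpy as np

# Generic bilinear look-up on a (temperature, density) grid, the kind
# of fast tabular access simulations use for material data. The table
# contents here are synthetic.
T_grid = np.linspace(1.0, 100.0, 50)       # temperatures (eV, say)
rho_grid = np.linspace(0.01, 10.0, 40)     # densities (g/cc, say)
table = np.add.outer(2.0 * T_grid, 3.0 * rho_grid)  # synthetic data 2T + 3rho

def lookup(T, rho):
    """Bilinear interpolation of the table at (T, rho)."""
    i = np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2)
    j = np.clip(np.searchsorted(rho_grid, rho) - 1, 0, len(rho_grid) - 2)
    tT = (T - T_grid[i]) / (T_grid[i + 1] - T_grid[i])
    tr = (rho - rho_grid[j]) / (rho_grid[j + 1] - rho_grid[j])
    return ((1 - tT) * (1 - tr) * table[i, j]
            + tT * (1 - tr) * table[i + 1, j]
            + (1 - tT) * tr * table[i, j + 1]
            + tT * tr * table[i + 1, j + 1])
```

Bilinear interpolation is exact for data linear in each variable, so the synthetic table is reproduced exactly; real non-LTE data additionally require the extra table dimensions and interpolation methods developed in this project.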
This document provides very basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices within the Common Engineering Environment (CEE) and High Performance Computing (HPC) computational environment at Sandia National Laboratories through GitLab/Jacamar runner based workflows.
This SAND report documents CIS Late Start LDRD Project 22-0311, "Differential geometric approaches to momentum-based formulations for fluids". The project primarily developed geometric mechanics formulations for momentum-based descriptions of nonrelativistic fluids, utilizing a differential geometry/exterior calculus treatment of momentum and a space+time splitting. Specifically, the full suite of geometric mechanics formulations (variational/Lagrangian, Lie-Poisson Hamiltonian and Curl-Form Hamiltonian) were developed in terms of exterior calculus using vector-bundle valued differential forms. This was done for a fairly general version of semi-direct product theory sufficient to cover a wide range of both neutral and charged fluid models, including compressible Euler, magnetohydrodynamics and Euler-Maxwell. As a secondary goal, this project also explored the connection between geometric mechanics formulations and the more traditional Godunov form (a hyperbolic system of conservation laws). Unfortunately, this stage did not produce anything particularly interesting, due to unforeseen technical difficulties. There are two publications related to this work currently in preparation, and this work will be presented at SIAM CSE 23, at which the PI is organizing a mini-symposium on geometric mechanics formulations and structure-preserving discretizations for fluids. The logical next step is to utilize the exterior calculus based understanding of momentum coupled with geometric mechanics formulations to develop (novel) structure-preserving discretizations of momentum. This is the main subject of a successful FY23 CIS LDRD "Structure-preserving discretizations for momentum-based formulations of fluids".
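For concreteness, the textbook Lie-Poisson structure underlying the Lie-Poisson Hamiltonian formulation mentioned above can be written as follows (standard form on the dual of a Lie algebra, up to a sign convention; not reproduced from the report):

```latex
% Lie--Poisson bracket on g^*, with pairing <.,.> and Lie bracket [.,.]:
\{F, G\}(\mu) \;=\; \left\langle \mu,\ \left[\frac{\delta F}{\delta \mu},\, \frac{\delta G}{\delta \mu}\right] \right\rangle,
\qquad \dot{F} \;=\; \{F, H\}.
```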
This report details work that was completed to address the Fiscal Year 2022 Advanced Science and Technology (AS&T) Laboratory Directed Research and Development (LDRD) call for “AI-enhanced Co-Design of Next Generation Microelectronics.” This project required concurrent contributions from the fields of 1) materials science, 2) devices and circuits, 3) physics of computing, and 4) algorithms and system architectures. During this project, we developed AI-enhanced circuit design methods that relied on reinforcement learning and evolutionary algorithms. The AI-enhanced design methods were tested on neuromorphic circuit design problems that have real-world applications related to Sandia’s mission needs. The developed methods enable the design of circuits, including circuits that are built from emerging devices, and they were also extended to enable novel device discovery. We expect that these AI-enhanced design methods will accelerate progress towards developing next-generation, high-performance neuromorphic computing systems.
For decades, Arctic temperatures have increased twice as fast as average global temperatures. As a first step towards quantifying parametric uncertainty in Arctic climate, we performed a variance-based global sensitivity analysis (GSA) using a fully-coupled, ultra-low resolution (ULR) configuration of version 1 of the U.S. Department of Energy’s Energy Exascale Earth System Model (E3SMv1). Specifically, we quantified the sensitivity of six quantities of interest (QOIs), which characterize changes in Arctic climate over a 75 year period, to uncertainties in nine model parameters spanning the sea ice, atmosphere and ocean components of E3SMv1. Sensitivity indices for each QOI were computed with a Gaussian process emulator using 139 random realizations of the random parameters and fixed pre-industrial forcing. Uncertainties in the atmospheric parameters in the CLUBB (Cloud Layers Unified by Binormals) scheme were found to have the most impact on sea ice status and the larger Arctic climate. Our results demonstrate the importance of conducting sensitivity analyses with fully coupled climate models. The ULR configuration makes such studies computationally feasible today due to its low computational cost. When advances in computational power and modeling algorithms enable the tractable use of higher-resolution models, our results will provide a baseline that can quantify the impact of model resolution on the accuracy of sensitivity indices. Moreover, the confidence intervals provided by our study, which we used to quantify the impact of the number of model evaluations on the accuracy of sensitivity estimates, have the potential to inform the computational resources needed for future sensitivity studies.
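The variance-based GSA machinery used here can be illustrated in its simplest form with a Saltelli-style pick-freeze estimator of first-order Sobol indices on a toy additive model; this sketch is not the E3SM/Gaussian-process workflow, just the underlying estimator.

```python
import numpy as np

# Saltelli-style first-order Sobol index estimator on a toy model
# f(x) = x1 + 2*x2 with uniform inputs (analytic indices: 0.2 and 0.8).
rng = np.random.default_rng(2)
d, n = 2, 100_000

f = lambda x: x[:, 0] + 2.0 * x[:, 1]

A = rng.random((n, d))                  # two independent sample blocks
B = rng.random((n, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]), ddof=1)

S = np.empty(d)
for k in range(d):
    ABk = A.copy()
    ABk[:, k] = B[:, k]                 # "freeze" all inputs except k
    S[k] = np.mean(fB * (f(ABk) - fA)) / var
```

In the actual study each model evaluation is a 75-year coupled climate run, which is why an emulator is trained on 139 realizations and the indices are computed from the emulator instead of from direct pick-freeze sampling.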
Kinetic gas dynamics in rarefied and moderate-density regimes have complex behavior associated with collisional processes. These processes are generally defined by convolution integrals over a high-dimensional space (as in the Boltzmann operator), or require evaluating complex auxiliary variables (as in Rosenbluth potentials in Fokker-Planck operators) that are challenging to implement and computationally expensive to evaluate. In this work, we develop a data-driven neural network model that augments a simple and inexpensive BGK collision operator with a machine-learned correction term, which improves the fidelity of the simple operator with a small overhead to overall runtime. The composite collision operator has a tunable fidelity and, in this work, is trained using and tested against a direct-simulation Monte-Carlo (DSMC) collision operator.
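The composite-operator structure can be sketched in one dimension: a cheap BGK relaxation toward a Maxwellian with a pluggable correction term added on top. In the sketch below the machine-learned correction is replaced by a zero-returning placeholder; grids and parameters are illustrative.

```python
import numpy as np

# Sketch of the composite collision operator: BGK relaxation toward a
# Maxwellian plus a pluggable correction. The trained network is
# replaced by a placeholder returning zero.
v = np.linspace(-10.0, 10.0, 401)        # 1D velocity grid
dv = v[1] - v[0]

def maxwellian(n, u, T):
    return n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

def correction(f):
    return np.zeros_like(f)              # stand-in for the ML term

def collide(f, tau=0.5, dt=0.05):
    """One explicit step of df/dt = (f_eq - f)/tau + correction(f)."""
    n = f.sum() * dv                     # moments of f
    u = (v * f).sum() * dv / n
    T = ((v - u) ** 2 * f).sum() * dv / n
    f_eq = maxwellian(n, u, T)
    return f + dt * ((f_eq - f) / tau + correction(f))

f0 = maxwellian(1.0, 1.0, 0.5) + maxwellian(0.5, -2.0, 0.3)  # bimodal start
f1 = collide(f0)
```

Because the Maxwellian is built from the moments of f, the BGK part conserves mass, momentum, and energy by construction; a trained correction term would also need to respect (or be projected onto) these conservation constraints.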
An approach to numerically modeling relativistic magnetrons, in which the electrons are represented with a relativistic fluid, is described. A principal effect in the operation of a magnetron is space-charge-limited (SCL) emission of electrons from the cathode. We have developed an approximate SCL emission boundary condition for the fluid electron model. This boundary condition prescribes the flux of electrons as a function of the normal component of the electric field on the boundary. We show the results of a benchmarking activity that applies the fluid SCL boundary condition to the one-dimensional Child–Langmuir diode problem and a canonical two-dimensional diode problem. Simulation results for a two-dimensional A6 magnetron are then presented. Computed bunching of the electron cloud occurs and coincides with significant microwave power generation. Numerical convergence of the solution is considered. Sharp gradients in the solution quantities at the diocotron resonance, spanning an interval of three to four grid cells in the most well-resolved case, are present and likely affect convergence.
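The one-dimensional benchmark named above has the classical closed form: the nonrelativistic Child-Langmuir space-charge-limited current density J = (4/9) ε₀ √(2e/mₑ) V^{3/2}/d² for a planar gap of width d at voltage V. A direct evaluation (standard formula; not the fluid boundary condition itself):

```python
import math

# Classical (nonrelativistic) Child-Langmuir SCL current density for a
# planar diode: J = (4/9) * eps0 * sqrt(2 e / m_e) * V^{3/2} / d^2.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CH = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def child_langmuir(V, d):
    """SCL current density (A/m^2) for gap voltage V (volts), gap d (m)."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CH / M_E) * V ** 1.5 / d ** 2
```

The characteristic V^{3/2}/d² scaling (perveance ≈ 2.33 μA/V^{3/2} per unit area for d in meters) is what the benchmarking activity checks the fluid SCL boundary condition against.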
In this article, we present a general methodology to combine the Discontinuous Petrov–Galerkin (DPG) method in space and time in the context of methods of lines for transient advection–reaction problems. We first introduce a semidiscretization in space with a DPG method, redefining the ideas of optimal testing and practicality of the method in this context. Then, we apply the recently developed DPG-based time-marching scheme, which is of exponential type, to the resulting system of Ordinary Differential Equations (ODEs). Further, we also discuss how to efficiently compute the action of the exponential of the matrix coming from the space semidiscretization without assembling the full matrix. Finally, we verify the proposed method for 1D+time advection–reaction problems, showing optimal convergence rates for smooth solutions and more stable results for linear conservation laws compared to classical exponential integrators.
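The matrix-free idea, computing the action of the exponential using only matrix-vector products, can be sketched with a truncated Taylor series. This is an illustration of the principle only; the DPG-based integrator would use a more robust algorithm in practice.

```python
import numpy as np

# Generic sketch: apply exp(dt*A) to a vector given only A's action
# (a matvec closure), via a truncated Taylor series. No assembled
# matrix is ever formed.
def expm_action(matvec, v, dt, terms=30):
    """Approximate exp(dt*A) @ v using only matvec evaluations."""
    y = v.copy()
    term = v.copy()
    for k in range(1, terms):
        term = dt * matvec(term) / k    # builds (dt*A)^k v / k!
        y = y + term
    return y

# Check against the exact exponential of a diagonal operator.
d = np.array([-1.0, -0.5, 0.25])
y = expm_action(lambda x: d * x, np.ones(3), dt=0.1)
```

For stiff semidiscretizations a plain Taylor series is a poor choice; scaling-and-squaring or Krylov-subspace variants of the same matvec-only idea are the standard remedies.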
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by SFWST Campaign and its predecessor the Used Fuel Disposition (UFD) campaign.
Advection of trace species, or tracers, also called tracer transport, in models of the atmosphere and other physical domains is an important and potentially computationally expensive part of a model's dynamical core. Semi-Lagrangian (SL) advection methods are efficient because they permit a time step much larger than the advective stability limit for explicit Eulerian methods, without requiring the solution of a globally coupled system of equations as implicit Eulerian methods do. Thus, to reduce the computational expense of tracer transport, dynamical cores often use SL methods to advect tracers. The class of interpolation semi-Lagrangian (ISL) methods contains potentially extremely efficient SL methods. We describe a finite-element ISL transport method that we call the interpolation semi-Lagrangian element-based transport (Islet) method, for use with, for example, atmosphere models discretized using the spectral element method. The Islet method uses three grids that share an element grid: a dynamics grid supporting, for example, the Gauss–Legendre–Lobatto basis of degree three; a physics parameterizations grid with a configurable number of finite-volume subcells per element; and a tracer grid supporting Islet bases, with the particular basis again configurable. This method provides extremely accurate tracer transport and excellent diagnostic values in a number of verification problems.
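The core ISL step, trace each grid point back along the flow and interpolate the old field at the departure point, can be sketched in one periodic dimension. Linear interpolation stands in for the Islet bases here, and the setup deliberately uses a time step several times the Eulerian CFL limit.

```python
import numpy as np

# 1D interpolation semi-Lagrangian (ISL) advection sketch on a periodic
# grid. Linear interpolation is a stand-in for the Islet bases.
n = 100
x = np.arange(n) / n                   # periodic grid on [0, 1)
u = 0.37                               # constant velocity
dt = 0.12                              # Courant number u*dt*n ~ 4.4 > 1
q = np.exp(-100 * (x - 0.5) ** 2)      # initial tracer blob

def isl_step(q, u, dt):
    xd = (x - u * dt) % 1.0            # departure points (periodic wrap)
    j = np.floor(xd * n).astype(int)   # cell index of each departure point
    w = xd * n - j                     # fractional position in the cell
    j %= n
    return (1 - w) * q[j] + w * q[(j + 1) % n]

q1 = isl_step(q, u, dt)
```

Each updated value is a convex combination of old values, so the step respects the maximum principle, and the time step is unconstrained by the advective CFL limit, which is the efficiency argument made above.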
Tabulated chemistry models are widely used to simulate large-scale turbulent fires in applications including energy generation and fire safety. Tabulation via piecewise Cartesian interpolation suffers from the curse of dimensionality, leading to a prohibitive exponential growth in parameters and memory usage as more dimensions are considered. Artificial neural networks (ANNs) have attracted attention for constructing surrogates for chemistry models due to their ability to perform high-dimensional approximation. However, due to well-known pathologies regarding the realization of suboptimal local minima during training, in practice they do not converge and provide unreliable accuracy. Partition of unity networks (POUnets) are a recently introduced family of ANNs which preserve notions of convergence while performing high-dimensional approximation, discovering a mesh-free partition of space which may be used to perform optimal polynomial approximation. In this work, we assess their performance with respect to accuracy and model complexity in reconstructing unstructured flamelet data representative of nonadiabatic pool fire models. Our results show that POUnets can provide the desirable accuracy of classical spline-based interpolants with the low memory footprint of traditional ANNs while converging faster to significantly lower errors than ANNs. For example, we observe POUnets obtaining target accuracies in two dimensions with 40 to 50 times less memory and roughly double the compression in three dimensions. We also address the practical matter of efficiently training accurate POUnets by studying convergence over key hyperparameters, the impact of partition/basis formulation, and the sensitivity to initialization.
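The approximation structure behind POUnets, a partition of unity whose members each carry a local polynomial fit, can be sketched with a fixed partition; the real method learns the partition, which is what this sketch omits.

```python
import numpy as np

# Minimal partition-of-unity approximation: fixed Gaussian partitions
# normalized to sum to one, each carrying a least-squares linear fit.
# POUnets learn the partition; here it is fixed, to show only the
# approximation structure.
x = np.linspace(0.0, 1.0, 200)
f = 2.0 * x + 1.0                            # target (linear, hence exactly reproducible)

centers = np.array([0.2, 0.5, 0.8])
phi = np.exp(-((x[:, None] - centers) ** 2) / 0.02)
psi = phi / phi.sum(axis=1, keepdims=True)   # partition of unity: rows sum to 1

approx = np.zeros_like(x)
for j in range(len(centers)):
    # weighted least-squares linear fit on partition j
    A = np.stack([np.ones_like(x), x], axis=1) * np.sqrt(psi[:, j:j + 1])
    c = np.linalg.lstsq(A, f * np.sqrt(psi[:, j]), rcond=None)[0]
    approx += psi[:, j] * (c[0] + c[1] * x)
```

Because the partition sums to one and each local polynomial space contains the target, the blended approximation reproduces any linear function exactly, which is the polynomial-reproduction property that gives POUnets their convergence guarantees.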
PyApprox is a Python-based one-stop-shop for probabilistic analysis of scientific numerical models. Easy to use and extendable tools are provided for constructing surrogates, sensitivity analysis, Bayesian inference, experimental design, and forward uncertainty quantification. The algorithms implemented represent the most popular methods for model analysis developed over the past two decades, including recent advances in multi-fidelity approaches that use multiple model discretizations and/or simplified physics to significantly reduce the computational cost of various types of analyses. Simple interfaces are provided for the most commonly-used algorithms to limit a user’s need to tune the various hyper-parameters of each algorithm. However, more advanced workflows that require customization of hyper-parameters are also supported. An extensive set of benchmarks from the literature is also provided to facilitate the easy comparison of different algorithms for a wide range of model analyses. This paper introduces PyApprox and its various features, and presents results demonstrating the utility of PyApprox on a benchmark problem modeling the advection of a tracer in ground water.
We present a polynomial preconditioner for solving large systems of linear equations. The polynomial is derived from the minimum residual polynomial (the GMRES polynomial) and is more straightforward to compute and implement than many previous polynomial preconditioners. Our current implementation of this polynomial using its roots is naturally more stable than previous methods of computing the same polynomial. We implement further stability control using added roots, and this allows for high degree polynomials. We discuss the effectiveness and challenges of root-adding and give an additional check for stability. In this article, we study the polynomial preconditioner applied to GMRES; however it could be used with any Krylov solver. This polynomial preconditioning algorithm can dramatically improve convergence for some problems, especially for difficult problems, and can reduce dot products by an even greater margin.
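Applying a roots-based polynomial preconditioner uses only matvecs: given roots θ_i of the GMRES residual polynomial π(z) = Π(1 - z/θ_i), the preconditioner p(z) = (1 - π(z))/z can be applied with a short factored recurrence. The sketch below is a generic illustration (roots chosen as the exact eigenvalues of a toy matrix, so p(A)A = I); the paper's harmonic Ritz values, root ordering, and added-roots stabilization are not reproduced.

```python
import numpy as np

# Apply p(A)v for p(z) = (1 - prod_i(1 - z/theta_i)) / z using only
# matvecs, via the factored recurrence:
#   y <- y + t/theta_i ; t <- t - A t/theta_i
# After the loop, t = pi(A) v and y = p(A) v.
def apply_poly_prec(A, v, roots):
    y = np.zeros_like(v)
    t = v.copy()
    for theta in roots:
        y = y + t / theta
        t = t - (A @ t) / theta
    return y   # y = p(A) v

# Toy check: roots equal to A's eigenvalues make pi(A) = 0 on the
# spectrum, so p(A) A = I and the preconditioner inverts A exactly.
A = np.diag([1.0, 2.0, 5.0])
v = np.array([3.0, -1.0, 4.0])
z = apply_poly_prec(A, A @ v, roots=[1.0, 2.0, 5.0])
```

In practice the roots only approximate the spectrum, so p(A)A clusters the eigenvalues near one rather than inverting A, which is what accelerates the outer Krylov iteration.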
The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
In many recent applications, particularly in the field of atom-centered descriptors for interatomic potentials, tensor products of spherical harmonics have been used to characterize complex atomic environments. When coupled with a radial basis, the atomic cluster expansion (ACE) basis is obtained. However, symmetrization with respect to both rotation and permutation results in an overcomplete set of ACE descriptors with linear dependencies occurring within blocks of functions corresponding to particular generalized Wigner symbols. All practical applications of ACE employ semi-numerical constructions to generate a complete, fully independent basis. While computationally tractable, the resultant basis cannot be expressed analytically, is susceptible to numerical instability, and thus has limited reproducibility. Here we present a procedure for generating explicit analytic expressions for a complete and independent set of ACE descriptors. The procedure uses a coupling scheme that is maximally symmetric with respect to permutation of the atoms, exposing the permutational symmetries of the generalized Wigner symbols, and yields a permutation-adapted rotationally and permutationally invariant basis (PA-RPI ACE). Theoretical support for the approach is presented, as well as numerical evidence of completeness and independence. A summary of explicit enumeration of PA-RPI functions up to rank 6 and polynomial degree 32 is provided. The PA-RPI blocks corresponding to particular generalized Wigner symbols may be either larger or smaller than the corresponding blocks in the simpler rotationally invariant basis. Finally, we demonstrate that basis functions of high polynomial degree persist under strong regularization, indicating the importance of not restricting the maximum degree of basis functions in ACE models a priori.
The selective amorphization of SiGe in Si/SiGe nanostructures via a 1 MeV Si⁺ implant was investigated, resulting in single-crystal Si nanowires (NWs) and quantum dots (QDs) encapsulated in amorphous SiGe fins and pillars, respectively. The Si NWs and QDs are formed during high-temperature dry oxidation of single-crystal Si/SiGe heterostructure fins and pillars, during which Ge diffuses along the nanostructure sidewalls and encapsulates the Si layers. The fins and pillars were then subjected to a 1 MeV Si⁺ implant at a fluence of 3 × 10¹⁵ ions/cm², resulting in the amorphization of SiGe, while leaving the encapsulated Si crystalline for larger, 65-nm wide NWs and QDs. Interestingly, the 26-nm diameter Si QDs amorphize, while the 28-nm wide NWs remain crystalline during the same high energy ion implant. This result suggests that the Si/SiGe pillars have a lower threshold for Si-induced amorphization compared to their Si/SiGe fin counterparts. However, Monte Carlo simulations of ion implantation into the Si/SiGe nanostructures reveal similar predicted levels of displacements per cm³. Molecular dynamics simulations suggest that the total stress magnitude in Si QDs encapsulated in crystalline SiGe is higher than the total stress magnitude in Si NWs, which may lead to greater crystalline instability in the QDs during ion implant. The potential lower amorphization threshold of QDs compared to NWs is of special importance to applications that require robust QD devices in a variety of radiation environments.
A semi-analytic fluid model has been developed for characterizing relativistic electron emission across a warm diode gap. Here we demonstrate the use of this model in (i) verifying multi-fluid codes in modeling compressible relativistic electron flows (the EMPIRE-Fluid code is used as an example; see also Ref. 1), (ii) elucidating key physics mechanisms characterizing the influence of compressibility and relativistic injection speed of the electron flow, and (iii) characterizing the regimes over which a fluid model recovers physically reasonable solutions.
The objective of this milestone was to finish integrating the GenTen tensor software with the combustion application Pele using the Ascent in situ analysis software, in partnership with the ALPINE and Pele teams, and to demonstrate the use of tensor analysis as part of a combustion simulation.
We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. To this end, we introduce “coupling” variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the surrogates of the components are combined to form an integrated-surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated-surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data, used to construct each component-surrogate, based on the contribution of those surrogates to the error of the integrated-surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated-surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm is able to produce surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black-box.
Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems that are governed by complex multiscale processes for which some data are also available. In some instances, the objective is to discover part of the hidden physics from the available data, and PIML has been shown to be particularly effective for such problems for which conventional methods may fail. Unlike commercial machine learning where training of deep neural networks requires big data, in PIML big data are not available. Instead, we can train such networks from additional information obtained by employing the physical laws and evaluating them at random points in the space-time domain. Such PIML integrates multimodality and multifidelity data with mathematical models, and implements them using neural networks or graph networks. Here, we review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation. For more complex systems or systems of systems and unstructured data, graph neural networks (GNNs) present some distinct advantages, and here we review how physics-informed learning can be accomplished with GNNs based on graph exterior calculus to construct differential operators; we refer to these architectures as physics-informed graph networks (PIGNs). We present representative examples for both forward and inverse problems and discuss what advances are needed to scale up PINNs, PIGNs and more broadly GNNs for large-scale engineering problems.
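The physics-informed training idea, minimize the governing-equation residual at points sampled in the domain, can be shown in its simplest possible form. Below, a polynomial ansatz stands in for the neural network for the ODE u' = -u, u(0) = 1, so linear least squares replaces gradient descent and automatic differentiation; this is an illustration of the residual-minimization principle, not a PINN implementation.

```python
import numpy as np

# Physics-informed fitting in miniature: ansatz u(t) = 1 + a*t + b*t^2
# (built to satisfy u(0) = 1) with coefficients chosen to minimize the
# ODE residual u' + u at collocation points. A polynomial replaces the
# neural network, so the "training" is a linear least-squares solve.
t = np.linspace(0.0, 1.0, 101)          # collocation points in the domain
# residual r(t) = u' + u = 1 + a*(1 + t) + b*(2t + t^2); drive r to 0
G = np.stack([1.0 + t, 2.0 * t + t ** 2], axis=1)
a, b = np.linalg.lstsq(G, -np.ones_like(t), rcond=None)[0]
u1 = 1.0 + a + b                        # approximation to u(1) = exp(-1)
```

A PINN replaces the quadratic by a deep network and the analytic derivative by automatic differentiation, but the loss being minimized, the squared equation residual at sampled points plus boundary terms, has exactly this structure.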