Buczkowski, Nicole E.; Foss, Mikil D.; Parks, Michael L.; Radu, Petronela
The paper presents a collection of results on continuous dependence for solutions to nonlocal problems under perturbations of data and system parameters. The integral operators appearing in the systems capture interactions via heterogeneous kernels that exhibit different types of weak singularities, spatial dependence, and even regions of zero interaction. The stability results provide explicit bounds involving the measure of the domain, the size of the interaction collar, the nonlocal Poincaré constant, and other parameters. In the nonlinear setting, the bounds quantify, in different $L^p$ norms, the sensitivity of solutions under different nonlinearity profiles. The results are validated by numerical simulations showcasing discontinuous solutions, varying horizons of interaction, and symmetric and heterogeneous kernels.
This SAND report documents CIS Late Start LDRD Project 22-0311, "Differential geometric approaches to momentum-based formulations for fluids". The project primarily developed geometric mechanics formulations for momentum-based descriptions of nonrelativistic fluids, utilizing a differential geometry/exterior calculus treatment of momentum and a space+time splitting. Specifically, the full suite of geometric mechanics formulations (variational/Lagrangian, Lie-Poisson Hamiltonian, and Curl-Form Hamiltonian) was developed in terms of exterior calculus using vector-bundle-valued differential forms. This was done for a fairly general version of semi-direct product theory, sufficient to cover a wide range of both neutral and charged fluid models, including compressible Euler, magnetohydrodynamics, and Euler-Maxwell. As a secondary goal, this project also explored the connection between geometric mechanics formulations and the more traditional Godunov form (a hyperbolic system of conservation laws). Unfortunately, this stage did not produce anything particularly interesting, due to unforeseen technical difficulties. There are two publications related to this work currently in preparation, and this work will be presented at SIAM CSE 23, at which the PI is organizing a mini-symposium on geometric mechanics formulations and structure-preserving discretizations for fluids. The logical next step is to utilize the exterior-calculus-based understanding of momentum, coupled with geometric mechanics formulations, to develop (novel) structure-preserving discretizations of momentum. This is the main subject of a successful FY23 CIS LDRD, "Structure-preserving discretizations for momentum-based formulations of fluids".
This report documents the progress made in simulating the HERMES-III Magnetically Insulated Transmission Line (MITL) and courtyard with EMPIRE and ITS. This study focuses on shots taken during June and July of 2019 with the new MITL extension. A few shots included dose mapping of the courtyard: 11132, 11133, 11134, 11135, 11136, and 11146. This report focuses on these shots because there was full data return from the MITL electrical diagnostics and the radiation dose sensors in the courtyard. The comparison starts with improving the processing of the incoming voltage from the experiment into the EMPIRE simulation. The currents are then compared at several locations along the MITL. The simulation results of the electrons impacting the anode are shown. The electron impact energies and angles are then handed off to ITS, which calculates the dose on the faceplate and at locations in the courtyard; these are compared to experimental measurements. ITS also calculates the photons and electrons that are injected into the courtyard, and these quantities are then used by EMPIRE to calculate the photon and electron transport in the courtyard. The details of the algorithms used to perform the courtyard simulations are presented, as well as qualitative comparisons of the electric field, magnetic field, and conductivity in the courtyard. Because of the computational burden of these calculations, the pressure in the courtyard was reduced to lower the computational load. The computational performance is presented along with suggestions on how to improve both the computational performance and the algorithmic performance. Some of the algorithmic changes would reduce the accuracy of the models, and detailed comparisons of these changes are left for a future study. In addition to the list of code improvements, there is also a list of suggested experimental improvements to improve the quality of the data return.
Modeling real-world phenomena to any degree of accuracy is a challenge that the scientific research community has navigated since its foundation. Lack of information and limited computational and observational resources necessitate modeling assumptions which, when invalid, lead to model-form error (MFE). The work reported herein explored a novel method to represent model-form uncertainty (MFU) that combines Bayesian statistics with the emerging field of universal differential equations (UDEs). The fundamental principle behind UDEs is simple: use known equational forms that govern a dynamical system when you have them; then incorporate data-driven approaches – in this case neural networks (NNs) – embedded within the governing equations to learn the interacting terms that were underrepresented. Utilizing epidemiology as our motivating exemplar, this report will highlight the challenges of modeling novel infectious diseases while introducing ways to incorporate NN approximations to MFE. Prior to embarking on a Bayesian calibration, we first explored methods to augment the standard (non-Bayesian) UDE training procedure to account for uncertainty and increase robustness of training. In addition, it is often the case that uncertainty in observations is significant; this may be due to randomness or lack of precision in the measurement process. This uncertainty typically manifests as “noisy” observations which deviate from a true underlying signal. To account for such variability, the NN approximation to MFE is endowed with a probabilistic representation and is updated using available observational data in a Bayesian framework. By representing the MFU explicitly and deploying an embedded, data-driven model, this approach enables an agile, expressive, and interpretable method for representing MFU. In this report we will provide evidence that Bayesian UDEs show promise as a novel framework for any science-based, data-driven MFU representation; while emphasizing that significant advances must be made in the calibration of Bayesian NNs to ensure a robust calibration procedure.
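To illustrate the UDE structure described above, the following minimal Python sketch (hypothetical code, not taken from the project) embeds a small neural network correction term inside a known SIR-type epidemiological model; in practice the network weights would be trained on data, or endowed with a posterior distribution in the Bayesian setting.

import numpy as np

def mlp(x, theta):
    # Tiny two-layer perceptron standing in for the model-form-error term;
    # theta = (W1, b1, W2, b2) are the trainable weights.
    W1, b1, W2, b2 = theta
    return W2 @ np.tanh(W1 @ x + b1) + b2

def ude_rhs(u, beta, gamma, theta):
    # Known SIR physics plus a learned correction for under-represented interactions.
    S, I, R = u
    known = np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])
    return known + mlp(u, theta)

def integrate(u0, ts, beta, gamma, theta):
    # Forward-Euler time stepping; adequate for a sketch.
    us = [np.asarray(u0, dtype=float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        us.append(us[-1] + (t1 - t0) * ude_rhs(us[-1], beta, gamma, theta))
    return np.array(us)

# Example usage with randomly initialized (untrained) weights:
rng = np.random.default_rng(0)
theta = (0.1 * rng.standard_normal((8, 3)), np.zeros(8),
         0.1 * rng.standard_normal((3, 8)), np.zeros(3))
trajectory = integrate([0.99, 0.01, 0.0], np.linspace(0.0, 30.0, 301), 0.3, 0.1, theta)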
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single-layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single-layer metasurfaces of size 100λ × 100λ with aperture density λ⁻² achieve ~96% testing accuracy on the MNIST dataset, for an optimized distance of ~100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple-layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
This document provides very basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices within the Common Engineering Environment (CEE) and High Performance Computing (HPC) computational environment at Sandia National Laboratories through GitLab/Jacamar-runner-based workflows.
Analog computing has been widely proposed to improve the energy efficiency of multiple important workloads, including neural network operations and other linear algebra kernels. To properly evaluate analog computing and explore more complex workloads, such as systems consisting of multiple analog data paths, system-level simulations are required. Moreover, prior work on system architectures for analog computing often relies on custom simulators, creating significant additional design effort and complicating comparisons between different systems. To remedy these issues, this report describes the design and implementation of a flexible tile-based analog accelerator element for the Structural Simulation Toolkit (SST). The element focuses heavily on the tile controller, an often-neglected aspect of prior work, which is sufficiently versatile to simulate a wide range of different tile operations, including neural network layers, signal processing kernels, and generic linear algebra operations, without major constraints. The tile model also interoperates with existing SST memory and network models to reduce the overall development load and enable future simulation of heterogeneous systems with both conventional digital logic and analog compute tiles. Finally, both the tile and array models are designed to easily support future extensions as new analog operations and applications that can benefit from analog computing are developed.
Plasma physics simulations are vital for a host of Sandia mission concerns, for fundamental science, and for clean energy in the form of fusion power. Sandia's most mature plasma physics simulation capabilities come in the form of particle-in-cell (PIC) models and magnetohydrodynamics (MHD) models. MHD models for a plasma work well in denser plasma regimes when there is enough material that the plasma approximates a fluid. PIC models, on the other hand, work well in lower-density regimes, in which there is not too much to simulate; error in PIC scales as the square root of the number of particles, making high-accuracy simulations expensive. Real-world applications, however, almost always involve a transition region between the high-density regimes where MHD is appropriate, and the low-density regimes for PIC. In such a transition region, a direct discretization of Vlasov is appropriate. Such discretizations come with their own computational costs, however; the phase-space mesh for Vlasov can involve up to six dimensions (seven if time is included), and to apply appropriate homogeneous boundary conditions in velocity space requires meshing a substantial padding region to ensure that the distribution remains sufficiently close to zero at the velocity boundaries. Moreover, for collisional plasmas, the right-hand side of the Vlasov equation is a collision operator, which is non-local in velocity space, and which may dominate the cost of the Vlasov solver. The present LDRD project endeavors to develop modern, foundational tools for the development of continuum-kinetic Vlasov solvers, using the discontinuous Petrov-Galerkin (DPG) methodology, for discretization of Vlasov, and machine-learning (ML) models to enable efficient evaluation of collision operators. DPG affords several key advantages. First, it has a built-in, robust error indicator, allowing us to adapt the mesh in a very natural way, enabling a coarse velocity-space mesh near the homogeneous boundaries, and a fine mesh where the solution has fine features. Second, it is an inherently high-order, high-intensity method, requiring extra local computations to determine so-called optimal test functions, which makes it particularly suited to modern hardware in which floating-point throughput is increasing at a faster rate than memory bandwidth. Finally, DPG is a residual-minimizing method, which enables high-accuracy computation: in typical cases, the method delivers something very close to the $L^2$ projection of the exact solution. Meanwhile, the ML-based collision model we adopt affords a cost structure that scales as the square root of a standard direct evaluation. Moreover, we design our model to conserve mass, momentum, and energy by construction, and our approach to training is highly flexible, in that it can incorporate not only synthetic data from direct-simulation Monte Carlo (DSMC) codes, but also experimental data. We have developed two DPG formulations for Vlasov-Poisson: a time-marching, backward-Euler discretization and a space-time discretization. We have conducted a number of numerical experiments to verify the approach in a 1D1V setting. In this report, we detail these formulations and experiments. 
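For reference, the continuum-kinetic system targeted here is the standard (single-species) Vlasov-Poisson system, with a collision operator $C[f]$ on the right-hand side in the collisional case:
\[
\partial_t f + \mathbf{v}\cdot\nabla_{\mathbf{x}} f + \frac{q}{m}\,\mathbf{E}\cdot\nabla_{\mathbf{v}} f = C[f],
\qquad
\mathbf{E} = -\nabla_{\mathbf{x}}\phi,
\qquad
-\epsilon_0\,\Delta\phi = q\int f\,d\mathbf{v},
\]
where $f(\mathbf{x},\mathbf{v},t)$ is the phase-space distribution function (a neutralizing background charge may be added to the Poisson equation as needed). The homogeneous velocity-space boundary conditions discussed above are imposed where $f$ has decayed to (near) zero.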
We also summarize some new theoretical results developed as part of this project (published as papers previously): some new analysis of DPG for the convection-reaction problem (of which the Vlasov equation is an instance), a new exponential integrator for DPG, and some numerical exploration of various DPG-based time-marching approaches to the heat equation. As part of this work, we have contributed extensively to the Camellia open-source library; we also describe the new capabilities and their usage. We have also developed a well-documented methodology for single-species collision operators, which we applied to argon and demonstrated with numerical experiments. We summarize those results here, as well as describing at a high level a design extending the methodology to multi-species operators. We have released a new open-source library, MLC, under a BSD license; we include a summary of its capabilities as well.
This project created and demonstrated a framework for the efficient and accurate prediction of complex systems with only a limited amount of highly trusted data. These next-generation computational multi-fidelity tools fuse multiple information sources of varying cost and accuracy to reduce the computational and experimental resources needed for designing and assessing complex multi-physics/scale/component systems. These tools have already been used to substantially improve the computational efficiency of simulation-aided modeling activities, from assessing thermal battery performance to predicting material deformation. This report summarizes the work carried out during a two-year LDRD project. Specifically, we present our technical accomplishments; project outputs such as publications, presentations, and professional leadership activities; and the project's legacy.
When modeling complex physical systems with advanced dynamics, such as shocks and singularities, many classic methods for solving partial differential equations can return inaccurate or unusable results. One way to resolve these complex dynamics is through r-adaptive refinement methods, in which a fixed number of mesh points are shifted to areas of high interest. The mesh refinement map can be found through the solution of the Monge-Ampère equation, a highly nonlinear partial differential equation. Due to its nonlinearity, the numerical solution of the Monge-Ampère equation is nontrivial and has previously required computationally expensive methods. In this report, we detail our novel optimization-based, multigrid-enabled solver for a low-order finite element approximation of the Monge-Ampère equation. This fast and scalable solver makes r-adaptive meshing more readily available for problems related to large-scale optimal design. Beyond mesh adaptivity, our report discusses additional applications where our fast solver for the Monge-Ampère equation could be easily applied.
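As a point of reference, a commonly used mesh-adaptation formulation (a sketch of the standard equidistribution form, not necessarily the exact variant solved in the report) seeks a convex potential $u$ whose gradient defines the refinement map, with a monitor function $m>0$ concentrating mesh points where it is large:
\[
m\big(\nabla u(\mathbf{x})\big)\,\det D^2 u(\mathbf{x}) = \theta,
\qquad
\mathbf{x}_{\text{new}} = \nabla u(\mathbf{x}),
\]
where the constant $\theta$ is fixed by the requirement that the map send the computational domain onto the physical domain.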
Quantifying the sensitivity – how a quantity of interest (QoI) varies with respect to a parameter – and the response – the representation of a QoI as a function of a parameter – of a computer model of a parametric dynamical system is an important and challenging problem. Traditional methods fail in this context since sensitive dependence on initial conditions implies that the sensitivity and response of a QoI may be ill-conditioned or not well-defined. If a chaotic model has an ergodic attractor, then ergodic averages of QoIs are well-defined quantities and their sensitivity can be used to characterize model sensitivity. The response theorem gives sufficient conditions under which the local forward sensitivity – the derivative with respect to a given parameter – of an ergodic average of a QoI is well-defined. We describe a method based on ergodic and response theory for computing the sensitivity and response of a given QoI with respect to a given parameter in a chaotic model with an ergodic and hyperbolic attractor. This method does not require computation of ensembles of the model with perturbed parameter values. The method is demonstrated, and some of the computations are validated, on the Lorenz 63 and Lorenz 96 models.
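Concretely, for a parameter $s$ and a QoI $J$, the quantities involved can be written as
\[
\langle J \rangle(s) \;=\; \lim_{T\to\infty}\frac{1}{T}\int_0^T J\big(u(t;s)\big)\,dt \;=\; \int J\,d\mu_s,
\qquad
\text{sensitivity} \;=\; \frac{d}{ds}\langle J\rangle(s),
\]
where $\mu_s$ is the ergodic invariant measure on the attractor and the time average holds for typical initial conditions; the response theorem provides conditions under which this derivative exists.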
This report details work that was completed to address the Fiscal Year 2022 Advanced Science and Technology (AS&T) Laboratory Directed Research and Development (LDRD) call for “AI-enhanced Co-Design of Next Generation Microelectronics.” This project required concurrent contributions from the fields of 1) materials science, 2) devices and circuits, 3) physics of computing, and 4) algorithms and system architectures. During this project, we developed AI-enhanced circuit design methods that relied on reinforcement learning and evolutionary algorithms. The AI-enhanced design methods were tested on neuromorphic circuit design problems that have real-world applications related to Sandia’s mission needs. The developed methods enable the design of circuits, including circuits that are built from emerging devices, and they were also extended to enable novel device discovery. We expect that these AI-enhanced design methods will accelerate progress towards developing next-generation, high-performance neuromorphic computing systems.
The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a manner in which you can think about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics—nearly all are well known but unfortunately are often not recognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions, whereas statisticians work with samples. In so many words, information theory using samples is the practice of statistics.
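For a discrete random variable $X$ with probability mass function $p$, the entropy referred to here is the expected surprise
\[
H(X) \;=\; -\sum_{x} p(x)\,\log p(x) \;=\; \mathbb{E}\big[-\log p(X)\big],
\]
so that rare outcomes (small $p(x)$) carry large surprise $-\log p(x)$, and $H(X)$ is its average.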
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT), is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by the SFWST Campaign and its predecessor, the Used Fuel Disposition (UFD) campaign.
Advection of trace species, or tracers, also called tracer transport, in models of the atmosphere and other physical domains is an important and potentially computationally expensive part of a model's dynamical core. Semi-Lagrangian (SL) advection methods are efficient because they permit a time step much larger than the advective stability limit for explicit Eulerian methods, without requiring the solution of a globally coupled system of equations as implicit Eulerian methods do. Thus, to reduce the computational expense of tracer transport, dynamical cores often use SL methods to advect tracers. The class of interpolation semi-Lagrangian (ISL) methods contains potentially extremely efficient SL methods. We describe a finite-element ISL transport method that we call the interpolation semi-Lagrangian element-based transport (Islet) method, for use, for example, with atmosphere models discretized using the spectral element method. The Islet method uses three grids that share an element grid: a dynamics grid supporting, for example, the Gauss-Legendre-Lobatto basis of degree three; a physics parameterizations grid with a configurable number of finite-volume subcells per element; and a tracer grid supporting Islet bases, with the particular basis again configurable. This method provides extremely accurate tracer transport and excellent diagnostic values in a number of verification problems.
PyApprox is a Python-based one-stop shop for probabilistic analysis of scientific numerical models. Easy-to-use and extendable tools are provided for constructing surrogates, sensitivity analysis, Bayesian inference, experimental design, and forward uncertainty quantification. The algorithms implemented represent the most popular methods for model analysis developed over the past two decades, including recent advances in multi-fidelity approaches that use multiple model discretizations and/or simplified physics to significantly reduce the computational cost of various types of analyses. Simple interfaces are provided for the most commonly used algorithms to limit a user's need to tune the various hyper-parameters of each algorithm. However, more advanced workflows that require customization of hyper-parameters are also supported. An extensive set of benchmarks from the literature is also provided to facilitate easy comparison of different algorithms for a wide range of model analyses. This paper introduces PyApprox and its various features, and presents results demonstrating the utility of PyApprox on a benchmark problem modeling the advection of a tracer in groundwater.
An approach to numerically modeling relativistic magnetrons, in which the electrons are represented with a relativistic fluid, is described. A principal effect in the operation of a magnetron is space-charge-limited (SCL) emission of electrons from the cathode. We have developed an approximate SCL emission boundary condition for the fluid electron model. This boundary condition prescribes the flux of electrons as a function of the normal component of the electric field on the boundary. We show the results of a benchmarking activity that applies the fluid SCL boundary condition to the one-dimensional Child-Langmuir diode problem and a canonical two-dimensional diode problem. Simulation results for a two-dimensional A6 magnetron are then presented. Computed bunching of the electron cloud occurs and coincides with significant microwave power generation. Numerical convergence of the solution is considered. Sharp gradients in the solution quantities at the diocotron resonance, spanning an interval of three to four grid cells in the most well-resolved case, are present and likely affect convergence.
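For context, the classical one-dimensional Child-Langmuir benchmark referenced above gives, in the nonrelativistic limit, the space-charge-limited current density for a planar diode with gap $d$ and voltage $V$:
\[
J_{\mathrm{CL}} \;=\; \frac{4\,\epsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}},
\]
against which the fluid SCL emission boundary condition can be verified in the appropriate limit.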
We present a polynomial preconditioner for solving large systems of linear equations. The polynomial is derived from the minimum residual polynomial (the GMRES polynomial) and is more straightforward to compute and implement than many previous polynomial preconditioners. Our current implementation of this polynomial using its roots is naturally more stable than previous methods of computing the same polynomial. We implement further stability control using added roots, which allows for high-degree polynomials. We discuss the effectiveness and challenges of root-adding and give an additional check for stability. In this article, we study the polynomial preconditioner applied to GMRES; however, it could be used with any Krylov solver. This polynomial preconditioning algorithm can dramatically improve convergence for some problems, especially difficult ones, and can reduce dot products by an even greater margin.
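A minimal sketch of how such a preconditioner can be applied through its roots is given below (illustrative Python, not the authors' implementation; it assumes real roots and omits the added-roots stability control described above). Given approximate eigenvalues $\theta_i$ (e.g., harmonic Ritz values from a short GMRES/Arnoldi run), the residual polynomial is $\pi(z)=\prod_i(1-z/\theta_i)$, and the preconditioner polynomial $p$ with $p(z)\,z = 1-\pi(z)$ is applied as follows.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def apply_poly_precond(A, v, roots):
    # Computes p(A) v with p(z) = sum_i (1/theta_i) prod_{j<i} (1 - z/theta_j),
    # which satisfies p(z) z = 1 - prod_i (1 - z/theta_i), so p(A) approximates A^{-1}.
    result = np.zeros_like(v, dtype=float)
    prod = v.astype(float).copy()
    for theta in roots:
        result += prod / theta
        prod -= A @ prod / theta
    return result

# Example: a simple SPD system preconditioned with a degree-5 polynomial whose
# roots are taken (for illustration only) as rough eigenvalue estimates.
n = 200
A = np.diag(np.linspace(1.0, 100.0, n))
b = np.ones(n)
roots = np.linspace(1.0, 100.0, 5)  # stand-in for harmonic Ritz values
M = LinearOperator((n, n), matvec=lambda v: apply_poly_precond(A, v, roots))
x, info = gmres(A, b, M=M)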
For decades, Arctic temperatures have increased twice as fast as average global temperatures. As a first step toward quantifying parametric uncertainty in Arctic climate, we performed a variance-based global sensitivity analysis (GSA) using a fully coupled, ultra-low-resolution (ULR) configuration of version 1 of the U.S. Department of Energy's Energy Exascale Earth System Model (E3SMv1). Specifically, we quantified the sensitivity of six quantities of interest (QOIs), which characterize changes in Arctic climate over a 75-year period, to uncertainties in nine model parameters spanning the sea ice, atmosphere, and ocean components of E3SMv1. Sensitivity indices for each QOI were computed with a Gaussian process emulator using 139 random realizations of the random parameters and fixed preindustrial forcing. Uncertainties in the atmospheric parameters in the Cloud Layers Unified by Binormals (CLUBB) scheme were found to have the most impact on sea ice status and the larger Arctic climate. Our results demonstrate the importance of conducting sensitivity analyses with fully coupled climate models. The ULR configuration makes such studies computationally feasible today due to its low computational cost. When advances in computational power and modeling algorithms enable the tractable use of higher-resolution models, our results will provide a baseline that can quantify the impact of model resolution on the accuracy of sensitivity indices. Moreover, the confidence intervals provided by our study, which we used to quantify the impact of the number of model evaluations on the accuracy of sensitivity estimates, have the potential to inform the computational resources needed for future sensitivity studies.
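For reference, the first-order (main-effect) and total-effect indices standardly used in variance-based GSA, and computable from a Gaussian process emulator, take the form
\[
S_i \;=\; \frac{\mathrm{Var}_{X_i}\!\big(\mathbb{E}[\,Y\mid X_i\,]\big)}{\mathrm{Var}(Y)},
\qquad
T_i \;=\; 1 - \frac{\mathrm{Var}_{X_{\sim i}}\!\big(\mathbb{E}[\,Y\mid X_{\sim i}\,]\big)}{\mathrm{Var}(Y)},
\]
where $Y$ is a QOI, $X_i$ is the $i$-th uncertain parameter, and $X_{\sim i}$ denotes all parameters except $X_i$.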
The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
We show, through the use of the Landauer-Büttiker (LB) formalism and a tight-binding (TB) model, that the transport gap of twinned graphene can be tuned through the application of a uniaxial strain in the direction normal to the twin band. Remarkably, we find that the transport gap Egap bears a square-root dependence on the control parameter ϵx−ϵc, where ϵx is the applied uniaxial strain and ϵc∼19% is a critical strain. We interpret this dependence as evidence of criticality underlying a continuous phase transition, with ϵx−ϵc playing the role of control parameter and the transport gap Egap playing the role of order parameter. For ϵx<ϵc, the transport gap is non-zero and the material is a semiconductor, whereas for ϵx>ϵc the transport gap closes to zero and the material becomes a conductor, which evinces a semiconductor-to-conductor phase transition. The computed critical exponent of 1/2 places the transition in the mean-field universality class, which enables far-reaching analogies with other systems in the same class.
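In equation form, the reported behavior (written here as a sketch consistent with the stated square-root dependence and critical exponent of 1/2) is
\[
E_{\mathrm{gap}}(\epsilon_x) \;\propto\; (\epsilon_c - \epsilon_x)^{1/2} \quad \text{for } \epsilon_x < \epsilon_c,
\qquad
E_{\mathrm{gap}}(\epsilon_x) = 0 \quad \text{for } \epsilon_x \ge \epsilon_c,
\]
with ϵc∼19%, so the gap acts as an order parameter that vanishes continuously at the critical strain.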
The selective amorphization of SiGe in Si/SiGe nanostructures via a 1 MeV Si⁺ implant was investigated, resulting in single-crystal Si nanowires (NWs) and quantum dots (QDs) encapsulated in amorphous SiGe fins and pillars, respectively. The Si NWs and QDs are formed during high-temperature dry oxidation of single-crystal Si/SiGe heterostructure fins and pillars, during which Ge diffuses along the nanostructure sidewalls and encapsulates the Si layers. The fins and pillars were then subjected to a 3 × 10¹⁵ ions/cm² 1 MeV Si⁺ implant, resulting in the amorphization of SiGe, while leaving the encapsulated Si crystalline for larger, 65-nm-wide NWs and QDs. Interestingly, the 26-nm-diameter Si QDs amorphize, while the 28-nm-wide NWs remain crystalline during the same high-energy ion implant. This result suggests that the Si/SiGe pillars have a lower threshold for Si-induced amorphization compared to their Si/SiGe fin counterparts. However, Monte Carlo simulations of ion implantation into the Si/SiGe nanostructures reveal similar predicted levels of displacements per cm³. Molecular dynamics simulations suggest that the total stress magnitude in Si QDs encapsulated in crystalline SiGe is higher than the total stress magnitude in Si NWs, which may lead to greater crystalline instability in the QDs during ion implant. The potential lower amorphization threshold of QDs compared to NWs is of special importance to applications that require robust QD devices in a variety of radiation environments.
A semi-analytic fluid model has been developed for characterizing relativistic electron emission across a warm diode gap. Here we demonstrate the use of this model in (i) verifying multi-fluid codes in modeling compressible relativistic electron flows (the EMPIRE-Fluid code is used as an example; see also Ref. 1), (ii) elucidating key physics mechanisms characterizing the influence of compressibility and relativistic injection speed of the electron flow, and (iii) characterizing the regimes over which a fluid model recovers physically reasonable solutions.
The objective of this milestone was to finish integrating the GenTen tensor software with the combustion application Pele using the Ascent in situ analysis software, partnering with the ALPINE and Pele teams, and to demonstrate the use of tensor analysis as part of a combustion simulation.
We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. To this end, we introduce "coupling" variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the surrogates of the components are combined to form an integrated-surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated-surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data used to construct each component-surrogate based on the contribution of those surrogates to the error of the integrated-surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated-surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm is able to produce surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black box.