Publications

Results 26–50 of 9,998

Unified Language Frontend for Physics-Informed AI/ML

Kelley, Brian M.; Rajamanickam, Sivasankaran R.

Artificial intelligence and machine learning (AI/ML) are becoming important tools for scientific modeling and simulation, as they already are in fields such as image analysis and natural language processing. ML techniques can leverage the computing power available in modern systems and reduce the human effort needed to configure experiments, interpret and visualize results, draw conclusions from huge quantities of raw data, and build surrogates for physics-based models. Domain scientists in fields like fluid dynamics, microelectronics, and chemistry can automate many of their most difficult and repetitive tasks, or improve design times by using faster ML surrogates. However, modern ML and traditional scientific high-performance computing (HPC) tend to use completely different software ecosystems. While ML frameworks like PyTorch and TensorFlow provide Python APIs, most HPC applications and libraries are written in C++. Direct interoperability between the two languages is possible but tedious and error-prone. In this work, we show that a compiler-based approach can bridge the gap between ML frameworks and scientific software with less developer effort and better efficiency. We use the MLIR (multi-level intermediate representation) ecosystem to compile a pre-trained convolutional neural network (CNN) in PyTorch to freestanding C++ source code in the Kokkos programming model. Kokkos is a programming model widely used in HPC to write portable, shared-memory parallel code that can natively target a variety of CPU and GPU architectures. Our compiler-generated source code can be directly integrated into any Kokkos-based application with no dependencies on Python or cross-language interfaces.
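
As a rough illustration of the front end of such a pipeline, the sketch below defines a small CNN and traces it into a static graph with torch.jit.trace, the kind of representation a compiler-based flow (for example via MLIR) would consume before emitting Kokkos C++; the model is hypothetical, and the MLIR lowering and code generation themselves are not shown.

```python
# Minimal sketch (hypothetical model): trace a small pre-trained CNN into a
# static graph, the usual starting point for compiler-based lowering such as
# the MLIR pipeline described in the abstract. Only the PyTorch front end is
# shown; the MLIR-to-Kokkos code generation itself is not reproduced here.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN().eval()
example_input = torch.randn(1, 1, 28, 28)

# Trace the model into a static graph; a compiler front end (e.g. torch-mlir)
# would consume a representation like this before lowering to C++/Kokkos.
traced = torch.jit.trace(model, example_input)
print(traced.graph)
```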

More Details

Lossless Quantum Hard-Drive Memory Using Parity-Time Symmetry

Chatterjee, Eric N.; Soh, Daniel B.; Young, Steve M.

We theoretically studied the feasibility of building a long-term read-write quantum memory using the principle of parity-time (PT) symmetry, which has already been demonstrated for classical systems. The design consisted of a two-resonator system. Although both resonators would feature intrinsic loss, the goal was to apply a driving signal to one of the resonators such that it would become an amplifying subsystem, with a gain rate equal and opposite to the loss rate of the lossy resonator. Consequently, the loss and gain probabilities in the overall system would cancel out, yielding a closed quantum system. Upon performing detailed calculations on the impact of a driving signal on a lossy resonator, our results demonstrated that an amplifying resonator is physically unfeasible, thus forestalling the possibility of PT-symmetric quantum storage. Our finding serves to significantly narrow down future research into designing a viable quantum hard drive.
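
For readers unfamiliar with PT symmetry, the balanced loss/gain idea can be summarized with the standard two-mode effective Hamiltonian below; this is a generic textbook form with assumed symbols (ω₀, γ, g), not the detailed model analyzed in the report.

```latex
% Generic two-resonator PT-symmetric effective Hamiltonian (illustrative only):
% one resonator with loss rate \gamma, one with an equal and opposite gain rate,
% coupled at rate g. Both modes share the bare frequency \omega_0.
H_{\mathrm{eff}} =
\begin{pmatrix}
\omega_0 - i\gamma & g \\
g & \omega_0 + i\gamma
\end{pmatrix},
\qquad
\lambda_{\pm} = \omega_0 \pm \sqrt{g^{2} - \gamma^{2}} .
```

The eigenvalues remain real, so the coupled system behaves as lossless, only while g exceeds γ; the report's finding that a genuinely amplifying resonator cannot be realized removes the gain term and, with it, the prospect of reaching this lossless regime.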

More Details

Mathematical Foundations for Nonlocal Interface Problems: Multiscale Simulations of Heterogeneous Materials (Final LDRD Report)

D'Elia, Marta D.; Bochev, Pavel B.; Foster, John E.; Glusa, Christian A.; Gulian, Mamikon G.; Gunzburger, Max G.; Trageser, Jeremy T.; Kuhlman, Kristopher L.; Martinez, Mario A.; Najm, H.N.; Silling, Stewart A.; Tupek, Michael T.; Xu, Xiao X.

Nonlocal models provide a much-needed predictive capability for important Sandia mission applications, ranging from fracture mechanics for nuclear components to subsurface flow for nuclear waste disposal, where traditional partial differential equation (PDE) models fail to capture effects due to long-range forces at the microscale and mesoscale. However, utilization of this capability is seriously compromised by the lack of a rigorous nonlocal interface theory, required for both application and efficient solution of nonlocal models. To unlock the full potential of nonlocal modeling, we developed a mathematically rigorous and physically consistent interface theory and demonstrated its scope in mission-relevant exemplar problems.

More Details

Large-Scale Atomistic Simulations [Slides]

Moore, Stan G.

This report investigates the free expansion of aluminum. Its take-home message is that the physically realistic SNAP machine-learning potential captures liquid-vapor coexistence behavior for the free expansion of aluminum at a level not generally accessible to hydrocodes.

More Details

GDSA Framework Development and Process Model Integration FY2022

Mariner, Paul M.; Debusschere, Bert D.; Fukuyama, David E.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Perry, Frank V.; Swiler, Laura P.; Taconi, Anna M.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (Sassani et al. 2021). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media. This report describes fiscal year (FY) 2022 advances of the Geologic Disposal Safety Assessment (GDSA) performance assessment (PA) development groups of the SFWST Campaign. The common mission of these groups is to develop a geologic disposal system modeling capability for nuclear waste that can be used to assess probabilistically the performance of generic disposal options and generic sites. The modeling capability under development is called GDSA Framework (pa.sandia.gov). GDSA Framework is a coordinated set of codes and databases designed for probabilistically simulating the release and transport of disposed radionuclides from a repository to the biosphere for post-closure performance assessment. Primary components of GDSA Framework include PFLOTRAN to simulate the major features, events, and processes (FEPs) over time, Dakota to propagate uncertainty and analyze sensitivities, meshing codes to define the domain, and various other software for rendering properties, processing data, and visualizing results.

More Details

Revealing conductivity of p-type delta layer systems for novel computing applications

Mamaluy, Denis M.; Mendez Granado, Juan P.

This project uses a quantum simulation technique to reveal the true conducting properties of novel atomic precision advanced manufacturing materials. With Moore's law approaching the limit of scaling for CMOS technology, it is crucial to provide the best computing power and resources to national security missions. Atomic precision advanced manufacturing-based computing systems can become the key to the design, use, and security of modern weapon systems, critical infrastructure, and communications. We will utilize state-of-the-art computational methodology to create a predictive simulator for p-type atomic precision advanced manufacturing systems, which may also find applications in counterfeit detection and anti-tamper.

More Details

First-principles simulation of light-ion microscopy of graphene

2D Materials

Kononov, Alina K.; Olmstead, Alexandra L.; Baczewski, Andrew D.; Schleife, Andre S.

The extreme sensitivity of 2D materials to defects and nanostructure requires precise imaging techniques to verify the presence of desirable features and the absence of undesirable ones in the atomic geometry. Helium-ion beams have emerged as a promising materials imaging tool, achieving up to 20 times higher resolution and 10 times larger depth-of-field than conventional or environmental scanning electron microscopes. Here, we offer first-principles theoretical insights to advance ion-beam imaging of atomically thin materials by performing real-time time-dependent density functional theory simulations of single impacts of 10–200 keV light ions in free-standing graphene. We predict that detecting electrons emitted from the back of the material (the side from which the ion exits) would result in up to three times higher signal and up to five times higher contrast, making 2D materials especially compelling targets for ion-beam microscopy. This predicted superiority of exit-side emission likely arises from anisotropic kinetic emission. The charge induced in the graphene equilibrates on a sub-fs time scale, leading to only slight disturbances in the carbon lattice that are unlikely to damage the atomic structure for any of the beam parameters investigated here.

More Details

Sensitivity analysis of generic deep geologic repository with focus on spatial heterogeneity induced by stochastic fracture network generation

Advances in Water Resources

Brooks, Dusty M.; Swiler, Laura P.; Stein, Emily S.; Mariner, Paul M.; Basurto, Eduardo B.; Portone, Teresa P.; Eckert, Aubrey C.; Leone, Rosemary C.

The Geologic Disposal Safety Assessment Framework, developed by the United States Department of Energy, is a state-of-the-art simulation software toolkit for probabilistic post-closure performance assessment of deep geologic disposal systems for nuclear waste. This paper presents a generic reference case and shows how it is being used to develop and demonstrate performance assessment methods within the Geologic Disposal Safety Assessment Framework that mitigate some of the challenges posed by high uncertainty and limited computational resources. Variance-based global sensitivity analysis is applied to assess the effects of spatial heterogeneity using graph-based summary measures for scalar and time-varying quantities of interest. Behavior of the system with respect to spatial heterogeneity is further investigated using ratios of water fluxes. This analysis shows that spatial heterogeneity is a dominant uncertainty in predictions of repository performance, and that it can be identified in global sensitivity analysis using proxy variables derived from graph descriptions of discrete fracture networks. New quantities of interest defined using water fluxes proved useful for better understanding overall system behavior.
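
As a pointer for readers new to variance-based global sensitivity analysis, the sketch below computes first-order and total Sobol' indices for a toy model with the open-source SALib package; the input names and model are hypothetical stand-ins, and the paper's own analyses are performed with the GDSA Framework toolchain rather than this code.

```python
# Illustrative only: variance-based (Sobol') sensitivity indices for a toy model.
# This demonstrates the kind of question such a study answers: how much of the
# output variance does each uncertain input (e.g. a fracture-network summary
# measure) explain?
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["permeability", "fracture_density", "waste_inventory"],  # hypothetical inputs
    "bounds": [[1e-18, 1e-15], [0.1, 2.0], [0.5, 1.5]],
}

def toy_model(x):
    # Stand-in for an expensive repository simulation.
    perm, frac, inv = x
    return np.log10(perm) * frac + 0.1 * inv

X = saltelli.sample(problem, 1024)          # Saltelli sampling design
Y = np.apply_along_axis(toy_model, 1, X)    # evaluate the model on each sample
Si = sobol.analyze(problem, Y)              # first-order and total-order indices

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1 = {s1:.3f}, ST = {st:.3f}")
```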

More Details

Composing preconditioners for multiphysics PDE systems with applications to Generalized MHD

Tuminaro, Raymond S.; Crockatt, Michael M.; Robinson, Allen C.

New patch smoothers or relaxation techniques are developed for solving linear matrix equations coming from systems of discretized partial differential equations (PDEs). One key linear solver challenge for many PDE systems arises when the resulting discretization matrix has a large-dimensional near null space, which can occur in generalized magnetohydrodynamic (GMHD) systems. Patch-based relaxation is highly effective for problems where the near null space can be spanned by a basis of locally supported vectors. The patch-based relaxation methods that we develop can be used either within an algebraic multigrid (AMG) hierarchy or as stand-alone preconditioners. These patch-based relaxation techniques are a form of the well-known overlapping Schwarz methods, in which the computational domain is covered with a series of overlapping sub-domains (or patches). Patch relaxation then corresponds to solving a set of independent linear systems associated with each patch. In the context of GMHD, we also reformulate the underlying discrete representation used to generate a suitable set of matrix equations. In general, deriving a discretization that accurately approximates the curl operator and the Hall term while also producing linear systems with physically meaningful near null space properties can be challenging. Unfortunately, many natural discretization choices lead to a near null space that includes non-physical oscillatory modes and where it is not possible to span the near null space with a minimal set of locally supported basis vectors. Further discretization research is needed to understand the resulting trade-offs between accuracy, stability, and ease in solving the associated linear systems.
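
To make the patch-relaxation idea concrete, the sketch below runs damped additive overlapping Schwarz sweeps on a small 1D Poisson matrix with NumPy: each patch contributes the solution of an independent local system. It is a minimal illustration under assumed sizes, not the AMG-embedded smoothers developed in the report.

```python
# Minimal additive overlapping Schwarz ("patch") relaxation on a 1D Poisson
# matrix. Each patch solve is an independent small dense system; the patch
# corrections are summed into the global iterate. Illustrative only.
import numpy as np

n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson stencil
b = np.ones(n)
x = np.zeros(n)

patch_size, overlap = 8, 2
starts = range(0, n, patch_size - overlap)
patches = [np.arange(s, min(s + patch_size, n)) for s in starts]

def schwarz_sweep(A, b, x, patches):
    """One additive Schwarz sweep: solve each local residual system, sum corrections."""
    r = b - A @ x
    correction = np.zeros_like(x)
    for idx in patches:
        A_loc = A[np.ix_(idx, idx)]
        correction[idx] += np.linalg.solve(A_loc, r[idx])
    # Damping of 0.5 reflects that each unknown lies in at most two patches here.
    return x + 0.5 * correction

for _ in range(50):
    x = schwarz_sweep(A, b, x, patches)

print("residual norm:", np.linalg.norm(b - A @ x))
```

Within an AMG hierarchy, the same independent patch solves would play the role of the smoother on each level.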

More Details

Enabling power measurement and control on Astra: The first petascale Arm supercomputer

Concurrency and Computation: Practice and Experience

Grant, Ryan E.; Hammond, Simon D.; Laros, James H.; Levenhagen, Michael J.; Olivier, Stephen L.; Pedretti, Kevin P.; Ward, H.L.; Younge, Andrew J.

Astra, deployed in 2018, was the first petascale supercomputer to utilize processors based on the ARM instruction set. The system was also the first under Sandia's Vanguard program, which seeks to provide an evaluation vehicle for novel technologies that, with refinement, could be utilized in demanding, large-scale HPC environments. In addition to ARM, several other important first-of-a-kind developments were used in the machine, including new approaches to cooling the data center and the machine itself. Here we document our experiences building a power measurement and control infrastructure for Astra. While this is often beyond the control of users today, the accurate measurement, cataloging, and evaluation of power, as our experiences show, is critical to the successful deployment of a large-scale platform. While such systems exist in part for other architectures, Astra required new development to support the novel Marvell ThunderX2 processor used in its compute nodes. In addition to documenting the measurement of power during system bring-up and for subsequent ongoing routine use, we present results associated with controlling the power usage of the processor, an area of progressively greater interest as data centers and supercomputing sites look to improve compute/energy efficiency and find additional sources for full-system optimization.

More Details

Thermodynamically consistent versions of approximations used in modelling moist air

Quarterly Journal of the Royal Meteorological Society

Eldred, Christopher; Guba, Oksana G.; Taylor, Mark A.

Some existing approaches to modelling the thermodynamics of moist air make approximations that break thermodynamic consistency, such that the resulting thermodynamics does not obey the first and second laws or has other inconsistencies. Recently, an approach to avoid such inconsistency has been suggested: the use of thermodynamic potentials in terms of their natural variables, from which all thermodynamic quantities and relationships (equations of state) are derived. In this article, we develop this approach for unapproximated moist-air thermodynamics and two widely used approximations: the constant-κ approximation and the dry heat capacities approximation. The (consistent) constant-κ approximation is particularly attractive because it leads to, with the appropriate choice of thermodynamic variable, adiabatic dynamics that depend only on total mass and are independent of the breakdown between water forms. Additionally, a wide variety of material from different sources in the literature on thermodynamics in atmospheric modelling is brought together. It is hoped that this article provides a comprehensive reference for the use of thermodynamic potentials in atmospheric modelling, especially for the three systems considered here.
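
As general background (not specific to moist air), "potentials in terms of their natural variables" means that one function, such as the specific Gibbs energy g(T, p), generates every other thermodynamic quantity by differentiation, so no derived relation can contradict another:

```latex
% Generic single-component illustration: all quantities derived from g(T, p).
\eta = -\left(\frac{\partial g}{\partial T}\right)_{p}, \qquad
\alpha = \left(\frac{\partial g}{\partial p}\right)_{T}, \qquad
h = g + T\eta, \qquad
u = g + T\eta - p\alpha ,
```

where η is specific entropy, α specific volume, h specific enthalpy, and u specific internal energy. The article applies this construction to multicomponent moist air, both unapproximated and under the constant-κ and dry heat capacities approximations.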

More Details

Metrics for Intercomparison of Remapping Algorithms (MIRA) protocol applied to Earth system models

Geoscientific Model Development

Mahadevan, Vijay S.; Guerra, Jorge E.; Jiao, Xiangmin; Kuberry, Paul A.; Li, Yipeng; Ullrich, Paul; Marsico, David; Jacob, Robert; Bochev, Pavel B.; Jones, Philip

Strongly coupled nonlinear phenomena such as those described by Earth system models (ESMs) are composed of multiple component models with independent mesh topologies and scalable numerical solvers. A common operation in ESMs is to remap or interpolate component solution fields defined on their computational mesh to another mesh with a different combinatorial structure and decomposition, e.g., from the atmosphere to the ocean, during the temporal integration of the coupled system. Several remapping schemes are currently in use or available for ESMs. However, a unified approach to compare the properties of these different schemes has not been attempted previously. We present a rigorous methodology for the evaluation and intercomparison of remapping methods through an independently implemented suite of metrics that measure the ability of a method to adhere to constraints such as grid independence, monotonicity, global conservation, and local extrema or feature preservation. A comprehensive set of numerical evaluations is conducted based on a progression of scalar fields from idealized and smooth to more general climate data with strong discontinuities and strict bounds. We examine four remapping algorithms with distinct design approaches, namely ESMF Regrid, TempestRemap, generalized moving least squares (GMLS) with post-processing filters, and WLS-ENOR. By repeated iterative application of the high-order remapping methods to the test fields, we verify the accuracy of each scheme in terms of their observed convergence order for smooth data and determine the bounded error propagation using challenging, realistic field data on both uniform and regionally refined mesh cases. In addition to retaining high-order accuracy under idealized conditions, the methods also demonstrate robust remapping performance when dealing with non-smooth data. There is a failure to maintain monotonicity in the traditional L2-minimization approaches used in ESMF and TempestRemap, in contrast to stable recovery through nonlinear filters used in both meshless GMLS and hybrid mesh-based WLS-ENOR schemes. Local feature preservation analysis indicates that high-order methods perform better than low-order dissipative schemes for all test cases. The behavior of these remappers remains consistent when applied on regionally refined meshes, indicating mesh-invariant implementations. The MIRA intercomparison protocol proposed in this paper and the detailed comparison of the four algorithms demonstrate that the new schemes, namely GMLS and WLS-ENOR, are competitive compared to standard conservative minimization methods requiring computation of mesh intersections. The work presented in this paper provides a foundation that can be extended to include complex field definitions, realistic mesh topologies, and spectral element discretizations, thereby allowing for a more complete analysis of production-ready remapping packages.
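
To illustrate two of the constraint categories listed above, the sketch below computes a global-conservation error and a global-bounds (monotonicity) violation for a remapped cell-averaged field; it is a generic NumPy example with made-up 1D data, not part of the MIRA implementation.

```python
# Illustrative remapping diagnostics: global conservation error and violation of
# the source field's global bounds after remapping. Generic example only; the
# MIRA protocol defines a larger metric suite (grid independence, local feature
# preservation, convergence order, ...).
import numpy as np

def conservation_error(src_vals, src_areas, tgt_vals, tgt_areas):
    """Relative mismatch of the area-weighted integral before and after remap."""
    src_integral = np.sum(src_vals * src_areas)
    tgt_integral = np.sum(tgt_vals * tgt_areas)
    return abs(tgt_integral - src_integral) / abs(src_integral)

def bounds_violation(src_vals, tgt_vals):
    """How far the remapped field overshoots the source field's global min/max."""
    lo, hi = src_vals.min(), src_vals.max()
    overshoot = max(0.0, tgt_vals.max() - hi)
    undershoot = max(0.0, lo - tgt_vals.min())
    return overshoot + undershoot

# Hypothetical data: a smooth field "remapped" between two 1D cell layouts.
src_areas = np.full(200, 1.0 / 200)
tgt_areas = np.full(300, 1.0 / 300)
src_x, tgt_x = np.linspace(0, 1, 200), np.linspace(0, 1, 300)
src_vals = 1.0 + 0.5 * np.sin(2 * np.pi * src_x)
tgt_vals = np.interp(tgt_x, src_x, src_vals)   # stand-in for a remapping operator

print("conservation error:", conservation_error(src_vals, src_areas, tgt_vals, tgt_areas))
print("bounds violation:  ", bounds_violation(src_vals, tgt_vals))
```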

More Details

Dynamics Informed Optimization for Resilient Energy Systems

Arguello, Bryan A.; Stewart, Nathan; Hoffman, Matthew J.; Nicholson, Bethany L.; Garrett, Richard A.; Moog, Emily R.

Optimal mitigation planning for highly disruptive contingencies to a transmission-level power system requires optimization with dynamic power system constraints, due to the key role of dynamics in system stability to major perturbations. We formulate a generalized disjunctive program to determine optimal grid component hardening choices for protecting against major failures, with differential algebraic constraints representing system dynamics (specifically, differential equations representing generator and load behavior and algebraic equations representing instantaneous power balance over the transmission system). We optionally allow stochastic optimal pre-positioning across all considered failure scenarios, and optimal emergency control within each scenario. This novel formulation allows, for the first time, analyzing the resilience interdependencies of mitigation planning, preventive control, and emergency control. Using all three strategies in concert is particularly effective at maintaining robust power system operation under severe contingencies, as we demonstrate on the Western System Coordinating Council (WSCC) 9-bus test system using synthetic multi-device outage scenarios. Towards integrating our modeling framework with real threats and more realistic power systems, we explore applying hybrid dynamics to power systems. Our work is applied to basic RL circuits with the ultimate goal of using the methodology to model protective tripping schemes in the grid. Finally, we survey mitigation techniques for HEMP threats and describe a GIS application developed to create threat scenarios in a grid with geographic detail.
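
To give a feel for what optimization with embedded differential constraints looks like, the sketch below solves a toy dynamic optimization problem with Pyomo and pyomo.dae (assuming the Ipopt solver is installed); the single-state dynamics, setpoint objective, and all names are placeholders, not the generalized disjunctive WSCC 9-bus formulation developed here.

```python
# Toy dynamic optimization with a differential constraint (illustrative only):
# choose a control u(t) that drives a single state x(t) toward a setpoint, with
# the ODE enforced as a constraint and discretized by finite differences.
# This is a generic pyomo.dae pattern, not the report's disjunctive grid model.
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           TransformationFactory, SolverFactory, minimize)
from pyomo.dae import ContinuousSet, DerivativeVar, Integral

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0.0, 5.0))
m.x = Var(m.t, initialize=0.0)
m.u = Var(m.t, bounds=(-1.0, 1.0), initialize=0.0)
m.dxdt = DerivativeVar(m.x, wrt=m.t)

# Dynamics: dx/dt = -x + u  (a stand-in for generator/load dynamics)
def _ode(m, t):
    return m.dxdt[t] == -m.x[t] + m.u[t]
m.ode = Constraint(m.t, rule=_ode)

m.x[0.0].fix(0.0)  # initial condition

# Objective: track x = 1 with a small control penalty.
m.cost = Integral(m.t, wrt=m.t,
                  rule=lambda m, t: (m.x[t] - 1.0) ** 2 + 0.01 * m.u[t] ** 2)
m.obj = Objective(expr=m.cost, sense=minimize)

TransformationFactory("dae.finite_difference").apply_to(m, nfe=50, scheme="BACKWARD")
SolverFactory("ipopt").solve(m, tee=False)   # requires an Ipopt installation
print("final state:", m.x[m.t.last()]())
```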

More Details

An introduction to developing GitLab/Jacamar runner analyst centric workflows at Sandia

Robinson, Allen C.; Swan, Matthew S.; Harvey, Evan C.; Klein, Brandon T.; Lawson, Gary L.; Milewicz, Reed M.; Pedretti, Kevin P.; Schmitz, Mark E.; Warnock, Scott A.

This document provides very basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices within the Common Engineering Environment (CEE) and High Performance Computing (HPC) computational environment at Sandia National Laboratories through GitLab/Jacamar-runner-based workflows.

More Details

Deployment of Multifidelity Uncertainty Quantification for Thermal Battery Assessment Part I: Algorithms and Single Cell Results

Eldred, Michael S.; Adams, Brian M.; Geraci, Gianluca G.; Portone, Teresa P.; Ridgway, Elliott M.; Stephens, John A.; Wildey, Timothy M.

This report documents the results of an FY22 ASC V&V level 2 milestone demonstrating new algorithms for multifidelity uncertainty quantification. Part I of the report describes the algorithms, studies their performance on a simple model problem, and then deploys the methods to a thermal battery example from the open literature. Part II (restricted distribution) applies the multifidelity UQ methods to specific thermal batteries of interest to the NNSA/ASC program.
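
For background, the simplest instance of the multifidelity idea is the two-fidelity control-variate estimator sketched below; this is standard notation for such estimators and is not meant to reproduce the specific algorithms exercised in the milestone.

```latex
% Two-fidelity control-variate (multifidelity Monte Carlo) estimator, generic form.
% Q_HF: expensive high-fidelity QoI, evaluated on N samples.
% Q_LF: cheap low-fidelity QoI, evaluated on M >> N samples (including the N shared ones).
\hat{Q}
= \frac{1}{N}\sum_{i=1}^{N} Q_{\mathrm{HF}}(x_i)
+ \alpha \left( \frac{1}{M}\sum_{j=1}^{M} Q_{\mathrm{LF}}(x_j)
- \frac{1}{N}\sum_{i=1}^{N} Q_{\mathrm{LF}}(x_i) \right),
\qquad
\alpha^{\star} = \rho\,\frac{\sigma_{\mathrm{HF}}}{\sigma_{\mathrm{LF}}} ,
```

where ρ is the correlation between the two fidelities. The estimator remains unbiased for the high-fidelity mean, and its variance drops below that of plain Monte Carlo whenever the low-fidelity model is both cheap and well correlated with the high-fidelity one.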

More Details

Neuromorphic Information Processing by Optical Media

Leonard, Francois L.; Fuller, Elliot J.; Teeter, Corinne M.; Vineyard, Craig M.

Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single-layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single-layer metasurfaces of size 100λ × 100λ with aperture density λ⁻² achieve ~96% testing accuracy on the MNIST dataset, for an optimized distance ~100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple-layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
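
The "intrinsic nonlinearity in photodetection" refers to detectors registering intensity, the squared magnitude of the optical field, even though propagation through the metasurface is linear. A schematic NumPy sketch of that chain, with a random stand-in for the diffractive element (sizes and weights are hypothetical), is shown below.

```python
# Schematic of a linear-optics classifier with detector nonlinearity:
# field -> linear propagation/diffraction (complex matrix) -> intensity |.|^2
# -> linear readout. Hypothetical sizes and a random "metasurface"; the paper's
# simulations model physical diffraction, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_detectors, n_classes = 28 * 28, 64, 10

# Input optical field (e.g. an amplitude-encoded image), flattened.
field_in = rng.random(n_pixels)

# Linear propagation through a structured medium: a complex transfer matrix.
transfer = (rng.standard_normal((n_detectors, n_pixels))
            + 1j * rng.standard_normal((n_detectors, n_pixels))) / np.sqrt(n_pixels)

field_out = transfer @ field_in          # linear optics
intensity = np.abs(field_out) ** 2       # photodetection: the intrinsic nonlinearity

# A trained linear readout of detector intensities would follow (weights assumed random here).
readout = rng.standard_normal((n_classes, n_detectors))
scores = readout @ intensity
print("predicted class:", int(np.argmax(scores)))
```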

More Details

Sensitivity Analyses for Monte Carlo Sampling-Based Particle Simulations

Bond, Stephen D.; Franke, Brian C.; Lehoucq, Richard B.; McKinley, Scott M.

Computational design-based optimization is a widely used tool in science and engineering. Our report documents the successful use of a particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation—a capability that was previously unavailable. Such a capability enables the particle simulation communities to go beyond forward simulation and promises to reduce the burden on overworked analysts by getting more done with less computation.

More Details

Quantum-Accurate Multiscale Modeling of Shock Hugoniots, Ramp Compression Paths, Structural and Magnetic Phase Transitions, and Transport Properties in Highly Compressed Metals

Wood, Mitchell A.; Nikolov, Svetoslav V.; Rohskopf, Andrew D.; Desjarlais, Michael P.; Cangi, Attila C.; Tranchida, Julien T.

Fully characterizing high energy density (HED) phenomena using pulsed power facilities (the Z machine) and coherent light sources is possible only with complementary numerical modeling for design, diagnostic development, and data interpretation. The exercise of creating numerical tests that match experimental conditions builds critical insight that is crucial for developing a strong fundamental understanding of the physics behind HED phenomena and for designing next-generation pulsed power facilities. The persistence of electron correlation in HED materials, arising from Coulomb interactions and the Pauli exclusion principle, is one of the greatest challenges for accurate numerical modeling and has hitherto impeded our ability to model HED phenomena across multiple length and time scales at sufficient accuracy. An exemplar is a ferromagnetic material like iron: while familiar and widely used, we lack a simulation capability to characterize the interplay of structure and magnetic effects that govern material strength, kinetics of phase transitions, and other transport properties. Herein we construct and demonstrate the Molecular-Spin Dynamics (MSD) simulation capability for iron from ambient to Earth-core conditions; all software advances are open source and presently available for broad usage. These methods are multiscale in nature: direct comparisons between high-fidelity density functional theory (DFT) and linear-scaling MSD simulations are made throughout this work, with advancements made to MSD allowing electronic structure changes to be reflected in the classical dynamics. Main takeaways from the project include insight into the role of magnetic spins on mechanical properties and thermal conductivity, development of accurate interatomic potentials paired with spin Hamiltonians, and characterization of the high-pressure melt boundary that is of critical importance to planetary modeling efforts.

More Details

Multi-fidelity information fusion and resource allocation

Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca G.; Seidl, Daniel T.; Smith, Thomas M.; Gorodetsky, Alex A.; Pham, Trung P.; Narayan, Akil N.; Zeng, Xiaoshu Z.; Ghanem, Roger G.

This project created and demonstrated a framework for the efficient and accurate prediction of complex systems with only a limited amount of highly trusted data. These next-generation computational multi-fidelity tools fuse multiple information sources of varying cost and accuracy to reduce the computational and experimental resources needed for designing and assessing complex multi-physics/scale/component systems. These tools have already been used to substantially improve the computational efficiency of simulation-aided modeling activities, from assessing thermal battery performance to predicting material deformation. This report summarizes the work carried out during a two-year LDRD project. Specifically, we present our technical accomplishments; project outputs such as publications, presentations, and professional leadership activities; and the project's legacy.

More Details

Model-Form Epistemic Uncertainty Quantification for Modeling with Differential Equations: Application to Epidemiology

Acquesta, Erin A.; Portone, Teresa P.; Dandekar, Raj D.; Rackauckas, Chris R.; Bandy, Rileigh J.; Huerta, Jose G.; Dytzel, India L.

Modeling real-world phenomena to any degree of accuracy is a challenge that the scientific research community has navigated since its foundation. Lack of information and limited computational and observational resources necessitate modeling assumptions which, when invalid, lead to model-form error (MFE). The work reported herein explored a novel method to represent model-form uncertainty (MFU) that combines Bayesian statistics with the emerging field of universal differential equations (UDEs). The fundamental principle behind UDEs is simple: use known equational forms that govern a dynamical system when you have them; then incorporate data-driven approaches – in this case neural networks (NNs) – embedded within the governing equations to learn the interacting terms that were underrepresented. Utilizing epidemiology as our motivating exemplar, this report will highlight the challenges of modeling novel infectious diseases while introducing ways to incorporate NN approximations to MFE. Prior to embarking on a Bayesian calibration, we first explored methods to augment the standard (non-Bayesian) UDE training procedure to account for uncertainty and increase robustness of training. In addition, it is often the case that uncertainty in observations is significant; this may be due to randomness or lack of precision in the measurement process. This uncertainty typically manifests as “noisy” observations which deviate from a true underlying signal. To account for such variability, the NN approximation to MFE is endowed with a probabilistic representation and is updated using available observational data in a Bayesian framework. By representing the MFU explicitly and deploying an embedded, data-driven model, this approach enables an agile, expressive, and interpretable method for representing MFU. In this report we will provide evidence that Bayesian UDEs show promise as a novel framework for any science-based, data-driven MFU representation; while emphasizing that significant advances must be made in the calibration of Bayesian NNs to ensure a robust calibration procedure.
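
To make the UDE construction concrete, the sketch below keeps the known SIR structure of an epidemic model and lets a small neural network stand in for the underrepresented transmission term; it is a minimal, non-Bayesian forward simulation with assumed parameters, not the calibrated models studied in the report.

```python
# Bare-bones universal differential equation (UDE) sketch: an SIR model in which
# a small neural network replaces the unknown part of the transmission term.
# Forward simulation only (simple Euler stepping); the Bayesian calibration of
# the network weights discussed in the report is not shown. Parameters assumed.
import torch
import torch.nn as nn

class InteractionNet(nn.Module):
    """NN term standing in for model-form error in the transmission dynamics."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, s, i):
        return self.net(torch.stack([s, i], dim=-1)).squeeze(-1)

def ude_rhs(state, nn_term, gamma=0.1):
    """Known SIR structure plus a learned correction to the S-I interaction."""
    s, i, r = state
    interaction = nn_term(s, i)          # learned in place of beta * s * i
    ds = -interaction
    di = interaction - gamma * i
    dr = gamma * i
    return torch.stack([ds, di, dr])

nn_term = InteractionNet()               # untrained here; would be calibrated to data
state = torch.tensor([0.99, 0.01, 0.0])
dt, n_steps = 0.1, 200

with torch.no_grad():
    for _ in range(n_steps):
        state = state + dt * ude_rhs(state, nn_term)

print("final (S, I, R):", state.tolist())
```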

More Details

Accelerating Multiscale Materials Modeling with Machine Learning

Modine, N.A.; Stephens, John A.; Swiler, Laura P.; Thompson, Aidan P.; Vogel, Dayton J.; Cangi, Attila C.; Feilder, Lenz F.; Rajamanickam, Sivasankaran R.

The focus of this project is to accelerate and transform the workflow of multiscale materials modeling by developing an integrated toolchain seamlessly combining DFT, SNAP, and LAMMPS (shown in Figure 1-1) with a machine-learning (ML) model that will more efficiently extract information from a smaller set of first-principles calculations. Our ML model enables us to accelerate first-principles data generation by interpolating existing high-fidelity data, and to extend the simulation scale by extrapolating high-fidelity data (10² atoms) to the mesoscale (10⁴ atoms). It encodes the underlying physics of atomic interactions on the microscopic scale by adapting a variety of ML techniques such as deep neural networks (DNNs) and graph neural networks (GNNs). We developed a new surrogate model for density functional theory using deep neural networks. The developed ML surrogate is demonstrated in a workflow that generates accurate band energies, total energies, and density for the 298 K and 933 K aluminum systems. Furthermore, the models can be used to predict the quantities of interest for systems with more atoms than those in the training data set. We have demonstrated that the ML model can be used to compute the quantities of interest for systems with 100,000 Al atoms. Compared with the 2,000-atom Al system, the new surrogate model is as accurate as DFT but three orders of magnitude faster. We also explored optimal experimental design techniques to choose the training data and novel graph neural networks to train on smaller data sets. These are promising methods that need to be explored further in the future.
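
As a rough schematic of the surrogate idea (per-atom environment descriptors in, electronic-structure quantities out), here is a minimal PyTorch regression sketch; the descriptor length, placeholder data, and token training loop are assumptions for illustration and do not reflect the project's actual models or workflow.

```python
# Minimal schematic of a DNN surrogate mapping per-atom environment descriptors
# (e.g. SNAP-style features) to an electronic-structure target such as a local
# energy contribution. Placeholder data and sizes; illustrative only.
import torch
import torch.nn as nn

n_descriptors = 56          # placeholder descriptor length per atom
surrogate = nn.Sequential(
    nn.Linear(n_descriptors, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# Placeholder training set: descriptors and target values for 4096 atoms.
X = torch.randn(4096, n_descriptors)
y = torch.randn(4096, 1)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                      # token training loop
    optimizer.zero_grad()
    loss = loss_fn(surrogate(X), y)
    loss.backward()
    optimizer.step()

# Because the model acts per atom, it can be evaluated on far larger systems
# than those in the training data, which is the extrapolation use noted above.
big_system = torch.randn(100_000, n_descriptors)
with torch.no_grad():
    per_atom_energy = surrogate(big_system)
print(per_atom_energy.shape)
```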

More Details

Differential geometric approaches to momentum-based formulations for fluids [Slides]

Eldred, Christopher

This SAND report documents CIS Late Start LDRD Project 22-0311, "Differential geometric approaches to momentum-based formulations for fluids". The project primarily developed geometric mechanics formulations for momentum-based descriptions of nonrelativistic fluids, utilizing a differential geometry/exterior calculus treatment of momentum and a space+time splitting. Specifically, the full suite of geometric mechanics formulations (variational/Lagrangian, Lie-Poisson Hamiltonian and Curl-Form Hamiltonian) were developed in terms of exterior calculus using vector-bundle valued differential forms. This was done for a fairly general version of semi-direct product theory sufficient to cover a wide range of both neutral and charged fluid models, including compressible Euler, magnetohydrodynamics and Euler-Maxwell. As a secondary goal, this project also explored the connection between geometric mechanics formulations and the more traditional Godunov form (a hyperbolic system of conservation laws). Unfortunately, this stage did not produce anything particularly interesting, due to unforeseen technical difficulties. There are two publications related to this work currently in preparation, and this work will be presented at SIAM CSE 23, at which the PI is organizing a mini-symposium on geometric mechanics formulations and structure-preserving discretizations for fluids. The logical next step is to utilize the exterior calculus based understanding of momentum coupled with geometric mechanics formulations to develop (novel) structure-preserving discretizations of momentum. This is the main subject of a successful FY23 CIS LDRD "Structure-preserving discretizations for momentum-based formulations of fluids".

More Details

Towards Z-Next: The Integration of Theory, Experiments, and Computational Simulation in a Bayesian Data Assimilation Framework

Maupin, Kathryn A.; Tran, Anh; Lewis, William L.; Knapp, Patrick K.; Joseph, V.R.; Wu, C.F.J.; Glinsky, Michael G.; Valaitis, Sonata V.

Making reliable predictions in the presence of uncertainty is critical to high-consequence modeling and simulation activities, such as those encountered at Sandia National Laboratories. Surrogate or reduced-order models are often used to mitigate the expense of performing quality uncertainty analyses with high-fidelity, physics-based codes. However, phenomenological surrogate models do not always adhere to important physics and system properties. This project develops surrogate models that integrate physical theory with experimental data through a maximally informative framework that accounts for the many uncertainties present in computational modeling problems. Correlations between relevant outputs are preserved through the use of multi-output or co-predictive surrogate models; known physical properties (specifically monotonicity) are also preserved; and unknown physics and phenomena are detected using a causal analysis. By endowing surrogate models with key properties of the physical system being studied, their predictive power is arguably enhanced, allowing for reliable simulations and analyses at a reduced computational cost.

More Details