Making reliable predictions in the presence of uncertainty is critical to high-consequence modeling and simulation activities, such as those encountered at Sandia National Laboratories. Surrogate or reduced-order models are often used to mitigate the expense of performing quality uncertainty analyses with high-fidelity, physics-based codes. However, phenomenological surrogate models do not always adhere to important physics and system properties. This project develops surrogate models that integrate physical theory with experimental data through a maximally informative framework that accounts for the many uncertainties present in computational modeling problems. Correlations between relevant outputs are preserved through the use of multi-output or co-predictive surrogate models; known physical properties (specifically monotonicity) are also preserved; and unknown physics and phenomena are detected using a causal analysis. By endowing surrogate models with key properties of the physical system being studied, their predictive power is arguably enhanced, allowing for reliable simulations and analyses at a reduced computational cost.
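As a minimal illustration of the multi-output (co-predictive) surrogate idea, the sketch below fits two correlated outputs with a shared-kernel Gaussian process using scikit-learn. The toy inputs, outputs, and kernel choice are illustrative assumptions rather than the project's actual surrogates, and enforcing the monotonicity constraint would require a constrained formulation not shown here.

```python
# Minimal sketch of a multi-output surrogate (shared-kernel GP);
# toy data and kernel are illustrative assumptions, not the project's models.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))                 # 40 samples of 2 inputs
Y = np.column_stack([                                    # two correlated outputs
    np.sin(2.0 * np.pi * X[:, 0]) + 0.1 * X[:, 1],
    np.sin(2.0 * np.pi * X[:, 0]) * X[:, 1],
])

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)           # predicted means and uncertainties
print(mean)
```

Note that scikit-learn fits the two outputs with a shared kernel rather than a full cross-covariance model, so this is only a stand-in for a true co-predictive surrogate.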
Predictive design of REHEDS experiments with radiation-hydrodynamic simulations requires knowledge of material properties (e.g. equations of state (EOS), transport coefficients, and radiation physics). Interpreting experimental results requires accurate models of diagnostic observables (e.g. detailed emission, absorption, and scattering spectra). In conditions of Local Thermodynamic Equilibrium (LTE), these material properties and observables can be pre-computed with relatively high accuracy and subsequently tabulated on simple temperature-density grids for fast look-up by simulations. When radiation and electron temperatures fall out of equilibrium, however, non-LTE effects can profoundly change material properties and diagnostic signatures. Accurately and efficiently incorporating these non-LTE effects has been a longstanding challenge for simulations. At present, most simulations include non-LTE effects by invoking highly simplified inline models. These inline non-LTE models are both much slower than table look-up and significantly less accurate than the detailed models used to populate LTE tables and diagnose experimental data through post-processing or inversion. Because inline non-LTE models are slow, designers avoid them whenever possible, which leads to known inaccuracies from using tabular LTE. Because inline models are simple, they are inconsistent with tabular data from detailed models, leading to ill-known inaccuracies, and they cannot generate detailed synthetic diagnostics suitable for direct comparisons with experimental data. This project addresses the challenge of generating and utilizing efficient, accurate, and consistent non-equilibrium material data along three complementary but relatively independent research lines. First, we have developed a relatively fast and accurate non-LTE average-atom model based on density functional theory (DFT) that provides a complete set of EOS, transport, and radiative data, and have rigorously tested it against more sophisticated first-principles multi-atom DFT models, including time-dependent DFT. Next, we have developed a tabular scheme and interpolation methods that compactly capture non-LTE effects for use in simulations and have implemented these tables in the GORGON magneto-hydrodynamic (MHD) code. Finally, we have developed post-processing tools that use detailed tabulated non-LTE data to directly predict experimental observables from simulation output.
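As a minimal sketch of the table look-up pattern described above, the snippet below interpolates a synthetic quantity on a log temperature-density grid. The grid ranges, table contents, and interpolation order are assumptions for illustration and do not reflect the actual non-LTE tables or the GORGON implementation.

```python
# Minimal sketch: fast look-up of tabulated material data on a temperature-density grid.
# Grid ranges and table values are synthetic placeholders, not the project's tables.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_T = np.linspace(0.0, 4.0, 64)        # log10 temperature grid (assumed range)
log_rho = np.linspace(-6.0, 2.0, 64)     # log10 density grid (assumed range)
TT, RR = np.meshgrid(log_T, log_rho, indexing="ij")
table = TT + 0.5 * RR                    # placeholder tabulated quantity (e.g., a log opacity)

lookup = RegularGridInterpolator((log_T, log_rho), table, method="linear")

# Query the table at arbitrary (T, rho) points, as a hydro code would for each cell and step.
pts = np.column_stack([np.log10([10.0, 300.0]), np.log10([1e-3, 1.0])])
print(lookup(pts))
```

A non-LTE table would typically carry at least one axis beyond temperature and density; the same regular-grid interpolation pattern extends by adding that axis.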
Negative and zero coefficient of thermal expansion (CTE) materials are of interest for developing polymer composites in electronic circuits that match the expansion of Si and for zero-CTE supports for optical components, e.g., mirrors. In this work, the processing challenges and stability of ZrW2O8, HfW2O8, HfMgW3O12, Al(HfMg)0.5W3O12, and Al0.5Sc1.5W3O12 negative and zero thermal expansion coefficient ceramics are discussed. Al0.5Sc1.5W3O12 is demonstrated to be a relatively simple oxide to fabricate in large quantity and is shown to remain single phase up to 1300 °C in air and inert N2 environments. The negative and zero CTE behavior was confirmed with dilatometry. Thermal conductivity and heat capacity are reported for the first time for HfMgW3O12 and Al0.5Sc1.5W3O12, and the thermal conductivity was found to be very low (~0.5 W/mK). The Grüneisen parameter is also estimated. Methods for integrating Al0.5Sc1.5W3O12 with other materials were examined, and embedding 50 vol% of the ceramic powder in flexible epoxy was demonstrated with a commercial vendor.
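For reference, one common way to estimate the thermodynamic Grüneisen parameter from measured CTE and heat capacity is shown below; the relation is the standard thermodynamic one, but treating the bulk modulus K_T and molar volume V_m as known inputs is an assumption of this sketch, not a statement of the method used in the work.

```latex
% Thermodynamic Grueneisen parameter from measurable quantities:
% volumetric CTE (alpha_V ~ 3 alpha_L), isothermal bulk modulus K_T,
% molar volume V_m, and molar heat capacity C_V.
\gamma = \frac{\alpha_V \, K_T \, V_m}{C_V} \approx \frac{3\,\alpha_L \, K_T \, V_m}{C_V}
```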
Accurate prediction of the ductile behavior of structural alloys up to and including failure is essential in component or system failure assessment, which is necessary for the nuclear weapons alteration and life extension programs of Sandia National Laboratories. Modeling such behavior requires computational capabilities that robustly capture strong nonlinearities (geometric and material), rate-dependent and temperature-dependent properties, and ductile failure mechanisms. This study's objective is to validate numerical simulations of a high-deformation crush of a stainless steel can. The process consists of identifying a suitable can geometry and loading conditions, conducting the laboratory testing, developing a high-quality Sierra/SM simulation, and then drawing comparisons between model and measurement to assess the fitness of the simulation with regard to the material (plasticity) model, finite element model construction, and failure model. Following previous material model calibration, a J2 plasticity model with a microstructural BCJ failure model is employed to model the test specimen made of 304L stainless steel. Simulated results are verified and validated through mesh and mass-scaling convergence studies, parameter sensitivity studies, and comparison to experimental data. The converged configuration uses a mesh discretization with 140,372 elements and mass scaling with a target time increment of 1.0e-6 seconds and a time-step scale factor of 0.5. Results from the coupled thermal-mechanical explicit dynamic analysis are comparable to the experimental data. The simulated global force-versus-displacement (F/D) response captures key points of the experimental F/D response, such as yield, ultimate load, and kinks. Furthermore, the final deformed shape of the can and the field data predicted from the analysis are similar to those of the deformed can, as measured by 3D optical CMM scans and DIC data from the experiment.
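To make the mass-scaling choice concrete, the sketch below shows the standard rationale: the explicit stable time increment of an element scales with the square root of its density, so reaching a target increment in the limiting elements requires a mass scale factor of roughly (dt_target / dt_stable)^2. The material constants and element size below are nominal assumptions for illustration, not the calibrated 304L model or the Sierra/SM implementation.

```python
# Sketch of the mass-scaling rationale for explicit dynamics: the stable time step of an
# element grows as sqrt(density), so hitting a target increment requires scaling mass by
# (dt_target / dt_stable)^2 in the limiting elements. Values below are illustrative.
import math

E = 193e9          # Young's modulus, Pa (nominal 304L)
rho = 7900.0       # density, kg/m^3 (nominal 304L)
L_min = 0.5e-3     # smallest element characteristic length, m (assumed)

c = math.sqrt(E / rho)                 # dilatational wave speed estimate
dt_stable = L_min / c                  # Courant-type stable increment estimate
dt_target = 1.0e-6                     # target increment from the study

mass_scale = max(1.0, (dt_target / dt_stable) ** 2)
print(f"dt_stable ~ {dt_stable:.2e} s, required mass scale factor ~ {mass_scale:.1f}")
```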
This paper describes a new non-charge-based data storing technique in NAND flash memory, called watermark, that encodes read-only data in the form of physical properties of flash memory cells. Unlike the traditional charge-based data storing method in flash memory, the proposed technique is resistant to total ionizing dose (TID) effects. To evaluate its resistance to irradiation effects, we analyze data stored in several commercial single-level-cell (SLC) flash memory chips from different vendors and technology nodes. These chips are irradiated using a Co-60 gamma-ray source array for up to 100 krad(Si) at Sandia National Laboratories. Experimental evaluation performed on a flash chip from Samsung shows that the intrinsic bit error rate (BER) of the watermark increases from ~0.8% at TID = 0 krad(Si) to ~1% at TID = 100 krad(Si). Conversely, the BER of charge-based data stored on the same chip increases from 0% at TID = 0 krad(Si) to 1.5% at TID = 100 krad(Si). The results imply that the proposed technique may potentially offer significant improvements in data integrity relative to traditional charge-based data storage for very high radiation (TID > 100 krad(Si)) environments. These gains in data integrity relative to charge-based data storage are useful in radiation-prone environments, but they come at the cost of increased write times and higher BERs before irradiation.
This report details work that was completed to address the Fiscal Year 2022 Advanced Science and Technology (AS&T) Laboratory Directed Research and Development (LDRD) call for “AI-enhanced Co-Design of Next Generation Microelectronics.” This project required concurrent contributions from the fields of 1) materials science, 2) devices and circuits, 3) physics of computing, and 4) algorithms and system architectures. During this project, we developed AI-enhanced circuit design methods that relied on reinforcement learning and evolutionary algorithms. The AI-enhanced design methods were tested on neuromorphic circuit design problems that have real-world applications related to Sandia’s mission needs. The developed methods enable the design of circuits, including circuits built from emerging devices, and were also extended to enable novel device discovery. We expect that these AI-enhanced design methods will accelerate progress toward developing next-generation, high-performance neuromorphic computing systems.
This report is a summary of a 3-year LDRD project that developed novel methods to detect faults in the electric power grid dramatically faster than today’s protection systems. Accurately detecting and quickly removing electrical faults is imperative for power system resilience and national security, minimizing impacts to defense critical infrastructure. The new protection schemes will improve grid stability during disturbances and allow additional integration of renewable energy technologies with low inertia and low fault currents. Signal-based fast-tripping schemes were developed that use the physics of the grid and do not rely on communication, reducing cyber risks while safely removing faults.
Grid-scale batteries need to be inexpensive to manufacture, safe to operate, and non-toxic in composition. Aqueous (alkaline) zinc batteries hold much promise, but good cycle life and utilization of the zinc have proven difficult to achieve, partly because zinc is susceptible to H2 gas evolution in KOH. A water-in-salt electrolyte (WiSE) can address this shortcoming by lowering the activity of free water molecules in solution, thus reducing H2 gas evolution. In this work, we report the relevant fundamental physicochemical properties of an acetate-based WiSE to establish the practicality and performance of this class of WiSE for battery applications. Research on and understanding of acetate WiSEs are presently in a nascent state.
This project explores the idea of performing kinetic numerical simulations of the Z inner magnetically insulated transmission line (inner MITL) using reduced-physics models, such as a guiding-center drift-kinetic approximation for the particles and electrostatic and magnetostatic approximations for the fields. The basic problem explored herein is the generation, formation, and evolution of vortices driven by electron space-charge-limited (SCL) emission. The results indicate that, for peak currents and pulse lengths relevant to Z, these approximations are excellent while also reducing the computational load by factors of tens to hundreds. The benefits could be enormous: implementing these reduced-physics models in present particle-in-cell (PIC) codes could enable them to be routinely used for experimental design while still capturing essential non-thermal (kinetic) physics.
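For context, the guiding-center drift-kinetic approximation replaces the fast electron gyromotion with slower drift velocities; in crossed-field MITL geometries the dominant term is the E x B drift, with gradient-B (and curvature) corrections, as sketched below. These are the standard textbook forms, not a statement of the specific closure used in this work.

```latex
% Standard guiding-center drift velocities (textbook forms):
\mathbf{v}_{E} = \frac{\mathbf{E} \times \mathbf{B}}{B^{2}}, \qquad
\mathbf{v}_{\nabla B} = \frac{m v_{\perp}^{2}}{2 q B^{3}}\, \mathbf{B} \times \nabla B
```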
Sandia National Laboratories (SNL) is designing and developing an Artificial Intelligence (AI)-enabled smart digital assistant (SDA), Inspecta (International Nuclear Safeguards Personal Examination and Containment Tracking Assistant). The goal is to provide inspectors with an in-field digital assistant that can perform tasks identified as tedious, challenging, or prone to human error. During 2021, we defined the requirements for Inspecta based on reviews of International Atomic Energy Agency (IAEA) publications and interviews with former IAEA inspectors. We then mapped the requirements to current commercial or open-source technical capabilities to provide a development path for an initial Inspecta prototype while highlighting potential research and development tasks. We selected a high-impact inspection task that could be performed by an early Inspecta prototype and are developing the initial architecture, including the hardware platform. This paper describes the methodology for selecting an initial task scenario, the first set of Inspecta skills needed to assist with that task scenario, and finally the design and development of Inspecta’s architecture and platform.
Writing software is difficult. However, writing complex, well-designed, well-tested, and functionally correct software is incredibly difficult. An entire field of study is devoted to the validation and verification of software to address this problem, and in this paper we analyze the landscape of currently available third-party software verification and validation tools. We have divided our analysis into three areas of software validation: formal methods, static analysis, and test generation. Formal verification is the most complex method for validating software correctness, but also the most thorough, as it establishes the mathematical validity of the source code. Static analysis generally relies on abstract-syntax-tree traversal techniques to find defects such as memory leaks or stack overflow issues. Automatic test generation is similar in implementation to static analysis, but pushes a bit further by verifying the boundedness of function inputs and outputs with respect to annotated or parsed criteria. The crux of this report is to analyze and describe the software tools that implement these techniques to validate and verify software. Pros and cons related to installation, utilization, and capabilities of the frameworks are described, and reproducible examples are provided with a focus on usability. The initial survey concluded that the most noteworthy tools are Z3, Isabelle/HOL, and TLA+ for formal verification, and Infer, Frama-C, and SonarQube for static analysis. With these tools in mind, a final conjecture is provided that describes future avenues for using these tools to develop a verification framework to assist in validating existing software at Sandia National Laboratories.
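As a small, reproducible illustration of the style of usage the survey targets, the snippet below checks a toy arithmetic constraint system with Z3's Python bindings; the specific constraints are illustrative and are not one of the report's examples.

```python
# Toy satisfiability check with Z3's Python bindings; the constraints are illustrative only.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x > 0, y > 0, x + y == 9, 2 * x == y)    # simple integer constraints

if s.check() == sat:
    print("satisfiable:", s.model())            # a model assigning x and y (here x=3, y=6)
else:
    print("unsatisfiable")
```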
Analog computing has been widely proposed to improve the energy efficiency of multiple important workloads, including neural network operations and other linear algebra kernels. To properly evaluate analog computing and explore more complex workloads, such as systems consisting of multiple analog data paths, system-level simulations are required. Moreover, prior work on system architectures for analog computing often relies on custom simulators, creating significant additional design effort and complicating comparisons between different systems. To remedy these issues, this report describes the design and implementation of a flexible tile-based analog accelerator element for the Structural Simulation Toolkit (SST). The element focuses heavily on the tile controller, an often-neglected aspect of prior work, and is sufficiently versatile to simulate a wide range of different tile operations, including neural network layers, signal processing kernels, and generic linear algebra operations, without major constraints. The tile model also interoperates with existing SST memory and network models to reduce the overall development load and enable future simulation of heterogeneous systems with both conventional digital logic and analog compute tiles. Finally, both the tile and array models are designed to easily support future extensions as new analog operations and applications that can benefit from analog computing are developed.
Computational design-based optimization is a well-used tool in science and engineering. This report documents the successful use of particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation, a capability that has not previously been available. Such a capability enables the particle simulation communities to go beyond forward simulation and promises to reduce the burden on overworked analysts by getting more done with less computation.
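To give a flavor of what a particle sensitivity inside a Monte Carlo sampler can look like, the sketch below uses a score-function (likelihood-ratio) estimator on a toy exponential path-length problem; this is an illustrative stand-in under simple assumptions, not the sensitivity formulation developed in the report.

```python
# Toy score-function (likelihood-ratio) sensitivity in a Monte Carlo particle setting:
# derivative of mean transmission past depth d with respect to the interaction rate sigma.
# Illustrative only; not the report's formulation.
import numpy as np

rng = np.random.default_rng(1)
sigma, depth, n = 2.0, 0.5, 200_000

x = rng.exponential(scale=1.0 / sigma, size=n)      # sampled free paths
f = (x > depth).astype(float)                        # transmission tally
score = 1.0 / sigma - x                              # d/dsigma log p(x; sigma)

est = np.mean(f * score)                             # MC estimate of d<f>/dsigma
exact = -depth * np.exp(-sigma * depth)              # analytic value for this toy model
print(f"estimate {est:.4f}  vs  exact {exact:.4f}")
```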
Solar Thermal Ammonia Production has the potential to synthesize ammonia in a green, renewable process that can greatly reduce the carbon footprint left by the conventional Haber-Bosch process. Ternary nitrides in the family A3BxN (A = Co, Ni, Fe; B = Mo; x = 2, 3) have been identified as potential candidates for NH3 production. Experiments with Co3Mo3N in the Ammonia Synthesis Reactor demonstrate cyclable NH3 production from the bulk nitride under pure H2. Production rates were fairly flat across the reduction steps, with no evident dependence on the consumed solid-state nitrogen, as would be expected from a catalytic Mars-van Krevelen mechanism. The material can be re-nitrided under pure N2. The bulk nitrogen consumed per reduction step averages between 25 and 40% of the total solid-state nitrogen. Selectivity to NH3 stabilized at 55 – 60% per cycle. Production rates (NH3 and N2) become apparent above 600 °C at P(H2) = 0.5 – 2 bar. The optimal operating point to keep selectivity high without compromising NH3 rates is currently estimated at 650 °C and 1.5 – 2 bar. The next steps are to optimize production rates, examine the effect of N2 addition in the NH3 synthesis reaction, and test additional ternary nitrides.
Foreign disinformation campaigns are strategically organized, extended efforts that use disinformation – false or misleading information deliberately placed by an adversary – to achieve some goal. Disinformation campaigns pose severe threats to our nation’s security by misinforming decision makers and negatively influencing their actions when they are operating on limited amounts of evidence. Current efforts rely on subject matter experts to manually identify disinformation, or on computers and traditional natural language processing algorithms to identify patterns in data and calculate the probability that something is disinformation. While both have their merits and successes, subject matter experts are unable to keep up with the high volume of global information, and traditional natural language processing algorithms do not do well at identifying why something is or is not disinformation. Our hypothesis is that we can identify disinformation by looking at the way someone speaks, specifically the rhetorical devices they use. We have curated and annotated a dataset designed for multiple natural language processing tasks, but specifically useful for disinformation detection algorithms.
This report summarizes the needs, challenges, and opportunities associated with carbon-free energy and energy storage for manufacturing and industrial decarbonization. Energy needs and challenges for different manufacturing and industrial sectors (e.g., cement/steel production, chemicals, materials synthesis) are identified. Key issues for industry include the need for large, continuous on-site capacity (tens to hundreds of megawatts), compatibility with existing infrastructure, cost, and safety. Energy storage technologies that can potentially address these needs, which include electrochemical, thermal, and chemical energy storage, are presented along with key challenges, gaps, and integration issues. Analysis tools to value energy storage technologies in the context of manufacturing and industrial decarbonization are also presented. Material is drawn from the Energy Storage for Manufacturing and Industrial Decarbonization (Energy StorM) Workshop, held February 8 - 9, 2022. The objective was to identify research opportunities and needs for the U.S. Department of Energy as part of its Energy Storage Grand Challenge program.
Structural health monitoring of an engineered component in a harsh environment is critical for multiple DOE missions, including the nuclear fuel cycle, subsurface energy production/storage, and energy conversion. Supported by a seed Laboratory Directed Research & Development (LDRD) project, we have explored a new concept for structural health monitoring by introducing a self-sensing capability into structural components. The concept is based on two recent technological advances: metamaterials and additive manufacturing. A self-sensing capability can be engineered by embedding a metastructure, for example a sheet of electromagnetic resonators, either metallic or dielectric, into a material component. This embedment can now be realized using 3-D printing. The precise geometry of the embedded metastructure determines how the material interacts with an incident electromagnetic wave. Any change in the structure of the material (e.g., straining, degradation, etc.) would inevitably perturb the embedded metastructures or metasurface array and therefore alter the electromagnetic response of the material, resulting in a frequency shift of the reflection spectrum that can be detected passively and remotely. This new sensing approach eliminates the complicated environmental shielding, in-situ power supply, and wire routing that are generally required by existing active-circuit-based sensors. The work documented in this report has preliminarily demonstrated the feasibility of the proposed concept and has established the simulation tools and experimental capabilities needed for future studies.
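As a toy illustration of the sensing principle, the sketch below treats an embedded resonator as an LC circuit whose resonant frequency shifts when strain perturbs its effective capacitance; the component values and the linear strain-capacitance relation are placeholder assumptions, not a designed metastructure.

```python
# Toy model of the sensing principle: an embedded resonator's resonant frequency
# f = 1 / (2*pi*sqrt(L*C)) shifts when strain perturbs its geometry, modeled here as a
# small change in effective capacitance. All values are assumptions for illustration.
import math

L = 2.0e-9       # effective inductance, H (assumed)
C0 = 0.5e-12     # effective capacitance, F (assumed)

def resonant_freq(C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

for strain in (0.0, 0.005, 0.01):
    C = C0 * (1.0 + strain)          # assume capacitance varies ~linearly with strain
    f = resonant_freq(C)
    print(f"strain {strain:.3f}: f ~ {f / 1e9:.3f} GHz")
```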