U.S. Domestic Material Control and Accounting for Advanced and Small Modular Reactors
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Aero-optics refers to optical distortions due to index-of-refraction gradients that are induced by aerodynamic density gradients. At hypersonic flow conditions, the bulk velocity is many times the speed of sound and density gradients may originate from shock waves, compressible turbulent structures, acoustic waves, thermal variations, etc. Due to the combination of these factors, aero-optic distortions are expected to differ from those common to subsonic and lower supersonic speeds. This report summarizes the results from a 2019-2022 Laboratory Directed Research and Development (LDRD) project led by Sandia National Laboratories in collaboration with the University of Notre Dame, New Mexico State University, and the Georgia Institute of Technology. Efforts extended experimental and simulation methodologies for the study of turbulent hypersonic boundary layers. Notable experimental advancements include development of spectral de-aliasing techniques for high-speed wavefront measurements, a Spatially Selective Wavefront Sensor (SSWFS) technique, new experimental data at Mach 8 and 14, a Quadrature Fringe Imaging Interferometer (QFII) technique for time-resolved index-of-refraction measurements, and application of QFII to shock-heated air. At the same time, model advancements include aero-optic analysis of several Direct Numerical Simulation (DNS) datasets from Mach 0.5 to 14 and development of wall-modeled Large Eddy Simulation (LES) techniques for aero-optic predictions. At Mach 8, measured and predicted root-mean-square Optical Path Differences agree within confidence bounds but are higher than semi-empirical trends extrapolated from lower Mach conditions. Overall, results show that aero-optic effects in the hypersonic flow regime are not simple extensions from prior knowledge at lower speeds and instead reflect the added complexity of compressible hypersonic flow physics.
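As context for the optical-path-difference (OPD) metrics referenced above, the sketch below shows one common way such quantities are computed from a simulated density field via the Gladstone-Dale relation; the synthetic density field, grid spacing, and use of the air Gladstone-Dale constant are illustrative assumptions, not values or methods taken from this project.

```python
import numpy as np

def opd_rms(rho, dz, k_gd=2.27e-4):
    """Estimate root-mean-square optical path difference (OPD) for a beam
    propagating along the z-axis of a density field rho [kg/m^3].

    rho : 3D array indexed (x, y, z); dz : grid spacing along z [m];
    k_gd : approximate Gladstone-Dale constant for air [m^3/kg].
    """
    n = 1.0 + k_gd * rho                 # index of refraction from density
    opl = np.sum(n, axis=2) * dz         # optical path length per (x, y) ray
    opd = opl - opl.mean()               # remove the piston (mean) component
    return np.sqrt(np.mean(opd**2))      # spatial RMS of the wavefront error

# Illustrative synthetic density field: low mean density plus random fluctuations.
rng = np.random.default_rng(0)
rho = 0.02 + 0.002 * rng.standard_normal((64, 64, 128))
print(f"OPD_rms ~ {opd_rms(rho, dz=1e-4):.3e} m")
```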
Abstract not provided.
Abstract not provided.
Tritium exhibits unique physical and chemical behavior that makes it highly mobile in the environment. Because it behaves similarly to hydrogen, it may be readily incorporated into the water cycle and other biological processes. Environmental transformations may also oxidize an elemental tritium release, increasing its dose coefficient and radiotoxicity by multiple orders of magnitude. While source term development and understanding for advanced reactors are still underway, tritium may be a radionuclide of interest. It is thus important to understand how tritium moves through the environment and how the MACCS accident consequence code handles acute tritium releases in an accident scenario. Additionally, existing tritium models may have functionalities that could inform updates to MACCS to handle tritium. In this report, tritium transport is reviewed and existing tritium models are summarized in view of potential updates to MACCS.
Estimation of two-phase fluid flow properties is important to understand and predict water and gas movement through the vadose zone for agricultural, hydrogeological, and engineering applications, such as for vapor-phase contaminant transport and/or containment of noble gases in the subsurface. In this second progress report of FY22, we present two ongoing activities related to imbibition testing on volcanic rock samples. We present the development of a new analytical solution predicting the temperature response observed during imbibition into dry samples, as discussed in our previous first progress report for FY22. We also illustrate the use of a multi-modal capillary pressure distribution to simulate both early- and late-time imbibition data collected on tuff core that can exhibit multiple pore types. These FY22 imbibition tests were conducted for an extended period (i.e., far beyond the time required for the wetting front to reach the top of the sample), which is necessary for parameter estimation and characterization of two different pore types within the samples.
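To illustrate what a multi-modal capillary pressure distribution can look like, the sketch below combines two van Genuchten retention curves with saturation-weighted mixing, a common way to represent two pore populations; the parameter values and the specific two-mode weighting are hypothetical examples and are not the calibrated parameters or functional form used for the tuff cores discussed above.

```python
import numpy as np

def van_genuchten_se(pc, alpha, n):
    """Effective saturation Se(pc) for a single van Genuchten pore mode.
    pc : capillary pressure [Pa]; alpha [1/Pa]; n > 1 with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * pc) ** n) ** (-m)

def bimodal_se(pc, w1, a1, n1, a2, n2):
    """Weighted sum of two van Genuchten modes (e.g., tight matrix + larger pores)."""
    return w1 * van_genuchten_se(pc, a1, n1) + (1.0 - w1) * van_genuchten_se(pc, a2, n2)

# Hypothetical parameters: a tight matrix mode and a coarser second pore mode.
pc = np.logspace(2, 7, 200)                       # 100 Pa to 10 MPa
se = bimodal_se(pc, w1=0.7, a1=1e-5, n1=1.6, a2=1e-3, n2=2.5)
for p, s in zip(pc[::50], se[::50]):
    print(f"pc = {p:10.1f} Pa  ->  Se = {s:.3f}")
```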
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This project evaluated the use of emerging spintronic memory devices for robust and efficient variational inference schemes. Variational inference (VI) schemes, which constrain the distribution for each weight to be a Gaussian distribution with a mean and standard deviation, are a tractable method for calculating posterior distributions of weights in a Bayesian neural network such that the network can still be trained using the powerful backpropagation algorithm. Our project focuses on domain-wall magnetic tunnel junctions (DW-MTJs), a powerful multi-functional spintronic synapse design that can achieve low-power switching while also opening a pathway towards repeatable, analog operation using fabricated notches. Our initial efforts to employ DW-MTJs as an all-in-one stochastic synapse encoding both a mean and a standard deviation did not meet the quality metrics for hardware-friendly VI. In the future, new device stacks and methods for expressive anisotropy modification may yet make this approach viable. However, as a fallback that immediately satisfies our requirements, we invented and detailed how the combination of a DW-MTJ synapse encoding the mean and a probabilistic Bayes-MTJ device, programmed via a ferroelectric or ionically modifiable layer, can robustly and expressively implement VI. This design includes a small physics-informed circuit model that was scaled up to demonstrate rigorous uncertainty quantification applications, up to and including small convolutional networks on a grayscale image classification task and larger (residual) networks implementing multi-channel image classification. Lastly, because these results and ideas all depend on an inference application in which weights (spintronic memory states) remain non-volatile, the retention of these synapses for the notched case was further interrogated. These investigations revealed and emphasized the importance of both notch geometry and anisotropy modification for further enhancing the endurance of written spintronic states. In the near future, these results will be mapped to effective predictions for room-temperature and elevated-temperature DW-MTJ memory retention, and experimentally verified when devices become available.
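For readers unfamiliar with the VI scheme referenced above, the following minimal sketch shows how a Gaussian weight posterior (mean plus standard deviation) can be sampled with the reparameterization trick so that backpropagation remains applicable; it is a generic software illustration under standard Bayes-by-backprop assumptions, not a model of the DW-MTJ or Bayes-MTJ hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

# Variational parameters for one Bayesian weight: mean and (softplus) std-dev.
mu, rho = 0.3, -2.0
sigma = np.log1p(np.exp(rho))            # softplus keeps sigma positive

# Reparameterization trick: sample w = mu + sigma * eps with eps ~ N(0, 1),
# so gradients can flow through mu and rho during backpropagation.
eps = rng.standard_normal(5)
w_samples = mu + sigma * eps

# KL divergence between q(w) = N(mu, sigma^2) and a standard normal prior,
# the regularization term added to the data loss in variational inference.
kl = np.log(1.0 / sigma) + (sigma**2 + mu**2) / 2.0 - 0.5
print("weight samples:", np.round(w_samples, 3))
print("KL[q || N(0,1)] =", round(float(kl), 4))
```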
Abstract not provided.
The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a way of thinking about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics; nearly all are well known but often go unrecognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions, whereas statisticians work with samples. In so many words, information theory using samples is the practice of statistics.
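For reference, the quantities discussed above have compact textbook definitions (standard results, not specific to this report):

```latex
H(X) = -\sum_{x} p(x)\,\log p(x)
\qquad \text{(the expected value of the surprise } -\log p(x) \text{)}

D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x)\,\log\frac{p(x)}{q(x)} \;\ge\; 0

\hat{\theta}_{\mathrm{MLE}}
  = \arg\max_{\theta} \frac{1}{N} \sum_{i=1}^{N} \log q_{\theta}(x_i)
  \;\longrightarrow\;
  \arg\min_{\theta} D_{\mathrm{KL}}\!\left(p \,\|\, q_{\theta}\right)
  \quad \text{as } N \to \infty
```

The last line gives one concrete reading of the closing remark: maximum-likelihood estimation from samples converges to minimizing the KL divergence from the data-generating distribution, so "information theory using samples" and the practice of statistics coincide in this sense.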
Abstract not provided.
Physical Review E
Due to significant computational expense, discrete element method simulations of jammed packings of size-dispersed spheres with size ratios greater than 1:10 have remained elusive, limiting the correspondence between simulations and real-world granular materials with large size dispersity. Invoking a recently developed neighbor binning algorithm, we generate mechanically stable jammed packings of frictionless spheres with power-law size distributions containing up to nearly 4 000 000 particles with size ratios up to 1:100. By systematically varying the width and exponent of the underlying power laws, we analyze the role of particle size distributions on the structure of jammed packings. The densest packings are obtained for size distributions that balance the relative abundance of large-large and small-small particle contacts. Although the proportion of rattler particles and mean coordination number strongly depend on the size distribution, the mean coordination of nonrattler particles attains the frictionless isostatic value of six in all cases. The size distribution of nonrattler particles that participate in the load-bearing network exhibits no dependence on the width of the total particle size distribution beyond a critical particle size for low-magnitude exponent power laws. This signifies that only particles with sizes greater than the critical particle size contribute to the mechanical stability. However, for high-magnitude exponent power laws, all particle sizes participate in the mechanical stability of the packing.
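As a small illustration of the particle populations described above, the sketch below draws diameters from a truncated power-law distribution by inverse-CDF sampling; the exponent, size ratio, and sample count are arbitrary examples rather than the parameters or packing-generation algorithm used in the study.

```python
import numpy as np

def sample_power_law_diameters(n, d_min, d_max, beta, rng):
    """Draw n diameters with probability density p(d) ~ d**(-beta)
    on [d_min, d_max], using inverse-CDF sampling (requires beta != 1)."""
    u = rng.random(n)
    a, b = d_min ** (1.0 - beta), d_max ** (1.0 - beta)
    return (a + u * (b - a)) ** (1.0 / (1.0 - beta))

rng = np.random.default_rng(0)
d = sample_power_law_diameters(100_000, d_min=1.0, d_max=100.0, beta=2.5, rng=rng)
print(f"size ratio spanned          : 1:{d.max() / d.min():.0f}")
print(f"number fraction with d < 10 : {np.mean(d < 10):.3f}")
print(f"volume fraction with d < 10 : {np.sum(d[d < 10]**3) / np.sum(d**3):.3f}")
```

The contrast between number fraction and volume fraction in the printout is the basic reason small particles can dominate the population count while contributing little to the load-bearing volume, consistent with the critical-size behavior described above.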
Abstract not provided.
Abstract not provided.
Abstract not provided.
Imaging using THz waves has been a promising option for penetrative measurements in environments that are opaque to visible wavelengths. However, available THz imaging systems have been limited to relatively low frame rates and cannot be applied to study fast dynamics. This work explores the use of upconversion imaging techniques based on nonlinear optics to enable wavelength-flexible, high-frame-rate THz imaging. UpConversion Imaging (UCI) uses nonlinear conversion techniques to shift the THz wavelengths carrying a target image to shorter visible or near-IR wavelengths that can be detected by available high-speed cameras. This report describes the analysis methodology used to design a prototype high-rate THz UCI system and gives a detailed explanation of the design choices that were made. The design uses a high-rate pulse-burst laser system to pump both THz generation and THz upconversion detection, allowing for scaling to acquisition rates in excess of 10 kHz. The design of the prototype system described in this report has been completed and all necessary materials have been procured. Assembly and characterization testing are ongoing as of the submission of this report. This report also proposes future directions for work on high-rate THz UCI and potential applications of future systems.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Researchers have recently estimated that Arctic submarine permafrost currently traps 60 billion tons of methane and contains 560 billion tons of organic carbon in seafloor sediments and soil, a giant pool of carbon with potentially large feedbacks on the climate system. Unlike terrestrial permafrost, the submarine permafrost system has remained a “known unknown” because of the difficulty in acquiring samples and measurements. Consequently, this potentially large carbon stock, never yet considered in global climate models or policy discussions, represents a real wildcard in our understanding of Earth’s climate. This report summarizes our group’s effort at developing a numerical modeling framework designed to produce a first-of-its-kind estimate of Arctic methane gas releases from the marine sediments to the water column, and potentially to the atmosphere, where positive climate feedback may occur. Newly developed modeling capability supported by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories now gives us the ability to probabilistically map gas distribution and quantity in the seabed by using a hybrid approach of geospatial machine learning and predictive numerical thermodynamic ensemble modeling. The novelty in this approach is its ability to produce maps of useful data in regions that are only sparsely sampled, a common challenge in the Arctic and a major obstacle to progress in the past. By applying this model to the circum-Arctic continental shelves and integrating the flux of free gas from in situ methanogenesis and dissociating gas hydrates from the sediment column under climate forcing, we can provide the most reliable estimate of a spatially and temporally varying source term for greenhouse gas flux that can be used by global oceanographic circulation and Earth system models (such as DOE’s E3SM). The result will allow us to finally tackle the wildcard of the submarine permafrost carbon system and better inform us about the severity of future national security threats that sustained climate change poses.
The increasing use of machine learning (ML) models to support high-consequence decision making drives a need to increase the rigor of ML-based decision making. Critical problems ranging from climate change to nonproliferation monitoring rely on machine learning for aspects of their analyses. Likewise, future technologies, such as the incorporation of data-driven methods into stockpile surveillance and predictive failure analysis for weapons components, will all rely on decision making that incorporates the output of machine learning models. In this project, our main focus was the development of decision-scientific methods that combine uncertainty estimates for machine learning predictions with a domain-specific model of error costs. Other focus areas include uncertainty measurement in ML predictions, designing decision rules using multiobjective optimization, the value of uncertainty reduction, and decision-tailored uncertainty quantification for probability estimates. By laying foundations for rigorous decision making based on the predictions of machine learning models, these approaches are directly relevant to every national security mission that applies, or will apply, machine learning to data, most of which entail some decision context.
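A minimal sketch of the central idea described above, combining an uncertainty-aware ML prediction with a domain-specific cost model to pick the minimum-expected-cost action; the class probabilities, action set, and cost matrix below are placeholders for illustration, not values from any project application.

```python
import numpy as np

# Predicted class probabilities from an (uncertainty-aware) ML model for one input,
# e.g., p(component is [healthy, degraded, failing] | data).
p = np.array([0.70, 0.25, 0.05])

# Domain-specific cost matrix: cost[action, true_state] (illustrative values only).
# Actions: 0 = do nothing, 1 = inspect, 2 = replace.
cost = np.array([
    [0.0,  50.0, 500.0],   # doing nothing is very costly if the part is failing
    [5.0,   5.0,  60.0],   # inspection carries a small fixed cost
    [40.0, 40.0,  40.0],   # replacement cost is the same regardless of state
])

expected_cost = cost @ p                 # expected cost of each action
best_action = int(np.argmin(expected_cost))
print("expected costs:", np.round(expected_cost, 2))
print("minimum-expected-cost action:", ["do nothing", "inspect", "replace"][best_action])
```

The same structure makes clear why uncertainty quantification matters: if the predicted probabilities are poorly calibrated, the expected costs, and hence the chosen action, can be systematically wrong.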
Grid-scale batteries need to be inexpensive to manufacture, safe to operate, and non-toxic in composition. Zinc aqueous (alkaline) batteries hold much promise, but good cycle life and utilization of the zinc have proven difficult to achieve, partly because zinc is susceptible to H2 gas evolution in KOH. Water-in-salt electrolytes (WiSEs) can address this shortcoming by lowering the activity of free water molecules in solution, thus reducing H2 gas evolution. In this work, we present the relevant fundamental physicochemical properties of an acetate-based WiSE to establish the practicality and performance of this class of WiSE for battery applications. Research on and understanding of acetate WiSEs are presently in a nascent state.
Optimal mitigation planning for highly disruptive contingencies to a transmission-level power system requires optimization with dynamic power system constraints, due to the key role of dynamics in system stability to major perturbations. We formulate a generalized disjunctive program to determine optimal grid component hardening choices for protecting against major failures, with differential algebraic constraints representing system dynamics (specifically, differential equations representing generator and load behavior and algebraic equations representing instantaneous power balance over the transmission system). We optionally allow stochastic optimal pre-positioning across all considered failure scenarios, and optimal emergency control within each scenario. This novel formulation allows, for the first time, analyzing the resilience interdependencies of mitigation planning, preventive control, and emergency control. Using all three strategies in concert is particularly effective at maintaining robust power system operation under severe contingencies, as we demonstrate on the Western System Coordinating Council (WSCC) 9-bus test system using synthetic multi-device outage scenarios. Towards integrating our modeling framework with real threats and more realistic power systems, we explore applying hybrid dynamics to power systems. Our work is applied to basic RL circuits with the ultimate goal of using the methodology to model protective tripping schemes in the grid. Finally, we survey mitigation techniques for HEMP threats and describe a GIS application developed to create threat scenarios in a grid with geographic detail.
Cryptography
Advanced, superscalar microprocessors are highly susceptible to wear-out failures because of their highly complex, densely packed circuit structure and extreme operational frequencies. Although many types of fault detection and mitigation strategies have been proposed, none have addressed the specific problem of detecting faults that lead to information leakage events on I/O channels of the microprocessor. Information leakage can be defined very generally as any type of output that the executing program did not intend to produce. In this work, we restrict this definition to output that represents a security concern, and in particular, to the leakage of plaintext or encryption keys, and propose a counter-based countermeasure to detect faults that cause this type of leakage event. Fault injection (FI) experiments are carried out on two RISC-V microprocessors emulated as soft cores on a Xilinx multi-processor System-on-chip (MPSoC) FPGA. The microprocessor designs are instrumented with a set of counters that records the number of transitions that occur on internal nodes. The transition counts are collected from all internal nodes under both fault-free and faulty conditions, and are analyzed to determine which counters provide the highest fault coverage and lowest latency for detecting leakage faults. We show that complete coverage of all leakage faults is possible using only a single counter strategically placed within the branch compare logic of the microprocessor.
This report details work that was completed to address the Fiscal Year 2022 Advanced Science and Technology (AS&T) Laboratory Directed Research and Development (LDRD) call for “AI-enhanced Co-Design of Next Generation Microelectronics.” This project required concurrent contributions from the fields of 1) materials science, 2) devices and circuits, 3) physics of computing, and 4) algorithms and system architectures. During this project, we developed AI-enhanced circuit design methods that relied on reinforcement learning and evolutionary algorithms. The AI-enhanced design methods were tested on neuromorphic circuit design problems that have real-world applications related to Sandia’s mission needs. The developed methods enable the design of circuits, including circuits that are built from emerging devices, and they were also extended to enable novel device discovery. We expect that these AI-enhanced design methods will accelerate progress towards developing next-generation, high-performance neuromorphic computing systems.
Accurate prediction of the ductile behavior of structural alloys up to and including failure is essential in component or system failure assessment, which is necessary for the nuclear weapons alteration and life extension programs of Sandia National Laboratories. Modeling such behavior requires computational capabilities to robustly capture strong nonlinearities (geometric and material), rate-dependent and temperature-dependent properties, and ductile failure mechanisms. This study's objective is to validate numerical simulations of a high-deformation crush of a stainless steel can. The process consists of identifying a suitable can geometry and loading conditions, conducting the laboratory testing, developing a high-quality Sierra/SM simulation, and then drawing comparisons between model and measurement to assess the fitness of the simulation with regard to the material model (plasticity), finite element model construction, and failure model. Following previous material model calibration, a J2 plasticity model with a microstructural BCJ failure model is employed to model the test specimen made of 304L stainless steel. Simulated results are verified and validated through mesh and mass-scaling convergence studies, parameter sensitivity studies, and a comparison to experimental data. The converged configuration uses a mesh discretization with 140,372 elements and mass scaling with a target time increment of 1.0e-6 seconds and a time step scale factor of 0.5. Results from the coupled thermal-mechanical explicit dynamic analysis are comparable to the experimental data. The simulated global force-versus-displacement (F/D) response captures key points of the experimental F/D response, such as yield, ultimate load, and kinks. Furthermore, the final deformed shape and field data predicted from the analysis are similar to those of the experimentally deformed can, as measured by 3D optical CMM scans and DIC data.
The purpose of this effort is to investigate whether large acoustic pressure waves can be transmitted inside beverage containers to enable pasteurization. Acoustic waves are known to induce large nonlinear compressive forces and shock waves in fluids, suggesting that compression waves may be capable of damaging bacteria inside beverage containers without appreciably increasing the temperature or altering the freshness and flavor of the beverage contents. Although a combined process such as thermosonication (e.g., sonication with heating) is likely more efficient, it is instructive to compute the acoustic pressure field distribution inside the beverage container. The COMSOL simulations used two- and three-dimensional models of beverage containers placed in a water bath to compute the acoustic pressure field. A limitation of these COMSOL models is that they cannot determine the bacterial lysis efficiency; rather, the models provide an indirect metric of bacterial lysis based on the magnitude of the pressure field and its distribution.
Abstract not provided.
The plant polymer lignin is the most abundant renewable source of aromatics on the planet, and conversion of it to valuable fuels and chemicals is critical to the economic viability of a lignocellulosic biofuels industry and to meeting the DOE’s 2022 goal of a $2.50/gallon mean biofuel selling price. Presently, there is no efficient way of converting lignin into valuable commodities. Current biological approaches require mixtures of expensive ligninolytic enzymes and engineered microbes. This project was aimed at circumventing these problems by discovering commensal relationships among fungi and bacteria involved in biological lignin utilization and using this knowledge to engineer microbial communities capable of converting lignin into renewable fuels and chemicals. Essentially, we aimed to learn from, mimic, and improve on nature. We discovered fungi that synergistically work together to degrade lignin, engineered fungal systems to increase expression of the required enzymes, and engineered organisms to produce products such as biodegradable plastic precursors.
Abstract not provided.
Prediction of flow, transport, and deformation in fractured and porous media is critical to improving our scientific understanding of coupled thermal-hydrological-mechanical processes related to subsurface energy storage and recovery, nonproliferation, and nuclear waste storage. In particular, predicting the response of rock to changes in pressure and stress has remained critically challenging. In this work, we advance computational capabilities for coupled processes in fractured and porous media using the Sandia Sierra Multiphysics software through verification and validation problems such as poro-elasticity, elasto-plasticity, and thermo-poroelasticity. We apply the Sierra software to geologic carbon storage, fluid injection/extraction, and enhanced geothermal systems. We also significantly improve machine learning approaches through latent-space and self-supervised learning. Additionally, we develop a new experimental technique for evaluating the dynamics of compacted soils at an intermediate scale. Overall, this project will enable us to systematically measure and control the earth system response to changes in stress and pressure due to subsurface energy activities.
This SAND report documents CIS Late Start LDRD Project 22-0311, "Differential geometric approaches to momentum-based formulations for fluids". The project primarily developed geometric mechanics formulations for momentum-based descriptions of nonrelativistic fluids, utilizing a differential geometry/exterior calculus treatment of momentum and a space+time splitting. Specifically, the full suite of geometric mechanics formulations (variational/Lagrangian, Lie-Poisson Hamiltonian and Curl-Form Hamiltonian) were developed in terms of exterior calculus using vector-bundle valued differential forms. This was done for a fairly general version of semi-direct product theory sufficient to cover a wide range of both neutral and charged fluid models, including compressible Euler, magnetohydrodynamics and Euler-Maxwell. As a secondary goal, this project also explored the connection between geometric mechanics formulations and the more traditional Godunov form (a hyperbolic system of conservation laws). Unfortunately, this stage did not produce anything particularly interesting, due to unforeseen technical difficulties. There are two publications related to this work currently in preparation, and this work will be presented at SIAM CSE 23, at which the PI is organizing a mini-symposium on geometric mechanics formulations and structure-preserving discretizations for fluids. The logical next step is to utilize the exterior calculus based understanding of momentum coupled with geometric mechanics formulations to develop (novel) structure-preserving discretizations of momentum. This is the main subject of a successful FY23 CIS LDRD "Structure-preserving discretizations for momentum-based formulations of fluids".
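For orientation, the momentum-based Hamiltonian description referenced above can be summarized for compressible Euler flow in conventional vector notation (rather than the vector-bundle-valued differential form language developed in the project), with momentum density m = ρu, mass density ρ, and entropy density σ:

```latex
H[\mathbf{m}, \rho, \sigma]
  = \int_{\Omega} \left( \frac{|\mathbf{m}|^{2}}{2\rho}
    + \rho\, e\!\left(\rho, \tfrac{\sigma}{\rho}\right) \right) \mathrm{d}x,
\qquad
\dot{F} = \{F, H\}_{\mathrm{LP}}
```

Here e is the specific internal energy and {·,·}_LP denotes the Lie-Poisson bracket on the dual of the semidirect-product Lie algebra of vector fields acting on the advected mass and entropy densities; this is the standard starting point that the project recasts in exterior-calculus form.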
Abstract not provided.
Abstract not provided.
Abstract not provided.
IFAC-PapersOnLine
This new research provides transformative marine energy technology to effectively power the blue economy. Harmonizing the energy capture and power from Wave Energy Converter (WEC) arrays requires innovative designs for the buoy, electric machines, energy storage systems (ESS), and coordinated onshore electric power grid (EPG) integration. This paper introduces two innovative elements that are co-designed to extract the maximum power: i) individual WEC buoys with a multi-resonance controller design and ii) WEC arrays synchronized through power packet network phase control via physical placement, reducing ESS requirements. MATLAB/Simulink models were created for the WEC array dynamics and control systems with a Bretschneider irregular wave spectrum as input. The numerical simulation results show that, for an ideal physical WEC buoy array phasing of 60 degrees, the ESS peak power and energy capacity requirements are minimized while the multi-resonant controllers optimize EPG power output for each WEC buoy.
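The effect of array phasing on storage requirements can be seen in a toy calculation: the absorbed power of a single buoy oscillates at twice the wave frequency, so spacing buoys to offset their wave phases flattens the array total. The sketch below uses idealized sinusoidal power profiles and a three-buoy example, not the Bretschneider-spectrum Simulink models described in the paper.

```python
import numpy as np

def array_power(n_wec, wave_phase_step_deg, t, p_mean=1.0):
    """Total power of n_wec buoys; each buoy's absorbed power oscillates at twice
    the wave frequency, so a wave-phase offset phi appears as 2*phi in the power."""
    w = 2.0 * np.pi / 8.0                         # 8 s wave period (illustrative)
    phi = np.deg2rad(wave_phase_step_deg) * np.arange(n_wec)
    p = p_mean * (1.0 + np.cos(2.0 * (w * t[:, None] - phi[None, :])))
    return p.sum(axis=1)

t = np.linspace(0.0, 60.0, 6001)
for step in (0.0, 60.0):
    p_tot = array_power(3, step, t)
    ripple = p_tot.max() - p_tot.mean()           # peak excursion an ESS must buffer
    print(f"wave-phase step {step:5.1f} deg : mean {p_tot.mean():.2f}, "
          f"peak above mean {ripple:.2f}")
```

With a 60 degree wave-phase step across three buoys, the double-frequency power oscillations end up 120 degrees apart and cancel, which is one intuitive reading of the reduced ESS peak-power requirement reported above; the paper's irregular-wave simulations are of course more involved.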
Abstract not provided.
This report documents the results and conclusions of a recent project to understand the technoeconomics of utility-scale, particle-based concentrating solar power (CSP) facilities leveraging unique operational strategies. This project included two primary objectives. The first objective was to build confidence in the modeling approaches applied to falling particle receivers (FPRs), including the effects of wind. The second objective was to create the necessary modeling capability to adequately predict and maximize the annual performance of utility-scale, particle-based CSP plants under anticipated conditions with and without active heliostat control. Results of an extensive model validation study provided the strongest evidence to date for the modeling strategies typically applied to FPRs, albeit at smaller receiver scales. This modeling strategy was then applied in a parametric study of candidate utility-scale FPRs, including both free-falling and multistage FPR concepts, to develop reduced-order models for predicting the receiver thermal efficiency under anticipated environmental and operating conditions. Multistage FPRs were found to significantly improve receiver performance at utility scales. These reduced-order models were then leveraged in a sophisticated technoeconomic analysis to optimize utility-scale, particle-based CSP plants considering the potential of active heliostat control. In summary, active heliostat control did not show significant performance benefits for future utility-scale CSP systems, though some benefit may still be realized in FPR designs with wide acceptance angles and/or with lower concentration ratios. Using the latest FPR technologies available, the levelized cost of electricity was quantified for particle-based CSP facilities with nominal powers ranging from 5 MWe up to 100 MWe, with many viable designs having costs < 0.06 $/kWh and local minima occurring between ~25–35 MWe.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Tracer gases, whether they are chemical or isotopic in nature, are useful tools for examining the flow and transport of gaseous or volatile species in the subsurface. One application is using detection of short-lived argon and xenon radionuclides to monitor for underground nuclear explosions. However, even chemically inert species, such as the noble gases, have been observed to exhibit non-conservative behavior when flowing through porous media containing certain materials, such as zeolites, due to gas adsorption processes. This report details the model developed, implemented, and tested in the open-source, massively parallel subsurface flow and transport simulator PFLOTRAN for future use in modeling the transport of adsorbing tracer gases.
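As a point of reference for this class of models, a linear-equilibrium (K_d) description of adsorption enters the transport equation through a retardation factor; the expressions below are the standard textbook forms, given only for orientation and not as the specific formulation implemented in PFLOTRAN:

```latex
R = 1 + \frac{\rho_b K_d}{\theta_g},
\qquad
v_{\mathrm{tracer}} = \frac{v_{\mathrm{gas}}}{R}
```

Here ρ_b is the bulk density of the medium, K_d the sorption distribution coefficient, and θ_g the gas-filled porosity; an adsorbing gas tracer therefore migrates more slowly than a conservative one, which is the non-conservative behavior noted above.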
Abstract not provided.
Abstract not provided.
Neuromorphic Computing and Engineering
Though neuromorphic computers have typically targeted applications in machine learning and neuroscience (‘cognitive’ applications), they have many computational characteristics that are attractive for a wide variety of computational problems. In this work, we review the current state-of-the-art for non-cognitive applications on neuromorphic computers, including simple computational kernels for composition, graph algorithms, constrained optimization, and signal processing. We discuss the advantages of using neuromorphic computers for these different applications, as well as the challenges that still remain. The ultimate goal of this work is to bring awareness to this class of problems for neuromorphic systems to the broader community, particularly to encourage further work in this area and to make sure that these applications are considered in the design of future neuromorphic systems.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
In this work we present a novel method for improving the high-temperature performance of silicon photomultipliers (SiPMs) via focused ion beam (FIB) modification of individual microcells. The literature suggests that most of the dark count rate (DCR) in a SiPM is contributed by a small percentage (<5%) of microcells. By using a FIB to electrically deactivate this relatively small number of microcells, we believe we can greatly reduce the overall DCR of the SiPM at the expense of a small reduction in overall photodetection efficiency, thereby improving its high temperature performance. In this report we describe our methods for characterizing the SiPM to determine which individual microcells contribute the most to the DCR, preparing the SiPM for FIB, and modifying the SiPM using the FIB to deactivate the identified microcells.
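The tradeoff motivating this approach can be illustrated with a toy calculation: if a heavy-tailed minority of microcells dominates the dark count rate, deactivating the noisiest few percent removes most of the DCR while costing only a proportional fraction of active area (and hence photodetection efficiency). The per-microcell rates below are synthetic and chosen only to mimic the literature observation cited above; they are not measurements from the devices studied.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 4096

# Synthetic per-microcell dark count rates: most cells quiet, a few percent "hot".
dcr = rng.lognormal(mean=0.0, sigma=0.4, size=n_cells)          # quiet-cell rates
hot = rng.random(n_cells) < 0.03                                # ~3% hot microcells
dcr[hot] *= rng.lognormal(mean=5.0, sigma=0.5, size=hot.sum())  # boost hot cells

order = np.argsort(dcr)[::-1]                                   # noisiest cells first
for frac in (0.0, 0.01, 0.03, 0.05):
    kill = order[: int(frac * n_cells)]
    keep = np.setdiff1d(np.arange(n_cells), kill)
    dcr_left = dcr[keep].sum() / dcr.sum()
    print(f"deactivate {frac:4.0%} of cells -> {dcr_left:5.1%} of DCR remains, "
          f"~{1 - frac:.0%} of PDE retained")
```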
Cathode-directed streamer evolution in near-atmospheric air is modeled in 3D pin-to-plane geometries using a 3D kinetic Particle-In-Cell (PIC) code that simulates particle-particle collisions via the Direct Simulation Monte Carlo (DSMC) method. Due to the computational challenges associated with a complete 360° volumetric domain, a practical alternative was achieved using a wedge domain, and a range of azimuthal angles was explored (5°, 15°, 30°, and 45°) to study possible effects of the finite wedge angle on streamer growth and propagation. A DC voltage of 6 kV is applied to a hemispherical anode of radius 100 μm, with a planar cathode held at ground potential, generating an over-volted state with an electric field of 4 MV/m across a 1500 μm gap. The domain is seeded with an initial ion and electron density of 10¹⁸ m⁻³ at a temperature of 1 eV, confined to a spherical region of radius 100 μm centered at the tip of the anode. The air chemistry model [1] includes standard Townsend breakdown mechanisms (electron-neutral elastic, excitation, ionization, attachment, and detachment collision chemistry and secondary electron emission) as well as streamer mechanisms (photoionization and ion-neutral collisions) via tracking excited-state neutrals, which can then either quench via collisions or spontaneously emit a photon based on specific Einstein A coefficients [2, 3]. In this work, positive streamer dynamics are formally quantified for each wedge angle in terms of electron velocity and density as temporal functions of coordinates r, Φ, and z. A random plasma seed is applied for each simulation, and particles of interest are tracked with near-femtosecond temporal resolution out to 1.4 ns and spatially binned. This process is repeated six times and the results are averaged. Prior 2D studies have shown that the reduced electric field, E/n, can significantly impact streamer evolution [4]. We extend the analysis to 3D wedge geometries, to limit computational costs, and examine the wedge angle's effect on streamer branching, propagation, and velocity. Results indicate that the smallest wedge angle that produced an acceptably converged solution is 30°. The potential effects that a mesh, when under-resolved with respect to the Debye length, can impart on streamer dynamics and numerical heating were not investigated, and we explicitly state here that the smallest cell size was approximately 10 times the minimum λD in the streamer channel at late times. This constraint on cell size was the result of computational limitations on total mesh count.
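Two of the plasma quantities referenced above have standard definitions worth recalling (given here only for reference):

```latex
\lambda_D = \sqrt{\frac{\varepsilon_0 k_B T_e}{n_e e^{2}}},
\qquad
\frac{E}{n}\ \text{(reduced electric field)},\quad 1\,\mathrm{Td} = 10^{-21}\,\mathrm{V\,m^{2}}
```

where T_e is the electron temperature, n_e the electron density, and n the neutral number density. For the initial seed parameters quoted above (n_e = 10¹⁸ m⁻³, T_e = 1 eV), λ_D is roughly 7 μm; in the developed streamer channel, where the electron density is far higher, the Debye length is correspondingly smaller, which is the context for the cell-size caveat at the end of the abstract.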
Abstract not provided.
This document contains the final report for the midyear LDRD titled "Extension of Interferometric Synthetic Aperture Radar to Multiple Phase-Centers." This report presents an overview of several methods for approaching the two-target in layover problem that exists in interferometric synthetic aperture radar systems. Simulation results for one of the methods are presented. In addition, a new direct approach is introduced.
This report documents the progress made in simulating the HERMES-III Magnetically Insulated Transmission Line (MITL) and courtyard with EMPIRE and ITS. This study focuses on shots taken during June and July of 2019 with the new MITL extension. Dose mapping of the courtyard was performed for several of these shots: 11132, 11133, 11134, 11135, 11136, and 11146. This report focuses on these shots because there was full data return from the MITL electrical diagnostics and the radiation dose sensors in the courtyard. The comparison starts with improving the processing of the incoming voltage from the experiment into the EMPIRE simulation. The currents are then compared at several locations along the MITL. The simulation results of the electrons impacting the anode are shown. The electron impact energy and angle are then handed off to ITS, which calculates the dose on the faceplate and at locations in the courtyard; these are compared to experimental measurements. ITS also calculates the photons and electrons that are injected into the courtyard, and these quantities are then used by EMPIRE to calculate the photon and electron transport in the courtyard. The details of the algorithms used to perform the courtyard simulations are presented, as well as qualitative comparisons of the electric field, magnetic field, and conductivity in the courtyard. Because of the computational burden of these calculations, the pressure in the courtyard was reduced to lower the computational load. The computational performance is presented along with suggestions on how to improve both the computational and the algorithmic performance. Some of the algorithmic changes would reduce the accuracy of the models, and detailed comparisons of these changes are left for a future study. In addition to the list of code improvements, a list of suggested experimental improvements is provided to improve the quality of the data return.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Representation of soil organic carbon (SOC) dynamics in Earth system models (ESMs) is a key source of uncertainty in predicting carbon-climate feedbacks. The magnitude of this uncertainty can be reduced by accurate representation of the environmental controllers of SOC stocks in ESMs. In this study, we used data on environmental factors, field SOC observations, ESM projections, and machine learning approaches to identify dominant environmental controllers of SOC stocks and derive functional relationships between environmental factors and SOC stocks. Our derived functional relationships predicted SOC stocks with similar accuracy as the machine learning approach. We used the derived relationships to benchmark the representation of SOC stocks in Coupled Model Intercomparison Project Phase 6 (CMIP6) ESMs. We found that the representation of environmental controls in ESMs diverges from field observations. Representation of SOC in ESMs can be improved by including additional environmental factors and representing their functional relationships with SOC consistently with observations.
Abstract not provided.
This document provides basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices, via GitLab/Jacamar-runner-based workflows, within the Common Engineering Environment (CEE) and High Performance Computing (HPC) environments at Sandia National Laboratories.
Abstract not provided.
Multiple physical and chemical forms of a given radionuclide may be released in the event of a nuclear accident. Given that variable forms of an isotope may elicit changes in how that isotope moves through the environment and ultimately impacts human receptors, it is pertinent to understand how nuclear accident consequence models, such as MACCS, account for variable forms. This report documents a review of MACCS modeling capabilities for variability in radionuclide chemical and physical forms. The review centers on the current state of practice for dosimetry and deposition modeling of varying radionuclide forms and assesses how consistent existing MACCS capabilities are with that state of practice. This analysis is also used to inform potential MACCS model upgrades. MACCS conceptual models along with dosimetry- and deposition-related practices are discussed. Recommendations and suggestions for model improvements are posited.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report details model development, theory, and a literature review focusing on the emission of contaminants from solid substrates in fires. This is the final report from a 2-year Nuclear Safety Research and Development (NSRD) project. The work represents progress towards a goal of having modeling and simulation capabilities that are sufficiently mature and accurate that they can be utilized in place of physical tests for determining safe handling practices. At present, the guidelines for safety are largely empirically based, derived from a survey of existing datasets. This particular report details the development, verification, and calibration of a number of code improvements that have been implemented in the SIERRA suite of codes, and the application of those codes to three different experimental scenarios that have been the subject of prior tests. The first scenario involves a contaminated PMMA slab, which is exposed to heat. The modeling involved a novel method for simulating the viscous diffusion of the particles in the slab. The second scenario involves a small pool fire of contaminated combustible liquid mimicking historical tests; the modeling finds that contaminant release depends strongly on the height of the liquid in the container. The third scenario involves the burning of a contaminated tray of shredded cellulose. A novel release mechanism was formulated based on the predicted progress of the decomposition of the cellulose, and while the model was found to result in release that can be tuned to match the experiments, some modifications to the model are desirable to achieve quantitative accuracy.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE Transactions on Device and Materials Reliability
This paper describes a new non-charge-based data storing technique in NAND flash memory called watermark that encodes read-only data in the form of physical properties of flash memory cells. Unlike the traditional charge-based data storing method in flash memory, the proposed technique is resistant to total ionizing dose (TID) effects. To evaluate its resistance to irradiation effects, we analyze data stored in several commercial single-level-cell (SLC) flash memory chips from different vendors and technology nodes. These chips are irradiated using a Co-60 gamma-ray source array for up to 100 krad(Si) at Sandia National Laboratories. Experimental evaluation performed on a flash chip from Samsung shows that the intrinsic bit error rate (BER) of the watermark increases from ~0.8% for TID = 0 krad(Si) to ~1% for TID = 100 krad(Si). Conversely, the BER of charge-based data stored on the same chip increases from 0% at TID = 0 krad(Si) to 1.5% at TID = 100 krad(Si). The results imply that the proposed technique may potentially offer significant improvements in data integrity relative to traditional charge-based data storage for very high radiation (TID > 100 krad(Si)) environments. These gains in data integrity relative to charge-based data storage are useful in radiation-prone environments, but they come at the cost of increased write times and higher BERs before irradiation.
Abstract not provided.
Abstract not provided.
The U.S. Strategic Petroleum Reserve (SPR) is a crude oil storage system administered by the U.S. Department of Energy. The reserve consists of 60 active storage caverns located in underground salt domes spread across four sites in Louisiana and Texas, near the Gulf of Mexico. Beginning in 2016, the SPR started executing Congressionally mandated oil sales. The configuration of the reserve, with a total capacity of greater than 700 million barrels (MMB), requires that unsaturated water (referred to herein as "raw" water) is injected into the storage caverns to displace oil for sales, exchanges, and drawdowns. As such, oil sales will produce cavern growth to the extent that raw water contacts the salt cavern walls and dissolves (leaches) the surrounding salt before reaching brine saturation. SPR injected a total of over 45 MMB of raw water into twenty-six caverns as part of oil sales in CY21. Leaching effects were monitored in these caverns to understand how the sales operations may impact the long-term integrity of the caverns. While frequent sonars are the most direct means to monitor changes in cavern shape, they can be resource intensive for the number of caverns involved in sales and exchanges. An intermediate option is to model the leaching effects and see if any concerning features develop. The leaching effects were modeled here using the Sandia Solution Mining Code, SANSMIC. The modeling results indicate that leaching-induced features do not raise concern for the majority of the caverns, 15 of 26. Eleven caverns, BH-107, BH-110, BH-112, BH-113, BM-109, WH-11, WH-112, WH-114, BC-17, BC-18, and BC-19, have features that may grow with additional leaching and should be monitored as leaching continues in those caverns. Additionally, BH-114, BM-4, and BM-106 were identified in previous leaching reports for recommendation of monitoring. Nine caverns had pre- and post-leach sonars that were compared with SANSMIC results. Overall, SANSMIC was able to capture the leaching well. A deviation in the SANSMIC and sonar cavern shapes was observed near the cavern floor in caverns with significant floor rise, a process not captured by SANSMIC. These results validate that SANSMIC continues to serve as a useful tool for monitoring changes in cavern shape due to leaching effects related to sales and exchanges.
Abstract not provided.
Abstract not provided.
Time-resolved X-ray thermometry is an enabling technology for measuring the temperature and phase change of components. However, current diagnostic methods are limited by the invasive nature of probes or the requirement of coatings and optical access to the component. Our proposed developments overcome these challenges by utilizing X-rays to directly measure the object's temperature. Variable-Temperature X-ray Diffraction (VT-XRD) was performed on several materials over a wide range of temperatures and diffraction angles to analyze the bulk diffraction patterns for temperature sensitivity. "High-speed" VT-XRD was then performed for a single material over a small range of diffraction angles to determine how fast the experiments could be performed while still maintaining peaks large enough for analysis.
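For reference, the physical basis for diffraction thermometry is that heating expands the lattice and shifts the diffraction peaks; combining Bragg's law with a linear thermal-expansion model gives the standard relations below (material-specific calibrations from the project are not reproduced here):

```latex
n\lambda = 2 d \sin\theta,
\qquad
d(T) \approx d_{0}\left[ 1 + \alpha \,(T - T_{0}) \right]
```

A measured peak shift Δθ therefore maps to a change in lattice spacing d and, through the expansion coefficient α, to a temperature change.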
Abstract not provided.
Abstract not provided.
Abstract not provided.
We report the system response of a pixelated associated particle imaging (API) neutron radiography system. The detector readout currently consists of a 2 × 2 array of organic glass scintillator detectors, each with an 8 × 8 array of optically isolated pixels that match the size and pitch of the ARRAYJ-60035-64P-PCB Silicon Photomultiplier (SiPM) array from SensL/onsemi with 6 × 6 mm² SiPMs. The alpha screen of the API deuterium-tritium neutron generator is read out with the S13361-3050AE-08 from Hamamatsu, which is an 8 × 8 array of 3 × 3 mm² SiPMs. Data from the 320-channel system is acquired with the TOFPET2-based readout system. We present the predicted imaging capability of an eventual 5 × 5 detector array, the waveform-based energy and pulse shape characterization of the individual detectors, and the timing and energy response from the TOFPET2 system.
This report summarizes research performed in the context of a REHEDS LDRD project that explores methods for measuring electrical properties of vessel joints. These properties, which include contact points and associated contact resistance, are “hidden” in the sense that they are not apparent from a computer-assisted design (CAD) description or visual inspection. As is demonstrated herein, the impact of this project is the development of electromagnetic near-field scanning capabilities that allow weapon cavity joints to be characterized with high spatial and/or temporal resolution. Such scans provide insight on the hidden electrical properties of the joint, allowing more detailed and accurate models of joints to be developed, and ultimately providing higher fidelity shielding effectiveness (SE) predictions. The capability to perform high-resolution temporal scanning of joints under vibration is also explored, using a multitone probing concept, allowing time-varying properties of joints to be characterized and the associated modulation to SE to be quantified.
Abstract not provided.
Structural health monitoring of an engineered component in a harsh environment is critical for multiple DOE missions including the nuclear fuel cycle, subsurface energy production/storage, and energy conversion. Supported by a seeding Laboratory Directed Research & Development (LDRD) project, we have explored a new concept for structural health monitoring by introducing a self-sensing capability into structural components. The concept is based on two recent technological advances: metamaterials and additive manufacturing. A self-sensing capability can be engineered by embedding a metastructure, for example, a sheet of electromagnetic resonators, either metallic or dielectric, into a material component. This embedment can now be realized using 3-D printing. The precise geometry of the embedded metastructure determines how the material interacts with an incident electromagnetic wave. Any change in the structure of the material (e.g., straining, degradation, etc.) would inevitably perturb the embedded metastructure or metasurface array and therefore alter the electromagnetic response of the material, resulting in a frequency shift of the reflection spectrum that can be detected passively and remotely. This new sensing approach eliminates the complicated environmental shielding, in-situ power supply, and wire routing that are generally required by existing active-circuit-based sensors. The work documented in this report has preliminarily demonstrated the feasibility of the proposed concept. The work has established the needed simulation tools and experimental capabilities for future studies.
This report documents the Resilience Enhancements through Deep Learning Yields (REDLY) project, a three-year effort to improve electrical grid resilience by developing scalable methods for system operators to protect the grid against threats leading to interrupted service or physical damage. The computational complexity and uncertain nature of current real-world contingency analysis presents significant barriers to automated, real-time monitoring. While there has been a significant push to explore the use of accurate, high-performance machine learning (ML) model surrogates to address this gap, their reliability is unclear when deployed in high-consequence applications such as power grid systems. Contemporary optimization techniques used to validate surrogate performance can exploit ML model prediction errors, which necessitates the verification of worst-case performance for the models.
Abstract not provided.
Abstract not provided.
This report documents a method for the quantitative identification of radionuclides of potential interest for accident consequence analysis involving advanced nuclear reactors. Based on previous qualitative assessments of radionuclide inventories for advanced reactors, coupled with the review of a radiological inventory developed for a heat pipe reactor, a 1 Ci airborne release was calculated for 137 radionuclides using the MACCS 4.1 code suite. Several assumptions regarding release conditions were made and are discussed herein. The potential release of a heat pipe reactor inventory was also modeled following the same assumptions. Results provide an estimation of the relative EARLY and CHRONC phase dose contributions from advanced reactor radionuclides and are normalized to doses from equivalent releases of I-131 and Cs-137, respectively. Ultimately, a list of 69 radionuclides with EARLY or CHRONC dose contributions at least 1/100th that of I-131 or Cs-137, respectively – 48 of which are currently considered for LWR consequence analyses – was identified as being of potential importance for analyses involving a heat pipe reactor.
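A minimal sketch of the screening logic described above, in which per-radionuclide dose contributions from unit (1 Ci) releases are normalized to I-131 (EARLY phase) or Cs-137 (CHRONC phase) and retained when either ratio reaches 1/100; the dose values in the example dictionary are placeholders, not MACCS outputs.

```python
# Placeholder unit-release dose results: {nuclide: (early_dose, chronc_dose)}.
unit_release_dose = {
    "I-131":  (1.00e-3, 2.0e-4),
    "Cs-137": (4.00e-4, 5.0e-3),
    "Sr-90":  (6.00e-5, 9.0e-4),
    "Kr-88":  (2.00e-6, 1.0e-8),
}

early_ref = unit_release_dose["I-131"][0]      # EARLY-phase reference dose
chronc_ref = unit_release_dose["Cs-137"][1]    # CHRONC-phase reference dose
threshold = 0.01                               # keep if >= 1/100th of the reference

important = [
    nuc for nuc, (early, chronc) in unit_release_dose.items()
    if early / early_ref >= threshold or chronc / chronc_ref >= threshold
]
print("radionuclides retained for consequence analysis:", important)
```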
Abstract not provided.
Abstract not provided.
Ambient infrasound noise in quiet, rural environments has been extensively studied and well characterized through noise models for several decades. More recently, creating noise models for high-noise rural environments has also become an area of active research. However, far less work has been done to create generalized low-frequency noise models for urban areas. The high ambient noise levels expected in cities and other highly populated areas mean that these environments are regarded as poor locations for acoustic sensors, and historically, sensor deployment in urban areas was avoided for this reason. However, there are several advantages to placing sensors in urban environments, including convenience of deployment and maintenance, and increasingly, necessity, as more previously rural areas become populated. This study seeks to characterize trends in low-frequency urban noise by creating a background noise model for Las Vegas, NV, using the Las Vegas Infrasound Array (LVIA): a network of eleven infrasound sensors deployed throughout the city. Data included in this study span from 2019 to 2021 and provide a largely uninterrupted record of noise levels in the city from 0.1–500 Hz, with only minor discontinuities on individual stations. We organize raw data from the LVIA sensors into hourly power spectral density (PSD) averages for each station and select from these PSDs to create frequency distributions for time periods of interest. These frequency distributions are converted into probability density functions (PDFs), which are then used to evaluate variations in frequency and amplitude over daily to seasonal timescales. In addition to PDFs, the median, 5th percentile, and 95th percentile amplitude values are calculated across the entire frequency range. This methodology follows a well-established process for noise model creation.
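The PSD-to-percentile workflow described above can be outlined in a few lines; the sketch below operates on synthetic noise at an arbitrary sample rate and is not tied to the LVIA instrument responses, data formats, or calibration.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # Hz, illustrative sample rate
rng = np.random.default_rng(0)

# Stand-in for a collection of hourly records: 24 short synthetic noise segments.
psds = []
for _ in range(24):
    x = rng.standard_normal(int(60 * fs))     # placeholder waveform (white noise)
    f, pxx = welch(x, fs=fs, nperseg=4096)
    psds.append(10.0 * np.log10(pxx))         # power spectral density in dB
psds = np.array(psds)

# Empirical distribution of noise power at each frequency bin.
median = np.percentile(psds, 50, axis=0)
p05 = np.percentile(psds, 5, axis=0)
p95 = np.percentile(psds, 95, axis=0)

idx = np.argmin(np.abs(f - 1.0))              # inspect the bin nearest 1 Hz
print(f"median level near 1 Hz: {median[idx]:.1f} dB")
print(f"5th-95th percentile spread near 1 Hz: {p95[idx] - p05[idx]:.1f} dB")
```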
Abstract not provided.
Quantifying the sensitivity (how a quantity of interest (QoI) varies with respect to a parameter) and the response (the representation of a QoI as a function of a parameter) of a computer model of a parametric dynamical system is an important and challenging problem. Traditional methods fail in this context since sensitive dependence on initial conditions implies that the sensitivity and response of a QoI may be ill-conditioned or not well-defined. If a chaotic model has an ergodic attractor, then ergodic averages of QoIs are well-defined quantities and their sensitivity can be used to characterize model sensitivity. The response theorem gives sufficient conditions such that the local forward sensitivity (the derivative with respect to a given parameter) of an ergodic average of a QoI is well-defined. We describe a method based on ergodic and response theory for computing the sensitivity and response of a given QoI with respect to a given parameter in a chaotic model with an ergodic and hyperbolic attractor. This method does not require computation of ensembles of the model with perturbed parameter values. The method is demonstrated and some of the computations are validated on the Lorenz 63 and Lorenz 96 models.
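In symbols, the quantity whose sensitivity is sought is the ergodic (infinite-time) average of the QoI, and the response of interest is its parameter derivative (notation is generic: s denotes the parameter, J the QoI, and μ_s the ergodic invariant measure):

```latex
\langle J \rangle(s)
  = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} J\!\big(x(t; s)\big)\,\mathrm{d}t
  = \int J \,\mathrm{d}\mu_{s},
\qquad
\text{sensitivity} = \frac{\mathrm{d}\langle J \rangle}{\mathrm{d}s}
```

The response theorem (linear response) provides conditions, satisfied for hyperbolic attractors, under which this derivative exists even though individual trajectories exhibit sensitive dependence on initial conditions.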
Abstract not provided.
This report documents the results of an FY22 ASC V&V level 2 milestone demonstrating new algorithms for multifidelity uncertainty quantification. Part I of the report describes the algorithms, studies their performance on a simple model problem, and then deploys the methods to a thermal battery example from the open literature. Part II (restricted distribution) applies the multifidelity UQ methods to specific thermal batteries of interest to the NNSA/ASC program.
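For context on the class of estimators involved, a two-fidelity control-variate mean estimator has the generic form below; this is a standard construction from the multifidelity UQ literature, and the specific algorithms exercised in the milestone may differ:

```latex
\hat{Q}^{\mathrm{MF}}
  = \frac{1}{N}\sum_{i=1}^{N} f_{\mathrm{HF}}(x_i)
  + \alpha \left( \frac{1}{M}\sum_{j=1}^{M} f_{\mathrm{LF}}(\tilde{x}_j)
                 - \frac{1}{N}\sum_{i=1}^{N} f_{\mathrm{LF}}(x_i) \right),
\qquad M \gg N
```

The estimator is unbiased for any α, and α is chosen to minimize variance (e.g., using the correlation between the high- and low-fidelity outputs), so the cheap low-fidelity model absorbs most of the sampling burden while a few high-fidelity evaluations anchor the result.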
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Energies
On the path towards climate-neutral future mobility, the usage of synthetic fuels derived from renewable power sources, so-called e-fuels, will be necessary. Oxygenated e-fuels, which contain oxygen in their chemical structure, not only have the potential to realize a climate-neutral powertrain, but also to burn more cleanly in terms of soot formation. Polyoxymethylene dimethyl ethers (PODE or OMEs) are a frequently discussed representative of such combustibles. However, to operate compression ignition engines with these fuels at maximum efficiency and minimum emissions, the physical-chemical behavior of OMEs needs to be understood and quantified. In particular, the detailed characterization of the physical and chemical properties of the spray is of utmost importance for the optimization of the injection and mixture formation process. The presented work aimed to develop a comprehensive CFD model to specify the differences between OMEs and dodecane, which served as a reference diesel-like fuel, with regard to spray atomization, mixing, and auto-ignition for single- and multi-injection patterns. The simulation results were validated against experimental data from a high-temperature and high-pressure combustion vessel. The sprays’ liquid and vapor phase penetration were measured with Mie scattering and schlieren imaging as well as diffuse back-illumination and Rayleigh scattering for both fuels. To characterize the ignition process and the flame propagation, measurements of the OH* chemiluminescence of the flame were carried out. Significant differences in the ignition behavior between OMEs and dodecane could be identified in both experiments and CFD simulations. Liquid penetration as well as flame lift-off length are shown to be consistently longer for OMEs. Zones of high reaction activity differ substantially for the two fuels: along the spray center axis for OMEs and at the shear boundary layers between fuel and ambient air for dodecane. Additionally, the transient behavior of high-temperature reactions for OMEs is predicted to be much faster.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
I started my internship in January 2022, but the research on measuring dispersion and loss of 355 nm light in a silicon oxide waveguide, which is the focus of this paper, began in August 2022. The motivation for this project is to determine whether it is possible to use pulsed 355 nm light in an integrated waveguide within an ion trap chip. To begin, light from the 355 nm Coherent Paladin laser was coupled into a fiber, referred to here as the “source fiber.” This fiber was used to deliver light to each of the experiments, covered in detail in the following paragraphs, so that loss and dispersion measurements could be performed.
Applied Mathematical Modelling
In this work we infer the underlying distribution on pore radius in human cortical bone samples using ultrasonic attenuation data. We first discuss how to formulate polydisperse attenuation models using a probabilistic approach and the Waterman-Truell model for scattering attenuation. We then compare the Independent Scattering Approximation and the higher-order Waterman-Truell models’ forward predictions for total attenuation in polydisperse samples. Following this, we formulate an inverse problem under the Prohorov Metric Framework, coupled with variational regularization to stabilize the problem. We then use experimental attenuation data taken from human cadaver samples and solve inverse problems resulting in nonparametric estimates of the probability density function on pore radius. We compare these estimates to the “true” microstructure of the bone samples determined via microCT imaging. We find that our methodology allows us to reliably estimate the underlying microstructure of the bone from attenuation data.
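As a loose illustration of the discretize-and-regularize idea (the attenuation kernel below is a placeholder, not the Waterman-Truell model, and the Prohorov Metric Framework estimator used in the paper is more sophisticated), one can fit nonnegative, regularized weights over candidate pore radii to attenuation-versus-frequency data and normalize the result into a discrete probability distribution:

# Loose sketch of a nonparametric pore-radius distribution estimate via
# regularized nonnegative least squares; kernel and data are synthetic stand-ins.
import numpy as np
from scipy.optimize import nnls

radii = np.linspace(5e-6, 100e-6, 40)       # candidate pore radii [m]
freqs = np.linspace(1e6, 8e6, 30)           # ultrasonic frequencies [Hz]

# Placeholder kernel: attenuation contribution of a pore of radius a at frequency f.
K = np.array([[(f * a) ** 2 for a in radii] for f in freqs])
K /= K.max()

true_w = np.exp(-0.5 * ((radii - 40e-6) / 10e-6) ** 2)          # synthetic "truth"
alpha_meas = K @ true_w + 0.01 * np.random.default_rng(1).normal(size=freqs.size)

# Tikhonov-style regularization stabilizes the ill-posed inversion; nonnegativity
# lets the recovered weights be normalized into a discrete probability distribution.
lam = 0.1
A = np.vstack([K, lam * np.eye(radii.size)])
b = np.concatenate([alpha_meas, np.zeros(radii.size)])
w, _ = nnls(A, b)
pmf = w / w.sum()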
Abstract not provided.
Making reliable predictions in the presence of uncertainty is critical to high-consequence modeling and simulation activities, such as those encountered at Sandia National Laboratories. Surrogate or reduced-order models are often used to mitigate the expense of performing quality uncertainty analyses with high-fidelity, physics-based codes. However, phenomenological surrogate models do not always adhere to important physics and system properties. This project develops surrogate models that integrate physical theory with experimental data through a maximally informative framework that accounts for the many uncertainties present in computational modeling problems. Correlations between relevant outputs are preserved through the use of multi-output or co-predictive surrogate models; known physical properties (specifically monotonicity) are also preserved; and unknown physics and phenomena are detected using a causal analysis. By endowing surrogate models with key properties of the physical system being studied, their predictive power is arguably enhanced, allowing for reliable simulations and analyses at a reduced computational cost.
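As a simple illustration of preserving a known physical property in a surrogate (far simpler than the project's co-predictive, maximally informative framework), a one-dimensional monotonic input-output trend can be enforced exactly with isotonic regression; the data and trend below are hypothetical.

# Illustrative only: isotonic regression as a monotonicity-preserving 1D surrogate.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 50))
y = np.log1p(5.0 * x) + 0.1 * rng.normal(size=x.size)   # noisy but monotonically increasing data

surrogate = IsotonicRegression(increasing=True, out_of_bounds="clip").fit(x, y)
y_pred = surrogate.predict(np.linspace(0.0, 1.0, 200))   # predictions never violate monotonicity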
AIChE Journal
Chemical engineering systems often involve a functional porous medium, such as in catalyzed reactive flows, fluid purifiers, and chromatographic separations. Ideally, the flow rates throughout the porous medium are uniform, and all portions of the medium contribute efficiently to its function. The permeability is a property of a porous medium that depends on pore geometry and relates flow rate to pressure drop. Additive manufacturing techniques raise the possibilities that permeability can be arbitrarily specified in three dimensions, and that a broader range of permeabilities can be achieved than by traditional manufacturing methods. Using numerical optimization methods, we show that designs with spatially varying permeability can achieve greater flow uniformity than designs with uniform permeability. We consider geometries involving hemispherical regions that distribute flow, as in many glass chromatography columns. By several measures, significant improvements in flow uniformity can be obtained by modifying permeability only near the inlet and outlet.
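For reference, the permeability enters through Darcy's law (a standard relation, stated here only for orientation), which relates the volumetric flux to the pressure gradient; a spatially varying design corresponds to letting the permeability depend on position:

\[
\mathbf{q}(\mathbf{x}) = -\frac{k(\mathbf{x})}{\mu}\,\nabla p(\mathbf{x}),
\]

where \(\mathbf{q}\) is the Darcy flux, \(k(\mathbf{x})\) the permeability field, \(\mu\) the dynamic viscosity, and \(p\) the pressure.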
Abstract not provided.
Abstract not provided.
Journal of Peridynamics and Nonlocal Modeling
The paper presents a collection of results on continuous dependence for solutions to nonlocal problems under perturbations of data and system parameters. The integral operators appearing in these systems capture interactions via heterogeneous kernels that exhibit different types of weak singularities, spatial dependence, and even regions of zero interaction. The stability results showcase explicit bounds involving the measure of the domain and of the interaction collar size, the nonlocal Poincaré constant, and other parameters. In the nonlinear setting, the bounds quantify in different Lp norms the sensitivity of solutions under different nonlinearity profiles. The results are validated by numerical simulations showcasing discontinuous solutions, varying interaction horizons, and symmetric and heterogeneous kernels.
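As one representative example of this class of operators (the paper treats a broader family, including heterogeneous and weakly singular kernels), a linear nonlocal diffusion operator with interaction horizon \(\delta\) and kernel \(\gamma\) can be written as

\[
\mathcal{L}_\delta u(x) = \int_{B_\delta(x)} \big(u(y) - u(x)\big)\,\gamma(x,y)\,dy,
\]

and continuous-dependence results of this type bound differences of solutions in terms of perturbations of the kernel \(\gamma\), the horizon \(\delta\), and the data.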
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Mechanics of Materials
The Lip-field approach was introduced in Moës and Chevaugeon (2021) as a new way to regularize softening material models. It was tested in 1D quasistatics in Moës and Chevaugeon (2021) and in 2D quasistatics in Chevaugeon and Moës (2021); this paper extends it to 1D dynamics on the challenging problem of dynamic fragmentation. The Lip-field approach formulates the mechanical problem to be solved as an optimization problem in which the incremental potential to be minimized is the non-regularized one. Spurious localization is prevented by imposing a Lipschitz constraint on the damage field. The displacement and damage fields at each time step are obtained by a staggered algorithm: the displacement field is computed for a fixed damage field, and the damage field is then computed for a fixed displacement field. Each of these two problems is convex, which is not the case for the global problem in which the displacement and damage fields are sought simultaneously. The incremental potential is obtained by equivalence with a cohesive zone model, which makes material parameter calibration simple. A non-regularized local damage model equivalent to a cohesive zone model is also proposed. It is used as a reference for the Lip-field approach, without the need to implement displacement jumps. These approaches are applied to the brittle fragmentation of a 1D bar with randomly perturbed material properties to accelerate spatial convergence. Both explicit and implicit dynamic implementations are compared. Favorable comparison to several analytical, numerical, and experimental references serves to validate the modeling approach.
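The staggered structure can be illustrated with a simplified, quasistatic 1D Python sketch; the toy material model, displacement-controlled loading, and Lipschitz lower-envelope sweep below are simplifications of the paper's formulation (which treats dynamics and calibrates the potential by equivalence with a cohesive zone model), so the sketch is illustrative only.

# Simplified quasistatic 1D sketch of a staggered (alternate-minimization) solve
# with a Lipschitz constraint on the damage field. Toy model, not the paper's.
import numpy as np

n_el, E, Yc, lip = 100, 1.0, 1e-3, 5.0        # elements, modulus, damage threshold energy, Lipschitz bound
h = 1.0 / n_el                                # element size (bar of unit length)
rng = np.random.default_rng(0)
Yc_e = Yc * (1.0 + 0.05 * rng.uniform(size=n_el))   # randomly perturbed material properties
d = np.zeros(n_el)                            # element damage
u = np.zeros(n_el + 1)                        # nodal displacement, u(0) = 0

for u_end in np.linspace(0.0, 0.2, 40):       # prescribed end displacement (load stepping)
    for _ in range(20):                       # staggered iterations
        # 1) displacement solve with frozen damage (tridiagonal stiffness system)
        k = (1.0 - d) ** 2 * E / h
        K = np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1)
        rhs = np.zeros(n_el - 1)
        rhs[-1] = k[-1] * u_end
        u[1:-1] = np.linalg.solve(K, rhs)
        u[-1] = u_end
        # 2) local, irreversible damage update with frozen displacement
        eps = np.diff(u) / h
        d_loc = np.clip(1.0 - Yc_e / np.maximum(E * eps**2, 1e-30), 0.0, 0.999)
        d_loc = np.maximum(d_loc, d)
        # 3) enforce |d_i - d_j| <= lip * h via the Lipschitz lower envelope
        #    (two min-sweeps), a simplification of the Lip-field projection
        for i in range(1, n_el):
            d_loc[i] = min(d_loc[i], d_loc[i - 1] + lip * h)
        for i in range(n_el - 2, -1, -1):
            d_loc[i] = min(d_loc[i], d_loc[i + 1] + lip * h)
        d = d_loc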
Plasma Sources Science and Technology
The methyl radical plays a central role in plasma-assisted hydrocarbon chemistry but is challenging to detect due to its high reactivity and strongly pre-dissociative electronically excited states. We report the development of a photo-fragmentation laser-induced fluorescence (PF-LIF) diagnostic for quantitative 2D imaging of methyl profiles in a plasma. This technique provides temporally and spatially resolved measurements of local methyl distributions, including in near-surface regions that are important for plasma-surface interactions such as plasma-assisted catalysis. The technique relies on photo-dissociation of methyl by the fifth harmonic of a Nd:YAG laser at 212.8 nm to produce CH fragments. These photofragments are then detected with LIF imaging by exciting a transition in the B-X(0, 0) band of CH with a second laser at 390 nm. Fluorescence from the overlapping A-X(0, 0), A-X(1, 1), and B-X(0, 1) bands of CH is detected near 430 nm with the A-state populated by collisional B-A electronic energy transfer. This non-resonant detection scheme enables interrogation close to a surface. The PF-LIF diagnostic is calibrated by producing a known amount of methyl through photo-dissociation of acetone vapor in a calibration gas mixture. We demonstrate PF-LIF imaging of methyl production in methane-containing nanosecond pulsed plasmas impinging on dielectric surfaces. Absolute calibration of the diagnostic is demonstrated in a diffuse, plane-to-plane discharge. Measured profiles show a relatively uniform distribution of up to 30 ppm of methyl. Relative methyl measurements in a filamentary plane-to-plane discharge and a plasma jet reveal highly localized intense production of methyl. The utility of the PF-LIF technique is further demonstrated by combining methyl measurements with formaldehyde LIF imaging to capture spatiotemporal correlations between methyl and formaldehyde, which is an important intermediate species in plasma-assisted oxidative coupling of methane.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The composition and phase fractions of the intergranular phases of 94ND10 ceramic are determined, and the intergranular material is fabricated ex situ. The fraction of each phase is 85.96 vol% Al2O3 bulk phase, 9.46 vol% Mg-rich intergranular phase, 4.36 vol% Ca/Si-rich intergranular phase, and 0.22 vol% voids. The Ca/Si-rich phase consists of 0.628 at% Mg, 12.59 at% Si, 10.24 at% Ca, 17.23 at% Al, and balance O. The Mg-rich phase consists of 14.17 at% Mg, 0.066 at% Si, 0.047 at% Ca, 28.69 at% Al, and balance O. XRD of the ex situ intergranular material, made from mixed oxides according to the phase and element fractions above, yielded 92 vol% MgAl2O4 phase and 8 vol% CaAl2Si2O8 phase. The formation of the MgAl2O4 phase is consistent with prior XRD of 94ND10, while the CaAl2Si2O8 phase may exist in 94ND10 at a concentration not readily detected with XRD. The MgAl2O4 and CaAl2Si2O8 phases identified by XRD are expected to account for the elemental compositions of the Mg-rich and Ca/Si-rich phases above through cation substitutions (e.g., some Mg substituted for by Ca in the Mg-rich phase) and impurity phases not detectable with XRD.
Abstract not provided.
Abstract not provided.
This report summarizes the needs, challenges, and opportunities associated with carbon-free energy and energy storage for manufacturing and industrial decarbonization. Energy needs and challenges for different manufacturing and industrial sectors (e.g., cement/steel production, chemicals, materials synthesis) are identified. Key issues for industry include the need for large, continuous on-site capacity (tens to hundreds of megawatts), compatibility with existing infrastructure, cost, and safety. Energy storage technologies that can potentially address these needs, which include electrochemical, thermal, and chemical energy storage, are presented along with key challenges, gaps, and integration issues. Analysis tools to value energy storage technologies in the context of manufacturing and industrial decarbonization are also presented. Material is drawn from the Energy Storage for Manufacturing and Industrial Decarbonization (Energy StorM) Workshop, held February 8-9, 2022. The objective of the workshop was to identify research opportunities and needs for the U.S. Department of Energy as part of its Energy Storage Grand Challenge program.
Abstract not provided.
A collection of x-ray computed tomography scans of specimens from the Museum of Southwestern Biology.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Parallel Computing
The parallel strong-scaling of iterative methods is often determined by the number of global reductions at each iteration. Low-synch Gram–Schmidt algorithms are applied here to the Arnoldi algorithm to reduce the number of global reductions and therefore to improve the parallel strong-scaling of iterative solvers for nonsymmetric matrices such as the GMRES and Krylov–Schur iterative methods. In the Arnoldi context, the QR factorization is “left-looking” and processes one column at a time. Among the methods for generating an orthogonal basis for the Arnoldi algorithm, the classical Gram–Schmidt algorithm with reorthogonalization (CGS2) requires three global reductions per iteration. A new variant of CGS2 that requires only one reduction per iteration is presented and applied to the Arnoldi algorithm. Delayed CGS2 (DCGS2) employs the minimum number of global reductions per iteration (one) for a one-column-at-a-time algorithm. The main idea behind the new algorithm is to group global reductions by rearranging the order of operations. DCGS2 must be carefully integrated into an Arnoldi expansion or a GMRES solver. Numerical stability experiments assess robustness for Krylov–Schur eigenvalue computations. Performance experiments on the ORNL Summit supercomputer then establish the superiority of DCGS2 over CGS2.
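For orientation, a serial NumPy sketch of one CGS2 column step is given below. In a distributed-memory setting, each product with the transpose of the basis and the final norm require a global reduction (three per column), which is the cost DCGS2 reduces to one by regrouping operations; the sketch shows CGS2 only, not the DCGS2 reordering.

# Serial NumPy sketch of one CGS2 column step (classical Gram-Schmidt with
# reorthogonalization). Each Q.T @ w and the norm would be a global reduction.
import numpy as np

def cgs2_step(Q, w):
    """Orthogonalize w against the orthonormal columns of Q (two passes), then normalize."""
    h1 = Q.T @ w               # global reduction 1: projection coefficients
    w = w - Q @ h1
    h2 = Q.T @ w               # global reduction 2: reorthogonalization correction
    w = w - Q @ h2
    beta = np.linalg.norm(w)   # global reduction 3: normalization
    return w / beta, h1 + h2, beta

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(1000, 20)))
q_new, h, beta = cgs2_step(Q, rng.normal(size=1000))
print("loss of orthogonality:", np.abs(Q.T @ q_new).max())   # near machine precision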
Abstract not provided.
Abstract not provided.
Geomechanics for Energy and the Environment
Accurate modeling of subsurface flow and transport processes is vital as the prevalence of subsurface activities such as carbon sequestration, geothermal recovery, and nuclear waste disposal increases. Computational modeling of these problems leverages poroelasticity theory, which describes coupled fluid flow and mechanical deformation. Although fully coupled monolithic schemes are accurate for coupled problems, they can demand significant computational resources for large problems. In this work, a fixed stress scheme is implemented in the Sandia Sierra Multiphysics toolkit. Two implementation methods, along with the fully coupled method, are verified with the one-dimensional (1D) Terzaghi, 2D Mandel, and 3D Cryer sphere benchmark problems. The impact of a range of material parameters and convergence tolerances on numerical accuracy and efficiency was evaluated. Overall, the fixed stress schemes achieved acceptable numerical accuracy and efficiency compared to the fully coupled scheme. However, the accuracy of the fixed stress scheme tends to decrease for low-permeability cases, requiring a finer tolerance to achieve the desired numerical accuracy. For the fully coupled scheme, high numerical accuracy was observed in most cases, except for a low-permeability case in which an order of magnitude finer tolerance was required for accurate results. Finally, a two-layer Terzaghi problem and an injection-production well system were used to demonstrate the applicability of the findings from the benchmark problems to more realistic conditions over a range of permeability. Simulation results suggest that the fixed stress scheme provides accurate solutions for all cases considered with proper adjustment of the tolerance. This work demonstrates the robustness of the fixed stress scheme for coupled poroelastic problems, while a cautious selection of numerical tolerance may be required under certain conditions with low-permeability materials.
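A schematic Python sketch of the fixed-stress sequential structure on a generic two-field linear system is given below. The matrices are random placeholders rather than the Sierra discretization, and the stabilization term here is chosen from a simple spectral bound purely so the toy iteration converges; in the physical fixed-stress scheme that term follows from the fixed volumetric stress assumption (on the order of the Biot coefficient squared over the drained bulk modulus per cell).

# Schematic sketch of a fixed-stress-style sequential split on a generic
# discretized two-field (mechanics u, pressure p) linear system.
import numpy as np

rng = np.random.default_rng(0)
n_u, n_p = 12, 8
A = np.eye(n_u) * 4.0 + 0.1 * rng.normal(size=(n_u, n_u))
A = 0.5 * (A + A.T)                               # mechanics block (SPD placeholder)
C = np.eye(n_p) * 2.0 + 0.1 * rng.normal(size=(n_p, n_p))
C = 0.5 * (C + C.T)                               # flow block (SPD placeholder)
B = 0.5 * rng.normal(size=(n_u, n_p))             # coupling block (placeholder)
f, g = rng.normal(size=n_u), rng.normal(size=n_p)

# Monolithic reference solution of:  A u - B p = f,   B^T u + C p = g
ref = np.linalg.solve(np.block([[A, -B], [B.T, C]]), np.concatenate([f, g]))

S = B.T @ np.linalg.solve(A, B)                   # pressure Schur complement
L = np.eye(n_p) * np.linalg.eigvalsh(S).max()     # stabilization (spectral bound, toy choice)

u, p = np.zeros(n_u), np.zeros(n_p)
for it in range(200):
    p = np.linalg.solve(C + L, g - B.T @ u + L @ p)   # flow solve with fixed-stress correction
    u = np.linalg.solve(A, f + B @ p)                 # mechanics solve with updated pressure
    err = np.linalg.norm(np.concatenate([u, p]) - ref)
    if err < 1e-10:
        break
print(f"fixed-stress iterations: {it + 1}, error vs. monolithic: {err:.2e}")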
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Sandia National Laboratories (SNL) is designing and developing an Artificial Intelligence (AI)-enabled smart digital assistant (SDA), Inspecta (International Nuclear Safeguards Personal Examination and Containment Tracking Assistant). The goal is to provide inspectors an in-field digital assistant that can perform tasks identified as tedious, challenging, or prone to human error. During 2021, we defined the requirements for Inspecta based on reviews of International Atomic Energy Agency (IAEA) publications and interviews with former IAEA inspectors. We then mapped the requirements to current commercial or open-source technical capabilities to provide a development path for an initial Inspecta prototype while highlighting potential research and development tasks. We selected a high-impact inspection task that could be performed by an early Inspecta prototype and are developing the initial architecture, including the hardware platform. This paper describes the methodology for selecting an initial task scenario, the first set of Inspecta skills needed to assist with that task scenario, and the design and development of Inspecta’s architecture and platform.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Brittle material failure in high-consequence systems can appear random and unpredictable at subcritical stresses. Gaps in our understanding of how structural flaws and environmental factors (humidity, temperature) impact fracture propagation need to be addressed to circumvent this issue. A combined experimental and computational approach composed of molecular dynamics (MD) simulations, numerical modeling, and atomic force microscopy (AFM) has been undertaken to identify mechanisms of slow crack growth in silicate glasses. Crack growth as slow as 10^-13 m/s, including some stepwise crack growth, was characterized with AFM. MD simulations identified the critical role of inelastic relaxation in crack propagation, including the evolution of the structure during relaxation. A numerical model for the existence of a stress intensity threshold, a stress intensity below which a fracture will not propagate, was developed. This transferable model for predicting slow crack growth is being incorporated into mission-based programs.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.