Machine Learning Applications for Emergency Responders: A Rollout Strategy
As the U.S. electrifies the transportation sector, cyberattacks targeting vehicle charging could impact several critical infrastructure sectors including power systems, manufacturing, medical services, and agriculture. This is a growing area of concern as charging stations increase power delivery capabilities and must communicate with many parties (grid operators, vehicles, OEM vendors, charging network operators, etc.) to authorize charging, sequence the charging process, and manage load. The research challenges are numerous and complicated because the interests of many end users, stakeholders, and software and equipment vendors are involved. Poorly implemented electric vehicle supply equipment (EVSE), electric vehicle (EV), or grid operator communication systems could pose a significant risk to EV adoption because the political, social, and financial impact of cyberattacks, or the public perception of such attacks, would ripple across the industry and produce lasting effects. Unfortunately, there is currently no comprehensive EVSE cybersecurity approach, and the EV/EVSE industry has adopted only limited best practices. Industry understanding of the attack surface, interconnected assets, and unsecured interfaces remains incomplete. Comprehensive cybersecurity recommendations founded on sound research are necessary to secure EV charging infrastructure. This project provided the power, security, and automotive industries with a strong technical basis for securing this infrastructure by developing threat models, determining technology gaps, and identifying or developing effective countermeasures. Specifically, the team created a cybersecurity threat model and performed a technical risk assessment of EVSE assets across multiple manufacturers and vendors, so that automotive, charging, and utility stakeholders can better protect customers, vehicles, and power systems in the face of new cyber threats.
International Journal of Engine Research
Ducted fuel injection (DFI) is a novel combustion strategy that has been shown to significantly attenuate soot formation in diesel engines. While previous studies have used optical diagnostics and optical filter smoke number methods to show that DFI reduces in-cylinder soot formation and engine-out soot emissions, respectively, this is the first study to measure solid particle number (PN) emissions in addition to particle mass (PM). Furthermore, this study quantitatively evaluates the use of transient particle instruments for measuring particles from skip-fired operation in an optical single cylinder research engine (SCRE). Engine-out PN was measured using an engine exhaust particle sizer following a catalytic stripper, and PM was measured using a photoacoustic analyzer. The study improves on earlier preliminary emissions studies by clearly showing that DFI reduces overall PM by 76%–79% and PN for particles larger than 23 nm by 77% relative to conventional diesel combustion at a 1200-rpm, 13.3-bar gross indicated mean effective pressure operating condition. The degree of engine-out PM reduction with DFI was similar across both particulate measurement instruments used in the work. Through the use of bimodal distribution fitting, DFI was also shown to reduce the geometric mean diameter of accumulation mode particles by 26%, similar to the effects of increased injection pressure in conventional diesel combustion systems. This work clearly shows the significant solid particulate matter reductions enabled by DFI while also demonstrating that engine-out PN can be accurately measured from an optical SCRE operating in a skip-fired mode. Based on these results, it is believed that DFI has the potential to enable fuel savings when implemented in multi-cylinder engines, both by lowering the required frequency of active diesel particulate filter regeneration, and by reducing the backpressure imposed by exhaust filtration systems.
Sandia National Laboratories has tested and evaluated an updated version of the MB3a infrasound sensor, designed by CEA and manufactured by SeismoWave. The purpose of this infrasound sensor evaluation is to measure performance characteristics in such areas as power consumption, sensitivity, full scale, self-noise, dynamic range, response, passband, sensitivity variation due to changes in barometric pressure and temperature, and sensitivity to acceleration. The MB3a infrasound sensors are being evaluated for use in the International Monitoring System (IMS) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).
Physical Review B
Magnetic, specific heat, and structural properties of the equiatomic Cantor alloy system are reported for temperatures between 5 and 300 K, and up to fields of 70 kOe. Magnetization measurements performed on as-cast, annealed, and cold-worked samples reveal a strong processing history dependence and that high-temperature annealing after cold working does not restore the alloy to a "pristine" state. Measurements on known precipitates show that the two transitions, detected at 43 and 85 K, are intrinsic to the Cantor alloy and not the result of an impurity phase. Experimental and ab initio density functional theory computational results suggest that these transitions are a weak ferrimagnetic transition and a spin-glass-like transition, respectively, and magnetic and specific heat measurements provide evidence of significant Stoner enhancement and electron-electron interactions within the material.
Geoscientific Model Development
Runoff is a critical component of the terrestrial water cycle, and Earth system models (ESMs) are essential tools to study its spatiotemporal variability. Runoff schemes in ESMs typically include many parameters, so model calibration is necessary to improve the accuracy of simulated runoff. However, runoff calibration at a global scale is challenging because of the high computational cost and the lack of reliable observational datasets. In this study, we calibrated 11 runoff-relevant parameters in the Energy Exascale Earth System Model (E3SM) Land Model (ELM) using a surrogate-assisted Bayesian framework. First, the polynomial chaos expansion machinery with Bayesian compressed sensing is used to construct computationally inexpensive surrogate models for ELM-simulated runoff at 0.5° × 0.5° resolution for 1991-2010. The error metric between the ELM simulations and the benchmark data is selected to construct the surrogates, which facilitates efficient calibration and avoids the more conventional, but challenging, construction of high-dimensional surrogates for the ELM-simulated runoff. Second, a Sobol' index sensitivity analysis is performed using the surrogate models to identify the most sensitive parameters, and our results show that, in most regions, ELM-simulated runoff is strongly sensitive to 3 of the 11 uncertain parameters. Third, a Bayesian method is used to infer the optimal values of the most sensitive parameters using an observation-based global runoff dataset as the benchmark. Our results show that model performance is significantly improved with the inferred parameter values. Although the parametric uncertainty of simulated runoff is reduced after the parameter inference, it remains comparable to the multimodel ensemble uncertainty represented by the global hydrological models in ISIMIP2a.
Additionally, the annual global runoff trend during the simulation period is not well constrained by the inferred parameter values, suggesting the importance of including parametric uncertainty in future runoff projections.
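The surrogate-based sensitivity step can be illustrated with a toy sketch. The function below is a hypothetical stand-in for an ELM surrogate, built so that only three of eleven normalized parameters matter appreciably; first-order Sobol' indices are then estimated with a pick-freeze Monte Carlo scheme, not the paper's exact machinery:

```python
import numpy as np

# Hypothetical stand-in for the paper's surrogate: a cheap function of 11
# normalized parameters in which only the first three matter appreciably.
def surrogate(x):
    # x: (n, 11) array of parameters scaled to [0, 1]
    return 2.0 * x[:, 0] + x[:, 1] ** 2 + 0.1 * x[:, 2] + 0.01 * x[:, 3:].sum(axis=1)

def first_order_sobol(f, dim, n=100_000, seed=0):
    """Estimate first-order Sobol' indices via a pick-freeze Monte Carlo scheme."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # swap in column i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var  # Saltelli-style estimator
    return S

S = first_order_sobol(surrogate, dim=11)
print(np.argsort(S)[::-1][:3])  # the most sensitive parameters come first
```

Because the surrogate is cheap, the many model evaluations that the pick-freeze scheme requires cost almost nothing, which is the point of constructing surrogates before a global sensitivity analysis.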
The binders, plasticizers, and dispersants in a polyvinylpyrrolidone/polyethylene glycol/glycerin binder system for PZT were evaluated. Kollidon VA 64 was investigated as a possible alternative binder to Kollidon 25 in a PZT powder system. The target amount of PEG300 in a Kollidon VA 64 system was predicted to be 15 to 30 wt.% PEG300 based on Tg analysis by DSC. The compaction properties (slide coefficient, cohesiveness, green strength, etc.) were analyzed for Kollidon VA 64 – x PEG300 – glycerin systems. The properties in the range of x = 0 to 20 for systems without glycerin and x = 5 to 20 for systems with glycerin all exceeded the performance of the baseline Kollidon 25 system, of which VA 64 – 10 wt.% PEG300 – 5 wt.% glycerin with adsorbed moisture was the most promising composition due to a compact cohesiveness of 0.84 at 40 kpsi compared to a baseline of 0.44. The effect of dispersants on the compaction properties of a Kollidon 25 – PEG300 binder system was also analyzed, and the compaction properties were compared to those of an Aquazol 200 – PEG6000 binder system. The powders with dispersant exhibited comparable performance to the baseline, suggesting good compatibility. The compacts produced with the Aquazol 200 – PEG6000 binder exhibited decreased performance when compared to the baseline.
A collection of x-ray computed tomography scans of specimens from the Museum of Southwestern Biology.
Reliability Engineering and System Safety
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual link failure after some amount of elapsed time. The descriptor loss of assured safety (LOAS) is used because failure of the WL system places the entire system in an inoperable configuration while failure of the SL system before failure of the WL system, although undesirable, does not necessarily result in an unintended operation of the entire system. Thus, safety is “assured” by failure of the WL system before failure of the SL system. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation, approximation and illustration of PLOAS with (a) constant delay times, (b) aleatory uncertainty in delay times, and (c) delay times defined by functions of link properties at occurrence times for link failure precursors, and (iii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
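For the simplest case above, constant delay times, PLOAS for a single WL/SL pair can be estimated by Monte Carlo sampling. The precursor-time distributions and delays below are hypothetical, chosen only to make the sketch concrete:

```python
import numpy as np

# Illustrative Monte Carlo estimate of PLOAS for one weak link (WL) and one
# strong link (SL). Precursor occurrence times are drawn from assumed
# (hypothetical) normal distributions; actual link failure follows the
# precursor after a constant delay, mirroring case (a) in the text.
rng = np.random.default_rng(1)
n = 1_000_000

wl_precursor = rng.normal(100.0, 10.0, n)   # assumed WL precursor times
sl_precursor = rng.normal(130.0, 15.0, n)   # assumed SL precursor times
wl_delay, sl_delay = 5.0, 2.0               # constant delays to actual failure

wl_fail = wl_precursor + wl_delay
sl_fail = sl_precursor + sl_delay

# Loss of assured safety: the SL system fails before the WL system.
ploas = np.mean(sl_fail < wl_fail)
print(f"PLOAS ≈ {ploas:.4f}")
```

Such a sampling estimate is also a natural verification target for the formal CDF-based representations of PLOAS described in the abstract: both approaches should converge to the same probability for a given set of distributions.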
Journal of Physical Chemistry C
Achieving practical, high-energy-density calcium batteries requires controlling the stability of Ca2+ electrolytes during calcium metal cycling. Because of the highly reactive nature of calcium, most typical electrolyte constituents are unstable, leading to electrode passivation and low Coulombic efficiency. Among various commercially available salts, calcium bis(trifluoromethylsulfonyl)imide (Ca(TFSI)2) is attractive because of its oxidative stability and high solubility in a variety of solvents. However, this salt does not allow for calcium metal plating, and it has been proposed that TFSI− instability induced by Ca2+ coordination is to blame. In this work, we test the ability of strongly coordinating Ca2+ cosalts such as halides and borohydrides to displace TFSI− from the first coordination shell of Ca2+ and thereby stabilize TFSI-based electrolytes to enable calcium plating. Through spectroscopic analysis, we find that the effectiveness of these cosalts at displacing the TFSI− anion is dependent on the solvent's coordination strength toward Ca2+. Surprisingly, electrochemical calcium deposition behavior is not correlated to the population of bound or free TFSI−. Instead, the nature of the coordination interaction between Ca2+ and the cosalt anion is more important for determining stability. Our findings indicate that TFSI− anions are inherently unstable during calcium deposition even in the nominally free state. Therefore, strategies aimed at eliminating the interactions of these anions with the electrode surface via interface/interphase design are required.
International Journal for Numerical Methods in Engineering
We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. With this goal we introduce “coupling” variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the surrogates of the components are combined to form an integrated-surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated-surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data, used to construct each component-surrogate, based on the contribution of those surrogates to the error of the integrated-surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated-surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm is able to produce surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black-box.
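The coupling-variable idea can be sketched with two toy components; the greedy, multi-fidelity training-data allocation of the actual algorithm is omitted. Each component is surrogated independently, with the coupling variable given an assumed a priori range, and the component surrogates are then composed into the integrated surrogate:

```python
import numpy as np

# Two toy components: f1 produces the coupling variable y, f2 maps y to the
# system-level quantity of interest. Both functions are illustrative.
f1 = lambda x: np.sin(x)      # component 1: x -> y (coupling variable)
f2 = lambda y: np.exp(y)      # component 2: y -> system QoI

# Surrogate each component independently; degree-7 polynomial fits stand in
# for the stochastic collocation surrogates of the paper.
x_train = np.linspace(0.0, np.pi, 20)
p1 = np.polynomial.Polynomial.fit(x_train, f1(x_train), deg=7)

y_train = np.linspace(-1.0, 1.0, 20)   # a priori range assumed for y
p2 = np.polynomial.Polynomial.fit(y_train, f2(y_train), deg=7)

# Integrated surrogate: compose the component surrogates. No coupled solves
# of the original system are needed at prediction time.
integrated = lambda x: p2(p1(x))

x_test = np.linspace(0.0, np.pi, 7)
err = np.max(np.abs(integrated(x_test) - f2(f1(x_test))))
print(err)
```

The key design choice mirrored here is that each component surrogate is trained over its own input range, so the components never need to be run together during surrogate construction.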
Frontiers in Environmental Science
Climate change is an existential threat to the vast global permafrost domain. The diverse human cultures, ecological communities, and biogeochemical cycles of this tenth of the planet depend on the persistence of frozen conditions. The complexity, immensity, and remoteness of permafrost ecosystems make it difficult to grasp how quickly things are changing and what can be done about it. Here, we summarize terrestrial and marine changes in the permafrost domain with an eye toward global policy. While many questions remain, we know that continued fossil fuel burning is incompatible with the continued existence of the permafrost domain as we know it. If we fail to protect permafrost ecosystems, the consequences for human rights, biosphere integrity, and global climate will be severe. The policy implications are clear: the faster we reduce human emissions and draw down atmospheric CO2, the more of the permafrost domain we can save. Emissions reduction targets must be strengthened and accompanied by support for local peoples to protect intact ecological communities and natural carbon sinks within the permafrost domain. Some proposed geoengineering interventions such as solar shading, surface albedo modification, and vegetation manipulations are unproven and may exacerbate environmental injustice without providing lasting protection. Conversely, astounding advances in renewable energy have reopened viable pathways to halve human greenhouse gas emissions by 2030 and effectively stop them well before 2050. We call on leaders, corporations, researchers, and citizens everywhere to acknowledge the global importance of the permafrost domain and work towards climate restoration and empowerment of Indigenous and immigrant communities in these regions.
Data-Centric Engineering
Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems that are governed by complex multiscale processes for which some data are also available. In some instances, the objective is to discover part of the hidden physics from the available data, and PIML has been shown to be particularly effective for such problems for which conventional methods may fail. Unlike commercial machine learning where training of deep neural networks requires big data, in PIML big data are not available. Instead, we can train such networks from additional information obtained by employing the physical laws and evaluating them at random points in the space-time domain. Such PIML integrates multimodality and multifidelity data with mathematical models, and implements them using neural networks or graph networks. Here, we review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation. For more complex systems or systems of systems and unstructured data, graph neural networks (GNNs) present some distinct advantages, and here we review how physics-informed learning can be accomplished with GNNs based on graph exterior calculus to construct differential operators; we refer to these architectures as physics-informed graph networks (PIGNs). We present representative examples for both forward and inverse problems and discuss what advances are needed to scale up PINNs, PIGNs and more broadly GNNs for large-scale engineering problems.
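The core idea, training a model against the governing law at random collocation points rather than against data, can be shown with a toy analogue. Here a degree-6 polynomial stands in for the neural network and linear least squares replaces gradient descent; the ODE u' + u = 0 with u(0) = 1 (exact solution e^{-t}) is the assumed physics:

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0, 200)          # random collocation points in the domain
deg = 6

# "Network": u(t) = sum_k c_k t^k; columns of V hold t^k
V = np.vander(t, deg + 1, increasing=True)
dV = np.hstack([np.zeros((t.size, 1)),            # d/dt of the constant term
                V[:, :-1] * np.arange(1, deg + 1)])  # d/dt of t^k is k t^(k-1)

# Physics residual rows u'(t_i) + u(t_i) = 0, plus one condition row u(0) = 1
A = np.vstack([dV + V, np.eye(1, deg + 1)])
b = np.concatenate([np.zeros(t.size), [1.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u1 = np.polynomial.polynomial.polyval(1.0, c)
print(u1)  # close to exp(-1) ≈ 0.3679
```

In a genuine PINN, the polynomial is replaced by a deep network, the analytic derivative by automatic differentiation, and the least-squares solve by minimizing the same residual-plus-condition loss with a stochastic optimizer.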
ACS Applied Materials and Interfaces
A recently discovered, enhanced Ge diffusion mechanism along the oxidizing interface of Si/SiGe nanostructures has enabled the formation of single-crystal Si nanowires and quantum dots embedded in a defect-free, single-crystal SiGe matrix. Here, we report oxidation studies of Si/SiGe nanofins aimed at gaining a better understanding of this novel diffusion mechanism. A superlattice of alternating Si/Si0.7Ge0.3 layers was grown and patterned into fins. After oxidation of the fins, the rate of Ge diffusion down the Si/SiO2 interface was measured through the analysis of HAADF-STEM images. The activation energy for the diffusion of Ge down the sidewall was found to be 1.1 eV, which is less than one-quarter of the activation energy previously reported for Ge diffusion in bulk Si. Through a combination of experiments and DFT calculations, we propose that the redistribution of Ge occurs by diffusion along the Si/SiO2 interface followed by a reintroduction into substitutional positions in the crystalline Si.
A series of experiments will be performed to test the integral effects of molybdenum on the reactivity of a critical system. These experiments will use the 7uPCX assembly with the 1.55 cm triangular pitch grid plates. Molybdenum sleeves, consisting of 19.6-inch-long molybdenum tubes with a 0.5-inch nominal outside diameter and 0.031-inch nominal wall thickness, plus centering hardware, will be placed on some of the fuel rods in the array. The purpose of this analysis is to examine two configurations of the 7uPCX using the 1.55 cm triangular pitch grid plates in fully-reflected approach-to-critical experiments with the number of fuel rods in the array as the approach parameter. This document presents the results of the analysis that was done to allow completion of the 7uPCX Configuration Checklist from Appendix A of SPRF-AP-005 [SNL 2020] for the cores noted above. The checklists for these cores are shown in Appendix A.
The inverse methods team provides a set of tools for solving inverse problems in structural dynamics and thermal physics, and also sensor placement optimization via Optimal Experimental Design (OED). These methods are used for designing experiments, model calibration, and verification/validation analysis of weapons systems. This document provides a user’s guide to the input for the three apps that are supported for these methods. Details of input specifications, output options, and optimization parameters are included.
The SIERRA Low Mach Module: Fuego, henceforth referred to as Fuego, is the key element of the ASC fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Using MPMD coupling, Scefire and Nalu handle the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) and level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton’s method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic ℎ-adaptivity and dynamic load balancing are some of Aria’s more advanced capabilities.
Journal of Applied Physics
A method is developed to calculate the length into a sample to which a Frequency Domain Thermoreflectance (FDTR) measurement is sensitive. Sensing depth and sensing radius are defined as limiting cases for the spherically spreading FDTR measurement. A finite element model for FDTR measurements is developed in COMSOL multiphysics and used to calculate sensing depth and sensing radius for silicon and silicon dioxide samples for a variety of frequencies and laser spot sizes. The model is compared to experimental FDTR measurements. Design recommendations for sample thickness are made for experiments where semi-infinite sample depth is desirable. For measurements using a metal transducer layer, the recommended sample thickness is three thermal penetration depths, as calculated from the lowest measurement frequency.
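The thickness recommendation can be made concrete with the standard thermal penetration depth formula δ = sqrt(α / (π f)); the diffusivity and lowest measurement frequency below are illustrative values, not the paper's:

```python
import math

def thermal_penetration_depth(alpha, f):
    """δ = sqrt(α / (π f)) for thermal diffusivity α [m²/s] at frequency f [Hz]."""
    return math.sqrt(alpha / (math.pi * f))

# Silicon near room temperature (approximate literature value for α)
alpha_si = 8.8e-5   # m²/s
f_min = 1e4         # Hz; hypothetical lowest measurement frequency

delta = thermal_penetration_depth(alpha_si, f_min)
min_thickness = 3 * delta   # the recommendation: three penetration depths
print(f"penetration depth ≈ {delta * 1e6:.0f} µm, "
      f"recommended thickness ≥ {min_thickness * 1e6:.0f} µm")
```

Because δ scales as 1/sqrt(f), the lowest frequency in the measurement sweep sets the deepest-probing case and therefore governs the minimum sample thickness for semi-infinite behavior.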
The SNL Sierra Mechanics code suite is designed to enable simulation of complex multiphysics scenarios. The code suite is composed of several specialized applications which can operate either in standalone mode or coupled with each other. Arpeggio is a supported utility that enables loose coupling of the various Sierra Mechanics applications by providing access to Framework services that facilitate the coupling. More importantly Arpeggio orchestrates the execution of applications that participate in the coupling. This document describes the various components of Arpeggio and their operability. The intent of the document is to provide a fast path for analysts interested in coupled applications via simple examples of its usage.
Chemistry of Materials
The present study has used a variety of characterization techniques to determine the products and reaction pathways involved in the rechargeable Li-FeS2 system. We revisit both the initial lithiation and subsequent cycling of FeS2 employing an ionic liquid electrolyte to investigate the intermediate and final charge products formed under varying thermal conditions (room temperature to 100 °C). The detection of Li2S and hexagonal FeS as the intermediate phases in the initial lithiation and the electrochemical formation of greigite, Fe3S4, as a charge product in the rechargeable reaction differ significantly from previous reports. The conditions for Fe3S4 formation are shown to be dependent on both the temperature (∼60 °C) and the availability of sulfur to drive a FeS to Fe3S4 transformation. Upon further cycling, Fe3S4 transforms to a lower sulfur content iron sulfide phase, a process which coincides with the loss of sulfur based on the new reaction pathways established in this work. The connection between sulfur loss, capacity fade, and charge product composition highlights the critical need to retain sulfur in the active material upon cycling.
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite and the results of the test checked under mesh refinement against the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
HardwareX
As part of the development process, scaled testing of wave energy converter devices is necessary to prove a concept, study hydrodynamics, and validate control system approaches. Creating a low-cost, small, lightweight data acquisition system suitable for scaled testing is often a barrier to wave energy converter developers' ability to test such devices. This paper outlines an open-source solution to these issues, which can be customized based on specific needs. Furthermore, this will help developers with limited resources along a path toward commercialization.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
Proceedings of the Platform for Advanced Scientific Computing Conference, PASC 2022
The decomposition of higher-order joint cumulant tensors of spatio-temporal data sets is useful in analyzing multi-variate non-Gaussian statistics with a wide variety of applications (e.g. anomaly detection, independent component analysis, dimensionality reduction). Computing the cumulant tensor often requires computing the joint moment tensor of the input data first, which is very expensive using a naïve algorithm. The current state-of-the-art algorithm takes advantage of the symmetric nature of a moment tensor by dividing it into smaller cubic tensor blocks and only computing the blocks with unique values and thus reducing computation. We propose a refactoring of this algorithm by posing its computation as matrix operations, specifically Khatri-Rao products and standard matrix multiplications. An analysis of the computational and cache complexity indicates significant performance savings due to the refactoring. Implementations of our refactored algorithm in Julia show speedups up to 10x over the reference algorithm in single processor experiments. We describe multiple levels of hierarchical parallelism inherent in the refactored algorithm, and present an implementation using an advanced programming model that shows similar speedups in experiments run on a GPU.
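The refactoring idea, posing moment-tensor computation as Khatri-Rao products plus ordinary matrix multiplication, can be sketched for the third-order case. This is a small, dense illustration; the blocked, symmetry-exploiting version of the actual algorithm is omitted:

```python
import numpy as np

# Third-order joint moment tensor of a data matrix X (n samples × d variables)
# formed with a Khatri-Rao (column-wise Kronecker) product and one matrix
# multiplication, instead of an explicit triple loop.
rng = np.random.default_rng(0)
n, d = 500, 4
X = rng.standard_normal((n, d))

# Khatri-Rao product of X with itself: (n × d²); column (j, k) is X[:, j] * X[:, k]
KR = (X[:, :, None] * X[:, None, :]).reshape(n, d * d)

# Moment tensor M[i, j, k] = mean_t X[t, i] X[t, j] X[t, k] via one matmul
M = (X.T @ KR / n).reshape(d, d, d)

# Check against the naive formulation
M_naive = np.einsum('ti,tj,tk->ijk', X, X, X) / n
print(np.allclose(M, M_naive))  # → True
```

Expressing the computation as matrix products in this way lets it ride on highly optimized, cache-friendly BLAS kernels, which is the source of the speedups reported in the abstract.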
Power Spectrum Analysis (PSA) is a Sandia-developed, non-intrusive, electrical technique that captures distinct frequency-domain signatures of microelectronics devices using an innovative, unconventional biasing scheme (off-normal biasing). PSA can identify subtle differences in devices and is applicable in areas such as device screening, counterfeit identification, reliability assurance, and trust authentication. From October 2020 to April 2021, Sandia worked with entrepreneurs from a new start-up company, Chiplytics, to commercialize PSA technology through the NNSA-sponsored FedTech Program. In September 2021, Sandia received funding through the Covid-19 Technical Assistance Program (CTAP) to provide technical assistance to Chiplytics for commercialization. Under the CTAP Statement of Work, Sandia was tasked with providing technical assistance to Chiplytics in PSA pilot testing for the Naval Surface Warfare Center (NSWC) at Crane and other pilot participants. Sandia was also tasked with assisting Chiplytics in hardware development and evaluation of the Chiplytics prototype system.
HPDC 2022 - Proceedings of the 31st International Symposium on High-Performance Parallel and Distributed Computing
New and novel HPC platforms provide interesting challenges and opportunities. Analysis of these systems can provide a better understanding of both the specific platform being studied and large-scale systems in general. Arm is one such architecture that has been explored in HPC for several years; however, little is known about its viability for supporting large-scale production workloads in terms of system reliability. The Astra system at Sandia National Laboratories was the first public peta-FLOPS Arm-based system on the Top500 and has been successfully running production HPC applications for a couple of years. In this paper, we analyze memory failure data collected from Astra while the system was in production running unclassified applications. This analysis revealed several interesting contributions related to both the Arm platform and to HPC systems in general. First, we outline the number of components replaced due to reliability issues in standing up this first-of-its-kind, large-scale HPC system. We show the distribution differences between correctable DRAM faults and errors on Astra, showing that not properly accounting for faults can lead to erroneous conclusions. Additionally, we characterize DRAM faults on the system and show, contrary to existing work, that memory faults are uniformly distributed across CPU socket, DRAM column, bank, and rack region, but are not uniform across node, DIMM rank, DIMM slot on the motherboard, and system rack: some racks, ranks, and DIMM slots experience more faults than others. Similarly, we show the impact of temperature and power on DRAM correctable errors. Finally, we make a detailed comparison of the results presented here with the positional effects found in several previous large-scale reliability studies. The results of this analysis provide valuable guidance to organizations standing up first-in-class platforms in HPC, organizations using Arm in HPC, and the entire large-scale HPC community in general.
Journal of visualized experiments : JoVE
There is a need to understand materials exposed to overlapping extreme environments such as high temperature, radiation, or mechanical stress. When these stressors are combined, there may be synergistic effects that enable unique microstructural evolution mechanisms to activate. Understanding of these mechanisms is necessary for the input and refinement of predictive models and critical for engineering of next-generation materials. The basic physics and underlying mechanisms require advanced tools to be investigated. The in situ ion irradiation transmission electron microscope (I³TEM) is designed to explore these principles. To quantitatively probe the complex dynamic interactions in materials, careful preparation of samples and consideration of experimental design is required. Particular handling or preparation of samples can easily introduce damage or features that obfuscate the measurements. There is no one correct way to prepare a sample; however, many mistakes can be made. The most common errors and things to consider are highlighted within. The I³TEM has many adjustable variables and a large potential experimental space, so it is best to design experiments with a specific scientific question or questions in mind. Experiments have been performed on a large number of sample geometries and material classes, and with many irradiation conditions. The following are a subset of examples that demonstrate unique in situ capabilities of the I³TEM. Au nanoparticles prepared by drop casting have been used to investigate the effects of single ion strikes. Au thin films have been used in studies on the effects of multibeam irradiation on microstructure evolution. Zr films have been exposed to irradiation and mechanical tension to examine creep. Ag nanopillars were subjected to simultaneous high temperature, mechanical compression, and ion irradiation to study irradiation-induced creep as well.
These results impact fields including: structural materials, nuclear energy, energy storage, catalysis, and microelectronics in space environments.
Physical Review Letters
The magneto-Rayleigh-Taylor instability (MRTI) plays an essential role in astrophysical systems and in magneto-inertial fusion, where it is known to be an important degradation mechanism of confinement and target performance. In this Letter, we show for the first time experimental evidence of mode mixing and the onset of an inverse-cascade process resulting from the nonlinear coupling of two discrete preseeded axial modes (400- and 550-μm wavelengths) on an Al liner that is magnetically imploded using the 20-MA, 100-ns rise-time Z Machine at Sandia National Laboratories. Four radiographs captured the temporal evolution of the MRTI. We introduce a novel unfold technique to analyze the experimental radiographs and compare the results to simulations and to a weakly nonlinear model. We find good quantitative agreement with simulations using the radiation magnetohydrodynamics code hydra. Spectral analysis of the MRTI time evolution obtained from the simulations shows evidence of harmonic generation, mode coupling, and the onset of an inverse-cascade process. The experiments provide a benchmark for future work on the MRTI and motivate the development of new analytical theories to better understand this instability.
Physical Review. B
The E3 transition in irradiated GaAs observed in deep level transient spectroscopy (DLTS) was recently discovered in Laplace-DLTS to encompass three distinct components. The component designated E3c was found to be metastable, reversibly bleached under minority carrier (hole) injection, with an introduction rate dependent upon Si doping density. It is shown through first-principles modeling that E3c must be the intimate Si-vacancy pair, best described as a Si atom sitting in a divacancy (SiVV). The bleached metastable state is enabled by a double site-shifting mechanism: upon recharging, the defect undergoes a second site shift rather than returning to its original E3c-active configuration by reversing the first site shift. Identification of this defect offers insights into the short-time annealing kinetics in irradiated GaAs.
Journal of Physical Chemistry B
Artificial neural networks (ANNs) were developed to accurately predict the self-diffusion constants for individual components in binary fluid mixtures. The ANNs were tested on an experimental database of 4328 self-diffusion constants from 131 mixtures containing 75 unique compounds. The presence of strong hydrogen bonding molecules may lead to clustering or dimerization resulting in non-linear diffusive behavior. To address this, self- and binary association energies were calculated for each molecule and mixture to provide information on intermolecular interaction strength and were used as input features to the ANN. An accurate, generalized ANN model was developed with an overall average absolute deviation of 4.1%. Forward input feature selection reveals the importance of critical properties and self-association energies along with other fluid properties. Additional ANNs were developed with subsets of the full input feature set to further investigate the impact of various properties on model performance. The results from two specific mixtures are discussed in additional detail: one providing an example of strong hydrogen bonding and the other an example of extreme pressure changes, with the ANN models predicting self-diffusion well in both cases.
Microscopy and Microanalysis
A direct comparison between electron transparent transmission electron microscope (TEM) samples prepared with gallium (Ga) and xenon (Xe) focused ion beams (FIBs) is performed to determine if equivalent quality samples can be prepared with both ion species. We prepared samples using Ga FIB and Xe plasma focused ion beam (PFIB) while altering a variety of different deposition and milling parameters. The samples' final thicknesses were evaluated using STEM-EELS t/λ data. Using the Ga FIB sample as a standard, we compared the Xe PFIB samples to the standard and to each other. We show that although the Xe PFIB sample preparation technique is quite different from the Ga FIB technique, it is possible to produce high-quality, large area TEM samples with Xe PFIB. We also describe best practices for a Xe PFIB TEM sample preparation workflow to enable consistent success for any thoughtful FIB operator. For Xe PFIB, we show that a decision must be made between the ultimate sample thickness and the size of the electron transparent region.
ACM International Conference Proceeding Series
In this work we present the concept of 'clipping', scheduling receive events for wireless transmissions only on receivers within some distance of the transmitter. Combined with spatial indexing, this technique enables faster simulation of large-scale wireless networks containing tens of thousands or even hundreds of thousands of wireless nodes. We detail our additions and changes to ns-3 to implement this feature, demonstrate how it yields a 2x speedup for a complex 5G scenario with minimal impact on simulation fidelity, and show how under special circumstances a speedup of over 40x is achievable while producing identical results.
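The clipping idea in this abstract, finding only the receivers within a cutoff distance of the transmitter via a spatial index rather than scheduling events on every node, can be sketched outside ns-3 as follows. This is an illustrative stand-in with a uniform-grid index; the function and parameter names are ours, not ns-3 API.

```python
import math
from collections import defaultdict

def build_grid_index(positions, cell_size):
    """Bucket node ids into uniform grid cells keyed by integer coordinates."""
    grid = defaultdict(list)
    for node_id, (x, y) in positions.items():
        grid[(int(x // cell_size), int(y // cell_size))].append(node_id)
    return grid

def receivers_within(tx_id, positions, grid, cell_size, cutoff):
    """Return node ids within `cutoff` of the transmitter, checking only
    nearby grid cells instead of every node in the simulation."""
    tx, ty = positions[tx_id]
    cx, cy = int(tx // cell_size), int(ty // cell_size)
    reach = int(math.ceil(cutoff / cell_size))  # cells that could contain hits
    hits = []
    for gx in range(cx - reach, cx + reach + 1):
        for gy in range(cy - reach, cy + reach + 1):
            for nid in grid.get((gx, gy), ()):
                if nid == tx_id:
                    continue
                nx, ny = positions[nid]
                if math.hypot(nx - tx, ny - ty) <= cutoff:
                    hits.append(nid)
    return hits
```

Only the nodes returned here would have receive events scheduled; distant nodes, whose received power would be negligible, are never touched, which is the source of the speedup.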
This is an addendum to the Sierra/SolidMechanics 5.8 User’s Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State’s International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra/SolidMechanics 5.8 User’s Guide should be referenced for most general descriptions of code capability and use.
Construction and Building Materials
Civil infrastructure is made primarily of concrete structures or components, and therefore understanding the durability and fracture behavior of concrete is of utmost importance. Concrete contains an interfacial transition zone (ITZ), a porous region surrounding the aggregates, that is often considered to be the weakest region in the concrete. The ITZ is poorly characterized, and property estimates for the ITZ differ considerably. In this simulation study, representative concrete mesostructures are produced by packing coarse aggregates with realistic geometries into a mortar matrix. A meshless numerical method, peridynamics, is utilized to simulate the mechanical response including fracture under uniaxial compression and tension. The sensitivity of the stiffness and fracture toughness of the samples to the ITZ properties is computed, showing strong relationships between the ITZ properties and the effective modulus and effective yield strength of the concrete. These results provide insight into the influence of the poorly characterized ITZ on the stiffness and strength of concrete. This work showcases the applicability of peridynamics to concrete systems, matching experimental strength and modulus values. Additionally, relationships between the ITZ's mechanical properties and the overall concrete strength and stiffness are presented to enable future design decisions.
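For context, the meshless peridynamic formulation referenced in this abstract replaces the local divergence of stress with an integral over a finite neighborhood (horizon) of each material point, which is why cracks need no special treatment. In its standard bond-based form (a textbook statement, not quoted from the paper), the equation of motion is:

```latex
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;\mathbf{x}'-\mathbf{x}\big)\,dV_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t)
```

where H_x is the horizon of point x, f is the pairwise bond force density, and b is the body force. Because no spatial derivatives appear, the formulation remains valid across the discontinuities created by fracture at the ITZ.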
Optics Express
X-ray tomography is capable of imaging the interior of objects in three dimensions non-invasively, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms, which require a prior distribution known in advance, deep reconstruction networks can learn a prior by sampling the training distribution. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both known physics and the learned prior. Compared with methods with less physics assisting in training, PGAN can reduce the photon requirement with limited projection angles to achieve a given error rate. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
Materials Advances
Recoil nuclei produce high ionization and excitation densities in organic scintillators, leading to reduced light yield via ionization quenching. To improve understanding of the relationship between organic scintillator specific luminescence and the characteristics of the recoil particle, this work evaluates proton and carbon light yield data using ionization quenching models over an energy range of tens of keV to several MeV for protons and 1-5 MeV for carbon ions. Previously measured proton and carbon light yield data were examined for a variety of commercial and novel organic scintillating media: EJ-309, a liquid with pulse shape discrimination (PSD) properties; EJ-204, a fast plastic; EJ-276, a PSD-capable plastic; and a custom organic glass scintillator developed by Sandia National Laboratories. The canonical model of Birks did not adequately describe the ionization quenching behavior. Models proposed by Yoshida et al. and Voltz et al. provided a reasonable description of the proton light yield of a variety of organic scintillators over a broad energy range, but additional work is needed to extend the models to carbon ions. The impact of stopping power data was also investigated by comparing model predictions using SRIM and PSTAR/MSTAR libraries, and the results show a significant discrepancy for carbon ions. This work enhances understanding of ionization quenching and facilitates the accurate modeling of scintillator-based neutron detection systems relevant for medical physics, nuclear security and nonproliferation, and basic science studies.
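The canonical Birks model mentioned in this abstract relates specific luminescence to stopping power through a single quenching parameter kB; its standard form (quoted here for reference, not from the paper) is:

```latex
\frac{dL}{dx} = \frac{S\,\dfrac{dE}{dx}}{1 + kB\,\dfrac{dE}{dx}}
```

where dL/dx is the light yield per unit path length, S is the scintillation efficiency, and dE/dx is the stopping power. At the very high ionization densities produced by carbon recoils this single-parameter saturation is known to be insufficient, which motivates the extended Yoshida and Voltz models evaluated in the work.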
This report updates the high-level test plan for evaluating surface deposition on three commercial 32PTH2 spent nuclear fuel (SNF) canisters inside NUTECH Horizontal Modular Storage (NUHOMS) Advanced Horizontal Storage Modules (AHSMs) from Orano (formerly Transnuclear Inc.) and provides a description of the surface characterization activities that have been conducted to date. The details contained in this report represent the best designs and approaches explored for testing as of this publication. Given the rapidly developing nature of this test program, some of these plans may change to accommodate new objectives or requirements. The goal of the testing is to collect highly defensible and detailed dust deposition measurements from the surface of dry storage canisters in a marine coastal environment to guide chloride-induced stress corrosion crack (CISCC) research. To facilitate surface sampling, the otherwise highly prototypic dry storage systems will not contain SNF but rather will be electrically heated to mimic the decay heat and thermal hydraulic environment. Test and heater design is supported by detailed computational fluid dynamics modeling. Instrumentation throughout the canister, storage module, and environment will provide extensive information about thermal-hydraulic behavior. Manual sampling over a comprehensive portion of the canister surface at regular time intervals will offer a high-fidelity quantification of the conditions experienced in a harsh yet realistic environment. Functional testing of the finalized heater assemblies and test apparatus is set to begin in December 2022. The proposed delivery of the canisters to the host test site is June/July 2023, which is well ahead of when the AHSM installations would be completed.
Microscopy and Microanalysis
The characterization of the three-dimensional arrangement of dislocations is important for many analyses in materials science. Dislocation tomography in transmission electron microscopy is conventionally accomplished through intensity-based reconstruction algorithms. Although such methods work successfully, a disadvantage is that they require many images to be collected over a large tilt range. Here, we present an alternative, semi-automated object-based approach that reduces the data collection requirements by drawing on the prior knowledge that dislocations are line objects. Our approach consists of three steps: (1) initial extraction of dislocation line objects from the individual frames, (2) alignment and matching of these objects across the frames in the tilt series, and (3) tomographic reconstruction to determine the full three-dimensional configuration of the dislocations. Drawing on innovations in graph theory, we employ a node-line segment representation for the dislocation lines and a novel arc-length mapping scheme to relate the dislocations to each other across the images in the tilt series. We demonstrate the method for a dataset collected from a dislocation network imaged by diffraction-contrast scanning transmission electron microscopy. Based on these results and a detailed uncertainty analysis for the algorithm, we discuss opportunities for optimizing data collection and further automating the method.
Applied Energy
Accelerating the deep decarbonization of the world's electric grids requires the coordination of complex energy systems and infrastructures across timescales from seconds to decades. In this paper, we present a new multiscale simulation framework that integrates process- and grid-centric modeling paradigms to better design, operate, and control integrated energy systems (IESs), which combine multiple technologies, in wholesale energy markets. Traditionally, IESs are analyzed with a process-centric paradigm such as levelized cost of electricity (LCOE) or annualized net revenue, ignoring important interactions with electricity markets. This framework explicitly models the complex interactions between an IES's bidding, scheduling, and control decisions and the energy market's clearing and settlement processes, while incorporating operational uncertainties. Through two case studies, we show the importance of understanding and quantifying complex resource-grid interactions. In case study 1, we demonstrate that optimized bidding from one resource shifts the profit distribution for all energy systems in the market. This result suggests new and more flexible IES technologies can disrupt the economics of all market participants, possibly leading to accelerated retirements of less flexible resources. Interestingly, the optimized bidding has little impact on grid-level aggregate statistics, such as total generation costs and renewable penetration rate. While aggregate modeling strategies may remain valid under some IES adoption scenarios for analysis focused on regional outcomes, direct comparisons of IES technologies at specific locations without considering these interactions may lead to misleading or incorrect conclusions. In case study 2, we consider the design and flexible operation of IESs that hybridize conventional generators with energy storage. 
Through a sensitivity analysis, we find that as the size of the storage system increases, the total number of start-ups for coal- and natural gas-based IESs reduced by 25% and 33.6%, and the total thermal generator ramping (i.e., mileage) reduced by 86.5% and 62.5%, respectively. This shows the primary benefit of storage may not be reduced operational costs (which do not change significantly) but fewer start-ups and less ramping, which may greatly simplify the design, operation, and control of carbon capture systems. The new modeling and optimization capabilities from this work enable the coupling of rigorous, dynamic process models with grid-level production cost models to quantitatively identify the nuanced interdependencies across these vast timescales that must be addressed to realize clean, safe, and secure energy production. Moreover, the proposed general multiscale simulation framework is applicable to all IES technologies and can be easily extended to consider other energy carriers (e.g., hydrogen, ammonia) and energy infrastructures (e.g., natural gas pipelines).
ACS Applied Materials and Interfaces
Conversion cathodes represent a viable route to improve rechargeable Li+ battery energy densities, but their poor electrochemical stability and power density have impeded their practical implementation. Here, we explore the impact that cell fabrication, electrolyte interaction, and current density have on the electrochemical performance of FeS2/Li cells by deconvoluting the contributions of the various conversion and intercalation reactions to the overall capacity. By varying the slurry composition and applied pressure, we determine that the capacity loss is primarily due to the large volume changes during (de)lithiation, leading to a degradation of the conductive matrix. Through the application of an external pressure, the loss is minimized by maintaining the conductive matrix. We further determine that polysulfide loss can be minimized by increasing the current density (>C/10), thus reducing the sulfur formation period. Analysis of the kinetics determines that the conversion reactions are rate-limiting, specifically the formation of metallic iron at rates above C/8. While focused on FeS2, our findings on the influence of pressure, electrolyte interaction, and kinetics are broadly applicable to other conversion cathode systems.
Computers and Mathematics with Applications
Nonlocal operators of fractional type are a popular modeling choice for applications that do not adhere to classical diffusive behavior; however, one major challenge in nonlocal simulations is the selection of model parameters. In this work we propose an optimization-based approach to parameter identification for fractional models with an optional truncation radius. We formulate the inference problem as an optimal control problem where the objective is to minimize the discrepancy between observed data and an approximate solution of the model, and the control variables are the fractional order and the truncation length. For the numerical solution of the minimization problem we propose a gradient-based approach, where we enhance the numerical performance by an approximation of the bilinear form of the state equation and its derivative with respect to the fractional order. Several numerical tests in one and two dimensions illustrate the theoretical results and show the robustness and applicability of our method.
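The optimization loop described in this abstract, minimizing the discrepancy between observed data and the model solution over the fractional order with a gradient-based method, can be illustrated with a deliberately simple toy. Here the forward model is a closed-form stand-in (our choice, not the paper's nonlocal solver) and the gradient of the misfit is taken by central differences rather than the paper's derivative approximation of the bilinear form.

```python
import numpy as np

def forward(s, x):
    # Toy stand-in for the model's solution operator u(s); the real problem
    # would solve a (truncated) fractional state equation here.
    return x ** s

def identify_order(x, data, s0=0.3, lr=0.01, steps=300, h=1e-6):
    """Fit the scalar 'order' s by gradient descent on the data misfit."""
    def misfit(s):
        # J(s) = 0.5 * || u(s) - data ||^2
        return 0.5 * np.sum((forward(s, x) - data) ** 2)
    s = s0
    for _ in range(steps):
        grad = (misfit(s + h) - misfit(s - h)) / (2 * h)  # central difference
        s -= lr * grad
    return s

x = np.linspace(0.1, 1.0, 50)
data = forward(0.7, x)           # synthetic observations with true order 0.7
s_hat = identify_order(x, data)
```

In the paper's setting each misfit evaluation requires a nonlocal PDE solve, so the gradient is instead obtained from an adjoint/derivative formulation of the state equation; the structure of the descent loop, however, is the same.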