Recent high-profile cyber attacks on critical infrastructure have raised awareness of the severe and widespread impacts that these attacks can have on everyday life. This awareness has spurred research into making industrial control systems and other cyber-physical systems more resilient. A plethora of cyber resilience metrics and frameworks have been proposed for cyber resilience assessments, but these approaches typically assume that the data required to populate the metrics are readily available, an assumption that is frequently not valid. This paper describes a new cyber experimentation platform that can be used to generate relevant data and to calculate resilience metrics that quantify how resilient specified industrial control systems are to specified threats. The platform and analysis process are demonstrated through a use case involving the control system for a pressurized water reactor.
Social systems are uniquely complex and difficult to study, but understanding them is vital to solving the world’s problems. The Ground Truth program developed a new way of testing the research methods that attempt to understand and leverage the Human Domain and its associated complexities. The program developed simulations of social systems as virtual world test beds. Not only were these simulations able to produce data on future states of the system under various circumstances and scenarios, but their causal ground truth was also explicitly known. Research teams studied these virtual worlds, facilitating deep validation of causal inference, prediction, and prescription methods. The Ground Truth program model provides a way to test and validate research methods to an extent previously impossible, and to study the intricacies and interactions of different components of research.
A major challenge in shape optimization is the coupling of finite element method (FEM) codes in a way that facilitates efficient computation of shape derivatives. This is particularly difficult with multiphysics problems involving legacy codes, where the costs of implementing and maintaining shape derivative capabilities are prohibitive. The volume and boundary methods are two approaches to computing shape derivatives. Each has a major drawback: the boundary method is less accurate, while the volume method is more invasive to the FEM code. We introduce the strip method, which computes shape derivatives on a strip adjacent to the boundary. The strip method makes code coupling simple. Like the boundary method, it queries the state and adjoint solutions at quadrature nodes, but requires no knowledge of the FEM code implementations. At the same time, it exhibits the higher accuracy of the volume method. As an added benefit, its computational complexity is comparable to that of the boundary method, that is, it is faster than the volume method. We illustrate the benefits of the strip method with numerical examples.
Systems subjected to dynamic loads often require monitoring of their vibrational response, but limitations on the total number and placement of the measurement sensors can hinder the data-collection process. This paper presents an indirect approach to estimate a system's full-field dynamic response, including all uninstrumented locations, using response measurements from sensors sparsely located on the system. This approach relies on Bayesian inference that utilizes a system model to estimate the full-field response and quantify the uncertainty in these estimates. By casting the estimation problem in the frequency domain, this approach utilizes the modal frequency response functions as a natural, frequency-dependent weighting scheme for the system mode shapes to perform the expansion. This frequency-dependent weighting scheme enables an accurate expansion, even with highly correlated mode shapes that may arise from spatial aliasing due to the limited number of sensors, provided these correlated modes do not have natural frequencies that are closely spaced. Furthermore, the inherent regularization mechanism that arises in this Bayesian-based procedure enables the utilization of the full set of system mode shapes for the expansion, rather than any reduced subset. This approach can produce estimates when considering a single realization of the measured responses, and with some modification, it can also produce estimates for power spectral density matrices measured from many realizations of the responses from statistically stationary random processes. A simply supported beam provides an initial numerical validation, and a cylindrical test article excited by acoustic loads in a reverberation chamber provides experimental validation.
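The frequency-domain expansion described above can be sketched in a few lines: the modal frequency response function magnitudes act as a Gaussian prior on the modal coordinates, and the posterior mean yields the full-field estimate. The following toy example is illustrative only; the mode shapes, natural frequencies, damping, noise level, and single-frequency Gaussian model are all hypothetical simplifications of the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_dof, n_modes, n_sensors = 40, 8, 4
Phi = rng.standard_normal((n_dof, n_modes))   # full-field mode shapes (hypothetical)
sensor_idx = [3, 12, 25, 33]                  # sparse sensor locations
Phi_s = Phi[sensor_idx, :]                    # mode shapes at the sensors

# Modal FRF magnitudes at the analysis frequency serve as the
# frequency-dependent weighting: modes resonant near this frequency
# are allowed larger modal coordinates.
omega = 2.0
wn = np.linspace(1.0, 5.0, n_modes)           # modal natural frequencies
zeta = 0.02                                   # modal damping ratio
H = 1.0 / np.abs(wn**2 - omega**2 + 2j * zeta * wn * omega)

# Synthetic measurement: y = Phi_s q + noise, with q drawn from the prior.
sigma2 = 1e-4
q_true = H * rng.standard_normal(n_modes)
y = Phi_s @ q_true + np.sqrt(sigma2) * rng.standard_normal(n_sensors)

# Bayesian posterior mean for q with prior covariance diag(H^2);
# the prior acts as the regularization that lets all modes be retained.
P = np.diag(H**2)
S = Phi_s @ P @ Phi_s.T + sigma2 * np.eye(n_sensors)
q_post = P @ Phi_s.T @ np.linalg.solve(S, y)

x_full = Phi @ q_post                         # expanded full-field response
print(x_full.shape)
```

The regularization supplied by the prior is what allows the full set of mode shapes to be used even with only four sensors, rather than truncating to a subset.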
A computationally efficient radiative transport model is presented that predicts a camera measurement and accounts for the light reflected and blocked by an object in a scattering medium. The model is in good agreement with experimental data acquired at the Sandia National Laboratory Fog Chamber Facility (SNLFC). The model is applicable in computational imaging to detect, localize, and image objects hidden in scattering media. Here, a statistical approach was implemented to study object detection limits in fog.
This report describes the proposed surface sampling techniques and plan for the multi-year Canister Deposition Field Demonstration (CDFD). The CDFD is primarily a dust deposition test that will use three commercial 32PTH2 NUHOMS welded stainless steel storage canisters in Advanced Horizontal Storage Modules, with planned exposure testing for up to 10 years at an operating ISFSI site. One canister will be left at ambient condition, unheated; the other two will have heaters to achieve canister surface temperatures that match, to the degree possible, spent nuclear fuel (SNF) loaded canisters with heat loads of 10 kW and 40 kW. Surface sampling campaigns for dust analysis will take place on a yearly or bi-yearly basis. The goal of the planned dust sampling and analysis is to determine important environmental parameters that impact the potential occurrence of stress corrosion cracking on SNF dry storage canisters. Specifically, measured dust deposition rates and deposited particle sizes will improve parameterization of dust deposition models employed to predict the potential occurrence and timing of stress corrosion cracks on the stainless steel SNF canisters. The size, morphology, and composition of the deposited dust and salt particles will be quantified, as well as the soluble salt load per unit area and the rate of deposition, as a function of canister surface temperature, location, time, and orientation. Previously, a preliminary sampling plan was developed, identifying possible sampling locations on the canister surfaces and sampling intervals; possible sampling methods were also described. Further development of the sampling plan has commenced through three different tasks. First, canister surface roughness, a potentially important parameter for air flow and dust deposition, was characterized at several locations on one of the test canisters. 
Second, corrosion testing to evaluate the potential lifetime and aging of thermocouple wires, spot welds, and attachments was initiated. Third, hand sampling protocols were developed, and initial testing was carried out. The results of those efforts are presented in this report. The information obtained from the CDFD will be critical for ongoing efforts to develop a detailed understanding of the potential for stress corrosion cracking of SNF dry storage canisters.
We employ ultrafast mid-infrared transient absorption spectroscopy to probe the rapid loss of carbonyl ligands from gas-phase nickel tetracarbonyl following ultraviolet photoexcitation at 261 nm. Here, nickel tetracarbonyl undergoes prompt dissociation to produce nickel tricarbonyl in a singlet excited state; this electronically excited tricarbonyl loses another CO group over tens of picoseconds. Our results also suggest the presence of a parallel, concerted dissociation mechanism to produce nickel dicarbonyl in a triplet excited state, which likely dissociates to nickel monocarbonyl. Mechanisms for the formation of these photoproducts in multiple electronic excited states are theoretically predicted with one-dimensional cuts through the potential energy surfaces and computation of spin-orbit coupling constants using equation of motion coupled cluster methods (EOM-CC) and coupled cluster theory with single and double excitations (CCSD). Bond dissociation energies are calculated with CCSD, and anharmonic frequencies of ground and excited state species are computed using density functional theory (DFT) and time-dependent density functional theory (TD-DFT).
Hattar, Khalid M.; Mcgieson, Isak; Bird, Victoriea L.; Barr, Christopher M.; Reed, Bryan W.; Mckeown, Joseph T.; Yi, Feng; Santala, M.K.
The crystallization of an amorphous Ag–In–Sb–Te (AIST) phase change material (PCM) is studied using multiple in situ imaging techniques to directly quantify crystal growth rates over a broad range of temperatures. The measurable growth rates span from ≈10⁻⁹ m/s to ≈20 m/s. Recent results using dynamic transmission electron microscopy (TEM), a photoemission TEM technique, and TEM with sub-framed imaging are reported here and placed into the context of previous growth rate measurements on AIST. Dynamic TEM experiments show the maximum observed crystal growth rate for as-deposited films to be > 20 m/s. It is shown that crystal growth above the glass transition can be imaged in a TEM through use of subframing and a high-frame-rate direct electron detection camera. Challenges associated with the determination of temperature during in situ TEM experiments are described. Preliminary nanocalorimetry results demonstrate the feasibility of collecting thermodynamic data for crystallization of PCMs with simultaneous TEM imaging.
Swelling clay hydration/dehydration is important to many environmental and industrial processes. Experimental studies usually probe equilibrium hydration states in an averaged manner and thus cannot capture the fast water transport and structural change in interlayers during hydration/dehydration. Using molecular simulations and thermogravimetric analyses, we observe a two-stage dehydration process. The first stage is controlled by evaporation at the edges: water molecules near hydrophobic sites and the first few water molecules of the hydration shell of cations move fast to particle edges for evaporation. The second stage is controlled by slow desorption of the last 1-2 water molecules from the cations and slow transport through the interlayers. The two-stage dehydration is strongly coupled with interlayer collapse and the coordination number changes of cations, all of which depend on layer charge distribution. This mechanistic interpretation of clay dehydration can be key to the coupled chemomechanical behavior in natural/engineered barriers.
Chemical interactions on the surface of a functional nanoparticle are closely related to its crystal facets, which can regulate corresponding energy storage properties such as hydrogen absorption. In this study, we report a one-step growth of magnesium (Mg) particles with both close- and nonclose-packed facets, that is, {0001} and {21̄1̄6} planes, on atomically thin reduced graphene oxide (rGO). The detailed microstructures of the Mg/rGO hybrids were revealed by X-ray diffraction, selected-area electron diffraction, high-resolution transmission electron microscopy, and fast Fourier transform analysis. The hydrogen storage performance of Mg/rGO hybrids with different orientations varies: Mg with a preferential high-index {21̄1̄6} crystal surface shows remarkably increased hydrogen absorption, up to 6.2 wt % within the first 2 h, compared with the inferior 5.1 wt % of the system exposing no preferentially oriented crystal surfaces. First-principles calculations revealed improved hydrogen sorption properties on the {21̄1̄6} surface, with a lower hydrogen dissociation energy barrier and higher stability of hydrogen atoms than on the {0001} basal plane, supporting the hydrogen uptake experiments. In addition, the hydrogen penetration energy barrier of {21̄1̄6} is found to be much lower than that of {0001} because of its low surface atom packing density, which might be the process most critical to the hydrogenation kinetics. The experimental and computational results present a new handle for regulating the hydrogen storage of metal hydrides through controlled Mg facets.
The resurgence of interest in a hydrogen economy and the development of hydrogen-related technologies has initiated numerous research and development efforts aimed at making the generation, storage, and transportation of hydrogen more efficient and affordable. Solar thermochemical hydrogen production (STCH) is a process that potentially exhibits numerous benefits such as high reaction efficiencies, tunable thermodynamics, and continued performance over extended cycling. Although CeO2 has been the de facto standard STCH material for many years, more recently 12R-Ba4CeMn3O12 (BCM) has demonstrated enhanced hydrogen production at intermediate H2/H2O conditions compared to CeO2, making it a contender for large-scale hydrogen production. However, the thermo-reduction stability of 12R-BCM dictates the oxygen partial pressure (pO2) and temperature conditions optimal for cycling. In this study, we identify the formation of a 6H-BCM polytype at high temperature and reducing conditions, experimentally and computationally, as a mechanism and pathway for 12R-BCM decomposition. 12R-BCM was synthesized with high purity and then controllably reduced using thermogravimetric analysis (TGA). Synchrotron X-ray diffraction (XRD) data are used to identify a 6H-Ba3Ce0.75Mn2.25O9 (6H-BCM) polytype formed at 1350 °C under strongly reducing pO2. Density functional theory (DFT) total energy and defect calculations show a window of thermodynamic stability for the 6H-polytype consistent with the XRD results. These data provide the first evidence of the 6H-BCM polytype and could provide a mechanistic explanation for the superior water-splitting behavior of 12R-BCM.
The Synchronic Web is a network of information that is locked into a single global view of history. When clients notarize their data to the Synchronic Web, they gain the ability to irrefutably prove the following statement to the rest of the world: "I commit to this information—and only this information—at this moment in time." Much like encryption or digital signatures, this capability has the potential to bolster the integrity of public cyberspace at a foundational level and scale.
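The notarization idea can be illustrated with a toy append-only hash chain: each commitment binds a data digest to the previous link, so any later alteration of history is detectable. This is a hedged sketch of the general concept only, not the actual Synchronic Web protocol, its data model, or its API.

```python
import hashlib
import time

def commit(chain, data: bytes) -> dict:
    """Append a commitment to the chain binding data to this moment."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    data_hash = hashlib.sha256(data).hexdigest()
    # The digest chains to the previous entry, so order and content
    # are both fixed once later entries exist.
    digest = hashlib.sha256((prev + data_hash).encode()).hexdigest()
    entry = {"prev": prev, "data_hash": data_hash,
             "digest": digest, "timestamp": time.time()}
    chain.append(entry)
    return entry

def verify(chain) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256((prev + entry["data_hash"]).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

chain = []
commit(chain, b"sensor reading 42")
commit(chain, b"log entry B")
print(verify(chain))                 # True
chain[0]["data_hash"] = "0" * 64     # tamper with recorded history
print(verify(chain))                 # False
```

A real deployment would additionally anchor the chain to a globally agreed timeline; the sketch only shows why a commitment, once chained, cannot be quietly revised.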
Interactions of ceramic proton conductors with the environment under operating conditions play an essential role in material properties and device performance. It remains unclear how the chemical environment of the material, as modulated by the operating conditions, affects the proton conductivity. Combining near-ambient-pressure X-ray photoelectron spectroscopy and impedance spectroscopy, we investigate the changes in the chemical environment of oxygen and the conductivity of BaZr0.9Y0.1O3−δ under operating conditions. Changes in the O 1s core level spectra indicate that adding water vapor pressure increases both hydroxyl groups and active proton sites at undercoordinated oxygen. Applying an external potential further promotes this hydration effect, in particular by increasing the amount of undercoordinated oxygen. The enhanced hydration is accompanied by improved proton conductivity. This work highlights the role of undercoordinated oxygen in improving the proton conductivity of ceramics.
Ion trap quantum computing utilizes electronic states of atomic ions such as Ca+ to encode information onto a qubit. To explore the fundamental properties of Ca+ inside molecular cavities, we describe here a computational study of Ca+ bound inside neutral [n]-cycloparaphenylenes (n = 5-12), often referred to as “nanohoops”. This ab initio study characterizes optimized structures, harmonic vibrational frequencies, potential energy surfaces, and ion molecular orbital distortion as functions of increasing nanohoop size. The results of this work provide a first step in guiding experimental studies of the spectroscopy of these ion-molecular cavity complexes.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) or level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton’s method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity and dynamic load balancing are some of Aria’s more advanced capabilities.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite and the results of the test checked under mesh refinement against the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
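A mesh-refinement check of the kind described above is typically summarized by an observed order of convergence computed from the errors against the analytic solution at successively refined meshes. A minimal sketch, with purely hypothetical error values (not taken from the Sierra test suite):

```python
import math

def observed_order(e_coarse: float, e_fine: float,
                   refinement_ratio: float = 2.0) -> float:
    """Estimate the observed convergence order p from errors at two
    mesh sizes h and h/r, assuming error ~ C * h**p."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Hypothetical discretization errors at mesh sizes h, h/2, h/4.
errors = [4.0e-2, 1.02e-2, 2.6e-3]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
print([round(p, 2) for p in orders])   # near 2 for a second-order scheme
```

A test passes when the observed order approaches the scheme's theoretical order as the mesh is refined.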
The SNL Sierra Mechanics code suite is designed to enable simulation of complex multiphysics scenarios. The code suite is composed of several specialized applications which can operate either in standalone mode or coupled with each other. Arpeggio is a supported utility that enables loose coupling of the various Sierra Mechanics applications by providing access to Framework services that facilitate the coupling. More importantly, Arpeggio orchestrates the execution of the applications that participate in the coupling. This document describes the various components of Arpeggio and their operability. The intent of the document is to provide a fast path for analysts interested in coupled applications via simple examples of its usage.
The SIERRA Low Mach Module: Fuego, henceforth referred to as Fuego, is the key element of the ASC fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Using MPMD coupling, Scefire and Nalu handle the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
The inverse methods team provides a set of tools for solving inverse problems in structural dynamics and thermal physics, as well as sensor placement optimization via Optimal Experimental Design (OED). These methods are used for designing experiments, model calibration, and verification/validation analysis of weapons systems. This document provides a user's guide to the input for the three apps that are supported for these methods. Details of input specifications, output options, and optimization parameters are included.
This project is part of a multi-lab consortium that leverages U.S. research expertise and facilities at national labs and universities to significantly advance electric drive power density and reliability, while simultaneously reducing cost. The final objective of the consortium is to develop a 100 kW traction drive system that achieves 33 kW/L, has an operational life of 300,000 miles, and a cost of less than $6/kW. One element of the system is a 100 kW inverter with a power density of 100 kW/L and a cost of $2.7/kW. New materials such as wide bandgap semiconductors, soft magnetic materials, and ceramic dielectrics, integrated using multi-objective co-optimization design techniques, will be utilized to achieve these program goals. This project focuses on a subset of the power electronics work within the consortium, specifically the design, fabrication, and evaluation of vertical GaN power devices suitable for automotive applications.
The ECP Proxy Application Project has an annual milestone to assess the state of ECP proxy applications and their role in the overall ECP ecosystem. Our FY22 March/April milestone (ADCD-504-28) proposed to assess the fidelity of proxy applications compared to their respective parents in terms of kernel and I/O behavior, and predictability. Similarity techniques will be applied for quantitative comparison of proxy/parent kernel behavior. MACSio evaluation will continue, and support for OpenPMD backends will be explored. The execution time predictability of proxy apps with respect to their parents will be explored through a carefully designed scaling study and code comparisons. Note that in this FY we also have quantitative assessment milestones that are due in September and are, therefore, not included in the description above or in this report; another report on those deliverables will be generated and submitted upon their completion. To satisfy this milestone, the following specific tasks were completed: studying the ability of MACSio to represent I/O workloads of adaptive mesh codes; redefining the performance counter groups for contemporary Intel and IBM platforms to better match specific hardware components and to better align across platforms (making cross-platform comparison more accurate); performing a cosine similarity study based on the new performance counter groups on the Intel and IBM P9 platforms; performing detailed analysis of performance counter data to accurately average and align the data to maintain phases across all executions, and developing methods to reduce the set of collected performance counters used in cosine similarity analysis; applying a quantitative similarity comparison between proxy and parent CPU kernels; and performing scaling studies to understand the accuracy of predicting parent performance from the respective proxy application. This report presents highlights of these efforts.
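The cosine-similarity comparison of proxy and parent kernels amounts to treating each application as a vector of performance-counter group values and measuring the angle between the vectors. A minimal sketch with hypothetical counter data (the counter groups and values below are illustrative, not the project's actual measurements):

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two performance-counter vectors;
    1.0 means identical direction (same behavioral mix)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical normalized counter-group shares for a proxy app and its
# parent, e.g. floating-point, memory, branch, and I/O activity.
proxy  = [0.62, 0.20, 0.10, 0.08]
parent = [0.58, 0.24, 0.11, 0.07]

similarity = cosine_similarity(proxy, parent)
print(round(similarity, 3))
```

A similarity near 1.0 indicates the proxy exercises the hardware in roughly the same proportions as its parent, which is the property the study quantifies.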
The Department of Energy (DOE) is the owner and part operator of multiple facilities in Northern California, including those located at Lawrence Livermore National Laboratory (LLNL), Lawrence Berkeley National Laboratory (LBNL), Sandia National Laboratories/California (SNL/CA), and SLAC National Accelerator Laboratory (SLAC), among other sites. Through their operations, the facilities generate hazardous waste and are thereby subject to the requirements of Chapter 31 of Title 22 of the California Code of Regulations, Waste Minimization. The Northern California sites are primarily research and development facilities in areas relating to national security, high-energy physics, engineering, bioscience, and environmental health and safety. The hazardous wastes generated may be associated with operations that range in size from small, bench-scale R&D to major maintenance and operations waste streams. Therefore, even though this document breaks down the waste streams based on California Waste Codes (CWC), the quantities of waste within one waste code category could come from many different locations and dissimilar processes. Because of the nature of the work at the sites, it is not economically feasible to implement source reduction measures for every process that generates a portion of the waste stream. This document identifies the processes that generate the major portion of the waste within an identified major waste stream and reports on progress made toward source reduction. In accomplishing the mission, it is DOE’s goal to eliminate waste generation and emissions, giving priority to those that may present the greatest risk to human health and the environment.
The SPECTACULAR model is a development extension of the Simplified Potential Energy Clock (SPEC) model. Both models are nonlinear viscoelastic constitutive models used to predict a wide range of time-dependent behaviors in epoxies and other glass-forming materials. This report documents the procedures used to generate SPECTACULAR calibrations for two particulate-filled epoxy systems, 828/CTBN/DEA/GMB and 828/DEA/GMB. No previous SPECTACULAR or SPEC calibration exists for 828/CTBN/DEA/GMB, while a legacy SPEC calibration exists for 828/DEA/GMB. To generate the SPECTACULAR calibrations, a step-by-step procedure was executed to determine parameters in groups with minimal coupling between parameter groups. This procedure has often been deployed to calibrate SPEC; therefore, the resulting SPECTACULAR calibration is backwards compatible with SPEC (i.e., none of the extensions specific to SPECTACULAR are used). The calibration procedure used legacy Sandia experimental data stored on the Polymer Properties Database website. The experiments used for calibration included shear master curves, isofrequency temperature sweeps under oscillatory shear, the bulk modulus at room temperature, the thermal strain during a temperature sweep, and compression through yield at multiple temperatures below the glass transition temperature. Overall, the calibrated models fit the experimental data remarkably well. However, the glassy shear modulus varies depending on the experiment used to calibrate it. For instance, the shear master curve, the isofrequency temperature sweep under oscillatory shear, and the Young's modulus in glassy compression yield values for the glassy shear modulus at the reference temperature that vary by as much as 15%.
Also, for 828/CTBN/DEA/GMB, the temperature dependence of the glassy shear modulus when fit to the Young's modulus at different temperatures is approximately four times larger than when it is determined from the isofrequency temperature sweep under oscillatory shear. For 828/DEA/GMB, the temperature dependence of the shear modulus determined from the isofrequency temperature sweep under oscillatory shear accurately predicts the Young's modulus at different temperatures. When choosing values for the shear modulus, fitting the glassy compression data was prioritized. The new and legacy calibrations for 828/DEA/GMB are similar and appear to have been calibrated from the same data. However, the new calibration improves the fit to the thermal strain data. In addition to the standard calibrations, development calibrations were produced that take advantage of development features of SPECTACULAR, including an updated equilibrium Helmholtz free energy that eliminates undesirable behavior found in previous work. In addition to the previously mentioned experimental data, the development calibrations require data for the heat capacity during a stress-free temperature sweep to calibrate thermal terms.
We demonstrate SONOS (silicon-oxide-nitride-oxide-silicon) analog memory arrays that are optimized for neural network inference. The devices are fabricated in a 40 nm process and operated in the subthreshold regime for in-memory matrix multiplication. Subthreshold operation enables low conductances to be implemented with low error, matching the typical weight distribution of neural networks, which is heavily skewed toward near-zero values. This leads to high accuracy in the presence of programming errors and process variations. We simulate the end-to-end neural network inference accuracy, accounting for the measured programming error, read noise, and retention loss in a fabricated SONOS array. Evaluated on the ImageNet dataset using ResNet50, the accuracy using a SONOS system is within 2.16% of floating-point accuracy without any retraining. The unique error properties and high on/off ratio of the SONOS device allow scaling to large arrays without bit slicing, and enable an inference architecture that achieves 20 TOPS/W on ResNet50, a > 10× gain in energy efficiency over state-of-the-art digital and analog inference accelerators.
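The accuracy advantage of subthreshold operation can be illustrated with a toy error model (all distributions and noise levels below are assumptions for illustration, not measured device data): a programming error proportional to conductance magnitude penalizes a near-zero-skewed weight matrix far less than a fixed absolute error floor of comparable worst-case size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weight distribution: heavily skewed toward near-zero values,
# as is typical for trained neural-network layers (Laplace-like, assumed).
w = rng.laplace(scale=0.05, size=(256, 256))
x = rng.standard_normal(256)

y_ideal = w @ x

# Proportional error model (subthreshold-like): programming error scales
# with |w|, so near-zero weights carry near-zero absolute error.
sigma_rel = 0.05  # assumed 5% relative programming error
w_prop = w + rng.normal(scale=sigma_rel * np.abs(w))

# Additive error model: a fixed absolute error floor independent of |w|.
sigma_abs = sigma_rel * np.abs(w).max()
w_add = w + rng.normal(scale=sigma_abs, size=w.shape)

err_prop = np.linalg.norm(w_prop @ x - y_ideal) / np.linalg.norm(y_ideal)
err_add = np.linalg.norm(w_add @ x - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative MVM error, proportional noise: {err_prop:.3f}")
print(f"relative MVM error, additive noise:     {err_add:.3f}")
```

With skewed weights, the proportional-noise matrix-vector product is substantially more accurate than the additive-noise one, which is the qualitative effect the abstract attributes to subthreshold conductance encoding.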
Structural properties of the anionic surfactant dioctyl sodium sulfosuccinate (AOT or Aerosol-OT) adsorbed on the mica surface were investigated by molecular dynamics simulation, including the effect of surface loading in the presence of monovalent and divalent cations. The simulations confirmed recent neutron reflectivity experiments that revealed the binding of anionic surfactant to the negatively charged surface via adsorbed cations. At low loading, cylindrical micelles formed on the surface, with sulfate head groups bound to the surface by water molecules or adsorbed cations. Cation bridging was observed in the presence of weakly hydrating monovalent cations, while sulfate groups interacted with strongly hydrating divalent cations through water bridges. The adsorbed micelle structure was confirmed experimentally with cryogenic electron microscopy, which revealed micelles approximately 2 nm in diameter at the basal surface. At higher AOT loading, the simulations reveal adsorbed bilayers with similar surface binding mechanisms. Adsorbed micelles were slightly thicker (2.2–3.0 nm) than the corresponding bilayers (2.0–2.4 nm). Upon heating the low loading systems from 300 K to 350 K, the adsorbed micelles transformed to a more planar configuration resembling bilayers. The driving force for this transition is an increase in the number of sulfate head groups interacting directly with adsorbed cations.
Operability thresholds that differentiate between functional RP-87 exploding bridge wire (EBW) detonators and nonfunctional RP-87 EBW detonators (duds) were determined by measuring the time delay between initiation and early wall movement (function time). The detonators were inserted into an externally heated hollow cylinder of aluminum and fired with current flow from a charged capacitor through the bridge wire. Functioning detonators responded like unheated pristine detonators when the function time was 4 μs or less. The operability thresholds of the detonators were characterized with a simple decomposition cookoff model calibrated using a modified version of the Sandia Instrumented Thermal Ignition (SITI) experiment. These thresholds are based on the calculated state of the PETN when the detonators fire. The operability threshold is proportional to the positive temperature difference (ΔT) between the maximum temperature within the PETN and the onset of decomposition (∼406 K). The temperature difference alone was not sufficient to define the operability threshold. The operability threshold was also proportional to the time that the PETN had been at elevated temperatures. That is, failure depended on both temperature and reaction rate. The reacted gas fraction is used in the current work for the reaction correlation. Melting of PETN also had a significant effect on the operability threshold. Detonator failure occurred when the maximum temperature exceeded the nominal melting point of PETN (414 K) for 45±5 s or more.
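A reacted-fraction correlation of the kind described above can be sketched with first-order Arrhenius kinetics integrated over a temperature history. The pre-exponential factor and activation energy below are placeholder values for illustration only, not the calibrated PETN decomposition model from the SITI experiments.

```python
import numpy as np

# Assumed illustrative kinetics (NOT the calibrated PETN model):
A_pre, Ea, Rg = 1.0e12, 1.5e5, 8.314  # 1/s, J/mol, J/(mol K)

def reacted_fraction(t, T):
    """Fraction reacted for a temperature history T(t), first-order kinetics.

    Integrates the Arrhenius rate k(T) over time (trapezoid rule), then
    applies 1 - exp(-integral) for first-order decomposition.
    """
    k = A_pre * np.exp(-Ea / (Rg * T))
    integral = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(t))
    return 1.0 - np.exp(-integral)

# Hypothetical history: ramp at 0.5 K/s to just above the nominal PETN
# melting point (~414 K), then hold until the firing time.
t = np.linspace(0.0, 300.0, 3001)            # seconds
T = np.minimum(300.0 + 0.5 * t, 420.0)       # kelvin
frac = reacted_fraction(t, T)
print(f"reacted fraction at firing time: {frac:.3e}")
```

Because the rate is integrated over the whole history, the correlation captures the abstract's observation that failure depends jointly on how hot the PETN gets and how long it stays there.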
In this paper, we propose a method to estimate the position, orientation, and gain of a magnetic field sensor using a set of (large) electromagnetic coils. We apply the method for calibrating an array of optically pumped magnetometers (OPMs) for magnetoencephalography (MEG). We first measure the magnetic fields of the coils at multiple known positions using a well-calibrated triaxial magnetometer, and model these discretely sampled fields using vector spherical harmonic (VSH) functions. We then localize and calibrate an OPM by minimizing the sum of squared errors between the model signals and the OPM responses to the coil fields. We show that by using homogeneous and first-order gradient fields, the OPM sensor parameters (gain, position, and orientation) can be obtained from a set of linear equations with pseudo-inverses of two matrices. The currents that should be applied to the coils for approximating these low-order field components can be determined based on the VSH models. Computationally simple initial estimates of the OPM sensor parameters follow. As a first test of the method, we placed a fluxgate magnetometer at multiple positions and estimated the RMS position, orientation, and gain errors of the method to be 1.0 mm, 0.2°, and 0.8%, respectively. Lastly, we calibrated a 48-channel OPM array. The accuracy of the OPM calibration was tested by using the OPM array to localize magnetic dipoles in a phantom, which resulted in an average dipole position error of 3.3 mm. The results demonstrate the feasibility of using electromagnetic coils to calibrate and localize OPMs for MEG.
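The two linear solves described above can be sketched for a noiseless single-axis sensor. In this toy version (all field tensors, positions, and sensor parameters below are hypothetical; the actual method uses VSH models of measured coil fields), the three homogeneous fields give gain and orientation directly, and the first-order gradient fields then give position via a pseudo-inverse.

```python
import numpy as np

# Ground-truth sensor parameters (hypothetical values for the sketch).
g_true = 1.3                                      # gain
n_true = np.array([0.2, -0.5, 0.84])
n_true /= np.linalg.norm(n_true)                  # unit orientation
r_true = np.array([0.03, -0.01, 0.05])            # position [m]

# Step 1: three homogeneous fields along x, y, z. The scalar response to
# field e_i is s_i = g * (n . e_i), so the response vector is v = g * n.
v = np.array([g_true * n_true @ e for e in np.eye(3)])
g_est = np.linalg.norm(v)
n_est = v / g_est

# Step 2: first-order gradient fields B_k(r) = G_k @ r with G_k traceless
# and symmetric (five independent components). The response
# s_k = g * n^T G_k r is linear in r once g and n are known.
def gradient_tensors():
    G = []
    for (i, j) in [(0, 0), (1, 1), (0, 1), (0, 2), (1, 2)]:
        M = np.zeros((3, 3))
        if i == j:
            M[i, i], M[2, 2] = 1.0, -1.0          # traceless diagonal
        else:
            M[i, j] = M[j, i] = 1.0
        G.append(M)
    return G

Gs = gradient_tensors()
s = np.array([g_true * n_true @ G @ r_true for G in Gs])
A = np.array([g_est * (G.T @ n_est) for G in Gs])  # 5x3 design matrix
r_est = np.linalg.pinv(A) @ s                      # pseudo-inverse solve

print(g_est, n_est, r_est)
```

With noisy responses the same pseudo-inverse gives the least-squares estimate, which is what makes these closed-form values useful as initial guesses for the full nonlinear fit.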
Wind energy can provide renewable, sustainable electricity to rural Native homes and power schools and businesses. It can even provide tribes with a source of income and economic development. The purpose of this research is to determine the potential for deploying community and utility-scale wind renewable technologies on Turtle Mountain Band of Chippewa tribal lands. Ideal areas for wind technology development were investigated, based on wind resources, terrain, land usage, and other factors. This was done using tools like the National Renewable Energy Laboratory Wind Prospector, in addition to consulting tribal members and experts in the field. The result was a preliminary assessment of wind energy potential on Turtle Mountain lands, which can be used to justify further investigation and investment into determining the feasibility of future wind technology projects.
A new method for generating locally orthogonal polygonal meshes from a set of generator points is presented in which polygon areas are a constraint. The area constraint property is particularly useful for particle methods where moving polygons track a discrete portion of material. Because Voronoi polygon meshes have some very attractive mathematical and numerical properties for numerical computation, a generalization of Voronoi polygon meshes was formulated that enforces a polygon area constraint. Area-constrained moving polygonal meshes allow one to develop hybrid particle-mesh numerical methods that display some of the most attractive features of each approach. It is shown that this mesh construction method can continuously reconnect a moving, unstructured polygonal mesh in a pseudo-Lagrangian fashion without change in cell area/volume, and the method's ability to simulate various physical scenarios is shown. Advantages are demonstrated for incompressible fluid flow calculations, with cases that include material discontinuities involving all three phases of matter and large density jumps.
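A one-dimensional analog conveys the idea of the area constraint (this is an illustrative sketch, not the paper's polygonal construction): in 1D a weighted (power/Laguerre) diagram shifts cell boundaries away from generator midpoints by weight differences, and the per-cell weights can be solved so that each cell has a prescribed length, the 1D counterpart of a prescribed area.

```python
import numpy as np
from scipy.optimize import fsolve

# 1D analog of an area-constrained generalized Voronoi mesh: generators on
# [0, 1]; each cell's "area" is its length; weights are adjusted so the
# lengths match prescribed targets (values below are arbitrary examples).
x = np.array([0.1, 0.35, 0.55, 0.9])         # generator positions
targets = np.array([0.3, 0.3, 0.2, 0.2])     # prescribed cell lengths

def cell_lengths(w):
    # Power-diagram boundary between neighboring cells i and i+1 in 1D:
    # midpoint shifted by the weight difference.
    b = (x[:-1] + x[1:]) / 2 + (w[:-1] - w[1:]) / (2 * (x[1:] - x[:-1]))
    edges = np.concatenate(([0.0], b, [1.0]))
    return np.diff(edges)

def residual(w_free):
    w = np.concatenate(([0.0], w_free))      # gauge: first weight fixed at 0
    return (cell_lengths(w) - targets)[1:]   # lengths sum to 1; drop one eq.

w_free = fsolve(residual, np.zeros(len(x) - 1))
w = np.concatenate(([0.0], w_free))
print(cell_lengths(w))
```

In 2D the same structure appears, with cell areas nonlinear in the weights, so the constraint is enforced iteratively; the 1D system happens to be linear and solves exactly.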
Nielsen, Erik N.; Mills, Adam R.; Guinn, Charles R.; Gullans, Michael J.; Sigillito, Anthony J.; Feldman, Mayer M.; Petta, Jason R.
Silicon spin qubits satisfy the necessary criteria for quantum information processing. However, a demonstration of high-fidelity state preparation and readout combined with high-fidelity single- and two-qubit gates, all of which must be present for quantum error correction, has been lacking. We use a two-qubit Si/SiGe quantum processor to demonstrate state preparation and readout with fidelity greater than 97%, combined with both single- and two-qubit control fidelities exceeding 99%. The operation of the quantum processor is quantitatively characterized using gate set tomography and randomized benchmarking. Our results highlight the potential of silicon spin qubits to become a dominant technology in the development of intermediate-scale quantum processors.
Patel, Jaymin R.; Oh, Joonseok; Crawford, Jason M.; Isaacs, Farren J.
Small molecules encoded by biosynthetic pathways mediate cross-species interactions and harbor untapped potential, which has provided valuable compounds for medicine and biotechnology. Since studying biosynthetic gene clusters in their native context is often difficult, alternative efforts rely on heterologous expression, which is limited by host-specific metabolic capacity and regulation. Here we describe a computational-experimental technology to redesign genes and their regulatory regions with hybrid elements for cross-species expression in Gram-negative and -positive bacteria and eukaryotes, decoupling biosynthetic capacity from host-range constraints to activate silenced pathways. These synthetic genetic elements enabled the discovery of a class of microbiome-derived nucleotide metabolites—tyrocitabines—from Lactobacillus iners. Tyrocitabines feature a remarkable orthoester-phosphate, inhibit translational activity, and invoke unexpected biosynthetic machinery, including a class of “Amadori synthases” and “abortive” tRNA synthetases. Our approach establishes a general strategy for the redesign, expression, mobilization, and characterization of genetic elements in diverse organisms and communities.
The HyRAM+ software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen, methane, and propane systems. HyRAM+ is designed to facilitate the use of state-of-the-art models to conduct robust, repeatable assessments of safety, hazards, and risk. HyRAM+ integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, characterizing hazards (thermal effects from jet fires, overpressure effects from delayed ignition), and assessing impacts on people. HyRAM+ is developed at Sandia National Laboratories to support the development and revision of national and international codes and standards, and to provide developed models in a publicly accessible toolkit usable by all stakeholders. This document provides a description of the methodology and models contained in HyRAM+ version 4.1. The two most significant changes for HyRAM+ version 4.1 from HyRAM+ version 4.0 are direct incorporation of unconfined overpressure into the QRA calculations and modification of the models for cryogenic liquid flow through an orifice. In QRA mode, the user no longer needs to input peak overpressure and impulse values that were calculated separately; rather, the unconfined overpressure is estimated for the given system inputs, leak size, and occupant location. The orifice flow model now solves for the maximum mass flux through the orifice at constant entropy while conserving energy, which does not require a direct speed of sound calculation. This does not affect the mass flow for all-gaseous releases; the method results in the same speed of sound for choked flow. However, this method does result in a higher (and more realistic) mass flow rate for a given leak size for liquid releases than was previously calculated.
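The maximization described above can be sketched for an ideal gas. HyRAM+ applies the approach with real-fluid properties for cryogenic liquids; the gas properties and conditions below are assumptions for illustration. Maximizing the mass flux rho*v along the isentrope, with velocity obtained from energy conservation, recovers the classical choked-flow result without ever computing a speed of sound.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Ideal-gas illustration (hypothetical air-like conditions).
gamma, R = 1.4, 287.0        # heat capacity ratio, gas constant [J/(kg K)]
T0, p0 = 300.0, 10e5         # stagnation temperature [K] and pressure [Pa]
cp = gamma * R / (gamma - 1.0)

def mass_flux(p):
    """rho*v at throat pressure p, on the isentrope, conserving energy."""
    T = T0 * (p / p0) ** ((gamma - 1.0) / gamma)   # isentropic relation
    v2 = 2.0 * cp * (T0 - T)                       # h0 = h + v^2/2
    rho = p / (R * T)                              # ideal-gas density
    return rho * np.sqrt(max(v2, 0.0))

# Maximize the mass flux over throat pressure (no sound-speed calculation).
res = minimize_scalar(lambda p: -mass_flux(p),
                      bounds=(0.1 * p0, 0.999 * p0), method="bounded")
G_max = mass_flux(res.x)

# Analytic choked mass flux for an ideal gas, for comparison.
G_choked = (p0 * np.sqrt(gamma / (R * T0))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
print(G_max, G_choked)
```

The numerical maximum matches the analytic choked flux, and because the optimization only needs density and enthalpy along the isentrope, the same procedure extends to real-fluid equations of state where a direct sound-speed evaluation is awkward.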
High-speed, optical imaging diagnostics are presented for three-dimensional (3D) quantification of explosively driven metal fragmentation. At early times after detonation, Digital Image Correlation (DIC) provides non-contact measures of 3D case velocities, strains, and strain rates, while a proposed stereo imaging configuration quantifies in-flight fragment masses and velocities at later times. Experiments are performed using commercially obtained RP-80 detonators from Teledyne RISI, which are shown to create a reproducible fragment field at the benchtop scale. DIC measurements are compared with 3D simulations, which have been ‘leveled’ to match the spatial resolution of DIC. Results demonstrate improved ability to identify predicted quantities of interest that fall outside of measurement uncertainty and shot-to-shot variability. Similarly, video measures of fragment trajectories and masses allow rapid experimental repetition and provide correlated fragment size-velocity measurements. Measured and simulated fragment mass distributions are shown to agree within confidence bounds, while some statistically meaningful differences are observed between the measured and predicted conditionally averaged fragment velocities. Together these techniques demonstrate new opportunities to improve future model validation.
The objective of this project is the demonstration and validation of hydrogen fuel cells in the marine environment. The prototype generator can be used to guide commercial development of a fuel cell generator product. Work includes assessment and validation of the commercial value proposition of both the application and the hydrogen supply infrastructure through third-party hosted deployment as the next step towards widespread use of hydrogen fuel cells in the maritime environment.
The nuclear accident consequence analysis code MACCS has traditionally modeled dispersion during downwind transport using a Gaussian plume segment model. MACCS is designed to estimate consequence measures such as air concentrations and ground depositions, radiological doses, and health and economic impacts on a statistical basis over the course of a year to produce annual-averaged output measures. The objective of this work is to supplement the Gaussian atmospheric transport and diffusion (ATD) model currently in MACCS with a new option using the HYSPLIT model. HYSPLIT/MACCS coupling has been implemented, with HYSPLIT as an alternative ATD option. The subsequent calculations in MACCS use the HYSPLIT-generated air concentration and ground deposition values to calculate the same range of output quantities (dose, health effects, risks, etc.) that can be generated when using the MACCS Gaussian ATD model. Based on the results from the verification test cases, the implementation of the HYSPLIT/MACCS coupling is confirmed. This report contains technical details of the HYSPLIT/MACCS coupling and presents a benchmark analysis using the HYSPLIT/MACCS coupling system. The benchmark analysis, which involves running specific scenarios and sensitivity studies designed to examine how the results generated by the traditional MACCS Gaussian plume segment model compare to the new, higher fidelity HYSPLIT/MACCS modeling option, demonstrates the modeling results that can be obtained by using this new option. The comparisons provided herein can also help decision-makers weigh the potential benefit of higher fidelity modeling against the additional computational burden needed to perform the calculations.
Three sensitivity studies were also performed to investigate the potential impact of alternative modeling options on consequence results, regarding 1) the input meteorological data set, 2) the method used to estimate stability class, and 3) the plume dispersion model at larger distances. The results of these analyses are provided and discussed in this report.
Fuel costs and emissions in maritime ports are an opportunity for transportation energy efficiency improvement and emissions reduction efforts. Ocean-going vessels, harbor craft, and cargo handling equipment are still major contributors to air pollution in and around ports. Diesel engine costs continually increase as tighter criteria pollutant regulations come into effect and will continue to do so with the expected introduction of carbon emission regulations. Diesel fuel costs will also continue to rise as requirements for cleaner fuels are imposed. Both aspects will increase the cost of diesel-based power generation on the vessel and on shore. Although fuel cells have been used in many successful applications, they have not been technically or commercially validated in the port environment. One opportunity to do so was identified in Honolulu Harbor at the Young Brothers Ltd. wharf. At this facility, barges sail regularly to and from neighboring islands, and containerized diesel generators provide power for the reefers while on the dock and on the barge during transport, nearly always at part load. Due to the inherent efficiency characteristics of fuel cells and diesel generators, switching to a hydrogen fuel cell power generator was found to have potential emissions and cost savings. Deployment in Hawaii showed the unit needed greater reliability in the start-up sequence, as well as an improved interface for the end-user, thereby presenting opportunities for repairing and upgrading the unit for deployment in another locale. In FY2018, the unit was repaired and upgraded based on the Hawaii experience, and another deployment site was identified for another 6-month deployment of the 100 kW MarFC.
The Department of Energy (DOE) is the owner of multiple facilities in Northern California. The facilities include Lawrence Livermore National Laboratory (LLNL), Lawrence Berkeley National Laboratory (LBNL), Sandia National Laboratories/California (SNL/CA) and SLAC National Accelerator Laboratory (SLAC), among other sites. Through their operations, the facilities generate hazardous waste and are thereby subject to the requirements of Chapter 31 of Title 22 of the California Code of Regulations, Waste Minimization. The Northern California sites are primarily research and development facilities in areas relating to national security, high-energy physics, bioscience, and the environment.
There has been ever-growing interest and engagement regarding net-zero and carbon neutrality goals, with many nations committing to steep emissions reductions by mid-century. Although water plays critical roles in various sectors, there has been a distinct gap in discussions to date about the role of water in the transition to a carbon neutral future. To address this need, a webinar was convened in April 2022 to gain insights into how water can support or influence active strategies for addressing emissions activities across energy, industrial, and carbon sectors. The webinar presentations and discussions highlighted various nuances of direct and indirect water use both within and across technology sectors (Figure ES-1). For example, hydrogen and concrete production, water for mining, and inland waterways transportation are all heavily influenced by the energy sources used (fossil fuels vs. renewable sources) as well as local resource availabilities. Algal biomass, on the other hand, can be produced across diverse geographies (terrestrial to sea) in a range of source water qualities, including wastewater, and could also support pollution remediation through nutrient and metals recovery. Finally, water also influences carbon dynamics and cycling within natural systems across terrestrial, aquatic, and geologic systems. These dynamics underscore not only the critical role of water within the energy-water nexus, but also the extension into the energy-water-carbon nexus.
The Health Management Clinic (HMC) is a worksite specialty clinic designed to provide an exceptional level of health care for Sandia employees with diabetes, cholesterol and blood pressure disorders, and for those employees who need help with smoking cessation, depression, anxiety, sleep disorders, or weight management. With a unified commitment to the best care practices available, the HMC is Sandia’s interface to workplace healthcare and health plan services. The HMC provides Sandia employees access to onsite screenings, health care exams, preventative health education, disease management education, care management, periodic laboratory testing, immunizations, podiatry services, and behavioral, fitness, and nutrition counseling/education. Our multidisciplinary team of health professionals consists of physicians, nurses, medical assistants, certified diabetes educators, dietitians, health educators, and exercise specialists. Services offered by the Health Management Clinic have been designed to reduce further complications from disease states and promote healthy behavior changes for Sandia employees.
Corynebacterium glutamicum has been successfully employed for the industrial production of amino acids and other bioproducts, partially due to its native ability to utilize a wide range of carbon substrates. We demonstrated C. glutamicum as an efficient microbial host for utilizing diverse carbon substrates present in biomass hydrolysates, such as glucose, arabinose, and xylose, in addition to its natural ability to assimilate lignin-derived aromatics. As a case study to demonstrate its bioproduction capabilities, L-lactate was chosen as the primary fermentation end product along with acetate and succinate. C. glutamicum was found to grow well in different aromatics (benzoic acid, cinnamic acid, vanillic acid, and p-coumaric acid) up to a concentration of 40 mM. In addition, 13C-fingerprinting confirmed that carbon from aromatics enters primary metabolism via the TCA cycle, indicating the presence of the β-ketoadipate pathway in C. glutamicum. 13C-fingerprinting in the presence of both glucose and aromatics also revealed coumarate to be the aromatic most preferred by C. glutamicum, contributing 74% and 59% of its carbon to the synthesis of glutamate and aspartate, respectively. 13C-fingerprinting also confirmed the activity of the ortho-cleavage, anaplerotic, and cataplerotic pathways. Finally, the engineered C. glutamicum strain grew well in biomass hydrolysate containing pentose and hexose sugars and produced L-lactate at a concentration of 47.9 g/L and a yield of 0.639 g/g from sugars with simultaneous utilization of aromatics. Succinate and acetate co-products were produced at concentrations of 8.9 g/L and 3.2 g/L, respectively. Our findings open the door to valorizing all the major carbon components of biomass hydrolysate by using C. glutamicum as a microbial host for biomanufacturing.
A straight fiber with nonlocal forces that are independent of bond strain is considered. These internal loads can either stabilize or destabilize the straight configuration. Transverse waves with long wavelength have unstable dispersion properties for certain combinations of nonlocal kernels and internal loads. When these unstable waves occur, deformation of the straight fiber into a circular arc can lower its potential energy in equilibrium. The equilibrium value of the radius of curvature is computed explicitly.
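The long-wavelength instability can be illustrated with a local analog: an Euler beam under a compressive internal load, whose dispersion relation omega^2(k) = (B k^4 - P k^2) / rho turns negative for small wavenumbers. The nonlocal kernels in the work above generalize this relation, but the signature is the same: omega^2 < 0 at long wavelengths signals that the straight configuration is unstable. All parameter values below are assumed for illustration.

```python
import numpy as np

# Illustrative local analog of the nonlocal fiber (assumed unit values):
B, P, rho = 1.0, 4.0, 1.0   # bending stiffness, compressive load, line density

def omega_sq(k):
    """Squared frequency of transverse waves; negative => unstable mode."""
    return (B * k**4 - P * k**2) / rho

k = np.linspace(1e-3, 4.0, 400)
unstable = k[omega_sq(k) < 0]
print(f"unstable band: 0 < k < {unstable.max():.3f} "
      f"(analytic cutoff sqrt(P/B) = {np.sqrt(P / B):.3f})")
```

The unstable band 0 < k < sqrt(P/B) contains exactly the long wavelengths, consistent with the observation that a bent (circular-arc) equilibrium can have lower potential energy than the straight fiber when such modes exist.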