Questions for Nov 17th meeting
The tearing parameter criterion and failure propagation method currently used in the multilinear elastic-plastic constitutive model were added as an option to the modular failure capabilities. Currently, this implementation is available only to the J2 plasticity model due to the formulation of the failure propagation approach. The implementation was verified against analytical solutions for both a uniaxial tension and a pure shear boundary-value problem. Possible improvements to, and necessary generalizations of, the failure method to extend it as a modular option for all plasticity models are highlighted.
Multi-objective optimization methods can be criticized for lacking a statistically valid measure of the quality and representativeness of a solution. This stance is especially relevant to metaheuristic optimization approaches but can also apply to other methods that typically might only report a small representative subset of a Pareto frontier. Here we present a method to address this deficiency based on random sampling of a solution space to determine, with a specified level of confidence, the fraction of the solution space that is surpassed by an optimization. The Superiority of Multi-Objective Optimization to Random Sampling, or SMORS method, can evaluate quality and representativeness using dominance or other measures, e.g., a spacing measure for high-dimensional spaces. SMORS has been tested in a combinatorial optimization context using a genetic algorithm but could be useful for other optimization methods.
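The core calculation behind SMORS — estimating, from random samples of the solution space, the fraction surpassed (dominated) by an optimizer's front — can be sketched as below. The toy bi-objective space, front points, and sample count are illustrative assumptions, not values from the study.

```python
import random

def dominates(a, b):
    """True if point a dominates b (minimization: <= in all objectives, < in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fraction_surpassed(front, samples):
    """Fraction of random samples dominated by at least one front member."""
    hits = sum(any(dominates(f, s) for f in front) for s in samples)
    return hits / len(samples)

random.seed(0)

# Toy bi-objective space: minimize (x + noise, (1 - x) + noise) over x in [0, 1].
def sample_point():
    x, noise = random.random(), random.random()
    return (x + noise, (1 - x) + noise)

front = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]   # illustrative Pareto front
samples = [sample_point() for _ in range(2000)]
p = fraction_surpassed(front, samples)
print(f"estimated fraction of solution space surpassed: {p:.2f}")
```

With enough samples, a one-sided binomial bound on `p` gives the "specified level of confidence" the abstract mentions.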
The Radiation Protection Center (RPC) of the Iraqi Ministry of Environment continues to evaluate the potential health impacts associated with the Adaya Burial Site, which is located 33 kilometers (20.5 miles) southwest of Mosul. This report documents the radiological analyses of 16 groundwater samples collected from wells located in the vicinity of the Adaya Burial Site and at other sites in northern Iraq. The Adaya Burial Site is a high-risk dump site because a large volume of radioactive material and contaminated soil is located on an unsecured hillside above the village of Tall ar Ragrag. The uranium activities for the 16 water samples in northern Iraq are considered to be naturally occurring and do not indicate artificial (man-made) contamination. With one exception, the alpha spectrometry results for the 16 wells that were sampled in 2019 indicate that the water quality concerning the three uranium isotopes (Uranium-233/234, Uranium-235/236, and Uranium-238) was acceptable for potable purposes (drinking and cooking). However, Well 7 in Mosul had a Uranium-233/234 activity concentration that slightly exceeded the World Health Organization guidance level. Eight of the 16 wells are located in the villages of Tall ar Ragrag and Adaya and had naturally occurring uranium concentrations. Wells in the villages of Tall ar Ragrag and Adaya are located near the Adaya Burial Site and should be sampled on an annual schedule. The list of groundwater analytes should include metals, total uranium, isotopic uranium, gross alpha/beta, gamma spectroscopy, organic compounds, and standard water quality parameters. Our current understanding of the hydrogeologic setting in the vicinity of the Adaya Burial Site is based solely on villagers' domestic wells, topographic maps, and satellite imagery.
To better understand the hydrogeologic setting, a Groundwater Monitoring Program needs to be developed and should include the installation of twelve groundwater monitoring wells in the vicinity of Tall ar Ragrag and the Adaya Burial Site. Characterization of the limestone aquifer and overlying alluvium is needed. RPC should continue to support health assessments for the villagers in Tall ar Ragrag and Adaya. Sampling of surface water (storm water), airborne dust, vegetation, and washway sediment should be conducted on a routine basis. Human access to the Adaya Burial Site needs to be strictly limited. Livestock access on or near the burial site needs to be eliminated. The surface-water exposure pathway is likely a greater threat than the groundwater exposure pathway. Installation of a surface-water diversion or collection system is recommended in order to reduce the potential for humans and livestock to come in contact with contaminated water and sediment. To reduce exposure to villagers, groundwater treatment should be considered if elevated uranium or other contaminants are detected in drinking water. Installing water-treatment systems would likely be quicker to accomplish than remediation and excavation of the Adaya Burial Site. The known potential for human exposure to uranium and metals (such as arsenic, chromium, selenium, and strontium) at the Adaya Burial Site is serious. Additional characterization, mitigation, and remediation efforts should be given a high priority.
Liquefied petroleum gas (LPG) is a viable, cleaner alternative to traditional diesel fuel used in buses and other heavy-duty vehicles and could play a role in helping the US meet its lower emission goals. While the LPG industry has focused efforts on developing vehicles and fueling infrastructure, we must also establish safe parameters for maintenance facilities that service LPG-fueled vehicles. Current safety standards aid in the design of maintenance facilities, but additional quantitative analysis is needed to prove safeguards are adequate and suggest improvements where needed. In this report we aim to quantify the amount of flammable mass associated with propane releases from vehicle-mounted fuel vessels within enclosed garages. Furthermore, we seek to qualify harm mitigation with varied ventilation and facility layouts. To accomplish this, we leverage validated computational resources at Sandia National Laboratories to simulate various release scenarios representative of real-world vehicles and maintenance facilities. Flow solvers are used to predict the dynamics of fuel systems as well as the evolution of propane during release events. From our simulated results we observe that both inflow and outflow ventilation locations play a critical role in reducing flammable cloud size and potential overpressure values during a possible combustion event.
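As a toy illustration of the "flammable mass" metric, the sketch below integrates propane mass over grid cells whose mole fraction lies between the lower and upper flammability limits (roughly 2.1 and 9.5 vol% for propane). The plume shape, grid, and ambient conditions are invented placeholders, not output from the Sandia flow solvers.

```python
import numpy as np

LFL, UFL = 0.021, 0.095    # propane flammability limits, mole fraction
M_PROPANE = 44.1           # g/mol

def flammable_mass(mole_frac, cell_volume, p=101325.0, T=293.15):
    """Propane mass (kg) in grid cells whose mole fraction lies within the
    flammability limits, assuming an ideal-gas mixture at p and T."""
    R = 8.314
    n_total = p * cell_volume / (R * T)          # mol of gas per cell
    in_band = (mole_frac >= LFL) & (mole_frac <= UFL)
    return float(np.sum(n_total * mole_frac[in_band]) * M_PROPANE / 1000.0)

# Toy release plume: mole fraction decaying away from a leak point
x = np.linspace(0, 10, 50)
y = np.linspace(0, 10, 50)
X, Y = np.meshgrid(x, y)
frac = 0.3 * np.exp(-((X - 5) ** 2 + (Y - 5) ** 2) / 4.0)
cell_vol = (10 / 49) ** 2 * 1.0                  # 1-m slab thickness
print(f"flammable mass: {flammable_mass(frac, cell_vol):.3f} kg")
```

Note that cells above the UFL are excluded: fuel that is too rich to burn does not count toward the flammable cloud.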
A hallmark of the scientific process since the time of Newton has been the derivation of mathematical equations meant to capture relationships between observables. As the field of mathematical modeling evolved, practitioners specifically emphasized mathematical formulations that were predictive, generalizable, and interpretable. Machine learning’s ability to interrogate complex processes is particularly useful for the analysis of highly heterogeneous, anisotropic materials where idealized descriptions often fail. As we move into this new era, we anticipate the need to leverage machine learning to aid scientists in extracting meaningful, but yet sometimes elusive, relationships between observed quantities.
Semiconductor Science and Technology
To support the increasing demands for efficient deep neural network processing, accelerators based on analog in-memory computation of matrix multiplication have recently gained significant attention for reducing the energy of neural network inference. However, analog processing within memory arrays must contend with the issue of parasitic voltage drops across the metal interconnects, which distort the results of the computation and limit the array size. This work analyzes how parasitic resistance affects the end-to-end inference accuracy of state-of-the-art convolutional neural networks, and comprehensively studies how various design decisions at the device, circuit, architecture, and algorithm levels affect the system's sensitivity to parasitic resistance effects. A set of guidelines is provided for how to design analog accelerator hardware that is intrinsically robust to parasitic resistance, without any explicit compensation or re-training of the network parameters.
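The parasitic-resistance effect can be illustrated with a one-bitline nodal-analysis toy model: each cell's current must flow through resistive wire segments toward the virtual-ground output, so the delivered current falls short of the ideal dot product. All component values below are invented for illustration, not taken from the study's hardware.

```python
import numpy as np

def bitline_current(n_cells, g_cell, r_wire, v_in):
    """Output current of one crossbar bitline with parasitic wire resistance.

    Nodes 1..n lie along the bitline; node 0 is the virtual-ground output.
    Each cell injects current g_cell * (v_in - u_i) into node i; adjacent
    nodes are linked by wire conductance 1/r_wire. Solve KCL for the node
    voltages u, then read the current flowing into node 0.
    """
    gw = 1.0 / r_wire
    A = np.zeros((n_cells, n_cells))
    b = np.full(n_cells, g_cell * v_in)
    for i in range(n_cells):
        A[i, i] = g_cell + gw          # cell conductance + wire toward output
        if i > 0:
            A[i, i - 1] = -gw
        if i < n_cells - 1:
            A[i, i] += gw              # wire toward the far end
            A[i, i + 1] = -gw
    u = np.linalg.solve(A, b)
    return gw * u[0]                   # current from node 1 into virtual ground

ideal = 32 * 1e-6 * 0.5                       # n * g * V with no parasitics
actual = bitline_current(32, 1e-6, 5.0, 0.5)  # 5-ohm wire segments
print(f"ideal {ideal*1e6:.3f} uA, with IR drop {actual*1e6:.3f} uA")
```

As the wire resistance goes to zero the delivered current recovers the ideal multiply-accumulate value, which is one way to check the model.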
In this report, we assess the data recorded by a Distributed Acoustic Sensing (DAS) cable deployed during the Source Physics Experiment, Phase II (DAG) in comparison with the data recorded by nearby 4.5-Hz geophones. DAS is a novel recording method with unprecedented spatial resolution, but there are significant concerns about data fidelity as the technology is ramped up to more common usage. Here we run a series of tests to quantify the similarity between DAS data and more conventional data and investigate cases where the higher spatial resolution of the DAS can provide new insights into the wavefield. These tests include 1D modeling with seismic refraction and bootstrap uncertainties, assessing the amplitude spectra with distance from the source, measuring the frequency-dependent inter-station coherency, estimating time-dependent phase velocity with beamforming and semblance, and measuring the cross-correlation between the geophone and the particle velocity inferred from the DAS. In most cases, we find high similarity between the two datasets, but the higher spatial resolution of the DAS provides increased details and methods of estimating uncertainty.
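The last comparison — correlating geophone particle velocity against particle velocity inferred from DAS strain — can be mimicked with a 1D plane-wave toy: for a wave u(t - x/c), strain equals -v/c, so velocity can be recovered from strain given an apparent phase velocity. The wave speed, channel spacing, and wavelet below are invented, not DAG parameters.

```python
import numpy as np

c = 1000.0      # assumed apparent phase velocity, m/s
dx = 10.0       # DAS channel spacing, m
dt = 0.001
t = np.arange(0, 1, dt)

def pulse(tt):
    """Gaussian-derivative wavelet used as particle velocity."""
    return (tt - 0.5) * np.exp(-((tt - 0.5) / 0.02) ** 2)

# Particle velocity at two adjacent channels for a plane wave u(t - x/c)
v0 = pulse(t)
v1 = pulse(t - dx / c)

# Displacement by numerical integration; strain = du/dx ~ (u1 - u0)/dx.
# For a plane wave, strain = -v/c, so the DAS-inferred velocity is -c * strain.
u0 = np.cumsum(v0) * dt
u1 = np.cumsum(v1) * dt
strain = (u1 - u0) / dx
v_das = -c * strain

def zero_lag_cc(a, b):
    """Zero-lag normalized cross-correlation between two traces."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

print(f"geophone/DAS correlation: {zero_lag_cc(v0, v_das):.3f}")
```

The finite-difference strain effectively samples the wavefield at the channel midpoint, so the correlation is high but slightly below one — a small, interpretable mismatch of the kind the report quantifies.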
We developed a simplified physics-based model of an all-optical neural network that mimics the encoder part of an autoencoder neural network for image compression. Our approach relies on the generation of a MATLAB-based model for both data compression and decompression and utilizes MATLAB's built-in autoencoder networks in combination with simple propagation of optical fields between layers constituting phase elements via Fourier transform. We optimize the phase elements using the particle swarm optimization technique and, using our model, we demonstrate a compression ratio of 25% for 28×28-pixel input images containing numeric digits from 0 to 9.
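The layer-to-layer mechanics described above — multiply the field by a phase element, then Fourier-transform to reach the next plane — can be sketched as follows. The random phase masks stand in for the particle-swarm-optimized elements, and the detector sub-block is only meant to show how a 25% compression ratio arises from a 28×28 input; none of this reproduces the MATLAB model itself.

```python
import numpy as np

def propagate(field, phase_mask):
    """One layer of the sketched all-optical network: apply a phase element,
    then move to the next plane via a 2D Fourier transform (focal-plane
    propagation)."""
    return np.fft.fft2(field * np.exp(1j * phase_mask))

rng = np.random.default_rng(1)
n = 28                                   # 28x28 input, as for digit images
field = rng.random((n, n))               # toy input intensity pattern
layers = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(2)]

out = field.astype(complex)
for mask in layers:
    out = propagate(out, mask)
intensity = np.abs(out) ** 2             # what a detector would record

# Reading a 14x14 detector sub-block gives 196/784 = 25% of the input pixels
code = intensity[:14, :14]
print(code.shape)
```

In the real system the phase masks, not random values, are tuned (e.g., by particle swarm optimization) so that this compressed reading can be decoded back into the digit image.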
Abuse tests are designed to determine the safe operating limits of HEV/PHEV energy storage devices. Testing is intended to achieve certain worst-case scenarios to yield quantitative data on cell/module/pack response, allowing for failure mode determination and guiding developers toward improved materials and designs. Standard abuse tests with defined start and end conditions are performed on all devices to provide comparison between technologies. New tests and protocols are developed and evaluated to more closely simulate real-world failure conditions. While robust mechanical models for vehicles and vehicle components exist, there is a gap for mechanical modeling of EV batteries. The challenge with developing a mechanical model for a battery is the heterogeneous nature of the materials and components (polymers, metals, metal oxides, liquids).
Regulatory drivers and market demands for lower pollutant emissions, lower carbon dioxide emissions, and lower fuel consumption motivate the development of cleaner and more fuel-efficient engine operating strategies. Most current production heavy-duty diesel engines use a combination of both in-cylinder and exhaust emissions-control strategies to achieve these goals. The emissions and efficiency performance of in-cylinder strategies depend strongly on flow and mixing processes that can be influenced by using multiple fuel injections. Past work performed under this project showed that adding a second injection can reduce soot to levels below what would have been produced by an unchanged first injection, thereby increasing load while decreasing soot and potentially reducing brake specific fuel consumption. Information characterizing the important in-cylinder processes with multiple injections has been gleaned from ensemble-averaged planar laser-induced incandescence (PLII) imaging visualizing the soot cloud and planar laser-induced fluorescence (PLIF) of OH characterizing the soot oxidation regions. PLII showed a consistent disruption of the first-injection soot cloud by the second injection. In conjunction with OH-PLIF, differences in soot oxidation patterns for multiple injections compared to single injections were observed. This understanding was further enhanced in FY20, when high-speed imaging resolving the above-mentioned effects in a single cycle was combined with direct numerical simulations investigating the multiple-injection ignition process on the microscopic level of turbulence and chemistry interaction. In FY21, these findings in conjunction with findings from other researchers published in the scientific literature were composed into a preliminary multiple-injection conceptual model of fuel mixing, injection, and ignition processes. Remaining key research questions were also highlighted.
In addition, wall heat flux was investigated experimentally and with numerical simulations to understand the potential of multiple injections to reduce the engine heat losses and further enhance the efficiency.
Rattlesnake is a combined-environments, multiple input/multiple output control system for dynamic excitation of structures under test. It provides capabilities to control multiple responses on the test part with multiple exciters under various control strategies. Rattlesnake is written in the Python programming language to facilitate multiple input/multiple output vibration research by allowing users to prescribe custom control laws to the controller. Rattlesnake can target multiple hardware devices, or even perform synthetic control to simulate a test virtually. Rattlesnake has been used to execute control problems with up to 200 response channels and 12 drives. This document describes the functionality, architecture, and usage of the Rattlesnake controller to perform combined environments testing.
A new liquid sample adapter design for the Explosive Destruction Systems has been developed. The design features a semi-transparent fluoropolymer tube coupled to the vessel high pressure sample valve with a closing quick connect fitting. The sample tubes are the pressure-limiting component. The tubes were hydrostatically tested to establish failure characteristics and pressure limits at ambient and operational temperatures. A group of tubes from two manufacturing lots were tested to determine the consistency of the commercial part. An upper pressure limit was determined for typical operations.
Kier + Wright, as Qualified SWPPP Developer (QSD), puts forth this Storm Water Pollution Prevention Plan (SWPPP) for the Limited Area, Multi-Purpose (LAMP) High Bay Laboratory facility (Project) located at Sandia National Laboratories, 7011 East Avenue, CA. The property is owned by the U.S. Department of Energy, and managed and operated by National Technology & Engineering Solutions of Sandia, LLC. The project proposes converting an asphalt parking lot into a new high bay machine shop building and a low bay office building. Per the California State Water Resources Control Board’s (California State Water Board) Construction General Permit (CGP), a SWPPP is required when 1 acre or more of land is disturbed. The project site area of 1.6 acres exceeds the minimum acreage threshold of 1 acre and therefore requires SWPPP implementation. QSD has determined the sediment risk for this project, based on soil type at the site and starting and ending dates of construction, to be low (Section 3.4.1 and Appendix B). Receiving water for this project is the Arroyo Seco. QSD has determined the Arroyo Seco to be a high-risk receiving water because it has the three beneficial uses of “spawn”, “cold”, and “migratory” (Sections 3.3 and 3.4.2 and Appendix B). QSD has determined the overall risk level for the site to be Risk Level 2, based on a combination of low sediment risk and high receiving water risk (Appendix B). As such, QSD has delineated a variety of Best Management Practices (BMPs) to be employed during project construction to reduce or eliminate pollutants in stormwater runoff or any other discharges from the Project site. In addition to site-specific BMPs, this SWPPP report provides instruction for on-site monitoring. Electronic copies of required documentation such as inspection reports, REAPs, annual report documentation, etc. shall be submitted to the NTESS Sandia Delegated Representative via Newforma.
Graph algorithms enable myriad large-scale applications including cybersecurity, social network analysis, resource allocation, and routing. The scalability of current graph algorithm implementations on conventional computing architectures is hampered by the demise of Moore’s law. We present a theoretical framework for designing and assessing the performance of graph algorithms executing in networks of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze new spiking algorithms for shortest path and dynamic programming problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation. For fair and rigorous comparison with conventional algorithms and architectures, which is challenging but paramount, we develop new models of data-movement in conventional computing architectures. This allows us to prove polynomial-factor advantages, even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a rigorous asymptotic computational advantage for neuromorphic computing.
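One classic spiking shortest-path construction — encode each edge weight as a spike-transmission delay so that a neuron's first spike time equals its shortest-path distance from the source — can be sketched in plain Python. This is a software toy of the timing idea, not the paper's neuromorphic model or its data-movement analysis.

```python
from collections import defaultdict

def spiking_shortest_paths(edges, source):
    """Single-source shortest paths on an undirected graph with positive
    integer weights, via a spiking wavefront: an edge of weight w acts as a
    synapse with delay w, and each neuron fires once, when it first receives
    a spike. First-spike time == shortest-path distance."""
    adj = defaultdict(list)
    horizon = 0
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
        horizon += w                    # no shortest path exceeds total weight
    pending = defaultdict(set)          # time step -> neurons receiving a spike
    pending[0].add(source)
    fired = {}                          # neuron -> first spike time
    for t in range(horizon + 1):
        for n in pending.pop(t, ()):
            if n in fired:
                continue                # refractory: a neuron fires only once
            fired[n] = t
            for v, w in adj[n]:
                pending[t + w].add(v)
    return fired

edges = [("a", "b", 2), ("b", "c", 2), ("a", "c", 5), ("c", "d", 1)]
dist = spiking_shortest_paths(edges, "a")
print(dist)  # {'a': 0, 'b': 2, 'c': 4, 'd': 5}
```

The computation is pure message passing: no neuron ever sees the whole graph, which is what makes this style of algorithm a natural fit for neuromorphic hardware.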
The TChem open-source software is a toolkit for computing thermodynamic properties, source term, and source term’s Jacobian matrix for chemical kinetic models that involve gas and surface reactions.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Mechanics of Materials
Simultaneous data of the quasi-static compaction and electrical conductivity of porous, binary powder mixtures have been collected as a function of bulk density. The powder mixtures consist of a metal conductor, either titanium or iron, an insulator, and pores filled with ambient air. The data show that the dependence of conductivity on relative bulk density and metal volume fraction varies with conductor type and with conductor particle size and shape. Finite element models using particle domains generated by the discrete element method are used to simulate the bulk conductivity near its threshold, while the general effective media equation is used to model the conductivity across the compression regime.
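The general effective media (GEM) equation mentioned above interpolates between insulator- and conductor-dominated regimes around a percolation threshold. A common form of the equation can be solved for the effective conductivity by bisection, as sketched below with generic (not fitted) parameter values.

```python
def gem_sigma(phi, sigma_lo, sigma_hi, phi_c=0.16, t=2.0):
    """Effective conductivity from the general effective media (GEM) equation:

      (1-phi)(lo - m)/(lo + A*m) + phi*(hi - m)/(hi + A*m) = 0,

    where lo = sigma_lo^(1/t), hi = sigma_hi^(1/t), m = sigma_m^(1/t), and
    A = (1 - phi_c)/phi_c. phi is the conductor volume fraction, phi_c the
    percolation threshold, t the critical exponent (generic illustrative
    values, not fitted data). Solved for sigma_m by bisection.
    """
    A = (1.0 - phi_c) / phi_c
    lo, hi = sigma_lo ** (1 / t), sigma_hi ** (1 / t)
    def f(m):
        return ((1 - phi) * (lo - m) / (lo + A * m)
                + phi * (hi - m) / (hi + A * m))
    a, b = lo, hi                      # f(lo) > 0 > f(hi): root is bracketed
    for _ in range(200):
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (0.5 * (a + b)) ** t

# Conductivity sweeps from insulator-like to metal-like across the threshold
for phi in (0.05, 0.16, 0.40):
    print(f"phi={phi:.2f}  sigma_eff={gem_sigma(phi, 1e-12, 1e7):.3e}")
```

Below the threshold the mixture conducts like the insulator; above it, like the metal — the sharp crossover is the behavior the compaction data probe.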
Heavy-duty diesel engine manufacturers are continuously in pursuit of simple, low-cost technologies that can reduce emissions. Ducted fuel injection (DFI) and Cooled Spray (CS) are two technologies that continue to show promise for significant particulate emissions reductions. These technologies represent a breakthrough in diesel engine combustion because they offer the potential of nearly sootless diesel combustion. This can provide a significant decrease in harmful PM emissions and may enable further system optimization for reduced NOx emissions and increased efficiency. Combustion vessel experiments and engine demonstrations at Sandia, together with the large-bore engine tests performed by Wabtec, show that this technology may be applicable to heavy-duty diesel engines across a wide range of engine sizes and speeds representing the majority of off-road diesel engines. However, very little is known about the ideal geometry, scaling properties, or effectiveness of these technologies over the engine operating map. This project will address those uncertainties through a series of experiments performed in an optical and a metal single-cylinder engine.
The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia National Laboratories for focused collaborative research on Department of Energy (DOE) computer and computational science problems. The institute provides an opportunity for university researchers to learn about problems in computer and computational science at DOE laboratories, and helps transfer results of their research to programs at the labs. Some specific CSRI research interest areas are: scalable solvers, optimization, algebraic preconditioners, graph-based, discrete, and combinatorial algorithms, uncertainty estimation, validation and verification methods, mesh generation, dynamic load-balancing, virus and other malicious-code defense, visualization, scalable cluster computers, beyond Moore’s Law computing, exascale computing tools and application design, reduced order and multiscale modeling, parallel input/output, and theoretical computer science. The CSRI Summer Program is organized by CSRI and includes a weekly seminar series and the publication of a summer proceedings.
IEEE Transactions on Plasma Science
In this article, we derive the vacuum electric fields within specific cylindrically symmetric magnetically insulated transmission lines (MITLs) in the limit of an infinite speed of light for an arbitrary time-dependent current. We focus our attention on two types of MITLs: the radial MITL and a spherically curved MITL. We then simulate the motion of charged particles, such as electrons, present in these MITLs due to the vacuum fields. In general, the motion of charged particles due to the vacuum fields is highly nonlinear since the fields are nonlinear functions of spatial coordinates and depend on an arbitrary time-dependent current drive. Using guiding center theory, however, one can describe the gross particle kinetics using a combination of $\textbf{E} \times \textbf{B}$ and $\nabla B$ drifts. In addition, we compare our approximate inner MITL field models and particle kinetics with those from a fully electromagnetic simulation code. We find that the agreement between the approximate model and the electromagnetic simulations is excellent.
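The two guiding-center drifts named above have simple closed forms, $\textbf{v}_E = (\textbf{E} \times \textbf{B})/B^2$ and $\textbf{v}_{\nabla B} = (m v_\perp^2 / 2qB^3)\, \textbf{B} \times \nabla B$, which can be evaluated directly; the field values in the sketch below are placeholders for illustration, not Z-machine parameters.

```python
import numpy as np

def exb_drift(E, B):
    """E x B guiding-center drift velocity: v = (E x B) / |B|^2."""
    return np.cross(E, B) / np.dot(B, B)

def gradb_drift(m, q, v_perp, B, grad_B):
    """Grad-B drift velocity: v = (m v_perp^2 / (2 q |B|^3)) * (B x grad|B|)."""
    Bmag = np.linalg.norm(B)
    return (m * v_perp ** 2 / (2 * q * Bmag ** 3)) * np.cross(B, grad_B)

# Illustrative MITL-like field magnitudes (placeholder values):
E = np.array([1e8, 0.0, 0.0])    # radial electric field, V/m
B = np.array([0.0, 10.0, 0.0])   # azimuthal magnetic field, T
v = exb_drift(E, B)
print(v)                         # drift directed along the line, m/s
```

Note that the E x B drift is independent of the particle's charge and mass, while the grad-B drift reverses sign between electrons and ions — a distinction that matters when tracking which species is lost where in the line.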
The need to reduce the carbon footprint from medium- and heavy-duty diesel engines is clear; low-carbon biofuels are a powerful means to achieve this. Liquid fuels are rapidly deployed because existing infrastructure can be utilized for their production, transport, and distribution. Their impact is unique as they can decrease the greenhouse gas (GHG) emissions of existing vehicles and in applications resistant to electrification. However, introducing new diesel-like bio-blends into the market is very challenging. At a minimum, it requires a comprehensive understanding of the life-cycle GHG emissions of the fuels, the implications for refinery optimization and economics, the fuel’s impact on the infrastructure, the effect on the combustion performance of current and future vehicle fleets, and finally the implications for exhaust aftertreatment systems and compliance with emissions regulations. Such understanding is sought within the Co-Optima project.
U.S. nuclear power facilities face increasing challenges in meeting dynamic security requirements caused by evolving and expanding threats while keeping costs reasonable to make nuclear energy competitive. The past approach has often included implementing security features after a facility has been designed and without attention to optimization, which can lead to cost overruns. Incorporating security in the design process can provide robust, cost-effective, and sufficient physical protection systems. The purpose of this work is both to develop a framework for the integration of security into the design phase of a microreactor and to increase the use of modeling and simulation tools to optimize the design of physical protection systems. Specifically, this effort focuses on integrating security into the design phase of a model microreactor that meets current Nuclear Regulatory Commission (NRC) physical protection requirements and providing advanced solutions to improve physical protection and decrease costs. A suite of tools, including SCRIBE3D©, PATHTRACE©, and Blender©, was used to model a hypothetical, generic domestic microreactor facility. Physical protection elements such as sensors, cameras, barriers, and guard forces were added to the model based on best practices for physical protection systems. Multiple outsider sabotage scenarios were examined with four to eight adversaries to determine security metrics. The results of this work will influence physical protection system designs and facility designs for U.S. domestic microreactors. This work will also demonstrate how a series of experimental and modeling capabilities across the Department of Energy (DOE) Complex can impact the design of and complete Safeguards and Security by Design (SSBD) for microreactors. The conclusions and recommendations in this document may be applicable to all microreactor designs.
This report documents an experimental program designed to investigate High Energy Arcing Fault (HEAF) phenomena for medium voltage electrical switchgear containing aluminum conductors. This report covers full-scale laboratory experiments using representative nuclear power plant (NPP) three-phase electrical equipment. Electrical, thermal, and pressure data were recorded for each experiment and documented in this report. This report covers four of the fourteen planned medium voltage electrical enclosure experiments. Subsequent reports will document the additional experiments performed in the future. The experiments were performed at KEMA Labs located in Chalfont, Pennsylvania. The experimental design, setup, and execution were completed by staff from the United States Nuclear Regulatory Commission (NRC), the National Institute of Standards and Technology (NIST), Sandia National Laboratories (SNL) and KEMA. In addition, representatives from the Electric Power Research Institute (EPRI) observed some of the experimental setups and execution. The HEAF experiments were performed on four near-identical units of General Electric metal-clad medium voltage switchgear. The three-phase arcing fault was initiated on the primary cable connection bus. All four experiments used the same system voltage (6.9 kV) but varied the current and duration. Real-time electrical operating conditions, including voltage, current and frequency, were measured during the experiments. Heat fluxes and incident energies were measured with plate thermometers and slug calorimeters at various locations around the electrical enclosures. Internal enclosure pressures were measured during the experiments. The experiments were documented with normal and high-speed videography, infrared imaging, and photography. 
Insights from the experimental series included timing information related to enclosure breach, event progression, mass loss measurements for electrodes and steel enclosures, peak pressure rise, particle analysis, along with visual and thermal imaging data to better understand and characterize the hazard. These results will be used in subsequent efforts to advance the state of knowledge related to HEAF.
This assessment analyzes Environment, Safety, and Health (ES&H) occurrences and Non-Occurrence Trackable Events (NOTEs) from fiscal year (FY) 2021. For this report, assessors used three primary methods for categorizing occurrence and NOTE data: issue categorization, DOE reporting criteria groups, and DOE cause codes. The FY 2021 Q1 occurrence and NOTE total was the lowest since this type of analysis began in FY 2018 Q4, following a downward trend from the FY 2019 Q3 high point. The FY 2021 Q2 occurrence and NOTE total was nearly double the FY 2021 Q1 total; occurrence totals declined slightly in Q3 and Q4, while NOTE totals remained the same. NOTEs in each of the final three quarters were more than double the amount from Q1. This assessment resulted in one observation. As COVID-19 vaccination rates increase and COVID-19 impacts on operations decrease, the number of workers on-site and the amount of activity-level work will increase. With these changes, focused attention on the following areas related to work planning and controls may reduce the probability of future events: hazard identification and analysis, compliance with standards, formality of operations, and job scoping.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physics of Plasmas
At the Z Facility at Sandia National Laboratories, the magnetized liner inertial fusion (MagLIF) program aims to study inertial confinement fusion in deuterium-filled gas cells by implementing a three-step process on the fuel: premagnetization, laser preheat, and Z-pinch compression. In the laser preheat stage, the Z-Beamlet laser focuses through a thin polyimide window to enter the gas cell and heat the fusion fuel. However, it is known that the presence of the few-μm-thick window reduces the amount of laser energy that enters the gas and causes window material to mix into the fuel. These effects are detrimental to achieving fusion; therefore, a windowless target is desired. The Lasergate concept is designed to accomplish this by "cutting" the window and allowing the interior gas pressure to push the window material out of the beam path just before the heating laser arrives. In this work, we present proof-of-principle experiments to evaluate a laser-cutting approach to Lasergate and explore the subsequent window and gas dynamics. Further, an experimental comparison of gas preheat with and without Lasergate gives clear indications of an energy deposition advantage using the Lasergate concept, as well as other observed and hypothesized benefits. While Lasergate was conceived with MagLIF in mind, the method is applicable to any laser or diagnostic application requiring direct line of sight to the interior of gas cell targets.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This white paper describes the work performed by Sandia National Laboratories in the New Mexico Small Business Agreement with BayoTech. BayoTech is a hydrogen generation and distribution company that is located in Albuquerque, NM. Their goal is to distribute hydrogen via their hydrogen systems which utilize the core design that was developed by Sandia. However, because the hydrogen economy is in its nascency, the safety and operation of the generating systems require independent validation. Additionally, in their pursuit of permitting at various locations around the nation, they require fire protection engineering support in discussions with local fire marshals and neighboring industrial entities. Sandia National Laboratories has subject matter expertise in hydrogen risk modeling of consequence (overpressure and dispersion) as well as fire protection engineering. Throughout this project, Sandia has worked with BayoTech to provide our expertise in these subject areas to facilitate the market entry of their hydrogen generation project to address the dire need for decarbonization due to climate change. The general approach of the support by Sandia is outlined in the main body, while the location specific evaluation for the Port of Stockton is contained in Appendix A.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This project will test the coupling of light emitted from silicon vacancy and nitrogen vacancy defects in diamond into additively manufactured photonic wire bonds toward integration into an "on-chip quantum photonics platform". These defects offer a room-temperature solid state solution for quantum information technologies but suffer from issues such as low activation rate and variable local environments. Photonic wire bonding will allow entanglement of pre-selected solid-state defects alleviating some of these issues and enable simplified integration with other photonic devices. These developments could prove to be key technologies to realize quantum secured networks for national security applications.
As part of the Advanced Simulation and Computing Verification and Validation (ASCVV) program, a 0.3-m diameter hydrocarbon pool fire with multiple fuels was modeled and simulated. In the study described in this report, systematic examination was performed on the radiation model used in a series of coupled Fuego/Nalu simulations. A calibration study was done with a medium-scale methanol pool fire and the effect of calibration traced throughout the radiation model. This analysis provided a more detailed understanding of the effect of radiation model parameters on each other and on other quantities in the simulations. Heptane simulation results were also examined using this approach and possible areas for further improvement of the models were identified. The effect of soot on radiative losses was examined by comparing heptane and methanol results.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Manufacturing Science and Engineering
Shear-based deformation processing by hybrid cutting-extrusion and free machining is used to make continuous strip, up to one millimeter thick, from low-workability AA6013-T6 in a single deformation step. The intense shear can impose effective strains as large as 2 in the strip without pre-heating of the workpiece. The creation of strip in a single step is facilitated by three factors inherent to the cutting deformation zone: highly confined shear deformation, in situ plastic deformation-induced heating, and high hydrostatic pressure. The hybrid cutting-extrusion, which employs a second die located across from the primary cutting tool to constrain the chip geometry, is found to produce strip with smooth surfaces (Sa < 0.4 μm) similar to cold-rolled strip. The strips show an elongated grain microstructure that is inclined to the strip surfaces – a shear texture – that is quite different from that of rolled sheet. Furthermore, this shear texture (inclination) angle is determined by the deformation path. Through control of deformation parameters such as strain and temperature, a range of microstructures and strengths could be achieved in the strip. When the cutting-based deformation was done at room temperature, without workpiece pre-heating, the starting T6 material was further strengthened by as much as 30% in a single step. In elevated-temperature cutting-extrusion, dynamic recrystallization was observed, resulting in a refined grain size in the strip. Implications for deformation processing of age-hardenable Al alloys into sheet form, and microstructure control therein, are discussed.
Physical Review B
We study the structure of the threading edge dislocations, or “elbows,” which are an essential component of the well-known herringbone reconstruction of the (111) surface of Au. Previous work had shown that these dislocations can be stabilized by long-range elastic relaxations into the bulk. However, the validity of the harmonic spring model that had been used to estimate the energies of the dislocations is uncertain. To enable a more refined model of the dislocation energetics, we have imaged the atomic structure of these dislocations using scanning tunneling microscopy. We find that the harmonic spring model does not adequately reproduce the observed structure. We are able to reproduce the structure, however, with a two-dimensional Frenkel-Kontorova (FK) model that uses a pairwise Morse potential to describe the interactions between the top-layer Au atoms on a rigid substrate. The parameters of the potential were obtained by fitting the energy of uniaxially compressed phases, or “stripes,” computed with density functional theory, as a function of surface Au density. Within this model, the formation of the threading dislocations remains unfavorable. However, the large forces on the substrate atoms near the threading-dislocation cores render the assumption of a completely rigid substrate questionable. Indeed, if the FK parameters are modified to account for the relaxation of just one more atomic layer, threading dislocations can, in principle, become favorable, even without bulk elastic relaxations. Additional evidence for a small elbow energy is that our computed change in the Au(111) surface stress tensor caused by the (√3 × 22) reconstruction is considerably smaller than previous estimates.
Journal of Applied Physics
The high-pressure dynamic response of titanium dioxide (TiO2) is not only of interest because of its numerous industrial applications but also because of its structural similarities to silica (SiO2). We performed plate impact experiments in a two-stage light gas gun, at peak stresses from 64 to 221 GPa, to determine the TiO2 response along the Hugoniot. The lower stress experiment at 64 GPa shows an elastic behavior followed by an elastic-plastic transition, whereas the high stress experiments above 64 GPa show a single wave structure. Previous shock studies have shown the presence of high-pressure phases (HPP) I (26 GPa) and HPP II (100 GPa); however, our data suggest that the HPP I phase is stable up to 150 GPa. Using a combination of data from our current study and our previous Z-data, we determine that TiO2 likely melts on the Hugoniot at 157 GPa. Furthermore, our data confirm that TiO2 is not highly incompressible as shown by a previous study.
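For context, the shocked states determined in such plate impact experiments are related to the initial state through the standard Rankine-Hugoniot jump conditions (a textbook relation, not specific to this study), where $U_s$ is the shock velocity, $u_p$ the particle velocity, and $V = 1/\rho$ the specific volume:

```latex
\begin{aligned}
\rho_0\,U_s &= \rho\,(U_s - u_p) && \text{(conservation of mass)}\\
P - P_0 &= \rho_0\,U_s\,u_p && \text{(conservation of momentum)}\\
E - E_0 &= \tfrac{1}{2}\,(P + P_0)\,(V_0 - V) && \text{(conservation of energy)}
\end{aligned}
```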
Geophysical Research Letters
Flooding impacts are on the rise globally, and concentrated in urban areas. Currently, there are no operational systems to forecast flooding at spatial resolutions that can facilitate emergency preparedness and response actions mitigating flood impacts. We present a framework for real-time flood modeling and uncertainty quantification that combines the physics of fluid motion with advances in probabilistic methods. The framework overcomes the prohibitive computational demands of high-fidelity modeling in real-time by using a probabilistic learning method relying on surrogate models that are trained prior to a flood event. This shifts the overwhelming burden of computation to the trivial problem of data storage, and enables forecasting of both flood hazard and its uncertainty at scales that are vital for time-critical decision-making before and during extreme events. The framework has the potential to improve flood prediction and analysis and can be extended to other hazard assessments requiring intense high-fidelity computations in real-time.
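As a toy illustration of the surrogate idea (not the authors' probabilistic learning method), a handful of pre-computed high-fidelity runs can be tabulated offline and queried in real time by interpolation; all names and numbers below are invented:

```python
import bisect

# Hypothetical offline "training" data: peak river discharge (m^3/s) mapped to
# flood depth (m) at one location, taken from pre-computed high-fidelity runs.
discharge = [100.0, 200.0, 400.0, 800.0]
depth =     [0.0,   0.4,   1.1,   2.5]

def surrogate_depth(q):
    """Real-time prediction by linear interpolation between stored runs."""
    if q <= discharge[0]:
        return depth[0]
    if q >= discharge[-1]:
        return depth[-1]
    i = bisect.bisect_right(discharge, q)          # first stored run above q
    t = (q - discharge[i - 1]) / (discharge[i] - discharge[i - 1])
    return depth[i - 1] + t * (depth[i] - depth[i - 1])

print(surrogate_depth(300.0))  # cheap query, no simulator call at forecast time
```

The expensive step (running the simulator) happens entirely before the flood event; the forecast-time query is trivial, which is the computational shift the abstract describes.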
The Big Adaptive Rotor (BAR) project was initiated by the U.S. Department of Energy (DOE) in 2018 with the goal of identifying novel technologies that can enable large (>100 meter [m]) blades for low-specific-power wind turbines. Five distinct tasks were completed to achieve this goal: 1. Assessed the trends, impacts, and value of low-specific-power wind turbines; 2. Developed a wind turbine blade cost-reduction road map study; 3. Completed research-and-development opportunity screening; 4. Performed detailed design and analysis; and, 5. Assessed low-cost carbon fiber. These tasks were completed by the national laboratory team consisting of Sandia National Laboratories (Sandia), the National Renewable Energy Laboratory (NREL), and Lawrence Berkeley National Laboratory.
Journal of Physical Chemistry A
Two-color infrared multiphoton dissociation (2C-IRMPD) spectroscopy is a technique that mitigates spectral distortions due to nonlinear absorption that is inherent to one-color IRMPD. We use a 2C-IRMPD scheme that incorporates two independently tunable IR sources, providing considerable control over the internal energy content and type of spectrum obtained by varying the trap temperature, the time delays and fluences of the two infrared lasers, and whether the first or second laser wavelength is scanned. In this work, we describe the application of this variant of 2C-IRMPD to conformationally complex peptide ions. The 2C-IRMPD technique is used to record near-linear action spectra of both cations and anions with temperatures ranging from 10 to 300 K. We also determine the conditions under which it is possible to record IR spectra of single conformers in a conformational mixture. Furthermore, we demonstrate the capability of the technique to explore conformational unfolding by recording IR spectra with widely varying internal energy in the ion. The protonated peptide ions YGGFL (NH3+-Tyr-Gly-Gly-Phe-Leu, Leu-enkephalin) and YGPAA (NH3+-Tyr-Gly-Pro-Ala-Ala) are used as model systems for exploring the advantages and disadvantages of the method when applied to conformationally complex ions.
Corrosion Science
Structural alloys may experience corrosion when exposed to molten chloride salts due to selective dissolution of active alloying elements. One way to prevent this is to make the molten salt reducing. For the KCl + MgCl2 eutectic salt mixture, pure Mg can be added to achieve this. However, Mg can form intermetallic compounds with nickel at high temperatures, which may cause alloy embrittlement. This work shows that an optimum level of excess Mg can be added to the molten salt that prevents corrosion of alloys such as 316H while not forming any detectable Ni-Mg intermetallic phases on Ni-rich alloy surfaces.
ACS Applied Materials and Interfaces
The ability to form pristine interfaces after etching and regrowth of GaN is a prerequisite for epitaxial selective area doping, which in turn is needed for the formation of lateral PN junctions and advanced device architectures. In this work, we report the electrical properties of etched-and-regrown GaN PN diodes using an in situ Cl-based precursor, tertiary butylchloride (TBCl). We demonstrated a regrowth diode with I–V characteristics approaching that from a continuously grown reference diode. The sources of unintentional contamination from the silicon (Si) impurity and the mediating effect of Si during the TBCl etching are also investigated in this study. Furthermore, this work points to the potential of in situ TBCl etching toward the realization of GaN lateral PN junctions.
Waveform modeling is crucial to improving our understanding of observed seismograms. Forward simulation of wavefields provides quantitative methods of testing interactions between complicated source functions and the propagation medium. Here, we discuss three experiments designed to improve understanding of high frequency seismic wave propagation. First, we compare observed and predicted travel times of crustal phases for a set of real observed earthquakes with calculations and synthetic seismograms. Second, we estimate the frequency content of a series of nearly co-located earthquakes of varying magnitude for which we have a relatively well-known 1D velocity model. Third, we apply stochastic perturbations on top of a 3D tomographic model and qualitatively assess how those variations map to differences in the seismograms. While different in scope and aim, these three vignettes illustrate the current state of crustal scale waveform modeling and the potential for future studies to better constrain the structure of the crust.
The following SNL document contains requested radiological survey information, as part of the documentation for the LANL MLU shipment performed by the LANL MLU team the week of October 18th. The surveys were performed in TA-5, October 19–21, 2021. The surveys were for the official shipments of 4 loaded TRUPACTs and 2 empty TRUPACTs. Surveys were completed after the trucks were hitched to their respective trailers.
ACS Applied Energy Materials
The galvanostatic intermittent titration technique (GITT) is widely used to evaluate solid-state diffusion coefficients in electrochemical systems. However, the existing analysis methods for GITT data require numerous assumptions, and the derived diffusion coefficients typically are not independently validated. To investigate the validity of the assumptions and derived diffusion coefficients, we employ a direct-pulse fitting method for interpreting the GITT data that involves numerically fitting an electrochemical pulse and subsequent relaxation to a one-dimensional, single-particle, electrochemical model coupled with non-ideal transport to directly evaluate diffusion coefficients. Our non-ideal diffusion coefficients, which are extracted from GITT measurements of the intercalation regime of FeS2 and independently verified through discharge predictions, prove to be 2 orders of magnitude more accurate than ideal diffusion coefficients extracted using conventional methods. We further extend our model to a polydisperse set of particles to show the validity of a single-particle approach when the modeled radius is proportional to the total volume-to-surface-area ratio of the system.
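For contrast with the direct-pulse fitting approach, the conventional (ideal) GITT analysis that such work is benchmarked against reduces to the short-pulse Weppner-Huggins expression; the sketch below uses our own variable names and invented example values:

```python
import math

def gitt_diffusivity(tau, m_b, v_m, mol_mass, area, dE_s, dE_t):
    """Conventional short-pulse GITT estimate of the diffusion coefficient:
    D = (4 / (pi * tau)) * (m_b * V_m / (M_b * S))**2 * (dE_s / dE_t)**2,
    valid only when tau << L**2 / D. SI units assumed throughout:
    tau = pulse duration, m_b = active mass, v_m = molar volume,
    mol_mass = molar mass, area = electrode area,
    dE_s / dE_t = steady-state / transient voltage changes."""
    return (4.0 / (math.pi * tau)) * (m_b * v_m / (mol_mass * area)) ** 2 \
        * (dE_s / dE_t) ** 2

# The estimate scales with the square of the voltage-change ratio -- one of
# the idealizations a direct numerical pulse fit avoids.
d1 = gitt_diffusivity(600.0, 1e-3, 2.5e-5, 0.12, 1e-4, 0.005, 0.050)
d2 = gitt_diffusivity(600.0, 1e-3, 2.5e-5, 0.12, 1e-4, 0.010, 0.050)
print(d2 / d1)  # ≈ 4: doubling dE_s quadruples the estimated D
```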
This work demonstrated that both NbN and Nb make good electrodes for stabilizing the orthorhombic phase of Hf0.6Zr0.4O2 ferroelectric films. Wake-up occurs in < 100 cycles. Pr can be as high as 30 µC/cm2; values of 14 and 18 µC/cm2, respectively, were measured here. Capacitance measurements further suggest that an orthorhombic phase can be stabilized. Adding a linear dielectric of modest thickness can tune the Pr and reduce leakage.
Area-selective atomic layer deposition (AS-ALD) is an appealing bottom-up fabrication technique that can produce atomic-scale device features, overcoming challenges in current industrial techniques such as edge alignment errors. TiCl4 is a common thermal ALD precursor for TiO2 thin films, which are appealing candidates for DRAM capacitors due to their excellent dielectric constants. Hydrogen and chlorine termination passivate the Si surface, allowing for selective deposition of TiCl4 onto OH-terminated areas. However, selectivity loss occurs after several ALD cycles, as Ti oxide nucleates onto surface defects on the Cl- and H-Si resists. Previously, the use of H-Si as an ALD resist has been studied extensively, but less work has focused on the chemical forces driving nucleation, especially for Cl-Si. Here, the formation of defect nuclei accompanying selectivity loss during TiO2 ALD with TiCl4 and water was investigated on the (100) and (111) crystal surfaces of hydrogenated, chlorinated, and oxidized Si.
This report documents the activities in a preliminary phase of development for three models: 1) waste package breach model, 2) fuel/basket degradation model, and 3) dual-purpose canister (DPC) crush model. The waste package breach model describes the coupling of mass flow, heat transport, and canister shell deformation in response to a heat-generating (criticality) event. The fuel/basket degradation model describes potential weakening and disaggregation of the DPC structure from corrosion, possibly accelerated by seismic ground motion. Progressively degraded three-dimensional (3D) configurations of the fuel, basket, and shell are generated for future analysis of reactivity (with as-loaded DPC fuel characteristics). Another important application for the fuel/basket degradation model is validation of the two stylized degradation cases currently being used by other investigators for analysis of the as-loaded DPC inventory under disposal conditions. The DPC crush model investigates stability of a typical DPC after breach of the disposal overpack allows fluids from the repository near-field environment to penetrate and externally pressurize the canister shell. Preliminary results show that large deformation of the DPC could occur for external pressure on the order of 10 to 15 MPa, or the shell could be stable (not collapse) with pressure of 20 MPa or greater if the basket plates are fully welded at the connections.
The security of the electric grid and supporting energy systems is crucial to national security. One of the complexities in analyzing the security of energy systems is the safety consequences that may result from accidents. For energy systems, the goal is to ensure that they operate as intended and that any consequences are mitigated or prevented. The integration of safety and security is paramount to protecting these systems from attacks and ensuring that large consequences are prevented. This report describes an integrated safety and security methodology to evaluate cybersecurity events that can lead to large consequences. This novel approach first describes how Systems-Theoretic Process Analysis (STPA) provides a digital causal analysis for Bayesian Networks (BNs). The use of STPA causal analysis provides a systematic approach to constructing BNs that adequately model cyber scenarios that result in consequences. When combined with the technical principles described in Risk-Informed Management of Enterprise Systems (RIMES), a comprehensive risk-informed cybersecurity analysis results that allows decision-makers to prioritize systems that most impact risk.
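To make the STPA-to-BN idea concrete, here is a minimal, entirely hypothetical three-node network (attack → unsafe control action → consequence) evaluated by exhaustive enumeration; the structure and all probabilities are illustrative placeholders, not values from the report:

```python
# Hypothetical causal chain: an attack may induce an unsafe control action
# (UCA, an STPA concept), which may in turn produce a large consequence.
P_attack = 0.1
P_uca_given = {True: 0.6, False: 0.01}    # P(UCA | attack present/absent)
P_cons_given = {True: 0.3, False: 0.001}  # P(large consequence | UCA yes/no)

def p_consequence():
    """Marginal P(large consequence), by enumerating all parent states."""
    total = 0.0
    for attack in (True, False):
        pa = P_attack if attack else 1.0 - P_attack
        for uca in (True, False):
            pu = P_uca_given[attack] if uca else 1.0 - P_uca_given[attack]
            total += pa * pu * P_cons_given[uca]
    return total

print(p_consequence())
```

Real BN tooling scales this enumeration to many nodes; the point here is only that STPA's causal scenarios map directly onto the parent-child structure of such a network.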
Frontiers in Energy Research
Solar thermochemical hydrogen (STCH) production is a promising method to generate carbon neutral fuels by splitting water utilizing metal oxide materials and concentrated solar energy. The discovery of materials with enhanced water-splitting performance is critical for STCH to play a major role in the emerging renewable energy portfolio. While perovskite materials have been the focus of many recent efforts, materials screening can be time consuming due to the myriad chemical compositions possible. This can be greatly accelerated by computationally screening materials parameters including oxygen vacancy formation energy, phase stability, and electron effective mass. In this work, the perovskite Gd0.5La0.5Co0.5Fe0.5O3 (GLCF) was computationally determined to be a potential water splitter, and its activity was experimentally demonstrated. During water splitting tests with a thermal reduction temperature of 1,350°C, hydrogen yields of 101 μmol/g and 141 μmol/g were obtained at re-oxidation temperatures of 850 and 1,000°C, respectively, with increasing production observed during subsequent cycles. This is a significant improvement over similar compounds studied before (La0.6Sr0.4Co0.2Fe0.8O3 and LaFe0.75Co0.25O3), which suffer from performance degradation with subsequent cycles. As confirmed with high temperature X-ray diffraction (HT-XRD) patterns under inert and oxidizing atmospheres, the GLCF mainly maintained its phase, while some decomposition to Gd2-xLaxO3 was observed.
Journal of Mechanical Design
Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective BO formalism, called srMO-BO-3GP, to solve multi-objective optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each GP is assigned a different task. The first GP is used to approximate a single objective computed from the multi-objective definition, the second GP is used to learn the unknown constraints, and the third is used to learn the uncertain Pareto frontier. At each iteration, a multi-objective augmented Tchebycheff function is adopted to convert the multi-objective problem to a single-objective one, where regularization with a ridge term is also introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the convergence and diversity of the Pareto frontier through an acquisition function balancing exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
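The augmented Tchebycheff scalarization mentioned above has a standard closed form; a minimal sketch follows, in which the weights, ideal point, and ρ are illustrative values and the paper's additional ridge regularization is omitted:

```python
def augmented_tchebycheff(f, w, z_star, rho=0.05):
    """Scalarize an objective vector f given weights w and ideal point z_star:

        max_i w_i |f_i - z_i*|  +  rho * sum_i w_i |f_i - z_i*|

    The small rho term smooths the max and penalizes weakly Pareto-optimal
    points that a pure Tchebycheff function cannot distinguish."""
    devs = [wi * abs(fi - zi) for fi, wi, zi in zip(f, w, z_star)]
    return max(devs) + rho * sum(devs)

# Example: two objectives, equal weights, ideal point at the origin.
print(augmented_tchebycheff([2.0, 1.0], [0.5, 0.5], [0.0, 0.0]))
```

Sweeping the weight vector w over the simplex and minimizing the scalarized function for each w is what traces out different points on the Pareto frontier.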
Nanoscale
Two-dimensional (2D) materials with robust ferromagnetic behavior have attracted great interest because of their potential applications in next-generation nanoelectronic devices. Aside from graphene and transition metal dichalcogenides, Bi-based layered oxide materials are a group of prospective candidates due to their superior room-temperature multiferroic response. Here, an ultrathin Bi3Fe2Mn2O10+δ layered supercell (BFMO322 LS) structure was deposited on an LaAlO3 (LAO) (001) substrate using pulsed laser deposition. Microstructural analysis suggests that a layered supercell (LS) structure consisting of two-layer-thick Bi-O slabs and two-layer-thick Mn/Fe-O octahedra slabs was formed on top of the pseudo-perovskite interlayer (IL). Robust saturation magnetization values of 129 and 96 emu cm-3 are achieved in a 12.3 nm thick film in the in-plane (IP) and out-of-plane (OP) directions, respectively. The ferromagnetism, dielectric permittivity, and optical bandgap of the ultrathin BFMO films can be effectively tuned by thickness and morphology variation. In addition, the anisotropy of all ultrathin BFMO films switches from OP dominating to IP dominating as the thickness increases. This study demonstrates that the ultrathin BFMO film, with tunable multifunctionalities, is a promising candidate for novel integrated spintronic devices.
The following SNL document contains requested forms related to used equipment, as part of the documentation for the MLU shipment that was performed by the LANL MLU team and Crane Services personnel.
The following SNL document contains requested radiological survey information, as part of the documentation for the MLU shipment being performed by the LANL MLU team. The survey was performed in TA-5, on October 19th, 2021. This survey was for radiological coverage for the disassembly of two TRUPACTs, the assembly and loading of their payloads, and the reassembly of the TRUPACTs.
Sandia National Labs (SNL)-designed, portable chemical warfare agent (CWA) detection systems consist of three stages: collection, separation, and detection. We use microfabrication technologies to miniaturize these stages and to reduce the overall size, weight, power, and (potentially) cost of the final system. Our newest system consists of a multi-dimensional separation stage and a miniature ion mobility spectrometer (IMS) detector for unprecedented system sensitivity, selectivity, and depth of target list.
As new and modernized systems are fielded by the U.S. and Russia, and as China expands its nuclear stockpile with 21st-century technology, it is important to ask when new technologies in the nuclear domain actually matter. When do emerging capabilities and replacements of existing systems change the military realities of the world's nuclear powers and the lived experiences of the people in these countries? Specifically, which attributes of emerging weapon systems may impact nuclear strategic stability, or, in even narrower terms, which attributes of newly fielded military systems may make nuclear conflict more likely or less likely? It is through this understanding that policy makers, voters, and the broader nuclear weapons community can evaluate when and how to respond to emerging technology while reducing the likelihood of nuclear escalation.
Compound Semiconductor
Vertical GaN p-n diodes combine excellent efficiencies with incredibly fast protection from unwanted electromagnetic pulses.
Timing spread between the thirty-six Saturn modules affects the peak electrical power delivered to the Bremsstrahlung diode and can affect vacuum power flow and impedance behavior of the load. To reduce the module spread, a new megavolt gas-insulated closing switch was developed, employing design techniques developed for the Z-machine laser-triggered switches while retaining Saturn's simpler electrical triggering. Two modules were temporarily outfitted with the new switches and fired separately into local resistive loads (instead of the usual Saturn electron-beam load). The goals of the experiments were to establish a reliable operating point and to measure switch time jitter at that point. The target switch reliability is less than one pre-fire in one thousand switch-shots, with a timing standard deviation of 4 nanoseconds. The switches met both requirements, although the number of tests at the chosen point is limited.
The recent introduction of a new generation of "smart NICs" has provided new accelerator platforms that include CPU cores or reconfigurable fabric in addition to traditional networking hardware and packet offloading capabilities. While there are currently several proposals for using these smartNICs for low-latency, in-line packet processing operations, there remains a gap in knowledge as to how they might be used as computational accelerators for traditional high-performance applications. This work uses benchmarks and mini-applications to evaluate the possible benefits of using a smartNIC as a compute accelerator for HPC applications. We investigate NVIDIA's current-generation BlueField-2 card, which includes eight Arm CPUs along with a small amount of storage, and we test the networking and data movement performance of these cards compared to a standard Intel server host. We then detail how two different applications, YASK and miniMD, can be modified to make more efficient use of the BlueField-2 device, with a focus on overlapping computation and communication for operations like neighbor building and halo exchanges. Our results show that while the overall compute performance of these devices is limited, using them with a modified miniMD algorithm allows for potential speedups of 5 to 20% over the host CPU baseline with no loss in simulation accuracy.
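The computation/communication overlap pattern referred to above can be sketched in a few lines; the thread and sleep below merely stand in for an asynchronous halo exchange handed off to the NIC (nothing here is BlueField-specific):

```python
import threading
import time

def halo_exchange(result):
    """Stand-in for a network transfer of boundary ("halo") data."""
    time.sleep(0.05)                 # simulated network latency
    result["halo"] = [0.0, 0.0]      # placeholder received boundary values

def overlap_step(interior):
    """Start the halo exchange, compute on interior data while it is in
    flight, then join before the halo values are needed."""
    result = {}
    t = threading.Thread(target=halo_exchange, args=(result,))
    t.start()                                    # communication in flight...
    interior_sum = sum(x * x for x in interior)  # ...while we compute
    t.join()                                     # wait before using halo data
    return interior_sum + sum(result["halo"])

print(overlap_step([1.0, 2.0, 3.0]))
```

In the real applications the "thread" is the smartNIC's own Arm cores performing neighbor building or halo packing, so the host CPU spends the latency window on useful interior work instead of waiting.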
A fade study was performed for the Thermo Electron Model 8825 whole-body dosimeter over a six-month period. The fade equation was evaluated using the method described in the Thermo dose calculation algorithm documentation (Thermo, 2007).
The following SNL document contains requested radiological survey information, as part of the documentation for the MLU shipment being performed by the LANL MLU team. The surveys were performed in TA-5, on October 11th - 15th, 2021. These surveys were of the shipping containers, the dunnage container, MLU equipment trailer, and contracted mobile crane.
Neuromorphic computers are hardware systems that mimic the brain’s computational process phenomenology. This is in contrast to neural network accelerators, such as the Google TPU or the Intel Neural Compute Stick, which seek to accelerate the fundamental computation and data flows of neural network models used in the field of machine learning. Neuromorphic computers emulate the integrate and fire neuron dynamics of the brain to achieve a spiking communication architecture for computation. While neural networks are brain-inspired, they drastically oversimplify the brain’s computation model. Neuromorphic architectures are closer to the true computation model of the brain (albeit, still simplified). Neuromorphic computing models herald a 1000x power improvement over conventional CPU architectures. Sandia National Labs is a major contributor to the research community on neuromorphic systems by performing design analysis, evaluation, and algorithm development for neuromorphic computers. Space-based remote sensing development has been a focused target of funding for exploratory research into neuromorphic systems for their potential advantage in that program area; SNL has led some of these efforts. Recently, neuromorphic application evaluation has reached the NA-22 program area. This same exploratory research and algorithm development should penetrate the unattended ground sensor space for SNL’s mission partners and program areas. Neuromorphic computing paradigms offer a distinct advantage for the SWaP-constrained embedded systems of our diverse sponsor-driven program areas.
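The integrate-and-fire dynamics mentioned above can be sketched with a simple leaky integrate-and-fire neuron in a few lines of Python; all parameter values are illustrative, not tied to any particular neuromorphic hardware:

```python
def lif_spikes(current, steps=100, dt=1.0, tau=20.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: membrane voltage follows
    dv/dt = (-v + I) / tau, and the neuron emits a spike and resets to 0
    whenever v crosses v_thresh. Returns the list of spike time steps."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v + current) / tau   # leaky integration of input current
        if v >= v_thresh:
            spikes.append(t)             # spike event (the only "output")
            v = 0.0                      # reset after firing
    return spikes

print(len(lif_spikes(2.0)))  # constant supra-threshold input -> regular spiking
```

The power advantage of neuromorphic hardware comes from this event-driven behavior: a sub-threshold input (e.g. `lif_spikes(0.5)`) produces no spikes and hence no downstream communication or computation.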
With this work, we aim to speed up simulation and reduce the computational complexity of Converter Dominated Power Systems (CDPS) within an acceptable accuracy.
In this position paper, we discuss exciting recent advancements in sketching algorithms applied to distributed systems. That is, we look at randomized algorithms that simultaneously reduce the data dimensionality, offer potential privacy benefits, while maintaining verifiably high levels of algorithm accuracy and performance in multi-node computational setups. We look at next steps and discuss the applicability to real systems.
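A concrete example of the kind of sketching algorithm referred to here is a Gaussian random projection (in the Johnson-Lindenstrauss style), which compresses a vector while approximately preserving its Euclidean norm; the dimensions and data below are arbitrary:

```python
import math
import random

def gaussian_sketch(x, k, seed=0):
    """Project x (length d) to k dimensions via a random Gaussian map scaled
    by 1/sqrt(k), so Euclidean norms are preserved in expectation."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0.0, 1.0) * xi for xi in x) / math.sqrt(k)
            for _ in range(k)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# 1000-dimensional vector compressed to 400 dimensions; the norm survives
# within a few percent with high probability.
x = [1.0 if i % 7 == 0 else 0.2 for i in range(1000)]
sx = gaussian_sketch(x, k=400)
print(norm(x), norm(sx))
```

Because only the seed and the k sketch values need to be shared between nodes, such transforms reduce data movement, and the projected data reveals the original coordinates only statistically, which is the source of the privacy benefits mentioned above.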
The objective of this project was to eliminate and/or render bulk agent unusable by a threat entity via neutralization and/or polymerization of the bulk agent using minimal quantities of additives. We proposed the in situ neutralization and polymerization of bulk chemical agents (CAs) by performing reactions in the existing CA storage container via wet chemical approaches using minimal quantities of chemical-based materials. This approach does not require sophisticated equipment, fuel to power generators, electricity to power equipment, or large quantities of decontaminating materials. Utilizing the CA storage container as the batch reactor significantly reduces the amount of logistical resources required. Fewer personnel are required since no sophisticated equipment needs to be set up, configured, or operated. Employing the CA storage container as the batch reactor enables the capability to add materials to multiple containers in a short period of time, as opposed to processing one container at a time for typical batch reactor approaches. In scenarios where a quick response is required, the material can be added to all the CA containers and left to react on its own without intervention. Any attempt to filter the CA plus material solution will increase the rate of reaction due to increased agitation of the solution.