Liquefied petroleum gas (LPG) is a viable, cleaner alternative to the traditional diesel fuel used in buses and other heavy-duty vehicles and could play a role in helping the US meet its emission-reduction goals. While the LPG industry has focused its efforts on developing vehicles and fueling infrastructure, safe operating parameters must also be established for maintenance facilities that service LPG-fueled vehicles. Current safety standards aid in the design of maintenance facilities, but additional quantitative analysis is needed to confirm that safeguards are adequate and to suggest improvements where needed. In this report we aim to quantify the flammable mass associated with propane releases from vehicle-mounted fuel vessels within enclosed garages. Furthermore, we seek to characterize harm mitigation as a function of ventilation configuration and facility layout. To accomplish this, we leverage validated computational resources at Sandia National Laboratories to simulate release scenarios representative of real-world vehicles and maintenance facilities. Flow solvers are used to predict the dynamics of fuel systems as well as the evolution of propane during release events. From our simulated results we observe that both inflow and outflow ventilation locations play a critical role in reducing flammable cloud size and potential overpressure values during a possible combustion event.
Rapid growth in data, computational methods, and computing power is driving a remarkable revolution in what is variously termed machine learning (ML), statistical learning, computational learning, and artificial intelligence. In addition to highly visible successes in machine-based natural language translation, playing the game Go, and self-driving cars, these new technologies also have profound implications for computational and experimental science and engineering, as well as for the exascale computing systems that the Department of Energy (DOE) is developing to support those disciplines. Not only do these learning technologies open up exciting opportunities for scientific discovery on exascale systems, they also appear poised to have important implications for the design and use of exascale computers themselves, including high-performance computing (HPC) for ML and ML for HPC. The overarching goal of the ExaLearn co-design project is to provide exascale ML software for use by Exascale Computing Project (ECP) applications, other ECP co-design centers, and DOE experimental facilities and leadership class computing facilities.
Theristis, Marios; Livera, Andreas; Micheli, Leonardo; Ascencio-Vasquez, Julian; Makrides, George; Georghiou, George E.; Stein, Joshua S.
A linear performance drop is generally assumed during the photovoltaic (PV) lifetime. However, operational data demonstrate that the PV module degradation rate (Rd) is often nonlinear, which, if neglected, may increase financial uncertainty. Although nonlinear behavior has been the subject of numerous publications, it was only recently that statistical models able to detect change-points and extract multiple Rd values from PV performance time series were introduced. A comparative analysis of six open-source libraries, which can detect change-points and calculate nonlinear Rd, is presented in this article. Since the real Rd and change-point locations are unknown in field data, 960 synthetic datasets from six locations and two PV module technologies were generated using different aggregation and normalization decisions and nonlinear degradation rate patterns. The results demonstrated that coarser temporal aggregation (i.e., monthly vs. weekly), temperature correction, and PV module technologies and climates with lower seasonality all benefit change-point detection and Rd extraction. This also raises a concern that statistical models typically deployed for Rd analysis may be highly climate- and technology-dependent. The comparative analysis of the six approaches demonstrated median mean absolute errors (MAE) ranging from 0.06 to 0.26%/year, given a maximum absolute Rd of 2.9%/year. The median MAE in change-point position detection varied from 3.5 months to 6 years.
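As an illustration of the change-point step described above, the following sketch applies the open-source ruptures library to a synthetic piecewise-linear degradation series and extracts one Rd value per detected segment. It is a minimal sketch only: the signal parameters, noise level, and penalty value are invented for the example, and ruptures stands in here for whichever change-point library a practitioner might choose.

```python
import numpy as np
import ruptures as rpt  # generic change-point detection library

rng = np.random.default_rng(0)

# Synthetic monthly performance index over 10 years: -0.5 %/yr for the
# first 5 years, then -2.5 %/yr, plus measurement noise (values invented).
t = np.arange(120) / 12.0                      # time in years
rate = np.where(t < 5.0, -0.005, -0.025)       # fractional loss per year
perf = 1.0 + np.cumsum(rate) / 12.0 + rng.normal(0.0, 0.002, t.size)

# Detect slope changes by segmenting the month-to-month differences.
bkps = rpt.Pelt(model="l2").fit(np.diff(perf)).predict(pen=5e-5)

# Fit a linear degradation rate (Rd, %/yr) within each detected segment.
start = 0
for end in bkps:
    slope = np.polyfit(t[start:end + 1], perf[start:end + 1], 1)[0]
    print(f"months {start:3d}-{end:3d}: Rd ~ {100 * slope:+.2f} %/yr")
    start = end
```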
Calcium-silicate-hydrate (C–S–H) represents a key microstructural phase that governs the mechanical properties of concrete at a large scale. Defects in the C–S–H phase are also responsible for the poor ductility and low tensile strength of concrete. Manipulating the microstructure of C–S–H can lead to new cementitious materials with improved structural performance. This paper presents an experimental investigation aiming to characterize a new polymer-modified synthetic calcium-silicate-hydrate (C–S–H)/styrene-butadiene rubber (SBR) binder. The new C–S–H/SBR binder is produced by calcining calcium carbonate and mixing it with fumed silica (SiO2), deionized water, and SBR. Mechanical, physical, chemical, and microstructural characterization was conducted to measure the properties of the new hardened C–S–H binder. Results from the experimental investigation demonstrate the ability to engineer a new C–S–H binder with low elastic modulus and improved toughness and bond strength by controlling the SBR content and the method of C–S–H synthesis. The new binder suggests the possible development of a new family of low-modulus silica-polymer binders that might fit many engineering applications such as cementing oil and gas wells.
Adams, Brian H.; Bohnhoff, William J.; Dalbey, Keith R.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell W.; Hough, Patricia D.; Hu, Kenneth T.; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad A.; Seidl, Daniel T.; Stephens, John A.; Swiler, Laura P.; Laros, James H.; Winokur, Justin G.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
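For readers unfamiliar with Dakota's black-box coupling model, the sketch below shows the shape of a typical analysis driver: Dakota writes a parameters file, invokes the driver, and reads back a results file. It uses the dakota.interfacing Python module shipped with Dakota for the file handling; the descriptors 'x1', 'x2', and 'obj_fn' and the algebraic stand-in function are hypothetical and would need to match the descriptors in the user's Dakota input.

```python
#!/usr/bin/env python3
# Hypothetical Dakota analysis driver. Dakota invokes this script with the
# parameters-file and results-file names as its two command-line arguments.
import sys
import dakota.interfacing as di

params, results = di.read_parameters_file(sys.argv[1], sys.argv[2])

# Evaluate a stand-in simulation at the requested point; 'x1' and 'x2'
# are hypothetical variable descriptors from the Dakota input file.
x1, x2 = params["x1"], params["x2"]
f = (x1 - 1.0) ** 2 + 100.0 * (x2 - x1 ** 2) ** 2  # Rosenbrock stand-in

# Return the response under the hypothetical descriptor 'obj_fn'.
results["obj_fn"].function = f
results.write()
```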
The HyRAM+ software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen, methane, and propane infrastructure and transportation systems. HyRAM+ is designed to facilitate the use of state-of-the-art science and engineering models to conduct robust, repeatable assessments of safety, hazards, and risk. HyRAM+ includes generic probabilities for equipment failures, probabilistic models for the impact of heat flux on humans and structures, and experimentally validated first-order models of release and flame physics. HyRAM+ integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, characterizing hazards (thermal effects from jet fires, overpressure effects from delayed ignition), and assessing impact on people and structures. HyRAM+ is developed at Sandia National Laboratories to support the development and revision of national and international codes and standards. HyRAM+ is research software in active development, and thus its models and data may change. This report will be updated at appropriate developmental intervals. This document provides a description of the methodology and models contained in HyRAM+ version 4.0. The most significant change in HyRAM+ version 4.0 relative to HyRAM version 3.1 is the incorporation of other alternative fuels, namely methane (as a proxy for natural gas) and propane, into the toolkit. This change necessitated significant changes to the installable graphical user interface as well as to the back-end Python models. A second major change is the inclusion of physics models for the overpressure associated with the delayed ignition of an unconfined jet/plume of flammable gas.
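To make the structure of such an assessment concrete, the sketch below shows the generic risk arithmetic that a HyRAM+-style QRA composes for each scenario; all numbers are invented, and this is not HyRAM+'s API or data.

```python
# Illustrative QRA arithmetic for a single leak scenario (invented values,
# not HyRAM+'s API): expected harm = scenario frequency x conditional
# ignition probability x conditional probability of a fatality.
leak_frequency = 2.0e-4       # leaks per system-year (invented)
p_ignition = 0.05             # ignition probability given a leak (invented)
p_fatality_given_fire = 0.10  # e.g., from a heat-flux harm model (invented)

pll = leak_frequency * p_ignition * p_fatality_given_fire
print(f"Potential loss of life: {pll:.1e} fatalities per system-year")
# A full QRA sums contributions like this over leak sizes and scenarios.
```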
Simultaneous data on the quasi-static compaction and electrical conductivity of porous, binary powder mixtures have been collected as a function of bulk density. The powder mixtures consist of a metal conductor, either titanium or iron, an insulator, and pores filled with ambient air. The data show that the conductivity, expressed in terms of relative bulk density and metal volume fraction, depends on the conductor type and on the size and shape of the conductor particles. Finite element models using particle domains generated by the discrete element method are used to simulate the bulk conductivity near its threshold, while the general effective media equation is used to model the conductivity across the compression regime.
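For reference, the general effective media (GEM) equation mentioned above is commonly written (following McLachlan's formulation) as

$$(1-\phi)\,\frac{\sigma_l^{1/t}-\sigma_m^{1/t}}{\sigma_l^{1/t}+A\,\sigma_m^{1/t}}+\phi\,\frac{\sigma_h^{1/t}-\sigma_m^{1/t}}{\sigma_h^{1/t}+A\,\sigma_m^{1/t}}=0,\qquad A=\frac{1-\phi_c}{\phi_c},$$

where $\phi$ is the conductor volume fraction, $\phi_c$ the percolation threshold, $\sigma_h$ and $\sigma_l$ the conductivities of the conducting and insulating phases, $\sigma_m$ the effective bulk conductivity, and $t$ a critical exponent.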
A new liquid sample adapter design for the Explosive Destruction Systems has been developed. The design features a semi-transparent fluoropolymer tube coupled to the vessel's high-pressure sample valve with a closing quick-connect fitting. The sample tubes are the pressure-limiting component. The tubes were hydrostatically tested to establish failure characteristics and pressure limits at ambient and operational temperatures. A group of tubes from two manufacturing lots was tested to determine the consistency of the commercial part. An upper pressure limit was determined for typical operations.
The annual Energy Storage Pricing Survey (ESPS) series is designed to provide a standardized reference system price for various energy storage technologies across a range of power and energy ratings. This is an essential first step in comparing the usage costs and total cost of ownership of systems based on different technologies. The final system prices are developed from data gathered in an extensive set of interviews with representatives across the manufacturing and project development value chain, supplemented by available published data. This information is incorporated into a consistent methodology that allows pricing information to be included at whatever level it was obtained, ranging from individual components to fully installed systems. The ESPS system pricing methodology breaks down the cost of an energy storage system into the following component categories: the storage module; the balance of system; the power conversion system; the energy management system; and the engineering, procurement, and construction costs. By evaluating each of the component costs separately, a more accurate system cost can be developed that provides internal pricing consistency between different project sizes using the same technology, as well as between different technologies that utilize similar components.
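A minimal sketch of this component roll-up, with entirely invented unit costs, shows how separating the cost categories yields prices that scale consistently with a system's power and energy ratings.

```python
# Illustrative ESPS-style cost roll-up (all unit costs invented).
power_mw, energy_mwh = 2.0, 8.0  # example 2 MW / 8 MWh system

components_usd = {
    "storage module":          200_000 * energy_mwh,  # $/MWh (invented)
    "balance of system":        40_000 * energy_mwh,  # $/MWh (invented)
    "power conversion system":  90_000 * power_mw,    # $/MW  (invented)
    "energy management system": 25_000,               # fixed  (invented)
    "EPC":                     120_000 * energy_mwh,  # $/MWh (invented)
}
total = sum(components_usd.values())
print(f"Installed system price: ${total:,.0f} "
      f"(${total / (energy_mwh * 1000):,.0f}/kWh)")
```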
The need to reduce the carbon footprint of medium- and heavy-duty diesel engines is clear; low-carbon biofuels are a powerful means to achieve this. Liquid fuels can be deployed rapidly because existing infrastructure can be utilized for their production, transport, and distribution. Their impact is unique in that they can decrease the greenhouse gas (GHG) emissions of existing vehicles and of applications resistant to electrification. However, introducing new diesel-like bio-blends into the market is very challenging. At a minimum, it requires a comprehensive understanding of the life-cycle GHG emissions of the fuels, the implications for refinery optimization and economics, the fuel's impact on the infrastructure, the effect on the combustion performance of current and future vehicle fleets, and finally the implications for exhaust aftertreatment systems and compliance with emissions regulations. Such understanding is sought within the Co-Optima project.
The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia National Laboratories for focused collaborative research on Department of Energy (DOE) computer and computational science problems. The institute provides an opportunity for university researchers to learn about problems in computer and computational science at DOE laboratories and to help transfer the results of their research to programs at the labs. Some specific CSRI research interest areas are: scalable solvers, optimization, algebraic preconditioners, graph-based, discrete, and combinatorial algorithms, uncertainty estimation, validation and verification methods, mesh generation, dynamic load-balancing, virus and other malicious-code defense, visualization, scalable cluster computers, beyond Moore's Law computing, exascale computing tools and application design, reduced order and multiscale modeling, parallel input/output, and theoretical computer science. The CSRI Summer Program is organized by CSRI and includes a weekly seminar series and the publication of a summer proceedings.
The complexity and associated uncertainties of atmospheric-turbine-wake interactions make accurate wind farm predictions of generator power and other important quantities of interest (QoIs) challenging, even with state-of-the-art high-fidelity atmospheric and turbine models. A comprehensive computational study was undertaken with consideration of simulation methodology, parameter selection, and mesh refinement on atmospheric, turbine, and wake QoIs to identify capability gaps in the validation process. For neutral atmospheric boundary layer conditions, the massively parallel large eddy simulation (LES) code Nalu-Wind was used to produce high-fidelity computations for experimental validation against high-quality meteorological, turbine, and wake measurement data collected at the Department of Energy/Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) facility located at Texas Tech University's National Wind Institute. The wake analysis showed that the simulated lidar model implemented in Nalu-Wind successfully captured the wake profile trends observed in the experimental lidar data.
Rattlesnake is a combined-environments, multiple-input/multiple-output (MIMO) control system for dynamic excitation of structures under test. It provides capabilities to control multiple responses on the test article using multiple exciters and various control strategies. Rattlesnake is written in the Python programming language to facilitate MIMO vibration research by allowing users to prescribe custom control laws for the controller. Rattlesnake can target multiple hardware devices, or even perform synthetic control to simulate a test virtually. Rattlesnake has been used to execute control problems with up to 200 response channels and 12 drives. This document describes the functionality, architecture, and usage of the Rattlesnake controller for combined-environments testing.
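The custom-control-law capability can be pictured with a schematic frequency-domain update step; the function signature below is hypothetical and is not Rattlesnake's actual interface, but it captures the kind of MIMO correction such a control law performs.

```python
import numpy as np

# Hypothetical MIMO control-law update (not Rattlesnake's actual API).
# At each frequency line, correct the drive spectra by pushing the response
# error through the pseudoinverse of the identified FRF matrix.
def update_drives(frf, target_spec, measured_spec, drives):
    """frf: (n_freq, n_resp, n_drive) complex FRF matrix;
    target_spec, measured_spec: (n_freq, n_resp) complex response spectra;
    drives: (n_freq, n_drive) complex drive spectra."""
    new_drives = np.empty_like(drives)
    for k in range(frf.shape[0]):
        error = target_spec[k] - measured_spec[k]
        # Least-squares drive correction at frequency line k.
        new_drives[k] = drives[k] + np.linalg.pinv(frf[k]) @ error
    return new_drives
```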
Reverse engineering (RE) analysts struggle to address critical questions about the safety of binary code accurately and promptly, and their supporting program analysis tools are simply wrong sometimes. The analysis tools have to approximate in order to provide any information at all, but this means that they introduce uncertainty into their results. And those uncertainties chain from analysis to analysis. We hypothesize that exposing sources, impacts, and control of uncertainty to human binary analysts will allow the analysts to approach their hardest problems with high-powered analytic techniques that they know when to trust. Combining expertise in binary analysis algorithms, human cognition, uncertainty quantification, verification and validation, and visualization, we pursue research that should benefit binary software analysis efforts across the board. We find a strong analogy between RE and exploratory data analysis (EDA); we begin to characterize sources and types of uncertainty found in practice in RE (both in the process and in supporting analyses); we explore a domain-specific focus on uncertainty in pointer analysis, showing that more precise models do help analysts answer small information flow questions faster and more accurately; and we test a general population with domain-general sudoku problems, showing that adding "knobs" to an analysis does not significantly slow down performance. This document describes our explorations in uncertainty in binary analysis.
This report describes an assessment of flamelet-based soot models in a laminar ethylene coflow flame with a good selection of measurements suitable for model validation. Overall flow field and temperature predictions were in good agreement with available measurements. Soot profiles were in good agreement within the flame except near the centerline, where imperfections in the acetylene-based soot-production model are expected to be greatest. The model struggled to predict the transition between non-sooting and sooting conditions, with non-negligible soot emissions predicted even down to small flow rates or flame sizes. This suggests a possible deficiency in the soot oxidation models that might alter the predicted amount of smoke emitted from flames, though this study cannot quantify the magnitude of the effect for large fires.
This report summarizes molecular and continuum simulation studies focused on developing physics-based predictive models for the evolution of polymer molecular order during the nonlinear processing flows of additive manufacturing. Our molecular simulations of polymer elongation flows identified novel mechanisms of fluid dissipation for various polymer architectures that might be harnessed to enhance material processability. In order to predict the complex thermal and flow history of polymers in realistic additive manufacturing processes, we have developed and deployed a high-performance mesh-free hydrodynamics module in Sandia's LAMMPS software. This module, called RHEO (short for Reproducing Hydrodynamics and Elastic Objects), hybridizes an updated-Lagrangian reproducing-kernel method for complex fluids with a bonded particle method (BPM) to capture solidification and solid objects in multiphase flows. In combination, our two methods allow rapid, multiscale characterization of the hydrodynamics and molecular evolution of polymers in realistic processing geometries.
Hansen, Nils H.; Fan, Xuefeng; Sun, Wenyu; Gao, Yi; Chen, Bingjie; Pitsch, Heinz; Yang, Bin
To further understand the combustion characteristics and reaction pathways of acyclic ethers, the oxidation of di-n-propyl ether (DPE) was investigated in a jet-stirred reactor (JSR) coupled to a photoionization molecular-beam mass spectrometer. The experiments were carried out at near-atmospheric pressure (700 Torr) over a temperature range of 425–850 K. Based on the experimental data and previous studies of ether oxidation, a new kinetic model was constructed and used to interpret the oxidation chemistry of DPE. In DPE oxidation, high reactivity at low temperatures and two negative temperature coefficient (NTC) zones were observed. These behaviors are explained here using the measured species information and modeling analyses: the two NTC zones are caused by the competition between chain-branching and chain-termination reactions of the fuel itself and of specific oxidation intermediates, respectively. Furthermore, the general requirements for double-NTC behavior are discussed. A variety of crucial fuel-specific C6 species, such as ketohydroperoxides and diones, were detected in the species pool of DPE oxidation. Their formation pathways are elucidated based on rate-of-production (ROP) analyses. Propanal was identified as the most abundant small-molecule intermediate, and its related reactions have an important impact on the oxidation process of DPE. Both acetic acid and propionic acid were detected in high concentrations. A new formation pathway for propionic acid is proposed and incorporated into the kinetic model to achieve a more accurate prediction of propionic acid mole fractions.
Graph algorithms enable myriad large-scale applications, including cybersecurity, social network analysis, resource allocation, and routing. The scalability of current graph algorithm implementations on conventional computing architectures is hampered by the demise of Moore's law. We present a theoretical framework for designing and assessing the performance of graph algorithms executing in networks of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than that of conventional high-performance computing systems. We employ our framework to design and analyze new spiking algorithms for shortest path and dynamic programming problems. Our neuromorphic algorithms are message-passing algorithms that rely critically on data movement for computation. For a fair and rigorous comparison with conventional algorithms and architectures, which is challenging but paramount, we develop new models of data movement in conventional computing architectures. This allows us to prove polynomial-factor advantages, even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a rigorous asymptotic computational advantage for neuromorphic computing.
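The flavor of such spiking algorithms can be conveyed by a small discrete-time simulation in which each edge weight becomes an integer spike-propagation delay, so the first spike arrival time at a neuron equals its shortest-path distance from the source. This is an illustrative sketch of the general idea, not the paper's algorithm or any neuromorphic platform's API.

```python
def spiking_shortest_paths(n, edges, source):
    """First-spike arrival times in a delay-encoded spiking network.

    n: number of neurons; edges: (u, v, w) tuples with integer delay w >= 1;
    source: neuron injected with a spike at t = 0. The returned arrival
    times equal the shortest-path distances from the source.
    """
    out = {u: [] for u in range(n)}
    for u, v, w in edges:
        out[u].append((v, w))
    arrival = [float("inf")] * n
    in_flight = {0: [source]}   # time -> neurons receiving a spike then
    pending, t = 1, 0
    while pending:
        for v in in_flight.pop(t, []):
            pending -= 1
            if arrival[v] == float("inf"):   # a neuron fires only once
                arrival[v] = t
                for nbr, w in out[v]:        # propagate with delay w
                    in_flight.setdefault(t + w, []).append(nbr)
                    pending += 1
        t += 1
    return arrival

# Example: distances from neuron 0 in a small weighted digraph.
print(spiking_shortest_paths(4, [(0, 1, 2), (0, 2, 5), (1, 2, 1), (2, 3, 2)], 0))
# -> [0, 2, 3, 5]
```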
The following report contains data and data summaries collected for the SkySun LLC elevated ganged PV arrays. These arrays were fabricated as a series of PV panels in various orientations, suspended by cables, at the National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories (SNL). Beginning in February 2021, Sandia personnel collected power and accelerometer data for these arrays to assess the design and operational efficacy of varying ganged-PV configurations. The purpose of the power data collection was to compare the power-collection capability of the various array orientations as a function of time of day, time of year, and the daily solar direct normal irradiance (DNI). The power data were collected as measurements of the power output from the various series strings; the project team measured direct current (DC) voltage and current from the respective arrays. The accelerometer data were collected to identify potentially destructive mode shapes that could arise in each of the arrays when exposed to high winds. This allowed the team to evaluate whether the specific array orientations suspended from cables constitute a safe design. All data collection was performed during calendar year 2021.
This report provides basic background data on the Manipulate-2020 code, which is used for processing and "manipulating" nuclear data in support of radiation metrology applications. The code is hosted in an open GitHub repository and is available to the general nuclear data community.
CSPlib is an open source software library for analyzing general ordinary differential equation (ODE) systems and detailed chemical kinetic ODE/DAE systems. It relies on the computational singular perturbation (CSP) method for the analysis of these systems.
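In outline, CSP rewrites the right-hand side of the ODE system in a time-scale-ordered basis; a standard statement of the decomposition (symbols as in the usual CSP literature) is

$$\frac{d\mathbf{y}}{dt}=\mathbf{g}(\mathbf{y})=\sum_{i=1}^{N}\mathbf{a}_i\,f^i,\qquad f^i=\mathbf{b}^i\cdot\mathbf{g}(\mathbf{y}),\qquad \mathbf{b}^i\cdot\mathbf{a}_j=\delta^i_j,$$

where the basis vectors $\mathbf{a}_i$ (typically constructed from the eigenvectors of the Jacobian of $\mathbf{g}$) separate the dynamics into exhausted fast modes and the slow modes that drive the system, enabling diagnostics such as mode amplitudes and participation indices.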
In this article, we derive the vacuum electric fields within specific cylindrically symmetric magnetically insulated transmission lines (MITLs) in the limit of an infinite speed of light for an arbitrary time-dependent current. We focus our attention on two types of MITLs: the radial MITL and a spherically curved MITL. We then simulate the motion of charged particles, such as electrons, present in these MITLs due to the vacuum fields. In general, the motion of charged particles due to the vacuum fields is highly nonlinear since the fields are nonlinear functions of spatial coordinates and depend on an arbitrary time-dependent current drive. Using guiding center theory, however, one can describe the gross particle kinetics using a combination of $\textbf {E} \times \textbf {B}$ and $\nabla B$ drifts. In addition, we compare our approximate inner MITL field models and particle kinetics with those from a fully electromagnetic simulation code. We find that the agreement between the approximate model and the electromagnetic simulations is excellent.
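For reference, the two guiding-center drifts invoked above take the standard forms

$$\mathbf{v}_{E\times B}=\frac{\mathbf{E}\times\mathbf{B}}{B^{2}},\qquad \mathbf{v}_{\nabla B}=\frac{m v_{\perp}^{2}}{2qB^{3}}\,\mathbf{B}\times\nabla B,$$

where $m$ and $q$ are the particle mass and charge and $v_{\perp}$ is the particle speed perpendicular to $\mathbf{B}$.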
Atomically precise ultradoping of silicon is possible with atomic resists, area-selective surface chemistry, and a limited set of hydride and halide precursor molecules, in a process known as atomic precision advanced manufacturing (APAM). It is desirable to broaden this set of precursors to include dopants with organic functional groups; here we consider aluminium alkyls to expand the applicability of APAM. We explore the impurity content and selectivity that result from using trimethyl aluminium and triethyl aluminium precursors on Si(001) to ultradope with aluminium through a hydrogen mask. Comparison of the methylated and ethylated precursors helps us understand the impact of hydrocarbon ligand selection on incorporation surface chemistry. Combining scanning tunneling microscopy and density functional theory calculations, we assess the limitations of both classes of precursor and extract general principles relevant to each.