In this work, we introduce the concept of virtual transmission using large-scale energy storage systems. We also develop an optimization framework to maximize the monetized benefits of energy storage providing virtual transmission in wholesale markets. These benefits often come from relieving congestion on a transmission line, including both a reduction in energy cost for the downstream loads and an increase in production revenue for the upstream generators of the congested line. A case study using ISO New England data demonstrates the framework.
We present an optical wavelength division multiplexer enabled by a ring resonator tuned by MEMS electrostatic actuation. Analysis, simulation, and fabrication are discussed, with results showing controlled tuning over more than one free spectral range (FSR).
A methodology for the design of control systems for wide-area power systems using solid-state transformers (SSTs) as actuators is presented. Due to its ability to isolate the primary side from the secondary side, an SST can limit the propagation of disturbances, such as frequency and voltage deviations, from one side to the other. This paper studies a control strategy based on SSTs deployed in the transmission grid to improve the resilience of power grids to disturbances. The control design is based on an empirical model of an SST that is appropriate for control design in grid-level applications. A simulation example illustrating the improvement provided by an SST in a large-scale power system via a reduction in load shedding due to severe disturbances is presented.
Multivariate design optimization using three procedures was performed on a low-Reynolds-number (order 100,000) turbine blade to maximize lift over drag (L/D). The turbine blade was created to interface with AeroMINE, a novel wind energy harvester that has no external moving parts. To speed up the optimization process, an interpolation-based procedure using the Proper Orthogonal Decomposition (POD) method was used. This method was used in two ways: by itself (POD-i) and as an initial guess for a full-order model (FOM) solution that is truncated before it reaches full convergence (POD-i with truncated FOM). To compare the results of these methods and their efficiency, optimization using a FOM was also conducted. It was found that there exists a trade-off between efficiency and optimal result. The FOM found the highest L/D of 28.87, while POD-i found an L/D of 16.19 and POD-i with truncated FOM found an L/D of 19.11. Nonetheless, POD-i and POD-i with truncated FOM were 32,302 and 697 times faster than the FOM, respectively.
High-pressure Type 2 hoop-wrapped, thick-walled vessels are commonly used at hydrogen refueling stations. Vessels installed at stations circa 2010 are now reaching their design cycle limit and are being retired, which is the motivation for exploring life extension opportunities. The number of design cycles is based on a fatigue life calculation using a fracture mechanics assessment according to ASME Section VIII, Division 3, which assumes each cycle spans the full pressure range identified in the User's Design Specification for a given pressure vessel design; however, assessment of service data reveals that the actual pressure cycles are less severe than those assumed in the design specification. A case study was performed in which in-service pressure cycles were used to re-calculate the design cycles. It was found that less than 1% of the allowable crack extension was consumed when crack growth was assessed using in-service pressures, compared to the original design fatigue life from 2010. Additionally, design cycles were assessed on the 2010-era vessels based on design curves from the recently approved ASME Code Case 2938, which were based on fatigue crack growth rate relationships over a broader range of K. Using the Code Case 2938 design curves yielded nearly 2.7 times more design cycles compared to the 2010 vessel original design basis. The benefits of using in-service pressure cycles to assess the design life and the implications of using the design curves in Code Case 2938 are discussed in detail in this paper.
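As a rough illustration of how logged in-service pressure cycles can be compared against a design spectrum in a fatigue crack growth tally, the sketch below integrates a simple Paris-law relation cycle by cycle. The coefficients, geometry factor, crack sizes, and cycle data are placeholder assumptions and do not reproduce the ASME Section VIII, Division 3 procedure or the Code Case 2938 curves.

```python
import numpy as np

# Paris-law fatigue crack growth: da/dN = C * (dK)^m (placeholder coefficients;
# units here are mm/cycle and MPa*sqrt(m), NOT the ASME design-curve values).
C, m = 1.0e-11, 3.0
Y = 1.12              # assumed geometry factor for a shallow surface flaw
a0 = 1.0e-3           # assumed initial crack depth, m
a_allow = 5.0e-3      # assumed allowable crack depth, m

# Hypothetical hoop-stress ranges per cycle (MPa): full design range vs. logged in-service cycles.
design_cycles = np.full(100_000, 300.0)
inservice_cycles = np.random.default_rng(0).uniform(80.0, 220.0, size=100_000)

def final_crack_depth(a_start, stress_ranges):
    """Integrate Paris-law growth cycle by cycle, stopping at the allowable depth."""
    a = a_start
    for ds in stress_ranges:
        dK = Y * ds * np.sqrt(np.pi * a)        # stress intensity factor range, MPa*sqrt(m)
        a += C * dK**m * 1.0e-3                 # convert mm/cycle to m/cycle
        if a >= a_allow:
            break
    return a

print("design spectrum    :", final_crack_depth(a0, design_cycles))
print("in-service spectrum:", final_crack_depth(a0, inservice_cycles))
```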
Persistent memory (PMEM) devices can achieve performance comparable to DRAM while providing significantly more capacity. This has made the technology compelling as an expansion to main memory. Rethinking PMEM as a storage device can offer a high-performance buffering layer where HPC applications can temporarily, but safely, store data. However, modern parallel I/O libraries, such as HDF5 and pNetCDF, are complicated and introduce significant software and metadata overheads when persisting data to these storage devices, wasting much of their potential. In this work, we explore the potential of PMEM as storage through pMEMCPY: a simple, lightweight, and portable I/O library for storing data in persistent memory. We demonstrate that our approach is up to 2x faster than other popular parallel I/O libraries under real workloads.
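For context only, here is a generic memory-mapped persistence sketch in Python illustrating the PMEM-as-storage idea; it is not pMEMCPY's API, and the file path is a hypothetical DAX-mounted persistent-memory location.

```python
import mmap
import numpy as np

# Hypothetical path; on a real system this would sit on a DAX-mounted PMEM filesystem.
PATH = "/mnt/pmem0/checkpoint.bin"

data = np.arange(1_000_000, dtype=np.float64)    # example application buffer

with open(PATH, "w+b") as f:
    f.truncate(data.nbytes)                      # size the backing file
    with mmap.mmap(f.fileno(), data.nbytes) as mm:
        mm[:] = data.tobytes()                   # copy the buffer into the mapping
        mm.flush()                               # force the stores toward the persistence domain
```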
We present a deep learning image reconstruction method, AirNet-SNL, for sparse-view computed tomography. It combines iterative reconstruction and convolutional neural networks with end-to-end training. Our model reduces streak artifacts from filtered back-projection with limited data, and it trains on randomly generated shapes. This work shows promise for generalizing learned image reconstruction.
For over 50 years, performance assessment (PA) has been used throughout the world to inform decisions concerning the storage and management of radioactive waste. Some of the applications of PA include environmental assessments of nuclear disposal sites, development of methodologies and regulations for the long-term storage of nuclear waste, regulatory assessment for site selection and licensing at the Waste Isolation Pilot Plant and Yucca Mountain, and safety assessments for nuclear reactors. PA begins with asking the following questions: 1) What can happen? 2) How likely is it to happen? 3) What are the consequences when it does happen? and 4) What is the uncertainty of the first three questions? This work presents an approach for applying PA methodologies to geothermal resource evaluation that is adaptable and conformable to all phases of geothermal energy production. It provides a consistent and transparent framework for organizing data and information in a manner that supports decision making and accounts for uncertainties. The process provides a better understanding of the underlying risks that can jeopardize the development and/or performance of a geothermal project and identifies the best pathways for reducing or eliminating those risks. The approach is demonstrated through hypothetical examples of both hydrothermal and enhanced geothermal systems (EGS).
We study the problem of decentralized classification conducted over a network of mobile sensors. We model the multiagent classification task as a hypothesis testing problem where each sensor has to almost surely find the true hypothesis from a finite set of candidate hypotheses. Each sensor makes noisy local observations and can also share information about its observations with other mobile sensors in communication range. In order to address the state-space explosion in the multiagent system, we propose a decentralized synthesis procedure that guarantees that each sensor will almost surely converge to the true hypothesis even in the presence of faulty or malicious agents. Additionally, we employ a contract-based synthesis approach that produces trajectories designed to empirically increase information-sharing between mobile sensors in order to converge faster to the true hypothesis. We implement and test the approach in experiments with both physical and simulated hardware to showcase its scalability and viability in real-world systems. Finally, we run a Gazebo/ROS simulated experiment with 12 agents to demonstrate the scalability of our approach in large environments with many agents.
Melting and flowing of aluminum alloys is a challenging problem for computational codes. Unlike most common substances, the surface of an aluminum melt exhibits rapid oxidation and elemental migration and, like a bag filled with water, can remain two-dimensionally unruptured while the metal inside is flowing. Much of the historical work in this area focuses on friction welding and neglects the surface behavior due to the high stress of the application. We are concerned with low-stress melting applications, in which the bag behavior is more relevant. Adapting models and measurements from the literature, we have developed a formulation for the viscous behavior of the melt based on an abstraction of historical measurements, and a construct for the bag behavior. These models are implemented and demonstrated in a 3D level-set multi-phase solver package, SIERRA/Aria. A series of increasingly complex simulation scenarios are illustrated that help verify implementation of the models in conjunction with other required model components like convection, radiation, gravity, and surface interactions.
The FAIR principles of open science (Findable, Accessible, Interoperable, and Reusable) have had transformative effects on modern large-scale computational science. In particular, they have encouraged more open access to and use of data, an important consideration as collaboration among teams of researchers accelerates and the use of workflows by those teams to solve problems increases. How best to apply the FAIR principles to workflows themselves, and software more generally, is not yet well understood. We argue that the software engineering concept of technical debt management provides a useful guide for application of those principles to workflows, and in particular that it implies reusability should be considered as 'first among equals'. Moreover, our approach recognizes a continuum of reusability where we can make explicit and selectable the tradeoffs required in workflows for both their users and developers. To this end, we propose a new abstraction approach for reusable workflows, with demonstrations for both synthetic workloads and real-world computational biology workflows. Through application of novel systems and tools that are based on this abstraction, these experimental workflows are refactored to rightsize the granularity of workflow components to efficiently fill the gap between end-user simplicity and general customizability. Our work makes it easier to selectively reason about and automate the connections between trade-offs across user and developer concerns when exposing degrees of freedom for reuse. Additionally, by exposing fine-grained reusability abstractions we enable performance optimizations, as we demonstrate on both institutional-scale and leadership-class HPC resources.
In power grid operation, optimal power flow (OPF) problems are solved several times per day to find economically optimal generator setpoints that balance given load demands. Ideally, we seek an optimal solution that is also “N-1 secure”, meaning the system can absorb contingency events such as transmission line or generator failure without loss of service. Current practice is to solve the OPF problem and then check a subset of contingencies against heuristic values, resulting in, at best, suboptimal solutions. Unfortunately, online solution of the OPF problem including the full N-1 contingencies (i.e., a two-stage stochastic programming formulation) is intractable for even modest-sized electrical grids. To address this challenge, this work presents an efficient method to embed N-1 security constraints into the solution of the OPF by using Neural Network (NN) models to represent the security boundary. Our approach introduces a novel sampling technique, as well as a tunable parameter that allows operators to balance the conservativeness of the security model within the OPF problem. Our results show that we are able to solve contingency formulations for larger grids than reported in the literature, using non-linear programming (NLP) formulations with embedded NN models solved to local optimality. Solutions found with the NN constraint have marginally increased computational time but are more secure against contingency events.
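A toy sketch of the general idea follows, assuming a tiny pre-trained one-hidden-layer network whose output approximates a post-contingency security margin; the network weights, the two-generator cost data, and the margin threshold are illustrative placeholders, not the paper's model or grid.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-generator economic dispatch: minimize cost subject to load balance.
cost = np.array([20.0, 35.0])          # $/MWh (assumed)
load = 150.0                           # MW (assumed)

# Placeholder weights of a "pre-trained" NN g(p) approximating the N-1 security margin.
W1 = np.array([[0.02, -0.01], [-0.015, 0.03]])
b1 = np.array([0.1, -0.2])
W2 = np.array([0.5, 0.8])
b2 = -0.4

def security_margin(p):
    """NN surrogate of the security boundary; margin >= eps is treated as N-1 secure."""
    h = np.tanh(W1 @ p + b1)
    return W2 @ h + b2

eps = 0.05   # tunable conservativeness parameter

res = minimize(
    lambda p: cost @ p,                                       # generation cost
    x0=np.array([75.0, 75.0]),
    bounds=[(0.0, 120.0), (0.0, 120.0)],
    constraints=[
        {"type": "eq", "fun": lambda p: p.sum() - load},      # power balance
        {"type": "ineq", "fun": lambda p: security_margin(p) - eps},  # embedded NN constraint
    ],
)
print("dispatch:", res.x, "| margin:", security_margin(res.x))
```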
Fault tolerance poses a major challenge for future large-scale systems. Current research on fault tolerance has been principally focused on mitigating the impact of uncorrectable errors: errors that corrupt the state of the machine and require a restart from a known good state. However, correctable errors occur much more frequently than uncorrectable errors and may be even more common on future systems. Although an application can safely continue to execute when correctable errors occur, recovery from a correctable error requires the error to be corrected and, in most cases, information about its occurrence to be logged. The potential performance impact of these recovery activities has not been extensively studied in HPC. In this paper, we use simulation to examine the relationship between recovery from correctable errors and application performance for several important extreme-scale workloads. Our paper contains what is, to the best of our knowledge, the first detailed analysis of the impact of correctable errors on application performance. Our study shows that correctable errors can have a significant impact on application performance for future systems. We also find that, although current efforts concentrate on reducing correctable error rates, reducing the time required to log individual errors may have a greater impact on overheads at scale. Finally, this study outlines the error frequency and duration targets needed to keep correctable-error overheads similar to those of today's systems. This paper provides critical analysis of and insight into the overheads of correctable errors and offers practical advice to system administrators and hardware designers seeking to fine-tune performance to application and system characteristics.
Job scheduling aims to minimize the turnaround time of submitted jobs while catering to the resource constraints of High Performance Computing (HPC) systems. The challenge with scheduling is that it must honor job requirements and priorities while actual job run times are unknown. Although approaches have been proposed that use classification techniques or machine learning to predict job run times for scheduling purposes, these approaches do not provide a technique for reducing underprediction, which has a negative impact on scheduling quality. A common cause of underprediction is that the distribution of durations for a job class is multimodal, causing the average job duration to fall below the expected duration of longer jobs. In this work, we propose the Top Percent predictor, which uses a hierarchical classification scheme to provide better accuracy for job run time predictions than the user-requested time. Our predictor addresses multimodal job distributions by making a prediction that is higher than a specified percentage of the observed job run times. We integrate the Top Percent predictor into scheduling algorithms and evaluate the performance using schedule quality metrics found in the literature. To accommodate the user policies of HPC systems, we propose priority metrics that account for job flow time, job resource requirements, and job priority. The experiments demonstrate that the Top Percent predictor outperforms related approaches when evaluated using our proposed priority metrics.
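A minimal sketch of the underlying idea (predicting above a specified fraction of previously observed run times for a job class) is shown below; the class keys, percentile, and data are assumptions and do not reproduce the paper's hierarchical classification scheme.

```python
from collections import defaultdict
import numpy as np

class TopPercentLikePredictor:
    """Predict a run time that exceeds a chosen fraction of observed run times per job class."""

    def __init__(self, top_percent=90):
        self.top_percent = top_percent
        self.history = defaultdict(list)

    def observe(self, job_class, runtime_s):
        self.history[job_class].append(runtime_s)

    def predict(self, job_class, requested_s):
        runs = self.history.get(job_class)
        if not runs:                       # no history: fall back to the user request
            return requested_s
        estimate = np.percentile(runs, self.top_percent)
        return min(estimate, requested_s)  # never exceed the user-requested wall time

# Example: a bimodal job class where the mean would underpredict the longer mode.
pred = TopPercentLikePredictor(top_percent=90)
for t in [120, 130, 110, 3500, 3600, 3550]:
    pred.observe(("userA", "sim.exe"), t)
print(pred.predict(("userA", "sim.exe"), requested_s=7200))
```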
Transient operating temperatures often allow a lens cell to expand before the lens itself, potentially leading to stresses well in excess of the lens tensile strength. The transients thus affect the calculation of the athermal bond-line thickness, estimates of which have historically been based on thermal equilibrium conditions. In this paper, we present both analytical expressions and finite-element modeling results for thermal-transient bond-line design. Our results show that a cell with a large CTE and a bond thickness based on thermal transients is the best strategy for reducing the tensile stress on the bonded lens over a range of operating temperatures.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Milewicz, Reed M.; Pirkelbauer, Peter; Soundararajan, Prema; Ahmed, Hadia; Skjellum, Tony
A source-to-source (S2S) compiler is a translator that accepts the source code of a program written in one programming language as input and produces equivalent source code in the same or a different programming language. S2S techniques are commonly used to enable fluent translation between high-level programming languages, to perform large-scale refactoring operations, and to facilitate instrumentation for dynamic analysis. Negative perceptions about S2S's applicability in High Performance Computing (HPC) are studied and evaluated here. This is the first study to bring to light reasons why scientists do not use source-to-source techniques for HPC. The primary audience for this paper is those considering S2S technology in their HPC application work.
This paper develops a power packet network (PPN) for integrating wave energy converter (WEC) arrays into microgrids. First, a simple AC resistor-inductor-capacitor (RLC) circuit operating at a power factor of one is introduced and shown to be a PPN. Next, an AC inverter-based network is analyzed and shown to be a PPN. This basic idea is then used to asynchronously connect a WEC array to an idealized microgrid without additional energy storage. Specifically, N WECs can be physically positioned such that incoming regular waves produce an output emulating an N-phase AC system, so that the PPN output power is constant. The final example demonstrates the benefits of utilizing PPN phasing by analyzing a grid-to-substation-to-WEC-array configuration. The numerical simulation results show that, for ideal physical WEC buoy phasing of 60 and 120 degrees, the energy storage system (ESS) peak power and energy capacity requirements are minimized.
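For reference, the constant aggregate power of ideally phased WEC outputs follows from the standard balanced N-phase identity; the notation below is generic and not taken from the paper:

$$
\sum_{k=0}^{N-1} P\,\sin^{2}\!\left(\omega t + \frac{\pi k}{N}\right)
= \frac{P}{2}\sum_{k=0}^{N-1}\left[1-\cos\!\left(2\omega t + \frac{2\pi k}{N}\right)\right]
= \frac{NP}{2}, \qquad N\ge 2,
$$

since the double-frequency cosine terms are equally spaced in phase and cancel. The summed output is therefore constant in time, which is why ideal buoy phasing (e.g., offsets of 0, 60, and 120 degrees for three WECs) minimizes the ESS peak power and energy capacity requirements.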
For systems that require complete metallic enclosures, it is impossible to power and communicate with interior electronics using conventional electromagnetic techniques. Instead, pairs of ultrasonic transducers can be used to send and receive elastic waves through the enclosure, forming an equivalent electrical transmission line that bypasses the Faraday cage effect. These mechanical communication systems introduce the possibility for electromechanical crosstalk between channels on the same barrier, in which receivers output erroneous electrical signals due to ultrasonic guided waves generated by transmitters in adjacent communication channels. To minimize this crosstalk, this work investigates the use of a phononic crystal/metamaterial machined into the barrier via periodic grooving. Barriers with simultaneous ultrasonic power and data transfer are fabricated and tested to measure the effect of grooving on crosstalk between channels.
Kohtanen, Eetu; Sugino, Christopher; Allam, Ahmed; El-Kady, Ihab F.
Ultrasonic transducers can be leveraged to transmit power and data through metallic enclosures such as Faraday cages for which standard electromagnetic methods are infeasible. The design of these systems features a number of variables that must be carefully tweaked for optimal data and power transfer rate and efficiency. The objective of this work is to present a toolkit, COMET, standing for Computational Optimization of Mechanical Energy Transduction, in which the design process and analysis of such transducer systems is streamlined. The toolkit features flexible tools for introducing an arbitrary number of backing/bonding layers, material libraries, parameter sweeps, and optimization.
This paper focuses on the role of the Marine Renewable Energy (MRE) Software Knowledge Hub on the Portal and Repository for Information on Marine Renewable Energy (PRIMRE). The MRE Software Knowledge Hub provides online services for MRE software users and developers, and seeks to develop assessments and recommendations for improving MRE software in the future. Online software discovery platforms, known as the Code Hub and the Code Catalog, are provided. The Code Hub is a collection of open-source MRE software that includes a landing page with search functionality, linked to files hosted on the MRE Code Hub GitHub organization. The Code Catalog is a searchable online platform for discovery of useful (open-source or commercial) software packages, tools, codes, and other software products. To gather information about the existing MRE software landscape, a software survey is being performed, the preliminary results of which are presented herein. Initially, the data collected in the MRE software survey will be used to populate the MRE Software Knowledge Hub on PRIMRE, and future work will use data from the survey to perform a gap analysis and develop a vision for future software development. Additionally, as one of PRIMRE’s roles is to support development of MRE software within project partners, a body of knowledge relating to best practices has been gathered. An early draft of new guidance developed from this knowledge is presented.
Several applications, such as underwater vehicles or waste containers, require the ability to transfer data from transducers enclosed by metallic structures. In these cases, Faraday shielding makes electromagnetic transmission highly inefficient, and suggests the employment of ultrasonic transmission as a promising alternative. While ultrasonic data transmission by piezoelectric transduction provides a practical solution, the amplitude of the transmitted signal strongly depends on acoustic resonances of the transmission line, which limits the bandwidth over which signals are sent and the rate of data transmission. The objective of this work is to investigate piezoelectric acoustic transducer configurations that enable data transmission at a relatively constant amplitude over large frequency bands. This is achieved through structural modifications of the transmission line, which includes layering of the transducers, as well as the introduction of electric circuits connected to both transmitting and receiving transducers. Both strategies lead to strong enhancements in the available bandwidth and show promising directions for the design of effective acoustic transmission across metallic barriers.
The displacement of rotational generation and the consequent reduction in system inertia is expected to have major stability and reliability impacts on modern power systems. Fast-frequency support strategies using energy storage systems (ESSs) can be deployed to maintain the inertial response of the system, but information regarding the inertial response of the system is critical for the effective implementation of such control strategies. In this paper, a moving horizon estimation (MHE)-based approach for online estimation of the inertia constant of low-inertia microgrids is presented. The inertia constant was estimated from frequency measurements obtained locally at the ESS's phase-locked loop in response to a non-intrusive excitation signal injected by the ESS. The proposed MHE formulation was first tested in a linearized power system model, followed by tests in a modified microgrid benchmark from Cordova, Alaska. Even under moderate measurement noise, the technique was able to estimate the inertia constant of the system well within ±20% of the true value. Estimates provided by the proposed method could be utilized for applications such as fast-frequency support, adaptive protection schemes, and planning and procurement of spinning reserves.
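As a highly simplified sketch of the estimation idea (ordinary least squares over a sliding window rather than a full constrained MHE), the code below recovers the inertia constant from noisy frequency measurements and a known per-unit ESS power injection via the swing equation (2H/f0)·df/dt ≈ Δp, with damping neglected; the signal, noise level, and window length are assumptions.

```python
import numpy as np

f0, H_true = 60.0, 4.0                 # Hz, s (assumed "true" inertia constant)
dt = 0.01
t = np.arange(0.0, 10.0, dt)

# Non-intrusive ESS excitation (per-unit power pulse) and the resulting frequency response.
dp = 0.05 * ((t > 1.0) & (t < 3.0))                    # assumed probing signal
dfdt_true = (f0 / (2.0 * H_true)) * dp                 # swing equation, damping neglected
f = f0 + np.cumsum(dfdt_true) * dt
f_meas = f + np.random.default_rng(1).normal(0.0, 0.002, size=t.size)   # PLL measurement noise

def estimate_H(f_meas, dp, window=200):
    """Sliding-window least-squares estimate of H from df/dt versus delta-p."""
    dfdt = np.gradient(f_meas, dt)
    H_est = []
    for k in range(window, len(t)):
        x, y = dp[k - window:k], dfdt[k - window:k]
        if np.abs(x).sum() < 1e-6:                     # skip windows with no excitation
            continue
        slope = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
        H_est.append(f0 / (2.0 * slope))
    return np.median(H_est)

print("estimated H:", estimate_H(f_meas, dp))
```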
Ultrasonic waves can be used to transfer power and data to electronic devices in sealed metallic enclosures. Two piezoelectric transducers are used to transmit and receive elastic waves that propagate through the metal. For efficient power transfer, both transducers are typically bonded to the metal or coupled with a gel, which limits the device's portability. We present an ultrasonic power transfer system with a detachable transmitter that uses a dry elastic layer and a magnetic joint for efficient coupling. We show that the system can deliver more than 2 W of power to an electric load with 50% efficiency.
Ultrasound has been investigated for data communication across enclosed metallic structures affected by Faraday shielding. A typical channel consists of two piezoelectric transducers bonded across the structure, communicating through elastic mechanical waves. The rate of data communication is proportional to the transmission bandwidth, which can be widened by reducing the thickness of the transducers. However, thin transducers become brittle, are difficult to bond, and have a high capacitance that would draw a high electric current from function generators. This work investigates novel transducer shapes that provide relatively constant transmission across a large bandwidth while maintaining a thickness large enough to avoid brittleness and electrical impedance constraints. The transducers are shaped according to a staircase thickness distribution, whose geometry was designed using an analytical model of their electromechanical behavior formulated for this purpose.
In order to predict material failure accurately, it is critical to have knowledge of deformation physics. Uniquely challenging is determination of β, the coefficient describing the conversion of plastic work into thermal energy. Here, we examine the heat transfer problem associated with the experimental determination of β in copper and stainless steel. A numerical model of the tensile test sample is used to estimate temperature rises across the mechanical test sample at a variety of convection coefficients, as well as to estimate heat losses to the chamber by conduction and convection. This analysis is performed for stainless steel and copper at multiple environmental conditions. These results are used to examine the relative importance of convection and conduction as heat transfer pathways. The model is additionally used to perform sensitivity analysis on the parameters that will ultimately determine β. These results underscore the importance of accurate determination of convection coefficients and will be used to inform future design of samples and experiments. Finally, an estimation of the convection coefficient for an example mechanical test chamber is detailed as a point of reference for the modeling results.
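For orientation, β enters a lumped thermomechanical energy balance of the general form below (symbols are generic; the specific loss terms of the paper's numerical model are not reproduced here):

$$
\rho\, c_p V \frac{dT}{dt} \;=\; \beta\,\sigma\,\dot{\varepsilon}_{p}\,V \;-\; h A_s \left(T - T_\infty\right) \;-\; \dot{Q}_{\mathrm{cond}},
$$

where the convective loss $h A_s (T - T_\infty)$ and the conduction loss $\dot{Q}_{\mathrm{cond}}$ compete with the plastic heating term $\beta\,\sigma\,\dot{\varepsilon}_{p} V$; this is why a β value inferred from a measured temperature rise is sensitive to the assumed convection coefficient.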
A class of sequential multiscale models investigated in this study consists of discrete dislocation dynamics (DDD) simulations and continuum strain gradient plasticity (SGP) models to simulate the size effect in plastic deformation of metallic micropillars. The high-fidelity DDD explicitly simulates the microstructural (dislocation) interactions. These simulations account for the effect of dislocation densities and their spatial distributions on plastic deformation. The continuum SGP captures the size-dependent plasticity in micropillars using two length parameters. The main challenge in predictive DDD-SGP multiscale modeling is selecting the proper constitutive relations for the SGP model, which is necessitated by the uncertainty in computational prediction due to DDD's microstructural randomness. This contribution addresses these challenges using a Bayesian learning and model selection framework. A family of SGP models with different fidelities and complexities is constructed using various constitutive relation assumptions. The parameters of the SGP models are then learned from a set of training data furnished by the DDD simulations of micropillars. Bayesian learning allows the assessment of the credibility of plastic deformation prediction by characterizing the microstructural variability and the uncertainty in training data. Additionally, the family of the possible SGP models is subjected to a Bayesian model selection to pick the model that adequately explains the DDD training data. The framework proposed in this study enables learning the physics-based multiscale model from uncertain observational data and determining the optimal computational model for predicting complex physical phenomena, i.e., size effect in plastic deformation of micropillars.
Young, Joseph; Weaver, Wayne; Wilson, David G.; Robinett, Rush D.
The following research presents an optimal control framework called Oxtimal that facilitates the efficient use and control of photovoltaic (PV) solar arrays. This framework consists of reduced order models (ROM) of photovoltaics and DC connection components connected to an electric power grid (EPG), a discretization of the resulting state equations using an orthogonal spline collocation method (OSCM), and an optimization driver to solve the resulting formulation. Once formulated, the framework is validated using realistic solar profiles and loads from actual residential applications.
The primary parameter of a standard k-ϵ model, Cμ, was calculated from stereoscopic particle image velocimetry (PIV) data for a supersonic jet exhausting into a transonic crossflow. This required the determination of turbulent kinetic energy, turbulent eddy viscosity, and turbulent energy dissipation rate. Image interrogation was optimized, with different procedures used for mean strain rates and Reynolds stresses, to produce useful turbulent eddy viscosity fields. The eddy viscosity was calculated by a least-squares fit to all components of the three-dimensional strain-rate tensor that were available from the PIV data. This eliminated artifacts and noise observed when using a single strain component. Local dissipation rates were determined via Kolmogorov’s similarity hypotheses and the second-order structure function. The eddy viscosity and dissipation rates were then combined to determine Cμ. Considerable spatial variation was observed in Cμ, with the highest values found in regions where turbulent kinetic energy was relatively low but where turbulent mixing was important, e.g., along the high-strain jet edges and in the wake region. This suggests that use of a constant Cμ in modeling may lead to poor Reynolds stress predictions at mixing interfaces. A data-driven modeling approach that can predict this spatial variation of Cμ based on known state variables may lead to improved simulation results without the need for calibration.
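The relations involved are the standard k-ε ones, stated here for clarity with generic notation: the eddy viscosity follows from a least-squares fit of the Boussinesq relation over all available strain components, and Cμ then follows from the model definition of the eddy viscosity.

$$
-\overline{u_i'u_j'} + \tfrac{2}{3}k\,\delta_{ij} \;=\; 2\,\nu_t\,S_{ij}
\;\;\Longrightarrow\;\;
\nu_t \;=\; \frac{\sum_{i,j}\left(-\overline{u_i'u_j'} + \tfrac{2}{3}k\,\delta_{ij}\right)S_{ij}}{2\sum_{i,j} S_{ij}S_{ij}},
\qquad
C_\mu \;=\; \frac{\nu_t\,\varepsilon}{k^{2}}.
$$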
The low- and high-temperature ignition and combustion processes in a high-pressure spray flame of n-dodecane were investigated using simultaneous 50-kHz formaldehyde (HCHO) planar laser-induced fluorescence (PLIF) and 100-kHz schlieren imaging. PLIF measurements were facilitated through the use of a pulse-burst-mode Nd:YAG laser, and the high-speed HCHO PLIF signal was imaged using a non-intensified CMOS camera with dynamic background emission correction. The experiments were conducted in the Sandia constant-volume preburn vessel equipped with a new Spray A injector. The effects of ambient conditions on the ignition delay times of the two-stage ignition events, HCHO structures, and lift-off length values were examined. Consistent with past studies of traditional Spray A flames, the formation of HCHO was first observed in the jet peripheries where the equivalence ratio (Φ) is expected to be leaner and hotter and then grows in size and in intensity downstream into the jet core where Φ is expected to be richer and colder. The measurements showed that the formation and propagation of HCHO from the leaner to richer region leads to high-temperature ignition events, supporting the identification of a phenomenon called “cool-flame wave propagation” during the transient ignition process. Subsequent high-temperature ignition was found to consume the previously formed HCHO in the jet head, while the formation of HCHO persisted in the fuel-rich zone near the flame base over the entire combustion period.
Szybist, James P.; Busch, Stephen; Mccormick, Robert L.; Pihl, Josh A.; Splitter, Derek A.; Ratcliff, Matthew A.; Kolodziej, Christopher P.; Storey, John M.E.; Moses-Debusk, Melanie; Vuilleumier, David; Sjoberg, Carl M.; Sluder, C.S.; Rockstroh, Toby; Miles, Paul
The Co-Optimization of Fuels and Engines (Co-Optima) initiative from the US Department of Energy aims to co-develop fuels and engines in an effort to maximize energy efficiency and the utilization of renewable fuels. Many of these renewable fuel options have fuel chemistries that are different from those of petroleum-derived fuels. Because practical market fuels need to meet specific fuel-property requirements, a chemistry-agnostic approach to assessing the potential benefits of candidate fuels was developed using the Central Fuel Property Hypothesis (CFPH). The CFPH states that fuel properties are predictive of the performance of the fuel, regardless of the fuel's chemical composition. In order to use this hypothesis to assess the potential of fuel candidates to increase efficiency in spark-ignition (SI) engines, the individual contributions towards efficiency potential in an optimized engine must be quantified in a way that allows the individual fuel properties to be traded off for one another. This review article begins by providing an overview of the historical linkages between fuel properties and engine efficiency, including the two dominant pathways currently being used by vehicle manufacturers to reduce fuel consumption. Then, a thermodynamic-based assessment to quantify how six individual fuel properties can affect efficiency in SI engines is performed: research octane number, octane sensitivity, latent heat of vaporization, laminar flame speed, particulate matter index, and catalyst light-off temperature. The relative effects of each of these fuel properties are combined into a unified merit function that is capable of assessing the fuel property-based efficiency potential of fuels with conventional and unconventional compositions.
We propose a new family of depth measures called the elastic depths that can be used to greatly improve shape anomaly detection in functional data. Shape anomalies are functions that have considerably different geometric forms or features from the rest of the data. Identifying them is generally more difficult than identifying magnitude anomalies because shape anomalies are often not distinguishable from the bulk of the data with visualization methods. The proposed elastic depths use the recently developed elastic distances to directly measure the centrality of functions in the amplitude and phase spaces. Measuring shape outlyingness in these spaces provides a rigorous quantification of shape, which gives the elastic depths a strong theoretical and practical advantage over other methods in detecting shape anomalies. A simple boxplot and thresholding method is introduced to identify shape anomalies using the elastic depths. We assess the elastic depths’ detection skill on simulated shape outlier scenarios and compare them against popular shape anomaly detectors. Finally, we use hurricane trajectories to demonstrate the elastic depth methodology on manifold-valued functional data.
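A schematic sketch of the boxplot-style thresholding step is given below, assuming depth values have already been computed; the depth computation itself, which in the paper relies on elastic distances in the amplitude and phase spaces, is replaced here by a placeholder function with invented values.

```python
import numpy as np

def elastic_depths(curves):
    """Placeholder only: real elastic depths are built from amplitude/phase elastic distances."""
    rng = np.random.default_rng(0)
    d = rng.uniform(0.6, 0.9, size=len(curves))
    d[-2:] = [0.05, 0.10]          # pretend the last two curves are far from the bulk in shape
    return d

def flag_shape_anomalies(curves, whisker=1.5):
    """Boxplot-style rule: functions with unusually LOW depth are flagged as shape anomalies."""
    d = elastic_depths(curves)
    q1, q3 = np.percentile(d, [25, 75])
    threshold = q1 - whisker * (q3 - q1)       # lower fence of the depth boxplot
    return np.where(d < threshold)[0], d

curves = [np.sin(np.linspace(0, 1, 100) * 2 * np.pi * k) for k in range(1, 51)]
outliers, depths = flag_shape_anomalies(curves)
print("flagged indices:", outliers)
```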
This work proposes an extension of neural ordinary differential equations (NODEs) by introducing an additional set of ODE input parameters to NODEs. This extension allows NODEs to learn multiple dynamics specified by the input parameter instances. Our extension is inspired by the concept of parameterized ODEs, which are widely investigated in computational science and engineering contexts, where characteristics of the governing equations vary over the input parameters. We apply the proposed parameterized NODEs (PNODEs) for learning latent dynamics of complex dynamical processes that arise in computational physics, which is an essential component for enabling rapid numerical simulations for time-critical physics applications. For this, we propose an encoder-decoder-type framework, which models latent dynamics as PNODEs. We demonstrate the effectiveness of PNODEs on benchmark problems from computational physics.
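A compact sketch of a parameterized neural ODE right-hand side in PyTorch follows, integrated with a simple forward-Euler loop to keep the example self-contained; the network sizes, the scalar input parameter mu, and the integrator choice are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class PNODEFunc(nn.Module):
    """dz/dt = f_theta(z, mu): the ODE input parameter mu is concatenated with the state."""

    def __init__(self, latent_dim=4, param_dim=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + param_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z, mu):
        return self.net(torch.cat([z, mu], dim=-1))

def rollout(func, z0, mu, t):
    """Forward-Euler integration of the parameterized latent dynamics."""
    zs, z = [z0], z0
    for k in range(1, len(t)):
        z = z + (t[k] - t[k - 1]) * func(z, mu)
        zs.append(z)
    return torch.stack(zs)

func = PNODEFunc()
z0 = torch.zeros(1, 4)
t = torch.linspace(0.0, 1.0, 50)
for mu_val in (0.5, 2.0):            # two parameter instances give two different latent dynamics
    traj = rollout(func, z0, torch.tensor([[mu_val]]), t)
    print(mu_val, traj.shape)
```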
We use Monte Carlo simulations to explore the effects of earth model uncertainty on the estimation of the seismic source time functions that correspond to the six independent components of the point-source seismic moment tensor. Specifically, we invert synthetic data using Green's functions estimated from a suite of earth models that contain stochastic density and seismic wave-speed heterogeneities. We find that the primary effect of earth model uncertainty on the data is that the amplitude of the first-arriving seismic energy is reduced, and that this amplitude reduction is proportional to the magnitude of the stochastic heterogeneities. We also find that the amplitude of the estimated seismic source functions can be under- or overestimated, depending on the stochastic earth model used to create the data. This effect cannot be anticipated, meaning that uncertainty in the earth model can lead to unforeseen biases in the amplitude of the estimated seismic source functions.
Many microreactor (<10 MWe) sites are expected to be remote locations requiring off-grid power or, in some cases, military bases. However, before this new class of nuclear reactor can be fully developed and implemented by designers, an effort must be made to explore the technical issues and provide reasonable assurance to the public regarding health and safety impacts. One issue not yet fully explored is the possible change in the role of operations and support personnel. Due to the passive safety features of microreactors and their low levels of nuclear material, microreactor facilities may automate more functions and rely on inherent safety features more than their predecessor nuclear power plants. In some instances, human operators may not be located onsite and may instead operate or monitor the facility from a remote location. Some designs also call for operators to supervise and control multiple microreactors from the control room. This paper explores issues around reduced staffing of microreactors, highlights the historical safety functions associated with human operators, assesses current licensing requirements for appropriateness to varying levels of personnel support, and describes a recommended regulatory approach for reviewing the impact of reduced staffing on the operation of microreactors.
Introduction: Over the past decade, loop-mediated isothermal amplification (LAMP) technology has played an important role in molecular diagnostics. Amongst numerous nucleic acid amplification assays, LAMP stands out in terms of sample-to-answer time, sensitivity, specificity, cost, robustness, and accessibility, making it ideal for field-deployable diagnostics in resource-limited regions. Areas covered: In this review, we outline the front-end LAMP design practices for point-of-care (POC) applications, including sample handling and various signal readout methodologies. Next, we explore existing LAMP technologies that have been validated with clinical samples in the field. We summarize recent work that utilizes reverse transcription (RT) LAMP to rapidly detect SARS-CoV-2 as an alternative to standard PCR protocols. Finally, we describe challenges in translating LAMP from the benchtop to the field and opportunities for future LAMP assay development and performance reporting. Expert opinion: Despite the popularity of LAMP in the academic research community and a recent surge in interest in LAMP due to the COVID-19 pandemic, there are numerous areas for improvement in the fundamental understanding of LAMP, which are needed to elevate the field of LAMP assay development and characterization.
In recent years, the pervasive use of lithium-ion (Li-ion) batteries in applications such as cell phones, laptop computers, electric vehicles, and grid energy storage systems has prompted the development of specialized battery management systems (BMS). The primary goal of a BMS is to maintain a reliable and safe battery power source while maximizing the calendar life and performance of the cells. To maintain safe operation, a BMS should be programmed to minimize degradation and prevent damage to a Li-ion cell, which can lead to thermal runaway. Cell damage can occur over time if a BMS is not properly configured to avoid overcharging and over-discharging. To prevent cell damage, efficient and accurate algorithms for characterizing cell charging cycles must be employed. In this paper, computationally efficient and accurate ensemble learning algorithms capable of detecting Li-ion cell charging irregularities are described. Additionally, it is shown using machine and deep learning that it is possible to accurately and efficiently detect when a cell has experienced thermal and electrical stress due to overcharging by measuring charging cycle divergence.
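A small sketch in the spirit of the ensemble approach is shown below, using an Isolation Forest over simple per-cycle features; the feature set and synthetic data are invented placeholders and do not reproduce the paper's charging-cycle divergence metric.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def cycle_features(voltage, current):
    """Per-cycle summary features: charge throughput, end-of-charge voltage, mean dV per sample."""
    return [np.sum(current), voltage[-1], np.mean(np.diff(voltage))]

# Synthetic "normal" charging cycles plus a few overcharged ones (placeholder data).
normal = [cycle_features(np.linspace(3.0, 4.2, 200) + rng.normal(0, 0.005, 200),
                         np.full(200, 1.0)) for _ in range(200)]
overcharged = [cycle_features(np.linspace(3.0, 4.5, 200) + rng.normal(0, 0.005, 200),
                              np.full(200, 1.2)) for _ in range(5)]

model = IsolationForest(contamination=0.03, random_state=0).fit(normal)
print(model.predict(overcharged))   # -1 indicates a flagged (irregular) cycle
```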
This paper explores unsupervised learning approaches for analysis and categorization of turbulent flow data. Single point statistics from several high-fidelity turbulent flow simulation data sets are classified using a Gaussian mixture model clustering algorithm. Candidate features are proposed, which include barycentric coordinates of the Reynolds stress anisotropy tensor, as well as scalar and angular invariants of the Reynolds stress and mean strain rate tensors. A feature selection algorithm is applied to the data in a sequential fashion, flow by flow, to identify a good feature set and an optimal number of clusters for each data set. The algorithm is first applied to Direct Numerical Simulation data for plane channel flow, and produces clusters that are consistent with turbulent flow theory and empirical results that divide the channel flow into a number of regions (viscous sub-layer, log layer, etc.). Clusters are then identified for flow over a wavy-walled channel, flow over a bump in a channel, and flow past a square cylinder. Some clusters are closely identified with the anisotropy state of the turbulence, as indicated by the location within the barycentric map of the Reynolds stress tensor. Other clusters can be connected to physical phenomena, such as boundary layer separation and free shear layers. Exemplar points from the clusters, or prototypes, are then identified using a prototype selection method. These exemplars reduce the dataset by a factor of 10 to 1000. The clustering and prototype selection algorithms provide a foundation for physics-based, semi-automated classification of turbulent flow states and extraction of a subset of data points that can serve as the basis for the development of explainable machine-learned turbulence models.
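An outline of the clustering step follows, assuming a feature matrix (columns such as barycentric coordinates and tensor invariants) has already been assembled from the simulation data; the feature values here are random placeholders, and BIC is used as a simple stand-in for the paper's cluster-number selection.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder feature matrix: rows = flow points, columns = candidate features
# (e.g., barycentric coordinates of the anisotropy tensor, scalar/angular invariants).
X = rng.normal(size=(5000, 5))

# Pick the number of clusters by BIC, then label each point.
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 9)]
best = min(models, key=lambda m: m.bic(X))
labels = best.predict(X)
print("selected clusters:", best.n_components, "| cluster sizes:", np.bincount(labels))
```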
By strategically curtailing active power and providing reactive power support, photovoltaic (PV) systems with advanced inverters can mitigate voltage and thermal violations in distribution networks. Quasi-static time-series (QSTS) simulations are increasingly being utilized to study the implementation of these inverter functions as alternatives to traditional circuit upgrades. However, QSTS analyses can yield significantly different results based on the availability and resolution of input data and other modeling considerations. In this paper, we quantified the uncertainty of QSTS-based curtailment evaluations for two different grid-support functions (autonomous Volt-Var and centralized PV curtailment for preventing reverse power conditions) through extensive sensitivity analyses and hardware testing. We found that Volt-Var curtailment evaluations were most sensitive to poor inverter convergence (-56.4%), PV time-series data (-18.4% to +16.5%), QSTS resolution (-15.7%), and inverter modeling uncertainty (+14.7%), while the centralized control case was most sensitive to load modeling (-26.5% to +21.4%) and PV time-series data (-6.0% to +12.4%). These findings provide valuable insights for improving the reliability and accuracy of QSTS analyses for evaluating curtailment and other PV impact studies.
Shah, Chinmay; Campo-Ossa, Daniel D.; Patarroyo-Montenegro, Juan F.; Guruwacharya, Nischal; Bhujel, Niranjan; Trevizan, Rodrigo D.; Andrade, Fabio; Shirazi, Mariko; Tonkoski, Reinaldo; Wies, Richard; Hansen, Timothy M.; Cicilio, Phylicia
In response to national and international carbon reduction goals, renewable energy resources like photovoltaics (PV) and wind, and energy storage technologies like fuel cells, are being extensively integrated into electric grids. All these energy resources require power electronic converters (PECs) to interconnect to the electric grid. These PECs have different response characteristics to dynamic stability issues compared to conventional synchronous generators. As a result, the demand for validated models to study and control these stability issues of PECs has increased drastically. This paper provides a review of the existing PEC model types and their applicable uses. The paper provides a description of the suitable model types based on the relevant dynamic stability issues. Challenges and benefits of using the appropriate PEC model type for studying each type of stability issue are also presented.
Airborne contaminants from fires containing nuclear waste represent significant health hazards and shape the design and operation of nuclear facilities. Much of the data used to formulate DOE-HDBK-3010-94, “Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities,” from the U.S. Department of Energy, were taken over 40 years ago. The objectives of this study were to reproduce experiments from Pacific Northwest Laboratories conducted in June 1973 employing current aerosol measurement methods and instrumentation, develop an enhanced understanding of particulate formation and transport from fires containing nuclear waste, and provide modeling and experimental capabilities for updating current standards and practices in nuclear facilities. A special chamber was designed to conduct small fires containing 25 mL of flammable waste containing lutetium nitrate, ytterbium nitrate, or depleted uranium nitrate. Carbon soot aerosols showed aggregates of primary particles ranging from 20 to 60 nm in diameter. In scanning electron microscopy, ~200-nm spheroidal particles were also observed dispersed among the fractal aggregates. The 200-nm spherical particles were composed of metal phosphates. Airborne release fractions (ARFs) were characterized by leaching filter deposits and quantifying metal concentrations with mass spectrometry. The average mass-based ARF for 238U experiments was 1.0 × 10−3 with a standard deviation of 7.5 × 10−4. For the original experiments, DOE-HDBK-3010-94 states, “Uranium ARFs range from 2 × 10−4 to 3 × 10−3, an uncertainty of approximately an order of magnitude.” Thus, current measurements were consistent with DOE-HDBK-3010-94 values. ARF values for lutetium and ytterbium were approximately one to two orders of magnitude lower than 238U. Metal nitrate solubility may have varied with elemental composition and temperature, thereby affecting ARF values for uranium surrogates (Yb and Lu). In addition to ARF data, solution boiling temperatures and evaporation rates can also be deduced from experimental data.
Dalbey, Keith R.; Eldred, Michael S.; Geraci, Gianluca; Jakeman, John D.; Maupin, Kathryn A.; Monschke, Jason A.; Seidl, Daniel T.; Tran, Anh; Menhorn, Friedrich; Zeng, Xiaoshu
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota’s iterative analysis capabilities.
This report discusses the progress on the collaboration between Sandia National Laboratories (Sandia) and the Japan Atomic Energy Agency (JAEA) on sodium fire research in fiscal year 2020. First, the current sodium pool fire model in MELCOR, which is adapted from the CONTAIN-LMR code, is discussed. The associated sodium fire input requirements are also presented. These input requirements are flexible enough to permit further model development via control functions to enhance the current model without modifying the source code. The theoretical pool fire model improvement developed at Sandia is discussed. A control function model has been developed from this improvement. Then, the validation study of the sodium pool fire model in MELCOR carried out by both Sandia and JAEA staff is described. To validate this pool fire model with the enhancement, a JAEA sodium pool fire experiment (F7-1 test) is used. The results of the calculation are discussed, as well as suggestions for further model improvement. Finally, recommendations are made for new MELCOR simulations for the next fiscal year, 2021.
A hotel room unit consisting of a bedroom and bathroom was modelled using computational fluid dynamics (CFD) to investigate airborne pathogen dispersal patterns. The full-scale model includes a ‘typical’ hotel room configuration, furniture, and vents. The air sources and sinks include a bathroom vent, a heating, ventilation, and cooling (HVAC) unit located in the bedroom, and a ½” gap at the bottom of the entry door. In addition, the entry door and window can be opened or closed, as desired. Three key configuration simulations were conducted: 1) both the bathroom vent and HVAC were on, 2) only the HVAC was on, and 3) only the bathroom vent was on. If the HVAC air is from a fresh, clean source, or passes through a high-efficiency filter/UV device, then the first configuration is the safest, as contaminated air is highly reduced. The second configuration is also safe, but does not benefit from the outsourcing of potentially-infected air, such as contaminated air flowing through an ineffective filter. The third configuration should be avoided, as the bathroom vent causes air to flow from the hallway, which can be of dubious origin. The CFD simulations also showed that recirculation and swirling regions tend to accumulate the largest concentrations of heavier airborne particles, pathogens, dust, etc. These regions are associated with the largest turbulence kinetic energy (TKE), and tend to occur in areas with flow recirculation and corners. Therefore, TKE presents a reasonable metric to guide the strategic location of pathogen mitigation devices. The simulations show complex flow patterns with distinct upper and lower flow regions, swirling flow, and significant levels of turbulent mixing. These simulations provide intriguing insights that can be applied to help mitigate pathogen aerosol dispersal, generate building design guidelines, as well as provide insights for the strategic placement of mitigation devices, such as ultraviolet (UV) light, supplemental fans, and filters.
Grubelich, Mark C.; Venkatesh, Prashanth B.; Entremont, Scott E.; Meyer, Scott E.; Bane, Sally P.M.
The present work investigates high-initial-pressure detonations of a stoichiometric mixture of ethylene and nitrous oxide (C2H4 + 6N2O) as a method of fracturing rock beneath the ground surface. These tests were conducted at a test site operated by the Energetic Materials Research and Testing Center (EMRTC), Socorro, New Mexico. The volume under the surface used for testing (called the Down Hole Assembly) consists of a 0.438 in. ID x 50 ft. long stainless-steel tube running down from the test site to a well bore that is 3 in. ID x 10 ft. long; the rock in the well bore is exposed to the propagating combustion wave. Testing carried out at Zucrow Laboratories in a smaller, alloy steel combustion vessel provided a scaling of the pressures expected in the well bore. The combustion is initiated by energizing an exploding bridge wire (EBW) above the ground surface. The experimental setup accommodates one high-pressure (100,000 psia) transducer to measure the pressure peak, placed approximately 5 ft. above the ground surface and 5 ft. downstream of the EBW. The focus of this series of experiments is to investigate the dependence of fracturing of the rock beneath the surface on the initial pressure of the ethylene and nitrous oxide mixture. Experiments were carried out at initial pressures varying between 125 and 300 psia. The transducer recorded elevated pressures, 2.3 to 2.6 times the Chapman-Jouguet (CJ) values. The experimental results are discussed and explained in this report.
A 304L-VAR stainless steel is mechanically characterized in tension over a full range of strain rates, from low through intermediate to high, using a variety of apparatuses. While low- and high-strain-rate tests are conducted with a conventional Instron and a Kolsky tension bar, the tensile tests at intermediate strain rates are conducted with a fast MTS and a Drop-Hopkinson bar. The fast MTS used in this study is able to obtain reliable tensile response at strain rates up to 150 s-1, whereas the lower limit for the Drop-Hopkinson bar is 100 s-1. Combining the fast MTS and the Drop-Hopkinson bar closes the gap within the intermediate strain rate regime. Using these four apparatuses, the tensile stress-strain curves of the 304L-VAR stainless steel are obtained at strain rates on each order of magnitude from 0.0001 to 2580 s-1. All tensile stress-strain curves exhibit linear elasticity followed by significant work hardening prior to necking. After necking occurs, the specimen load decreases, and the deformation becomes highly localized until fracture. The tensile stress-strain response of the 304L-VAR stainless steel exhibits strain rate dependence. The flow stress increases with increasing strain rate and is described with a power law. The strain-rate sensitivity is also strain-dependent, possibly due to thermal softening caused by adiabatic heating at high strain rates. The 304L-VAR stainless steel shows significant ductility. The true strains at the onset of necking and at failure are determined. The results show that the true strains at both the onset of necking and failure decrease with increasing strain rate. The true failure strains are approximately 200% at low strain rates but are significantly lower (~100%) at high strain rates. The transition of true failure strain occurs within the intermediate strain rate range between 10-2 and 102 s-1. A Boltzmann description is used to represent the effect of nominal strain rate on true failure strain.
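The rate dependence and the failure-strain transition referred to here take the usual forms shown below (generic symbols; the fitted constants from the paper are not reproduced):

$$
\sigma(\dot{\varepsilon}) = \sigma_{0}\left(\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)^{m},
\qquad
\varepsilon_{f}(\dot{\varepsilon}) = \varepsilon_{f,\mathrm{high}} + \frac{\varepsilon_{f,\mathrm{low}} - \varepsilon_{f,\mathrm{high}}}{1 + \exp\!\left[\left(\log_{10}\dot{\varepsilon} - x_{0}\right)/w\right]},
$$

where $m$ is the rate-sensitivity exponent, $\sigma_{0}$ is the flow stress at the reference strain rate $\dot{\varepsilon}_{0}$, and the Boltzmann (sigmoidal) function interpolates between the low-rate (~200%) and high-rate (~100%) failure-strain plateaus, with the transition centered at $x_{0}$ within the 10-2 to 102 s-1 range.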
The accumulation of point defects and defect clusters in materials, as seen in irradiated metals for example, can lead to the formation and growth of voids. Void nucleation derives from the condensation of supersaturated vacancies and depends strongly on the stress state. It is usually assumed that such stress states are produced by microstructural defects such as dislocations, grain boundaries, or triple junctions; however, much less attention has been paid to the formation of voids near microcracks. In this paper, we investigate the coupling between point-defect diffusion/recombination and the concentrated stress fields near mode-I crack tips via a spatially resolved rate theory approach. A modified chemical potential enables point-defect diffusion to be partially driven by the mechanical fields in the vicinity of the crack tip. Simulations are carried out for microcracks using the Griffith model with increasing stress intensity factor K_I. Our results show that below a threshold value of the stress intensity factor, the microcrack acts purely as a microstructural sink, absorbing point defects. Above this threshold, vacancies accumulate at the crack tip. These results suggest that, even in the absence of plastic deformation, voids can form in the vicinity of a microcrack for a given load when the crack's characteristic length exceeds a critical length. While in ductile metals irradiation damage generally causes hardening and corresponding quasi-brittle cleavage, our results show that irradiation conditions can favor void formation near microstructural stressors such as crack tips, leading to lower resistance to crack propagation than predicted by traditional failure analysis.
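A minimal sketch of the stress coupling described above, in generic notation assumed here rather than quoted from the paper: the flux of each point-defect species d follows a chemical potential that carries a mechanical work term,

\[
\mu_d = k_B T \,\ln\!\frac{c_d}{c_d^{\mathrm{eq}}} \;-\; \sigma_h\,\Omega_d,
\qquad
\mathbf{J}_d = -\,\frac{D_d\, c_d}{k_B T}\,\nabla\mu_d ,
\]

where c_d is the defect concentration, D_d its diffusivity, Ω_d an interaction (relaxation) volume, and σ_h the hydrostatic stress of the Griffith crack field; with a coupling of this form, the steep gradient of σ_h near the mode-I tip biases defect transport, which is the mechanism that allows the tip to switch from acting as a pure sink to a site of vacancy accumulation once K_I exceeds the threshold.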
In response to the global SARS-CoV-2 pandemic, the Sandia National Laboratories Rapid Lab-Directed Research and Development (LDRD) COVID-19 initiative has deployed a multi-physics, droplet-laden, turbulent low-Mach simulation tool to model pathogen-containing water droplets that emanate from synthetic human coughing and breathing. The low-Mach turbulent Eulerian/point-particle Lagrangian methodology directly couples mass, momentum, energy, and species to capture droplet evaporation physics, supporting the ability to distinguish between droplets that deposit and those that persist in the environment. Additionally, the cough mechanism is modeled as a pulsed spray with a prescribed log-normal droplet size distribution. Simulations demonstrate direct droplet deposition lengths in excess of three meters, while the persistence of droplet nuclei entrained within a buoyant plume is noted. Including the effect of protective barriers demonstrates effective mitigation of large-droplet transport. For coughs into a protective barrier, jet impingement and large-scale recirculation can drive droplets vertically and back towards the subject while supporting persistence of droplet nuclei. Simulations in quiescent conditions demonstrate droplet preferential concentration due to the coupling between vortex ring shedding and the subsequent advection of a series of three-dimensional rings that tilt and rise vertically due to a misalignment between the initial principal vortex trajectory and gravity. These resolved coughing simulations capture vortex ring formation, roll-up, and breakdown, while entraining droplet nuclei over large distances and time scales.
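For orientation only, a textbook simplification of the droplet evaporation physics referenced above (not the model actually solved by the coupled Eulerian/Lagrangian tool) is the quasi-steady d^2-law for an isolated droplet,

\[
\frac{d\,(D^2)}{dt} = -K
\;\;\Longrightarrow\;\;
D^2(t) = D_0^2 - K\,t,
\]

where D is the droplet diameter and K an evaporation-rate constant set by the gas-phase transport properties and local humidity; the fully coupled solver generalizes this by exchanging mass, momentum, energy, and species with the resolved turbulent flow, which is what distinguishes droplets that settle out from those that shrink to persistent nuclei.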
University research is a strong focus of the Office of Nuclear Energy within the Department of Energy. This research complements existing work in the various program areas and provides support and training for students entering the field. Four university projects have provided support to the Material Protection, Accounting, and Control Technologies (MPACT) 2020 milestone focused on safeguards for electrochemical processing facilities. The University of Tennessee, Knoxville has examined data fusion of NDA measurements such as Hybrid K-Edge Densitometry and Cyclic Voltammetry. Oregon State University and Virginia Polytechnic Institute have examined the integration of accountancy data with process monitoring data for safeguards. The Ohio State University and the University of Utah have developed a Ni-Pt SiC Schottky diode capable of high-temperature alpha spectroscopy for actinide detection in molten salts. Finally, the University of Colorado has developed a key enabling technology for the use of microcalorimetry.
Synthetic Aperture Radar (SAR) projects a 3-D scene's reflectivity into a 2-D image. In doing so, it generally focuses the image to a surface, usually a ground plane. Consequently, scatterers above or below the focal/ground plane typically exhibit some degree of distortion, manifesting as geometric distortion and misfocusing or smearing. Limits to acceptable misfocusing define a Height of Focus (HOF), analogous to Depth of Field in optical systems. This effect may be exacerbated by the radar's flightpath during the synthetic aperture data collection. It might also be exploited for target height estimation and offer insight into other height estimation techniques.
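As a point of reference from standard SAR imaging geometry (not a result of this work), a scatterer a height h above the focal plane is laid over, i.e., displaced toward the radar in the focused image, by approximately

\[
\Delta x \;\approx\; h\,\tan\psi ,
\]

where ψ is the grazing angle; the residual, height-dependent phase error accumulated across the synthetic aperture is what produces the smearing, and a tolerance on that error (commonly a fraction of a cycle) is what bounds the Height of Focus.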
The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories New Mexico (SNL/NM) developed this Project Execution Plan (PEP) to document its process for executing, monitoring, controlling, and closing out Phase 3 of the Gen 3 Particle Pilot Plant (G3P3). This plan serves as a resource for stakeholders who wish to be knowledgeable of project objectives and how they will be accomplished. The plan is intended to be used by the development partners, principal investigator, and federal project director. Project objectives are derived from the mission needs statement, and an integrated project team assists in development of the PEP. This plan is a living document and will be updated throughout the project to describe current and future processes and procedures. The scope of the PEP covers: cost, schedule, and scope; project reporting; the staffing plan; the quality assurance plan; and environment, safety, security, and health. This document is a tailored approach for the Facilities Management and Operations Center (FMOC) to meet the project management principles of DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets, and DOE G 413.3-15, DOE Guide for Project Execution Plans. This document will elaborate on content as knowledge of the project is gained or refined.
Technical procedures systematically describe a series of steps for the operation, maintenance, or testing of systems or components. They are widely used as a method for ensuring consistency, reducing human error, and improving the quality of the end-product. This guide provides specific guidance to procedure writers to help them generate high-quality technical procedures. The guidance is aimed at reducing confusion or ambiguity on the part of the operator, thereby increasing efficiency and reducing errors and rework. The appendices to this document define key terms associated with the creation of technical procedures, list common error traps, and define a set of action verbs that should be used in technical procedures.
The Arroyo Seco Improvement Program is being carried out at Sandia National Laboratories, California to address erosion and other streambed instability issues in the Arroyo Seco as it crosses the laboratory site. The work involves both repair of existing eroded areas and habitat enhancement. This work is being carried out under the requirements of Army Corps of Engineers permit 2006-400195S and California Regional Water Quality Control Board, San Francisco Bay Region Water Quality Certification Site No. 02-01-C0987.
In this study, we present spectral equivalence results for higher-order tensor product edge-, face- and interior-based finite elements. Specifically, we show for certain choices of shape functions that the mass and stiffness matrices of the higher-order elements are spectrally equivalent to those for an assembly of lowest-order elements on the associated Gauss-Lobatto-Legendre mesh. Based on this equivalence, efficient preconditioners can be designed with favorable computational complexity. Numerical results are presented which confirm the theory and demonstrate the benefits of the equivalence results for overlapping Schwarz preconditioners.
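For readers unfamiliar with the term, spectral equivalence here means (stated in generic notation, not quoted from the paper) that there exist constants c_1, c_2 > 0, independent of the mesh size and polynomial degree, such that

\[
c_1\,\mathbf{x}^{T} A_1 \mathbf{x}
\;\le\;
\mathbf{x}^{T} A_p \mathbf{x}
\;\le\;
c_2\,\mathbf{x}^{T} A_1 \mathbf{x}
\qquad \text{for all vectors } \mathbf{x},
\]

where A_p is the higher-order mass or stiffness matrix and A_1 its lowest-order counterpart assembled on the Gauss-Lobatto-Legendre mesh; a preconditioner built for A_1 then controls the condition number of the preconditioned higher-order system through the ratio c_2/c_1, which is what makes the favorable computational complexity possible.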
PixelStorm is a software application for displaying native high-performance applications from remote cloud environments. It is tailored for remote sensing missions that require high framerates, high resolutions, and minimal loss of quality. PixelStorm utilizes hardware-accelerated video compression on graphics processing units and a Sandia-developed streaming network protocol. Using our architecture, we can demonstrate interactive native applications running across two 4K monitors at 60 frames per second while maintaining the visual fidelity required by our missions. This technology allows for the migration of mission critical desktop applications to cloud environments.
This report presents the results of a collaborative effort under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) program. The goal of the effort described in this report was to integrate the Dakota software with the NAERM software framework to demonstrate sensitivity analysis of a co-simulation for NAERM.
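As an illustration of what a sampling-based sensitivity study of a co-simulation can look like (a minimal sketch only: Dakota was used in the actual work, and run_naerm_cosimulation together with the parameter names below are hypothetical placeholders), one can sample uncertain inputs, run the wrapped co-simulation, and rank inputs with a simple regression-based measure:

```python
import numpy as np

# Hypothetical uncertain inputs for the co-simulation (names are placeholders).
params = {
    "load_scaling": (0.9, 1.1),    # lower/upper bounds
    "line_rating":  (0.95, 1.05),
    "gen_droop":    (0.03, 0.06),
}

def run_naerm_cosimulation(x):
    """Placeholder for the wrapped co-simulation; returns a scalar quantity of interest."""
    # In practice this would launch the co-simulation and parse its output.
    return 2.0 * x[0] - 0.5 * x[1] + 0.1 * x[2] + np.random.normal(scale=0.01)

rng = np.random.default_rng(0)
n_samples = 200
bounds = np.array(list(params.values()))
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_samples, len(params)))
y = np.array([run_naerm_cosimulation(x) for x in X])

# Standardized regression coefficients as a crude global sensitivity measure.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in sorted(zip(params, src), key=lambda t: -abs(t[1])):
    print(f"{name}: SRC = {s:+.3f}")
```

In the integrated workflow, Dakota plays the role of the sampling and analysis loop shown here, while the NAERM framework supplies the co-simulation evaluations.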
Two surface chemical explosive tests were observed for the Large Surface Explosion Coupling Experiment (LSECE) at the Nevada National Security Site in October 2020. The tests consisted of two one-ton explosions, one occurring before dawn and one occurring mid-afternoon. LSECE was performed in the same location as previous underground tests and aimed to explore the relationship between surface and underground explosions in support of global nonproliferation efforts. Several pieces of remote sensing equipment were deployed from a trailer 2.02 km from ground zero, including high-speed cameras, radiometers, and a spectrometer. The data collected from these tests will increase the knowledge of large surface chemical explosive signatures.
The Online Waste Library (OWL) provides a consolidated source of information on Department of Energy-managed radioactive waste likely to require deep geologic disposal. With the release of OWL Version 1.0 in fiscal year 2019 (FY2019), much of the FY2020 work involved developing the OWL change control process and the OWL release process. These two processes (in draft form) were put into use for OWL Version 2.0, which was released in early FY2021. With the knowledge gained, the OWL team refined and documented the two processes in two separate reports. This report focuses on the change control process and discusses the following: (1) definitions and system components; (2) roles and responsibilities; (3) origin of changes; (4) the change control process including the Change List, Task List, activity categories, implementation examples, and checking and review; and (5) the role of the release process in ensuring changes in the Change List are incorporated into a public release.
The Online Waste Library (OWL) provides one consolidated source of information on Department of Energy-managed wastes likely to require deep geologic disposal. With the release of OWL Version 1.0 in fiscal year (FY) 2019, much of the FY2020 work involved developing the OWL change control process and the OWL release process. These two processes (in draft form) were put into use for OWL Version 2.0, which was released in early FY2021. With the knowledge gained, the OWL team refined and documented the two processes in two separate reports. This report addresses the release process starting with a definition of release management in Section 2. Section 3 describes the Information Technology Infrastructure Library (ITIL) framework, part of which includes the three different environments used for release management. Section 4 presents the OWL components existing in the different environments and provides details on the release schedule and procedures.
In this project, ceramic encapsulation materials were studied for high temperature (>~500 °C) applications where typical polymer encapsulants are unstable. A new low temperature (<~200 °C) method of processing ceramics, the cold sintering process, was examined. Additionally, commercially available high temperature ceramic cements were investigated. In both cases, the mechanical strengths of available materials are less than desired (i.e., desired strengths similar to Si3N4), limiting applicability. Composite designs to increase mechanical strength are suggested. Additionally, non-uniformities in stresses and densification while embedding alumina sheets in encapsulants via cold sintering with uniaxial pressing led to fracture of the sheets, and an alternative isostatic pressing approach is recommended for future studies.
Sandia National Laboratories (SNL) conducted an independent assessment of three different certified N95 respirators for the State of New Mexico Department of Homeland Security and Emergency Management. The testing conducted under this effort mimicked traditional NIOSH certification testing methodologies, where possible (NIOSH 2019). This included the use of a commercially available off-the-shelf (COTS) instrument typically used in industry for N95 respirator certification (ATI 2018). The COTS system, an Air Techniques International 100Xs automated filter tester, was used for all the testing reported in this document. It is important to note that SNL is NOT a certification laboratory, and all quantitative results are for informational purposes only. Additional technical information on N95-related efforts conducted by this team may be found in Omana et al. (2020a), Omana et al. (2020b), and Celina et al. (2020).
Digital Instrumentation and Control Systems (ICSs) have replaced analog control systems in nuclear power plants, raising cybersecurity concerns. To study and understand the cybersecurity risks of nuclear power plants, both high-fidelity models of the plant physics and controllers must be created, and a framework to test and evaluate cybersecurity events must be established. A testing and evaluation framework for cybersecurity events consists of a method of interfering with control systems, a simulation of the plant network, and a network packet capture and recording tool. Sandia National Laboratories (SNL), in collaboration with the University of New Mexico's Institute for Space and Nuclear Power Studies (UNM-ISNPS), is developing such a cybersecurity testing framework.
This report describes the high-level accomplishments from the Plasma Science and Engineering Grand Challenge LDRD at Sandia National Laboratories. The Laboratory has a need to demonstrate predictive capabilities to model plasma phenomena in order to rapidly accelerate engineering development in several mission areas. The purpose of this Grand Challenge LDRD was to advance the fundamental models, methods, and algorithms, along with the supporting electrode science foundation, to enable a revolutionary shift towards predictive plasma engineering design principles. This project integrated the SNL knowledge base in computer science, plasma physics, materials science, applied mathematics, and relevant application engineering to establish new cross-laboratory collaborations on these topics. As an initial exemplar, this project focused efforts on improving multi-scale modeling capabilities that are utilized to predict the electrical power delivery on large-scale pulsed power accelerators. Specifically, this LDRD was structured into three primary research thrusts that, when integrated, enable complex simulations of these devices: (1) the exploration of multi-scale models describing the desorption of contaminants from pulsed power electrodes, (2) the development of improved algorithms and code technologies to treat the multi-physics phenomena required to predict device performance, and (3) the creation of a rigorous verification and validation infrastructure to evaluate the codes and models across a range of challenge problems. These components were integrated into initial demonstrations of the largest simulations of multi-level vacuum power flow completed to date, executed on the leading HPC computing machines available in the NNSA complex today. These preliminary studies indicate that relevant pulsed power engineering design simulations can now be completed on a timescale of order several days, a significant improvement over pre-LDRD levels of performance.
Numerical simulations of Greenland and Antarctic ice sheets involve the solution of large-scale highly nonlinear systems of equations on complex shallow geometries. This work is concerned with the construction of Schwarz preconditioners for the solution of the associated tangent problems, which are challenging for solvers mainly because of the strong anisotropy of the meshes and wildly changing boundary conditions that can lead to poorly constrained problems on large portions of the domain. Here, two-level GDSW (Generalized Dryja-Smith-Widlund) type Schwarz preconditioners are applied to different land ice problems, i.e., a velocity problem, a temperature problem, as well as the coupling of the former two problems. We employ the MPI-parallel implementation of multi-level Schwarz preconditioners provided by the package FROSch (Fast and Robust Schwarz) from the Trilinos library. The strength of the proposed preconditioner is that it yields out-of-the-box scalable and robust preconditioners for the single physics problems. To our knowledge, this is the first time two-level Schwarz preconditioners are applied to the ice sheet problem and a scalable preconditioner has been used for the coupled problem. The preconditioner for the coupled problem differs from previous monolithic GDSW preconditioners in the sense that decoupled extension operators are used to compute the values in the interior of the subdomains. Several approaches for improving the performance, such as reuse strategies and shared memory OpenMP parallelization, are explored as well. In our numerical study, we target both uniform meshes of varying resolution for the Antarctic ice sheet and nonuniform meshes for the Greenland ice sheet. We present several weak and strong scaling studies confirming the robustness of the approach and the parallel scalability of the FROSch implementation. Among the highlights of the numerical results are a weak scaling study for up to 32K processor cores (8K MPI ranks and 4 OpenMP threads) and 566M degrees of freedom for the velocity problem, as well as a strong scaling study for up to 4K processor cores (and MPI ranks) and 68M degrees of freedom for the coupled problem.
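For orientation (generic notation, not copied from the paper), a two-level additive Schwarz preconditioner of GDSW type for a tangent matrix K can be written as

\[
M_{\mathrm{GDSW}}^{-1}
= \Phi\left(\Phi^{T} K\,\Phi\right)^{-1}\Phi^{T}
+ \sum_{i=1}^{N} R_i^{T} K_i^{-1} R_i ,
\]

where R_i restricts to the i-th overlapping subdomain, K_i = R_i K R_i^T is the local subdomain matrix, and the columns of Φ are coarse basis functions obtained by extending interface values into the subdomain interiors; the decoupled extension operators mentioned above change how that interior extension is computed for the coupled velocity-temperature problem.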
Objectives of the project include: (1) enable the use of high-strength steel hydrogen pipelines, as significant cost savings can result from implementing high-strength steels compared to lower-strength pipe; (2) demonstrate that girth welds in high-strength steel pipe exhibit fatigue performance similar to lower-strength steels in high-pressure hydrogen gas; and (3) identify pathways for developing high-strength pipeline steels by establishing the relationship between microstructure constituents and hydrogen-accelerated fatigue crack growth (HA-FCG).
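As background (a generic relationship, not project data), fatigue crack growth in pipeline steels is commonly characterized by a Paris-type power law,

\[
\frac{da}{dN} = C\,(\Delta K)^{m},
\]

where a is the crack size, N the number of pressure cycles, ΔK the stress-intensity-factor range, and C and m fitted constants; hydrogen-accelerated fatigue crack growth appears as an upward shift of this curve in high-pressure hydrogen relative to air, which is why linking microstructure constituents to the measured C and m values is the proposed route to qualifying high-strength steels.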
The supercritical carbon dioxide (sCO2) Brayton cycle is a promising candidate for future nuclear reactors due to its ability to improve power cycle energy conversion efficiency. The sCO2 Brayton cycle can operate with an efficiency of 45-50% at operating temperatures of 550-700 °C. One of the greatest hurdles currently faced by sCO2 Brayton cycles is the corrosivity of sCO2 and the lack of long-term alloy corrosion and mechanical performance data, as these will be key to enhancing the longevity of the system and thus lowering the levelized cost of electricity. Past studies have shown that sCO2 corrosion occurs through the formation of metal carbonates, oxide layers, and carburization, and that alloys containing Cr, Mo, and Ni generally exhibit less corrosion. While stainless steels may offer sufficient corrosion resistance at the lower range of temperatures seen by sCO2 Brayton cycles, more expensive nickel-based alloys are typically needed for the higher temperature regions. This study investigates the effects of corrosion on the Haynes 230 alloy, with a preliminary view of changes in the mechanical properties. High temperature CO2 is used for this study because the corrosion products are similar to those formed in supercritical CO2, allowing for an estimation of the susceptibility to corrosion without the need for high pressure experimentation.
This study investigates the issues and challenges surrounding energy storage project and portfolio valuation and provides insights into improving visibility into the process for developers, capital providers, and customers so they can make more informed choices. Energy storage project valuation follows a methodology typical of power sector projects: evaluating various revenue and cost assumptions in a project economic model. The difference is that energy storage projects have many more design and operational variables to incorporate, and the governing market rules that control these variables are still evolving. This makes project valuation for energy storage more difficult. As the number of operating projects grows, portfolios of these projects are being developed, garnering the interest of larger investors. Valuation of these portfolios can be even more challenging, as market role and geographical diversity can actually exacerbate the variability rather than mitigate it. By promoting additional visibility into key factors and drivers for industry participants, the US DOE can reduce investment risk, expand both the number and types of investors, and help emerging energy storage technologies reach sustained commercialization.
Since grid energy storage is still evolving rapidly, it is often difficult to obtain project-specific capital costs for various energy storage technologies. This information is necessary to evaluate the profitability of a facility, as well as to compare different energy storage technology options. The goal of this report is to summarize energy storage capital costs that were obtained from industry pricing surveys. The methodology breaks down the cost of an energy storage system into the following component categories: the storage module; the balance of system; the power conversion system; the energy management system; and the engineering, procurement, and construction costs. By evaluating each of the component costs separately, a synthetic system cost can be developed that provides internal pricing consistency between different project sizes using the same technology, as well as between different technologies that utilize similar components.
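A minimal sketch of the component build-up described above (all names and figures below are hypothetical placeholders, not survey results):

```python
# Hypothetical component unit costs for a single technology; replace with survey data.
unit_costs = {
    "storage_module_per_kwh": 200.0,    # $/kWh
    "balance_of_system_per_kwh": 40.0,  # $/kWh
    "power_conversion_per_kw": 90.0,    # $/kW
    "energy_mgmt_per_kw": 25.0,         # $/kW
    "epc_fraction": 0.15,               # engineering, procurement, construction, as a fraction of direct costs
}

def synthetic_system_cost(power_kw: float, duration_h: float, c=unit_costs) -> float:
    """Build up a synthetic system cost from per-kWh and per-kW component categories."""
    energy_kwh = power_kw * duration_h
    direct = (
        energy_kwh * (c["storage_module_per_kwh"] + c["balance_of_system_per_kwh"])
        + power_kw * (c["power_conversion_per_kw"] + c["energy_mgmt_per_kw"])
    )
    return direct * (1.0 + c["epc_fraction"])

# Example: a 10 MW / 4 h system under the placeholder assumptions above.
print(f"${synthetic_system_cost(10_000, 4):,.0f}")
```

Because every project is priced from the same per-kWh and per-kW component categories, costs stay internally consistent across project sizes and across technologies that share similar components.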
This report presents a framework to evaluate the impact of a high-altitude electromagnetic pulse (HEMP) event on a bulk electric power grid. This report limits itself to modeling the impact of the EMP E1 and E3 components. The co-simulation of E1 and E3 is presented in detail, and the focus of the report is on the framework rather than actual results. This approach is highly conservative, as E1 and E3 are not maximized by the same event characteristics and may only slightly overlap. The actual results shown in this report are based on a synthetic grid with synthetic data and a limited exemplary EMP model. The framework presented can be leveraged to analyze the impact of other threat scenarios, both manmade and natural disasters. This report describes a Monte-Carlo based methodology to probabilistically quantify the transient response of the power grid to a HEMP event. The approach uses several fundamental steps to characterize the system response, focused on the E1 and E3 components of the event:
1) Obtain component failure data related to HEMP events by testing components and creating component failure models. Use each component failure model to create a conditional failure probability density function (PDF) that is a function of the HEMP-induced terminal voltage.
2) Model HEMP scenarios and calculate the E1-coupled voltage profiles seen by all system components. Model the same HEMP scenarios and calculate the transformer reactive power consumption profiles due to E3.
3) Sample each component failure PDF to determine which grid components fail, due to the E1 voltage spike, for each scenario.
4) Perform dynamic simulations that incorporate the predicted component failures from E1 and the reactive power consumption at each transformer affected by E3. These simulations allow secondary transients to affect the relays/protection remaining in service, which can lead to cascading outages.
5) Identify the locations and amount of load lost for each scenario through grid dynamic simulation. This provides an indication of the immediate grid impacts from a HEMP event. In addition, perform more detailed analysis to determine critical nodes and system trends.
6) To help capture the longer-term impacts, run a security-constrained alternating current optimal power flow (ACOPF) to maximize critical load served.
This stochastic simulation framework generates a large amount of data for each Monte Carlo replication, including HEMP location and characteristics, relay and component failures, E3 GIC profiles, cascading dynamics including voltage and frequency over time, and the final system state. These data can then be analyzed to identify trends, e.g., unique system behavior modes or critical components whose failure is more likely to cause serious systemic effects. The proposed analysis process is demonstrated on a representative system. To draw realistic conclusions about the impact of a HEMP event on the grid, a significant amount of work remains with respect to modeling the impact on various grid components.
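The Monte-Carlo structure of steps 1 through 5 can be sketched as follows (every function name below is a hypothetical placeholder for the actual E1/E3 coupling, failure, and dynamic-simulation models; no real tool API is implied):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_hemp_scenario(rng):
    """Placeholder: draw burst location and E1/E3 event characteristics."""
    return {"lat": rng.uniform(25, 50), "lon": rng.uniform(-125, -65)}

def e1_coupled_voltages(scenario, components):
    """Placeholder: E1-induced terminal voltage per component (volts)."""
    return {c: rng.lognormal(mean=12.0, sigma=0.5) for c in components}

def failure_probability(voltage):
    """Conditional failure PDF evaluated as P(fail | terminal voltage); assumed logistic form."""
    return 1.0 / (1.0 + np.exp(-(np.log(voltage) - 13.0)))

def run_dynamic_simulation(scenario, failed):
    """Placeholder: transient simulation with E3 reactive power consumption; returns MW of load lost."""
    return 50.0 * len(failed) + rng.normal(scale=10.0)

components = [f"comp_{i}" for i in range(500)]
load_lost = []
for _ in range(1000):  # Monte Carlo replications
    scenario = sample_hemp_scenario(rng)
    voltages = e1_coupled_voltages(scenario, components)
    failed = [c for c, v in voltages.items() if rng.random() < failure_probability(v)]
    load_lost.append(run_dynamic_simulation(scenario, failed))

print(f"mean load lost: {np.mean(load_lost):.1f} MW, p95: {np.percentile(load_lost, 95):.1f} MW")
```

Each replication records the scenario, the sampled component failures, and the resulting load loss, which is the data set that the trend and critical-node analyses described above would operate on.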