Large-scale non-intrusive inspection (NII) of commercial vehicles is being adopted in the U.S. at a pace and scale that will result in a commensurate growth in adjudication burdens at land ports of entry. The use of computer vision and machine learning models to augment human operator capabilities is critical in this sector to ensure the flow of commerce and to maintain efficient and reliable security operations. The development of models for this scale and speed requires novel approaches to object detection and novel adjudication pipelines. Here we propose a notional combination of existing object detection tools using a novel ensembling framework to demonstrate the potential for hierarchical and recursive operations. Further, we explore the combination of object detection with image similarity as an adjacent capability to provide post-hoc oversight to the detection framework. The experiments described herein, while notional and intended for illustrative purposes, demonstrate that the judicious combination of diverse algorithms can result in a resilient workflow for the NII environment.
A new set of critical experiments exploring the temperature-dependence of the reactivity in a critical assembly is described. In the experiments, the temperature of the critical assembly will be varied to determine the temperature that produces the highest reactivity in the assembly. This temperature is the inversion point of the isothermal reactivity coefficient of the assembly. An analysis of relevant configurations is presented. Existing measurements are described and an analysis of these experiments presented. The overall experimental approach is described as are the modifications to the critical assembly needed to perform the experiments.
One of the more crucial aspects of any mechanical design is the joining methodology of parts. In structural dynamic environments, the ability to analyze the joints and fasteners in a system for structural integrity is fundamental, especially early in a system design during design trade studies. Different modeling representations of fasteners include spring, beam, and solid elements. In this work, we compare the various methods for a linear system to help the analyst decide which method is appropriate for a design study. Ultimately, if the stresses of the parts being connected are of interest, then we recommend the use of the Ring Method for modeling the joint. If the structural integrity of the fastener is of interest, then we recommend the Spring Method.
This work explores deriving transmissibility functions for a missile from a measured location at the base of the fairing to a desired location within the payload. Pressure on the outside of the fairing and the rocket motor’s excitation create accelerations at a measured location and a desired location. Typically, the desired location is not measured. In fact, it is typical that the payload may change, but the measured acceleration at the base of the fairing is generally similar to previous test flights. Given this knowledge, it is desired to use a finite-element model to create a transmissibility function which relates acceleration from the previous test flight’s measured location at the base of the fairing to acceleration at a location in the new payload. Four methods are explored for deriving this transmissibility, with the goal of finding an appropriate transmissibility when both the pressure and rocket motor excitation are equally present. These methods are assessed using transient results from a simple example problem, and it is found that one of the methods gives good agreement with the transient results for the full range of loads considered.
Heterogeneous computing is becoming common in the HPC world. The fast-changing hardware landscape is pushing programmers and developers to rely on performance-portable programming models to rewrite old and legacy applications and develop new ones. While this approach is suitable for individual applications, significant challenges remain when multiple applications are combined into complex workflows. One critical difficulty is the exchange of data between communicating applications, where performance constraints imposed by heterogeneous hardware favor different data layouts. We attempt to solve this problem by exploring asynchronous data layout conversions for applications requiring different memory access patterns for shared data. We implement the proposed solution within the DataSpaces data staging service, extending it to support heterogeneous application workflows across a broad spectrum of programming models. In addition, we integrate heterogeneous DataSpaces with the Kokkos programming model and propose the Kokkos Staging Space as an extension of the Kokkos data abstraction. This new abstraction enables us to express data on a virtual shared space for multiple Kokkos applications, thus guaranteeing the portability of each application when assembling them into an efficient heterogeneous workflow. We present performance results for the Kokkos Staging Space using a synthetic workflow emulator and three different scenarios representing access frequency and use patterns in shared data. The results show that the Kokkos Staging Space is a superior solution in terms of time-to-solution and scalability compared to existing file-based Kokkos data abstractions for inter-application data exchange.
Using the power balance method, we estimate the maximum electric field on a conducting wall of a cavity containing an interior structure supporting eccentric coaxial modes in the frequency regime where the resonant modes are isolated from each other.
Coherent anti-Stokes Raman scattering (CARS) of the N2 molecule is performed at rates up to 100 kHz for thermometry in the Sandia free-piston, high-temperature shock-tube facility (HST) for reflected-shock conditions in excess of T = 4000 K at pressures up to P = 10 atm. A pulse-burst laser architecture delivers picosecond-duration pulses to provide both the CARS pump and probe photons, and to pump a solid-state optical parametric generator (OPG)/optical parametric amplifier (OPA) source, which provides frequency-tunable Stokes pulses with a bandwidth of 100-120 cm⁻¹. Single-laser-shot and averaged CARS spectra obtained in both the incident (P = 1.1 atm, T = 2090 K) and reflected (P ~ 8-10.5 atm, T > 4000 K) shock regions of the HST are presented. The results indicate that burst-mode CARS is capable of resolving impulsive, high-temperature events in the HST.
Performance assessment is an important tool to estimate the long-term safety for a nuclear waste repository. Performance assessment simulations are subject to multiple kinds of uncertainty including stochastic uncertainty, state of knowledge uncertainty, and model uncertainty. Task F1 of the DECOVALEX project involves comparison of the models and methods used in post-closure performance assessment of deep geologic repositories in fractured crystalline rock, providing an opportunity to compare the effects of different sources of uncertainty. A generic reference case for a mined repository in fractured crystalline rock was put together by participating teams, where each team was responsible for determining how best to represent and implement the model. This work presents the preliminary crystalline reference case results for the Department of Energy (DOE) team.
Centered on modern C++ and the SYCL standard for heterogeneous programming, Data Parallel C++ (DPC++) and Intel's oneAPI software ecosystem aim to lower the barrier to entry for the use of accelerators like FPGAs in diverse applications. In this work, we consider the usage of FPGAs for scientific computing, in particular with the multigrid solver MueLu. We report on early experiences implementing kernels of the solver in DPC++ for execution on Stratix 10 FPGAs, and we evaluate several algorithmic design and implementation choices. These choices not only impact performance, but also shed light on the capabilities and limitations of DPC++ and oneAPI.
ASHRAE and IBPSA-USA Building Simulation Conference
Villa, Daniel V.; Carvallo, Juan P.; Bianchi, Carlo; Lee, Sang H.
Heat waves are increasing in severity, duration, and frequency, making historical weather patterns insufficient for assessments of building resilience. This work introduces a stochastic weather generator called the multi-scenario extreme weather simulator (MEWS) that produces credible future heat waves. MEWS calculates statistical parameters from historical weather data and then shifts them using climate projections of increasing severity and frequency. MEWS is demonstrated using the EnergyPlus medium office prototype model for climate zone 4B using five climate scenarios to 2060. The results show how changes in climate and heat waves affect electric loads, peak loads, and thermal comfort with uncertainty.
This paper proposes a framework to explain and quantify how a Traveling Wave (TW)-based fault location classifier, a Random Forest, is affected by different TW propagation factors. The classifier's goal is to determine the faulty Protection Zone. In order to work with a simplified, yet realistic, distribution system, this work considers a use case with different configurations that are obtained by optionally including several common distribution elements such as voltage regulators, capacitor banks, laterals, and extra loads. Simulated faults are decomposed into frequency bands using the Stationary Wavelet Transform, and the classifier is trained on the energy of these signals. SHapley Additive exPlanations (SHAP) are used to identify the most important features, and the effect of different fault configurations is quantified using the Jensen-Shannon Divergence. Results show that distance, the presence of voltage regulators, and the fault type are the main factors that affect the classifier's behavior.
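A minimal sketch of this kind of pipeline (not the paper's code) is shown below: stationary-wavelet-transform band energies feed a Random Forest, SHAP values rank feature importance, and the Jensen-Shannon divergence compares importance profiles between configurations. The library choices (pywt, scikit-learn, shap, scipy), the synthetic signals, and the "alternative configuration" profile are all illustrative assumptions.

```python
# Sketch only: SWT band-energy features -> Random Forest -> SHAP importances -> JSD.
import numpy as np
import pywt
import shap
from sklearn.ensemble import RandomForestClassifier
from scipy.spatial.distance import jensenshannon

def swt_band_energies(signal, wavelet="db4", level=4):
    """Energy of each stationary-wavelet-transform detail band."""
    coeffs = pywt.swt(signal, wavelet, level=level)      # list of (cA, cD) pairs
    return np.array([np.sum(cD ** 2) for _, cD in coeffs])

rng = np.random.default_rng(0)
n_shots, n_samples = 200, 1024                            # synthetic "fault" records
X = np.array([swt_band_energies(rng.normal(size=n_samples)) for _ in range(n_shots)])
y = rng.integers(0, 3, n_shots)                           # hypothetical protection-zone labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Mean |SHAP| per feature ranks the wavelet bands by influence on the classifier.
sv = np.asarray(shap.TreeExplainer(clf).shap_values(X))
if sv.shape[-1] == X.shape[1]:          # layout (classes, samples, features)
    importance = np.abs(sv).mean(axis=(0, 1))
else:                                   # layout (samples, features, classes)
    importance = np.abs(sv).mean(axis=(0, 2))

# Stand-in for the importance profile obtained under a different configuration
# (e.g., with voltage regulators); JSD quantifies how much the profile shifts.
importance_alt = importance * rng.uniform(0.5, 1.5, importance.size)
jsd = jensenshannon(importance / importance.sum(), importance_alt / importance_alt.sum())
print("band importances:", np.round(importance, 3), " JSD:", round(float(jsd), 3))
```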
Femtosecond laser electronic excitation tagging (FLEET) is a powerful unseeded velocimetry technique typically used to measure one component of velocity along a line, or two or three components from a dot. In this Letter, we demonstrate a dotted-line FLEET technique which combines the dense profile capability of a line with the ability to perform two-component velocimetry with a single camera on a dot. Our set-up uses a single beam path to create multiple simultaneous spots, more than previously achieved in other FLEET spot configurations. We perform dotted-line FLEET measurements downstream of a highly turbulent, supersonic nitrogen free jet. Dotted-line FLEET is created by focusing light transmitted by a periodic mask with rectangular slits of 1.6 × 40 mm² and an edge-to-edge spacing of 0.5 mm, then focusing the imaged light at the measurement region. Up to seven symmetric dots spaced approximately 0.9 mm apart, with mean full-width at half maximum diameters between 150 and 350 µm, are simultaneously imaged. Both streamwise and radial velocities are computed and presented in this Letter.
The visualization community has invested decades of research and development into producing large-scale production visualization tools. Although in situ is a paradigm shift for large-scale visualization, many of the same algorithms and operations apply regardless of whether the visualization is run post hoc or in situ. Thus, there is a great benefit to taking the large-scale code originally designed for post hoc use and leveraging it for use in situ. This chapter describes two in situ libraries, Libsim and Catalyst, that are based on mature visualization tools, VisIt and ParaView, respectively. Because they are based on fully featured visualization packages, they each provide a wealth of features. For each of these systems we outline how the simulation and visualization software are coupled, what the runtime behavior and communication between these components are, and how the underlying implementation works. We also provide use cases demonstrating the systems in action. Both of these in situ libraries, as well as the underlying products they are based on, are made freely available as open-source products. The overviews in this chapter provide a toehold to the practical application of in situ visualization.
The effects of passive pre-chamber (PC) geometry and nozzle pattern, as well as the use of either a conventional spark or a non-equilibrium plasma PC ignition system, on knocking events were studied in an optically accessible single-cylinder gasoline research engine. The equivalence ratio of the charge in the main chamber (MC) was maintained at 0.94 at a constant engine speed of 1300 rpm and a constant engine load of 3.5 bar indicated mean effective pressure for all operating conditions. MC pressure profiles were collected and analyzed to infer the amplitude and the frequency of pressure oscillations that resulted in knocking events. The combustion process in the MC was investigated utilizing high-speed excited methylidyne radical (CH*) chemiluminescence images. The collected results highlighted that PC volume and nozzle pattern substantially affected the knock intensity (KI), while the non-equilibrium plasma ignition system exhibited lower KI compared to a PC equipped with a conventional inductive ignition system. It was also identified that knocking events were likely not generated by conventional end-gas auto-ignition, but by jet-related phenomena as well as jet-flame wall quenching. The relationship between these phenomena and the PC geometry, nozzle pattern, and ignition system is also highlighted and discussed.
The Sandia Optical Fringe Analysis Slope Tool (SOFAST) is a tool that has been developed at Sandia to measure the surface slope of concentrating solar power optics. This tool has largely remained of research quality over the past few years. Since SOFAST is important to ongoing tests happening at Sandia as well as of interest to others outside Sandia, there is a desire to bring SOFAST up to professional software standards. The goal of this effort was to make progress in several broad areas, including code quality, sample data collection, and validation and testing. Over the course of this effort, much progress was made in these areas, and SOFAST is now a much more professional-grade tool. There are, however, some areas of improvement that could not be addressed in the timeframe of this work and will be addressed in the continuation of this effort.
This SAND Report provides an overview of AniMACCS, the animation software developed for the MELCOR Accident Consequence Code System (MACCS). It details what users need to know in order to successfully generate animations from MACCS results. It also includes information on the capabilities, requirements, testing, limitations, input settings, and problem reporting instructions for AniMACCS version 1.3.1. Supporting information is provided in the appendices, such as guidance on the input files required both when using WinMACCS and when running MACCS from the command line.
Operon prediction in prokaryotes is critical not only for understanding the regulation of endogenous gene expression, but also for exogenous targeting of genes using newly developed tools such as CRISPR-based gene modulation. A number of methods have used transcriptomics data to predict operons, based on the premise that contiguous genes in an operon will be expressed at similar levels. While promising results have been observed using these methods, most of them do not address uncertainty caused by technical variability between experiments, which is especially relevant when the amount of data available is small. In addition, many existing methods do not provide the flexibility to determine the stringency with which genes should be evaluated for being in an operon pair. We present OperonSEQer, a set of machine learning algorithms that uses the statistic and p-value from a non-parametric analysis of variance test (Kruskal-Wallis) to determine the likelihood that two adjacent genes are expressed from the same RNA molecule. We implement a voting system to allow users to choose the stringency of operon calls depending on whether their priority is high recall or high specificity. In addition, we provide the code so that users can retrain the algorithm and re-establish hyperparameters based on any data they choose, allowing this method to be expanded as additional data are generated. We show that our approach detects operon pairs that are missed by current methods by comparing our predictions to publicly available long-read sequencing data. OperonSEQer therefore improves on existing methods in terms of accuracy, flexibility, and adaptability.
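The statistical core of this approach can be illustrated with a small sketch (not the released OperonSEQer code): a Kruskal-Wallis test applied to per-base RNA-seq coverage over two adjacent genes and the intergenic gap. A low H statistic (high p-value) means the three regions have similar coverage, consistent with co-transcription; the statistic and p-value are what would feed the downstream machine learning classifiers. The coverage arrays below are hypothetical Poisson draws.

```python
# Sketch only: Kruskal-Wallis test on coverage for an adjacent gene pair.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

def operon_pair_stat(cov_gene1, cov_intergenic, cov_gene2):
    """Return the Kruskal-Wallis H statistic and p-value for an adjacent gene pair."""
    return kruskal(cov_gene1, cov_intergenic, cov_gene2)

# Hypothetical coverage: a co-transcribed pair (similar depth throughout) ...
same_operon = operon_pair_stat(rng.poisson(100, 900), rng.poisson(95, 80), rng.poisson(105, 750))
# ... versus independently transcribed genes (coverage drops between them).
diff_operon = operon_pair_stat(rng.poisson(100, 900), rng.poisson(5, 80), rng.poisson(40, 750))

print("same operon:            H=%.1f  p=%.3g" % same_operon)
print("separate transcripts:   H=%.1f  p=%.3g" % diff_operon)
```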
Quantum diamond microscope (QDM) magnetic field imaging is an emerging interrogation and diagnostic technique for integrated circuits (ICs). To date, the ICs measured with a QDM have been either too complex for us to predict the expected magnetic fields and benchmark the QDM performance or too simple to be relevant to the IC community. In this paper, we establish a 555 timer IC as a "model system" to optimize QDM measurement implementation, benchmark performance, and assess IC device functionality. To validate the magnetic field images taken with a QDM, we use a SPICE electronic circuit simulator and finite-element analysis (FEA) to model the magnetic fields from the 555 die for two functional states. We compare the advantages and the results of three IC-diamond measurement methods, confirm that the measured and simulated magnetic images are consistent, identify the magnetic signatures of current paths within the device, and discuss using this model system to advance QDM magnetic imaging as an IC diagnostic tool.
Parallelizing Gated Recurrent Unit (GRU) networks is a challenging task, as the training procedure of GRU is inherently sequential. Prior efforts to parallelize GRU have largely focused on conventional parallelization strategies such as data-parallel and model-parallel training algorithms. However, when the given sequences are very long, existing approaches are still inevitably performance-limited in terms of training time. In this paper, we present a novel parallel training scheme (called parallel-in-time) for GRU based on a multigrid reduction in time (MGRIT) solver. MGRIT partitions a sequence into multiple shorter sub-sequences and trains the sub-sequences on different processors in parallel. The key to achieving speedup is a hierarchical correction of the hidden state to accelerate end-to-end communication in both the forward and backward propagation phases of gradient descent. Experimental results on the HMDB51 dataset, where each video is an image sequence, demonstrate that the new parallel training scheme achieves up to 6.5× speedup over a serial approach. As the efficiency of our new parallelization strategy is associated with the sequence length, our parallel GRU algorithm achieves significant performance improvement as the sequence length increases.
Proceedings of ISAV 2022: IEEE/ACM International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
This paper reports on Catalyst usability and initial adoption by SPARC analysts. The use case approach highlights the analysts' perspective. Impediments to adoption can be due to deficiencies in software capabilities, or analysts may identify mundane inconveniences and barriers that prevent them from fully leveraging Catalyst. With that said, for many analyst tasks Catalyst provides enough relative advantage that they have begun applying it in their production work, and they recognize the potential for it to solve problems they currently struggle with. The findings in this report include specific issues and minor bugs in ParaView Python scripting, which are viewed as having straightforward solutions, as well as a broader adoption analysis.
We propose a two-stage scenario-based stochastic optimization problem to determine investments that enhance power system resilience. The proposed optimization problem minimizes the Conditional Value at Risk (CVaR) of load loss to target low-probability, high-impact events. We provide results in the context of generator winterization investments in Texas using winter storm scenarios generated from historical data collected from Winter Storm Uri. Results illustrate how the CVaR metric can be used to minimize the tail of the load-loss distribution and how risk aversion impacts investment decisions.
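The objective at the heart of such a model can be sketched with the standard Rockafellar-Uryasev reformulation of CVaR, shown below as a small, self-contained example (not the paper's model): winterization decisions are relaxed to continuous values in [0, 1], the scenario load-loss data are synthetic, and the real two-stage formulation would additionally include unit commitment and network constraints.

```python
# Sketch only: minimize CVaR of scenario load loss under a winterization budget.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_gen, n_scen, alpha, budget = 6, 100, 0.95, 2.0
cost = rng.uniform(0.5, 1.5, n_gen)                     # winterization cost per generator (hypothetical)
outage_loss = rng.uniform(0.0, 10.0, (n_scen, n_gen))   # load loss from each un-winterized unit per scenario

x = cp.Variable(n_gen)                                  # fraction of winterization applied (relaxed binary)
eta = cp.Variable()                                     # auxiliary variable approximating the VaR
scenario_loss = outage_loss @ (1 - x)                   # residual load loss in each scenario

# CVaR_alpha = eta + E[(loss - eta)^+] / (1 - alpha), with equal scenario weights
cvar = eta + cp.sum(cp.pos(scenario_loss - eta)) / ((1 - alpha) * n_scen)
prob = cp.Problem(cp.Minimize(cvar), [cost @ x <= budget, x >= 0, x <= 1])
prob.solve()

print("CVaR of load loss: %.2f" % prob.value)
print("investment levels:", np.round(x.value, 2))
```

Because only the worst (1 - alpha) fraction of scenarios enters the objective, the budget is naturally steered toward generators whose failures dominate the tail of the load-loss distribution.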
Research is presented for carbon emissions abatement utilizing concentrating solar power (CSP) heating for culinary industrial process heat applications of roasting peppers. For this investigation, Sandia National Laboratories (SNL) performed high-intensity flux-profile heating, as high as approximately 12.2 W/cm², roasting peppers near 615 °C. This work also explores the suitability of culinary roasting as applied to different forms of CSP heating as well as techno-economic costs. Traditionally, chile pepper roasting has used propane gas heating to achieve similar temperatures and food roasting profiles in batch-style processing. Here, the investigators roasted peppers on the top level of the National Solar Thermal Test Facility (NSTTF) solar tower for multiple roasting trials, with and without water. For comparison, the team also performed roasting with a traditional propane gas heating source, monitoring the volume of propane consumed over time to assess the carbon emissions abated by using CSP. Results showed that roasting peppers with CSP delivered approximately 26 MJ of energy and abated approximately 0.122 kg CO2/kg chile for a 10 kg bag. The team also determined that pre-wetting the peppers before roasting, under both propane and CSP heat sources, increased the roast time by approximately 3 minutes to achieve the same qualitative optimal roast state compared to dry peppers.
Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting machine learning model requires significantly fewer data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common methods in machine learning for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on two different material datasets, one experimental and one computational. The monotonic GP is compared against the regular GP, where a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime, the monotonic effect starts fading away as one goes beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy or the dimensionality is high, and where monotonicity is supported by strong physical reasoning.
Penetration of the power grid by renewable energy sources, distributed storage, and distributed generators is becoming more widespread. Increased utilization of these distributed energy resources (DERs) has given rise to additional protection concerns. With radial feeders terminating in DERs or in microgrids containing DERs, standard non-directional radial protection may be rendered useless. Moreover, coordination will first require the protection engineer to determine what combination of directional and nondirectional elements is required to properly protect the system at a reasonable cost. In this paper, a method is proposed to determine the type of protection that should be placed on each line. Further, an extreme cost constraint is assumed so that an attempt is made to protect a meshed network using only overcurrent protection devices. A method is proposed where instantaneous reclosers are placed in locations that cause the system to temporarily become radial when a fault occurs. Directional and nondirectional overcurrent (OC) relays are placed in locations that allow for standard radial coordination techniques to be utilized while the reclosers are open to clear any sustained faults. The proposed algorithm is found to effectively determine the placement of protection devices while utilizing a minimal number of directional devices. Additionally, it was shown for the IEEE 14-bus case that the proposed relay placement algorithm results in a system where relay coordination remains feasible.
This paper presents a type-IV wind turbine generator (WTG) model developed in MATLAB/Simulink. An aerodynamic model is used to improve an electromagnetic transient model. This model is further developed by incorporating a single-mass model of the turbine and including generator torque control from an aerodynamic model. The model is validated using field data collected from an actual WTG located in the Scaled Wind Farm Technology (SWiFT) facility. The model takes the nacelle wind speed as an estimate of the incoming wind speed. To ensure the model and the SWiFT WTG field data are compared accurately, the wind speed is estimated using a Kalman filter. Simulation results show that using a single-mass model instead of a two-mass model for aerodynamic torque, including the generator torque control from SWiFT, estimating wind speed via the Kalman filter, and tuning the synchronous generator accurately reproduces the generator torque, speed, and power compared to the SWiFT WTG field data.
In the summer of 2020, the National Aeronautics and Space Administration (NASA) launched a spacecraft as part of the Mars 2020 mission. The rover on the spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. The MMRTG uses radioactive plutonium dioxide. NASA prepared a Supplemental Environmental Impact Statement (SEIS) for the mission in accordance with the National Environmental Policy Act. The SEIS provides information related to updates to the potential environmental impacts associated with the Mars 2020 mission as outlined in the Final Environmental Impact Statement (FEIS) for the Mars 2020 Mission issued in 2014 and associated Record of Decision (ROD) issued in January 2015. The Nuclear Risk Assessment (NRA) 2019 Update includes new and updated Mars 2020 mission information since the publication of the 2014 FEIS and the updates to the Launch Approval Process with the issuance of Presidential Memorandum on Launch of Spacecraft Containing Space Nuclear Systems, National Security Presidential Memorandum 20 (NSPM-20). The NRA 2019 Update addresses the responses of the MMRTG to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks discussed in the SEIS. This paper provides a summary of the methods and results used in the NRA 2019 Update.
We present a field-deployable microfluidic immunoassay device in response to the need for sensitive, quantitative, and high-throughput protein detection at point-of-need. The portable microfluidic system facilitates eight magnetic bead-based sandwich immunoassays from raw samples in 45 minutes. An innovative bead actuation strategy was incorporated into the system to automate multiple sample process steps with minimal user intervention. The device is capable of quantitative and sensitive protein analysis with a 10 pg/ml detection limit from interleukin 6-spiked human serum samples. We envision the reported device offering ultrasensitive point-of-care immunoassay tests for timely and accurate clinical diagnosis.
The mobilome of a microbe, i.e., its set of mobile elements, has major effects on its ecology, and is important to delineate properly in each genome. This becomes more challenging for incomplete genomes, and even more so for metagenome-assembled genomes (MAGs), where misbinning of scaffolds and other losses can occur. Genomic islands (GIs), which integrate into the host chromosome, are a major component of the mobilome. Our GI-detection software TIGER, unique in its precise mapping of GI termini, was applied to 74,561 genomes from 2,473 microbial species, each species containing at least one MAG and one isolate genome. A species-normalized deficit of ∼1.6 GIs/genome was measured for MAGs relative to isolates. To test whether this undercount was due to the higher fragmentation of MAG genomes, TIGER was updated to enable detection of split GIs whose termini are on separate scaffolds or that wrap around the origin of a circular replicon. This doubled GI yields, and the new split GIs matched the quality of single-scaffold GIs, except that highly fragmented GIs may lack central portions. Cross-scaffold search is an important upgrade to GI detection as fragmented genomes increasingly dominate public databases. TIGER2 better captures MAG microdiversity, recovering niche-defining GIs and supporting microbiome research aims such as virus-host linking and ecological assessment.
In the face of increasing natural disasters and an aging grid, utilities need to optimally choose investments to the existing infrastructure to promote resiliency. This paper presents a new investment decision optimization model to minimize unserved load over the recovery time and improve grid resilience to extreme weather event scenarios. Our optimization model includes a network power flow model which decides generator status and generator dispatch, optimal transmission switching (OTS) during the multi-time period recovery process, and an investment decision model subject to a given budget. Investment decisions include the hardening of transmission lines, generators, and substations. Our model uses a second order cone programming (SOCP) relaxation of the AC power flow model and is compared to the classic DC power flow approximation. A case study is provided on the 73-bus RTS-GMLC test system for various investment budgets and multiple hurricane scenarios to highlight the difference in optimal investment decisions between the SOCP model and the DC model, and demonstrate the advantages of OTS in resiliency settings. Results indicate that the network models yield different optimal investments, unit commitment, and OTS decisions, and an AC feasibility study indicates our SOCP resiliency model is more accurate than the DC model.
The use of containerization technology in high performance computing (HPC) workflows has substantially increased recently because it makes workflows much easier to develop and deploy. Although many HPC workflows include multiple datasets and multiple applications, they have traditionally all been bundled together into one monolithic container. This hinders the ability to trace the thread of execution, thus preventing scientists from establishing data provenance or achieving workflow reproducibility. To provide a solution to this problem, we extend the functionality of a popular HPC container runtime, Singularity. We implement both the ability to compose fine-grained containerized workflows and the ability to execute these workflows within the Singularity runtime with automatic metadata collection. Specifically, the new functionality collects a record trail of execution and creates data provenance. The use of our augmented Singularity is demonstrated with an earth science workflow, SOMOSPIE. The workflow is composed via our augmented Singularity, which creates fine-grained containers and collects the metadata to trace, explain, and reproduce the prediction of soil moisture at a fine resolution.
A primary objective of repository modeling is identification and assessment of features and processes providing safety performance. Sensitivity analyses typically provide information on how input parameters affect performance, not features and processes. To quantify the effects of features and processes, tracers can be introduced virtually in model simulations and tracked in informative ways. This paper describes five ways virtual tracers can be used to directly measure the relative importance of several features, processes, and combinations of features and processes in repository performance assessment modeling.
Soft-magnetic alloys exhibit exceptional functional properties that are beneficial for a variety of electromagnetic applications. These alloys are conventionally manufactured into sheet or bar forms using well-established ingot metallurgy practices that involve hot- and cold-working steps. However, recent developments in process metallurgy have unlocked opportunities to directly produce bulk soft-magnetic alloys with improved, and often tailorable, structure–property relationships that are unachievable conventionally. The emergence of unconventional manufacturing routes for soft-magnetic alloys is largely motivated by the need to improve the energy efficiency of electromagnetic devices. In this review, literature that details emerging manufacturing approaches for soft-magnetic alloys is overviewed. This review covers (1) severe plastic deformation, (2) recent advances in melt spinning, (3) powder-based methods, and (4) additive manufacturing. These methods are discussed in comparison with conventional rolling and bar processing. Perspectives and recommended future research directions are also discussed.
Two techniques were developed to allow users of microfabricated surface ion traps to detect RF breakdown as soon as it happens, without needing to remove devices from vacuum and look at them with a microscope.
Based on the rationale presented, nuclear criticality is improbable after salt creep causes compaction of criticality control overpacks (CCOs) disposed at the Waste Isolation Pilot Plant, an operating repository in bedded salt for the disposal of transuranic (TRU) waste from atomic energy defense activities. For most TRU waste, the possibility of post-closure criticality is exceedingly small either because the salt neutronically isolates TRU waste canisters or because closure of a disposal room from salt creep does not sufficiently compact the low mass of fissile material. The criticality potential has been updated here because of the introduction of CCOs, each of which may contain up to 380 fissile gram equivalents of plutonium-239. The criticality potential is evaluated through high-fidelity geomechanical modeling of a disposal room filled with CCOs during two representative conditions: (1) large salt block fall, and (2) gradual salt compaction (without brine seepage and subsequent gas generation, to permit maximum room closure). Geomechanical models of rock fall demonstrate that three tiers of CCOs are not greatly disrupted. Geomechanical models of gradual room closure from salt creep predict irregular arrays of closely packed CCOs after 1000 years, when room closure has asymptotically approached maximum compaction. Criticality models of spheres and cylinders of 380 fissile gram equivalents of plutonium (as oxide) at the predicted irregular spacing demonstrate that an array of CCOs is not critical when surrounded by salt and magnesium oxide, provided the amount of hydrogenous material shipped in the CCO (usually water and plastics) is controlled or boron carbide (a neutron poison) is mixed with the fissile contents.
Organizations that monitor for underground nuclear explosive tests are interested in techniques that automatically characterize recurring events such as aftershocks to reduce the human analyst effort required to produce high-quality event bulletins. Waveform correlation is a technique that is effective in finding similar waveforms from repeating seismic events. In this study, we apply waveform correlation in combination with template event metadata to two aftershock sequences in the Middle East to seek corroborating detections from multiple stations in the International Monitoring System of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization. We use waveform templates from stations that are within regional distance of aftershock sequences to detect subsequent events, then use template event metadata to discover what stations are likely to record corroborating arrival waveforms for recurring aftershock events at the same location, and develop additional waveform templates to seek corroborating detections. We evaluate the results with the goal of determining whether applying the method to aftershock events will improve the choice of waveform correlation detections that lead to bulletin-worthy events and reduction of analyst effort.
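The detection step that drives this kind of study can be illustrated with a small, pure-NumPy sketch (not the IMS processing pipeline): a normalized cross-correlation of a master-event template slid along continuous data, with detections declared above a threshold. The synthetic trace, amplitudes, and the 0.7 threshold are illustrative assumptions.

```python
# Sketch only: waveform-correlation detection of repeating events.
import numpy as np

def normalized_cc(template, data):
    """Normalized cross-correlation (values in [-1, 1]) of a template vs. a long trace."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.zeros(len(data) - n + 1)
    for i in range(cc.size):
        w = data[i:i + n]
        s = w.std()
        if s > 0:
            cc[i] = np.dot(t, (w - w.mean()) / s)
    return cc

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 2.0 * np.linspace(0, 5, 500)) * np.hanning(500)  # tapered "master event"
data = 0.2 * rng.normal(size=20000)     # continuous noisy trace
data[4000:4500] += template             # repeating aftershock buried in noise
data[12000:12500] += 0.6 * template     # weaker repeat of the same source

cc = normalized_cc(template, data)
hits = np.flatnonzero(cc > 0.7)         # samples exceeding the detection threshold
print("samples exceeding threshold:", hits)
print("correlation at the two event onsets: %.2f, %.2f" % (cc[4000], cc[12000]))
```

Corroboration across stations, as described above, amounts to repeating this detection with templates from multiple stations and requiring consistent relative arrival times before promoting an event to the bulletin.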
We present optical metrology at the Sandia fog chamber facility. Repeatable and well characterized fogs are generated under different atmospheric conditions and applied for light transport model validation and computational sensing development.
Measurements that occur within the internal layers of a quantum circuit—midcircuit measurements—are a useful quantum-computing primitive, most notably for quantum error correction. Midcircuit measurements have both classical and quantum outputs, so they can be subject to error modes that do not exist for measurements that terminate quantum circuits. Here we show how to characterize midcircuit measurements, modeled by quantum instruments, using a technique that we call quantum instrument linear gate set tomography (QILGST). We then apply this technique to characterize a dispersive measurement on a superconducting transmon qubit within a multiqubit system. By varying the delay time between the measurement pulse and subsequent gates, we explore the impact of residual cavity photon population on measurement error. QILGST can resolve different error modes and quantify the total error from a measurement; in our experiment, for delay times above 1000 ns we measure a total error rate (i.e., half diamond distance) of ϵ⋄ = 8.1 ± 1.4%, a readout fidelity of 97.0 ± 0.3%, and output quantum-state fidelities of 96.7 ± 0.6% and 93.7 ± 0.7% when measuring 0 and 1, respectively.
A newly developed variable-weight DSMC collision scheme for inelastic collision events is applied to PIC-DSMC modelling of electrical breakdown in 1-dimensional helium and argon-filled gaps. Application of the collision scheme to various inelastic collisional and gas-surface interaction processes (electron-impact ionization, electronic excitation, secondary electron emission) is considered. The collision scheme is shown to improve the level of noise in the computed current density compared to the commonly used approach of sampling a single process, whilst maintaining a comparable level of computational cost and providing less variance in the average number of particles per cell.
Modern-day processes depend heavily on data-driven techniques that use large datasets, clustered into relevant groups, to achieve higher efficiency, better utilization of operations, and improved decision making. However, building these datasets and clustering by similar products is challenging in research environments that produce many novel and highly complex low-volume technologies. In this work, the author develops an algorithm that calculates the similarity between multiple low-volume products from a research environment using a real-world data set. The algorithm is applied to pulse power operations data from a facility that routinely performs novel experiments for inertial confinement fusion, radiation effects, and nuclear stockpile stewardship. The author shows that the algorithm is successful in calculating similarity between experiments of varying complexity, such that comparable shots can be used for further analysis. Furthermore, it has been able to identify experiments not traditionally seen as identical.
Proceedings - 2022 IEEE International Symposium on Software Reliability Engineering Workshops, ISSREW 2022
Ketterer, Austin; Shekar, Asha; Yi, Edgardo B.; Bagchi, Saurabh; Clements, Abraham A.
Firmware emulation is useful for finding vulnerabilities, performing debugging, and testing functionalities. However, the process of enabling firmware to execute in an emulator (i.e., re-hosting) is difficult. Each piece of the firmware may depend on hardware peripherals outside the microcontroller that are inaccessible during emulation. Current practices involve painstakingly disentangling these dependencies or replacing them with developed models that emulate functions interacting with hardware. Unfortunately, both are highly manual and error-prone. In this paper, we introduce a systematic graph-based approach to analyze firmware binaries and determine which functions need to be replaced. Our approach is customizable to balance the fidelity of the emulation and the amount of effort it would take to achieve the emulation by modeling functions. We run our algorithm across a number of firmware binaries and show its ability to capture and remove a large majority of hardware dependencies.
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support to the system has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLIs) (most commonly photovoltaic inverters), it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs) (used for energy storage systems, standalone systems, and as uninterruptible power supplies), these requirements either are not yet documented or require a more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desirable. With the proper control schemes, a GFMI can help maintain grid stability through fast response compared to rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviations, as well as the GFMI operating as a standalone system and subjected to various changes in loads.
This report describes recommended abuse testing procedures for rechargeable energy storage systems (RESSs) for electric vehicles. This report serves as a revision to the USABC Electrical Energy Storage System Abuse Test Manual for Electric and Hybrid Electric Vehicle Applications (SAND99-0497).
Proceedings of the Nuclear Criticality Safety Division Topical Meeting, NCSD 2022 - Embedded with the 2022 ANS Annual Meeting
Salazar, Alex
The postclosure criticality safety assessment for the direct disposal of dual-purpose canisters (DPCs) in a geologic repository includes considerations of transient criticality phenomena. The power pulse from a hypothetical transient criticality event in an unsaturated alluvial repository is evaluated for a DPC containing 37 spent pressurized water reactor (PWR) assemblies. The scenario assumes that the conditions for baseline criticality are achieved through flooding with groundwater and progressive failure of neutron absorbing media. A preliminary series of steady-state criticality calculations is conducted to characterize reactivity feedback due to absorber degradation, Doppler broadening, and thermal expansion. These feedback coefficients are used in an analysis with a reactor kinetics code to characterize the transient pulse given a positive reactivity insertion for a given length of time. The time-integrated behavior of the pulse can be used to model effects on the DPC and surrounding barriers in future studies and determine if transient criticality effects are consequential.
Creation of streaming video stimuli that allow for strict experimental control while providing ease of scene manipulation is difficult to achieve but desired by researchers seeking to approach ecological validity in contexts that involve processing streaming visual information. To that end, we propose leveraging video game modding tools as a method of creating research-quality stimuli. As a pilot effort, we used a video game sandbox tool (Garry’s Mod) to create three streaming video scenarios designed to mimic video feeds that physical security personnel might observe. All scenarios required participants to identify the presence of a threat appearing during the video feed. Each scenario differed in level of complexity, in that one scenario required only location monitoring, one required location and action monitoring, and one required location, action, and conjunction monitoring, in that an action was only considered a threat when performed by a certain character model. While there was no behavioral effect of scenario in terms of accuracy or response times, in all scenarios we found evidence of a P300 when comparing responses to threatening stimuli to those of standard stimuli. Results therefore indicate that sufficient levels of experimental control may be achieved to allow for the precise timing required for ERP analysis. Thus, we demonstrate the feasibility of using existing modding tools to create video scenarios amenable to neuroimaging analysis.
In the near future, grid operators are expected to regularly use advanced distributed energy resource (DER) functions, defined in IEEE 1547-2018, to perform a range of grid-support operations. Many of these functions adjust the active and reactive power of the device through commanded or autonomous modes, which will produce new stresses on the grid-interfacing power electronics components, such as DC/AC inverters. In previous work, multiple DER devices were instrumented to evaluate additional component stress under multiple reactive power setpoints. We utilize quasi-static time-series simulations to determine the voltage-reactive power mode (volt-var) mission profile of inverters in an active power system. Mission profiles and loss estimates are then combined to estimate the reduction of the useful life of inverters from different reactive power profiles. Based on thermal damage due to switching in the power transistors, the average lifetime reduction for an inverter between standard unity power factor operation and the IEEE 1547 default volt-var curve was found to be approximately 0.15%. For an inverter with an expected 20-year lifetime, the 1547 volt-var curve would reduce the expected life of the device by 12 days. This framework for determining an inverter's useful life from experimental and modeling data can be applied to any failure mechanism and advanced inverter operation.
Proceedings - Electronic Components and Technology Conference
Jia, Xiaofan; Moon, Kyoung S.; Kim, Joon W.; Huang, Kai Q.; Jordan, Matthew J.; Swaminathan, Madhavan
This work presents the implementation and characterization of a die-embedded, antenna-integrated glass package for RF modules in D-band. The proposed package uses glass as the core material, which matches the coefficient of thermal expansion (CTE) of RF chips and printed circuit boards (PCBs) well. The redistribution layer (RDL) for electrical connections is built on low-loss polymeric build-up dielectric films (ABF-GL102). Dummy dies are embedded in the glass cavities for characterization. The interconnects between the die pads and the package are implemented using micro-vias. An 8-element series-fed microstrip patch antenna is also integrated on the low-loss RDL. The proposed glass-panel-embedded package addresses the electrical loss and parasitics from the interconnects. With micro-vias and transmission lines built on the low-loss RDL, the glass embedded package provides low-loss and low-parasitic chip-to-chip and chip-to-antenna interconnects. Using temporary thermal release tapes, this package also shows great potential to address the high heat dissipation from D-band power amplifiers.
Wave energy converters have yet to reach broad market viability. Traditionally, the levelized cost of energy has been considered the ultimate stage gate through which wave energy developers must pass in order to find success (i.e., the levelized cost of wave energy must be less than that of solar and wind). However, real-world energy decisions are not based solely on levelized cost of energy. In this study, we consider the energy mix in California in the year 2045, by which the state plans to achieve zero-carbon energy production. By considering temporal electricity production and consumption, we are able to perform a more informed analysis of the decision process to address this challenge. The results show that, due to the high level of ocean wave energy in the winter months, wave energy provides a valuable complement to solar and wind, which have higher production in the summer. Thus, based on this complementary temporal aspect, wave energy appears cost-effective, even when the cost of installation and maintenance is twice that of solar and wind.
With machine learning (ML) technologies rapidly expanding to new applications and domains, users are collaborating with artificial intelligence-assisted diagnostic tools to a larger and larger extent. But what impact does ML aid have on cognitive performance, especially when the ML output is not always accurate? Here, we examined the cognitive effects of the presence of simulated ML assistance (including both accurate and inaccurate output) on two tasks: a domain-specific nuclear safeguards task and a domain-general visual search task. Patterns of performance varied across the two tasks for both the presence of ML aid and the category of ML feedback (e.g., false alarm). These results indicate that differences such as domain could influence users' performance with ML aid, and suggest the need to test the effects of ML output (and associated errors) in the specific context of use, especially when the stimuli of interest are vague or ill-defined.
Schwering, Paul C.; Lowry, Thomas S.; Hinz, Nicholas; Matson, Gabe; Sabin, Andrew; Blake, Kelly; Zimmerman, Jade; Sewell, Steven; Cumming, William
The Basin & Range Investigations for Developing Geothermal Energy (BRIDGE) Project kicked off in the Autumn of 2021. The Department of Energy Geothermal Technologies Office (GTO) funded BRIDGE as part of a broader GTO initiative to advance the identification and development of hidden, or “blind”, geothermal energy resources in the Basin and Range Province (Basin & Range) of the western USA. The BRIDGE Team is a collaboration led by Sandia National Laboratories (Sandia) with partners from Geologica Geothermal Group, the US Navy Geothermal Program Office, and others that will contribute to various stages of the project. The focus of this project is on Western Nevada, with areas of interest, identified chiefly from the prior Nevada Play Fairway Analysis (PFA) study, located primarily in Churchill and Mineral Counties including lands managed by the Department of Defense (DOD). The first stage of BRIDGE is focused on reconnaissance of PFA targets that are suspected or known to be associated with hidden geothermal resources on DOD and surrounding lands. Helicopter-borne transient electromagnetic (HTEM) surveying is being used in a novel conceptual approach for optimizing shallow and deep well targeting in Basin & Range geothermal exploration. This reconnaissance phase is part of the overall BRIDGE workflow:
1. Assess the pre-survey likelihood of geothermal systems in the study area based on PFA reviews and a reanalysis of existing information to constrain subsurface temperature, structure, hydrology, and thermal manifestations.
2. Design and execute HTEM resistivity surveying to image the depth to the low-resistivity, low-permeability clay cap, within which a thermally conductive (linear) temperature gradient could be targeted for drilling, and potentially image the underlying higher resistivity associated with shallow aquifers hosting outflows from deeper geothermal systems.
3. Drill temperature gradient (TG) wells that penetrate a thick enough section of the clay cap detected by HTEM surveying to provide a linear thermal gradient that could be reliably extrapolated to the base of the cap.
4. In areas where the TG wells detected a prospective temperature gradient but where the HTEM survey did not penetrate to the base of the cap, conduct surface magnetotelluric (MT) resistivity surveys to image the base of the cap and identify the depth to which the linear TG well gradient could be reliably extrapolated.
5. On the most prospective target(s), drill at least one testable slim-hole well to discover the resource associated with the interpreted geothermal reservoir upflow source.
The first stage of the project and the second-stage HTEM survey have been completed. Preliminary results are being analyzed with respect to potential TG targets and plans for follow-up surveys, geophysical joint inversion, conceptual model development, and interpretation.
Understanding the lightning science behind the lightning detected by remote sensing systems is crucial to Sandia’s remote sensing program. Improved understanding of lightning properties can lead to improvements of onboard and/or ground-based background signal discrimination.
The latest high temperature (HT) microcontrollers and memory technology have been investigated for the purpose of enhancing downhole instrumentation capabilities at temperatures above 210°C. As part of the effort, five microcontrollers (Honeywell HT83C51, RelChip RC10001, Texas Instruments SM470R1B1M-HT, SM320F2812-HT, SM320F28335-HT) and one memory chip (RelChip RC2110836) have been evaluated at their rated temperatures for a period of one month to determine life expectancy and performance. Pulse-rate measurements of the integrated circuits and internal memory scans were performed during testing by remotely located auxiliary components. This paper describes the challenges encountered in the operation and HT testing of these components. Long-term HT test results show the variation in power consumption and packaging degradation. The work described in this paper improves downhole instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates at temperatures between 210°C and 300°C.
There is a need to perform offline anomaly detection in count data streams to identify both systemic changes and outliers simultaneously. We propose a new algorithmic method, called the Anomaly Detection Pipeline, which leverages common statistical process control procedures in a novel way to accomplish this. The method we propose does not require user-defined control limits or phase I training data, automatically identifying regions of stability for improved parameter estimation to support change point detection. The method does not require data to be normally distributed, and it detects outliers relative to the regimes in which they occur. Our proposed method performs comparably to state-of-the-art change point detection methods, provides additional capabilities, and is extendable to a larger set of possible data streams than known methods.
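The kinds of SPC building blocks such a pipeline combines can be sketched as follows (this is not the authors' Anomaly Detection Pipeline): a c-chart flags point outliers in a count stream, and a one-sided CUSUM flags a sustained (systemic) shift. For illustration the baseline rate is estimated from an assumed-stable window and outliers are flagged against that fixed baseline, whereas the actual method identifies stable regions automatically and judges outliers relative to the regime in which they occur.

```python
# Sketch only: c-chart for outliers plus a clipped one-sided CUSUM for a systemic shift.
import numpy as np

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(20, 150), rng.poisson(30, 150)])  # systemic shift at t = 150
counts[40] = 60                                                        # isolated outlier

lam = counts[:100].mean()                        # baseline rate from an assumed-stable window
ucl = lam + 3 * np.sqrt(lam)                     # c-chart control limits
lcl = max(lam - 3 * np.sqrt(lam), 0.0)
outliers = np.flatnonzero((counts > ucl) | (counts < lcl))

# One-sided CUSUM for an upward shift; spikes are clipped at the UCL so that a
# single outlier cannot trigger the change signal on its own.
clipped = np.minimum(counts, ucl)
k, h = 5.0, 4 * np.sqrt(lam)                     # k: half the shift of interest; h: decision interval
s, change_point = 0.0, None
for t, c in enumerate(clipped):
    s = max(0.0, s + (c - lam - k))
    if s > h:
        change_point = t
        break

print("c-chart outliers at indices:", outliers)
print("CUSUM signals a systemic change at index:", change_point)
```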
This document presents tests from the Sierra Structural Mechanics verification test suite. Each of these tests is run nightly with the Sierra/SD code suite, and the results of the test are checked against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the Sierra/SD code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.
The precise estimation of the performance loss rate (PLR) of photovoltaic (PV) systems is vital for reducing investment risks and increasing the bankability of the technology. Until recently, the PLR of fielded PV systems was mainly estimated through the extraction of a linear trend from a time series of performance indicators. However, operating PV systems exhibit failures and performance losses that cause variability in the performance and may bias the PLR results obtained from linear trend techniques. Change-point (CP) methods were thus introduced to identify nonlinear trend changes and behaviour. The aim of this work is to perform a comparative analysis among different CP techniques for estimating the annual PLR of eleven grid-connected PV systems installed in Cyprus. Outdoor field measurements over an 8-year period (June 2006-June 2014) were used for the analysis. The obtained results when applying different CP algorithms to the performance ratio time series (aggregated into monthly blocks) demonstrated that the extracted trend may not always be linear but can sometimes exhibit nonlinearities. The application of different CP methods resulted in PLR values that differ by up to 0.85% per year (for the same number of CPs/segments).
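A minimal sketch of a change-point-aware PLR estimate is shown below (not the exact methods compared in the study): shifts in a monthly performance-ratio series are detected with the `ruptures` package as one possible CP implementation, and a linear trend is then fitted per segment and expressed in %/year. The data, the step loss, and the penalty value are illustrative assumptions.

```python
# Sketch only: PELT change-point detection on a monthly performance-ratio series,
# followed by a per-segment linear PLR estimate.
import numpy as np
import ruptures as rpt

months = np.arange(96)                                   # 8 years of monthly blocks
pr = 0.82 - 0.005 / 12 * months                          # underlying slow degradation
pr[60:] -= 0.03                                          # step loss, e.g., after a failure
pr += np.random.default_rng(0).normal(0, 0.004, months.size)

# PELT search for mean shifts in the series (penalty chosen for illustration)
breakpoints = rpt.Pelt(model="l2").fit(pr).predict(pen=0.005)

start = 0
for end in breakpoints:                                  # the last breakpoint equals len(pr)
    slope, _ = np.polyfit(months[start:end], pr[start:end], 1)
    plr = 100 * 12 * slope / pr[start:end].mean()        # % per year, relative to segment mean
    print(f"segment {start:3d}-{end:3d}: PLR = {plr:+.2f} %/yr")
    start = end
```

Comparing the per-segment slopes with the single slope of a whole-series linear fit illustrates how an undetected step change biases the conventional PLR estimate.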
Simple but mission-critical internet-based applications that require extremely high reliability, availability, and verifiability (e.g., auditability) could benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is normally publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. In this work, we investigate using secure multiparty computation (MPC) techniques to protect the privacy of a blockchain computation. While our main goal is to hide both the data and the computed function itself, we also consider the standard MPC setting where the function is public. We describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a blockchain MPC architecture and system. The GABLE architecture specifies the roles and capabilities of the players. GABLE includes two approaches for implementing MPC over blockchain: Garbled Circuits (GC), evaluating universal circuits, and Garbled Finite State Automata (GFSA). We formally model and prove the security of GABLE implemented over garbling schemes, a popular abstraction of GC and GFSA due to Bellare et al. (CCS 2012). We analyze in detail the performance (including Ethereum gas costs) of both approaches and discuss the trade-offs. We implement a simple prototype of GABLE and report on the implementation issues and experience.
This article presents a notable advance toward the development of a new method of increasing the single-axis tracking photovoltaic (PV) system power output by improving the determination and near-term prediction of the optimum module tilt angle. The tilt angle of the plane receiving the greatest total irradiance changes with Sun position and atmospheric conditions including cloud formation and movement, aerosols, and particulate loading, as well as varying albedo within a module's field of view. In this article, we present a multi-input convolutional neural network that can create a profile of plane-of-array irradiance versus surface tilt angle over a full 180° arc from horizon to horizon. As input, the neural network uses the calculated solar position and clear-sky irradiance values, along with sky images. The target irradiance values are provided by the multiplanar irradiance sensor (MPIS). In order to account for varying irradiance conditions, the MPIS signal is normalized by the theoretical clear-sky global horizontal irradiance. Using this information, the neural network outputs an N-dimensional vector, where N is the number of points to approximate the MPIS curve via Fourier resampling. The output vector of the model is smoothed with a Gaussian kernel to account for error in the downsampling and subsequent upsampling steps, as well as to smooth the unconstrained output of the model. These profiles may be used to perform near-term prediction of angular irradiance, which can then inform the movement of a PV tracker.
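The Gaussian-kernel smoothing step described above can be illustrated with a short sketch; the kernel width (sigma) and the synthetic profile below are assumptions for illustration, not the values used in the study.

# Minimal sketch of the described post-processing step: smooth the network's
# N-point tilt-angle irradiance profile with a Gaussian kernel.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_profile(raw_output, sigma=2.0):
    """Smooth an N-dimensional irradiance-vs-tilt profile (horizon to horizon)."""
    return gaussian_filter1d(np.asarray(raw_output, dtype=float), sigma=sigma)

tilt = np.linspace(-90, 90, 37)                       # 5-degree steps across the 180-degree arc
noisy = np.cos(np.radians(tilt - 10)) + 0.05 * np.random.default_rng(1).normal(size=tilt.size)
profile = smooth_profile(noisy)
print(tilt[np.argmax(profile)])                       # tilt angle of maximum smoothed irradiance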
Physical systems that are subject to intermittent contact/impact are often studied using piecewise-smooth models. Freeplay is a common type of piecewise-smooth system and has been studied extensively for gear systems (backlash) and aeroelastic systems (control surfaces like ailerons and rudders). These systems can experience complex nonlinear behavior including isolated resonance, chaos, and discontinuity-induced bifurcations. This behavior can lead to undesired damaging responses in the system. In this work, bifurcation analysis is performed for a forced Duffing oscillator with freeplay. The freeplay nonlinearity in this system is dependent on the contact stiffness, the size of the freeplay region, and the symmetry/asymmetry of the freeplay region with respect to the system’s equilibrium. Past work on this system has shown that a rich variety of nonlinear behaviors is present. Modern methods of nonlinear dynamics are used to characterize the transitions in system response including phase portraits, frequency spectra, and Poincaré maps. Different freeplay contact stiffnesses are studied including soft, medium, and hard in order to determine how the system response changes as the freeplay transitions from soft contact to near-impact. Particular focus is given to the effects of different initial conditions on the activation of secondary- and isolated-resonance responses. Preliminary results show that isolated resonances occur only for the softer-contact cases, that regions of superharmonic resonance are more prevalent for the harder-contact cases, and that more nonlinear behavior occurs for larger initial conditions.
In this paper, we present a sensor encoding technique for the detection of stealthy false data injection attacks in static power system state estimation. This method implements low-cost verification of the integrity of measurement data, allowing for the detection of stealthy additive attack vectors. These attacks are assumed to be crafted by malicious actors who have knowledge of the system models and are capable of tampering with any number of measurements. The solution involves encoding all vulnerable measurements. The effectiveness of the method was demonstrated through a simulation in which a stealthy attack on an encoded measurement vector generates large residuals that trigger a chi-squared (χ²) anomaly detector. Following a defense-in-depth approach, this method could be used alongside other security features, such as communications encryption, to provide an additional line of defense against cyberattacks.
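For reference, the chi-squared residual test that serves as the anomaly detector can be sketched as below; this is the standard weighted-least-squares bad-data test for static state estimation, and the encoding scheme itself is not reproduced. The matrix sizes, noise levels, and the crude injection are illustrative.

# Minimal sketch of a chi-squared residual (bad-data) test: z = H x + e, weighted
# least squares state estimate, then compare the residual statistic to a threshold.
import numpy as np
from scipy.stats import chi2

def chi_squared_alarm(H, z, sigma, alpha=0.01):
    """Return (J, threshold, alarm) for measurement vector z with noise std sigma."""
    W = np.diag(1.0 / sigma**2)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)     # WLS state estimate
    r = z - H @ x_hat
    J = float(r @ W @ r)                                  # residual statistic
    dof = H.shape[0] - H.shape[1]                         # m measurements minus n states
    threshold = chi2.ppf(1 - alpha, dof)
    return J, threshold, J > threshold

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3)); x = rng.normal(size=3); sigma = 0.01 * np.ones(8)
z = H @ x + sigma * rng.normal(size=8)
z[2] += 0.5                                               # crude (non-stealthy) injection on one measurement
print(chi_squared_alarm(H, z, sigma))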
This study investigates the impact that operations and market strategy have on the design and value of an energy storage system on three levels of the facility: the cell level, the system level, and the project level. The study provides insights for developers, capital providers, customers, and policy makers into the impact that different operational strategies have on the effectiveness of an energy storage system in today's emerging market. Energy storage systems can be used for a variety of usage profiles, with the choice having a profound impact on their performance, lifespan, and revenue potential. Most evaluations of application stacking only look at the possible revenue potential without understanding the increased costs and potential for major damage to the cells. Evaluating the impact of operational choices is critical to understanding the risk-adjusted return from an energy storage project investment. This is the fifth study in the Energy Storage Financing Study series, which is designed to investigate challenges surrounding the financing of energy storage projects in the U.S., promoting greater technology and project risk transparency, reducing project transaction costs, and supporting a level playing field for innovative energy storage technologies.
Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and necessary security controls, is a powerful tool for ensuring due diligence in the evaluation and acquisition of mission-critical algorithms. This paper describes the Seascape system and its place in such a process.
The installation of digital sensors, such as advanced metering infrastructure (AMI) meters, has provided the means to implement a wide variety of techniques to increase visibility into the distribution system, including the ability to calibrate utility models using data-driven algorithms. One challenge in maintaining accurate and up-to-date distribution system models is identifying changes and event occurrences that happen during the year, such as customers who have changed phases due to maintenance or other events. This work proposes a method for the detection of phase change events that utilizes techniques from an existing phase identification algorithm. The method applies an ensemble step to obtain predicted phases for windows of data, thereby allowing the predicted phase of customers to be observed over time. The proposed algorithm was tested on four utility datasets as well as a synthetic dataset. The synthetic tests showed the algorithm was capable of accurately detecting true phase change events while limiting the number of false-positive events flagged. In addition, the algorithm was able to identify possible phase change events on two real datasets.
We demonstrate an optical waveguide device capable of supporting the optical power necessary for trapping a single atom or a cold-atom ensemble with evanescent fields. Our photonic integrated platform successfully manages optical powers of ~30 mW.
Non-volatile memory arrays require select devices to ensure accurate programming. The one-selector one-resistor (1S1R) array where a two-terminal nonlinear select device is placed in series with a resistive memory element is attractive due to its high-density data storage; however, the effect of the nonlinear select device on the accuracy of analog in-memory computing has not been explored. This work evaluates the impact of select and memory device properties on the results of analog matrix-vector multiplications. We integrate nonlinear circuit simulations into CrossSim and perform end-to-end neural network inference simulations to study how the select device affects the accuracy of neural network inference. We propose an adjustment to the input voltage that can effectively compensate for the electrical load of the select device. Our results show that for deep residual networks trained on CIFAR-10, a compensation that is uniform across all devices in the system can mitigate these effects over a wide range of values for the select device I-V steepness and memory device On/Off ratio. A realistic I-V curve steepness of 60 mV/dec can yield an accuracy on CIFAR-10 that is within 0.44% of the floating-point accuracy.
This work describes the development and testing of a carbon dioxide seeding system for the Sandia Hypersonic Wind Tunnel. The seeder injects liquid carbon dioxide into the tunnel, which evaporates in the nitrogen supply line and then condenses during the nozzle expansion into a fog of particles that scatter light via Rayleigh scattering. A planar laser scattering (PLS) experiment is conducted in the boundary layer and wake of a cone at Mach 8 to evaluate the success of the seeder. Second-mode waves and turbulence transition were well-visualized by the PLS in the boundary layer and wake. PLS in the wake also captured the expansion wave over the base and wake recompression shock. No carbon dioxide appears to survive and condense in the boundary layer or wake, meaning alternative seeding methods must be explored to extract measurements within these regions. The seeding system offers planar flow visualization opportunities and can enable quantitative velocimetry measurements in the future, including filtered Rayleigh scattering.
A 0.2-2 GHz digitally programmable RF delay element based on a time-interleaved multi-stage switched-capacitor (TIMS-SC) approach is presented. The proposed approach enables hundreds of ns of broadband RF delay by employing sample time expansion in multiple stages of switched-capacitor storage elements. The delay element was implemented in a 45 nm SOI CMOS process and achieves a 2.55-448.6 ns programmable delay range with < 0.12% delay variation across 1.8 GHz of bandwidth at maximum delay, 2.42 ns programmable delay steps, and 330 ns/mm2 area efficiency. The device achieves 24 dB gain, 7.1 dB noise figure, and consumes 80 mW from a 1 V supply with an active area of 1.36 mm2.
The performance of APCs, with relatively low compressive strength and poor stability under hydrothermal conditions, makes them less than desirable for the DPC use case. Meanwhile, grossite as a primary filler material or as a modifier has resulted in marked improvements in the properties of several DPC cement filler candidates. Grossite CAPCs retain substantial mechanical strength even after irradiation; however, the significant decrease in strength observed post-irradiation requires further investigation before the material is advanced for the use case. As a modifier, grossite improves the strength and set times of the APC and WAPC cements. Hibonite CAPCs also show considerable promise, although their degradation under hydrothermal conditions is a potentially significant liability. Finally, with recent improvements in working time and compressive strength, the WAPCs remain in contention as viable candidates for the DPC use case.
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power loss due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients are greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash test or nameplate short circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle-of-incidence, and irradiance nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance as ∼3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
At Sandia National Laboratories, QSCOUT (the Quantum Scientific Computing Open User Testbed) is an ion-trap based quantum computer built for the purpose of allowing users low-level access to quantum hardware. Commands are executed on the hardware using Jaqal (Just Another Quantum Assembly Language), a programming language designed in-house to support the unique capabilities of QSCOUT. In this work, we describe a batching implementation of our custom software that shortens experimental run time by reducing communication and upload times. Reducing the code upload time during experimental runs improves system performance by mitigating the effects of drift. We demonstrate this implementation through a set of quantum chemistry experiments using a variational quantum eigensolver (VQE). While developed specifically for this testbed, this idea finds application across many similar experimental platforms that seek greater hardware control or reduced overhead.
The III-nitride semiconductors are attractive for on-chip, solid-state vacuum nanoelectronics, having high thermal and chemical stability, low electron affinity, and high breakdown fields. Here we report top-down fabricated, lateral gallium nitride (GaN)-based nanoscale vacuum electron diodes operable in air, with ultra-low turn-on voltages down to ~0.24 V, and stable high field emission currents, tested up to several microamps for single-emitter devices. We present gap-size and pressure dependent studies which provide insights into the design of future nanogap vacuum electron devices. The vacuum nanodiodes also show high resistance to damage from 2.5 MeV proton exposure. Preliminary results on the fabrication and characteristics of lateral GaN nano vacuum transistors will also be presented. The results show promise for a new class of robust, integrated, III-nitride based vacuum nanoelectronics.
This paper applies sensitivity and uncertainty analysis to compare two model alternatives for fuel matrix degradation for performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of this choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and differences in epistemic uncertainty between the alternatives.
Grid operating security studies are typically employed to establish operating boundaries, ensuring secure and stable operation over a range of operating conditions under NERC guidelines. However, if these boundaries are violated, the existing system security margins will be largely unknown. As an alternative to complex optimizations over dynamic conditions, this work employs reinforcement-based machine learning to identify a sequence of secure state transitions which place the grid in a higher degree of operating security with greater static and dynamic stability margins. The approach requires training a machine learning agent to accomplish this task using modeled data and employs it as a decision support tool under severe, near-blackout conditions.
High-temperature particle receivers are being pursued to enable next-generation concentrating solar thermal power (CSP) systems that can achieve higher temperatures (>700 °C) to enable more efficient power cycles, lower overall system costs, and emerging CSP-based process-heat applications. The objective of this work was to develop characterization methods to quantify the particle and heat losses from the open aperture of the particle receiver. Novel camera-based imaging methods were developed and applied to both laboratory-scale and larger 1 MWt on-sun tests at the National Solar Thermal Test Facility in Albuquerque, New Mexico. Validation of the imaging methods was performed using gravimetric and calorimetric methods. In addition, conventional particle-sampling methods using volumetric particle-air samplers were applied to the on-sun tests to compare particle emission rates with regulatory standards for worker safety and pollution. Novel particle sampling methods using 3-D printed tipping buckets and tethered balloons were also developed and applied to the on-sun particle-receiver tests. Finally, models were developed to simulate the impact of particle size and wind on particle emissions and concentrations as a function of location. Results showed that particle emissions and concentrations were well below regulatory standards for worker safety and pollution. In addition, estimated particle temperatures and advective heat losses from the camera-based imaging methods correlated well with measured values during the on-sun tests.
Type 2 high-pressure hydrogen vessels for storage at hydrogen refueling stations are designed assuming a predefined operational pressure cycle and targeted autofrettage conditions. However, the resulting finite life depends significantly on variables associated with the autofrettage process and the pressure cycles actually realized during service, which often do not span the full design pressure range. Clear guidance for cycle counting is lacking; therefore, industry often defaults to counting every repressurization as a full-range pressure cycle, which is an overly conservative approach. Using in-service pressure cycles to predict the growth of cracks in operational pressure vessels results in significantly longer predicted life, since most in-service pressure cycles are only a fraction of the full design pressure range. Fatigue crack growth rates can vary widely for a given pressure range depending on the details of the residual strains imparted during the autofrettage process because of their influence on crack driving forces. Small changes in variables associated with the autofrettage process, e.g., the target autofrettage overburden pressure, can result in large changes in the residual stress profile, leading to possibly degraded fatigue life. In this paper, computational simulation was used for sensitivity studies to evaluate the effect of both operating conditions and autofrettage conditions on fatigue life for Type 2 high-pressure hydrogen vessels. The analysis in this paper explores these sensitivities, and the results are used to provide guidance on cycle counting. In particular, we identify the pressure cycle ranges that can be ignored over the life of the vessel as having negligible effect on fatigue life. This study also examines the sensitivity of design life to the autofrettage process and the impact on life if the targeted residual strain is not achieved during manufacturing.
Monitoring cavern leaching after each calendar year of oil sales is necessary to support cavern stability efforts and long-term availability for oil drawdowns in the U.S. Strategic Petroleum Reserve. Modeling results from the SANSMIC code and recent sonars are compared to show projected changes in the cavern’s geometry due to leaching from raw-water injections. This report aims to give background on the importance of monitoring cavern leaching and to provide a detailed explanation of the process used to create the leaching plots used to monitor cavern leaching. In the past, generating leaching plots for each cavern in a given leaching year was done manually, and every cavern had to be processed individually. A Python script, compatible with Earth Volumetric Studio, was created to automate most of the process. The script makes a total of 26 plots per cavern to show leaching history, an axisymmetric representation of leaching, and SANSMIC modeling of future leaching. The current run time for the script is one hour, replacing 40-50 hours of manual effort in the cavern leaching monitoring process.
The Jet Propulsion Laboratory has a keen interest in exploring icy moons in the solar system, particularly Jupiter's Europa. Successful exploration of the moon's surface includes planetary protection initiatives to prevent the introduction of viable organisms from Earth to Europa. To that end, the Europa lander requires a Terminal Sterilization Subsystem (TSS) to rid the lander of viable organisms that would potentially contaminate the moon's environment. Sandia National Laboratories has been developing a TSS architecture, relying heavily on computational models to support TSS development. Sandia's TSS design approach involves using energetic material to thermally sterilize lander components at the end of the mission. A hierarchical modeling approach was used for system development and analysis, where simplified systems were constructed to perform empirical tests for evaluating energetic material formulation development and assist in developing computational models with multiple tiers of physics fidelity. Computational models have been developed using multiple Sandia-native computational tools. Three experimental systems and corresponding computational models have been developed: Tube, Sub-Box Small, and Sub-Box Large systems. This paper presents an explanation of the application context of the TSS along with an overview description of a small portion of the TSS development from a modeling and simulation perspective, specifically highlighting verification, validation, and uncertainty quantification (VVUQ) aspects of the modeling and simulation work. Multiple VVUQ approaches were implemented during TSS development, including solution verification, calibration, uncertainty quantification, global sensitivity analysis, and validation. This paper is not intended to express the design results or parameter values used to model the TSS but to communicate the approaches used and how the results of the VVUQ efforts were used and interpreted to assist system development.
The development of the High-Resolution Wavelet Transform (HRWT) is driven by the need to increase the high-frequency resolution of widely used discrete wavelet transforms (WTs). Based on the Stationary Wavelet Transform (SWT), which is a modification of the Discrete Wavelet Transform (DWT), a novel WT that increases the number of decomposition levels (thereby increasing the aforementioned frequency resolution) is proposed. In order to show the validity of the HRWT, this paper presents a theoretical comparison with other discrete WT methods. First, a summary of the DWT and the SWT, along with a brief explanation of WT theory, is provided. Then, the concept of the HRWT is presented, followed by a discussion of the adherence of this new method to the common properties of WTs. Finally, an example application is presented for transient waveform analysis of a power system fault event, outlining the benefits that can be obtained from its usage compared to the SWT.
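For readers unfamiliar with the SWT baseline that the HRWT extends, the following snippet shows a standard SWT decomposition with PyWavelets applied to a synthetic transient; it illustrates only the existing transform, not the HRWT, and the sampling rate, wavelet, and signal are assumptions.

# Illustrative baseline only: a standard Stationary Wavelet Transform (SWT) decomposition
# with PyWavelets, the transform the HRWT is described as extending.
import numpy as np
import pywt

fs = 10_000                                   # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)                 # 1000 samples (divisible by 2**level, as SWT requires)
signal = np.sin(2 * np.pi * 60 * t)
signal[500:520] += 0.5 * np.sin(2 * np.pi * 2000 * t[500:520])   # short high-frequency transient

coeffs = pywt.swt(signal, wavelet="db4", level=3)                # list of (cA, cD) pairs per level
for i, (_, cD) in enumerate(coeffs):
    print(f"band {i}: max |detail| = {np.abs(cD).max():.3f}")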
Wind turbine wakes are characterized by helical trailing tip vortices that are highly stable initially and act as a shield against mixing with the ambient flow and thereby delay wake recovery until destructive mutual interference of the vortices begins. Delayed wake recovery in turn reduces the power production of downstream turbines that are positioned in the wakes of upstream turbines. The long natural decay length forces wind farms to have large distances between turbines to yield sufficient wake recovery. Herein, we tested a new concept aimed at accelerating the breakdown of wind turbine tip vortices by causing the vortices to interact with one another almost immediately behind the rotor. By adding a spire behind the rotor, essentially a blockage to perturb the paths of the tip vortices, we hypothesized that the altered paths of the tip vortices would cause their destructive interference process to begin sooner. The concept of a nacelle-mounted spire was tested in high-fidelity large-eddy simulations using Nalu-Wind. Four different spires were modeled with wall-resolved meshes behind the rotor of a wind turbine with another turbine five diameters downstream. We compared power and wake data against baseline results to determine whether the spires accelerated wake recovery of the upstream turbine and thereby increased the power of the downstream turbine. The results showed no change in the total power of the two turbines for any spire compared to its respective baseline. These results were further explored by testing at higher spatial resolution and without turbulence in the inflow. The increased spatial resolution increased the apparent stability of the tip vortices while the lack of turbulence did not. We conclude that the spires’ geometry and size were inadequate to alter the helical paths of the trailing tip vortices and that modeling of the formation and decay of tip vortices may be highly sensitive to model parameters.
Pre-chamber ignition has demonstrated the capability to increase internal combustion engine in-cylinder burn rates and enable the use of low engine-out pollutant emission combustion strategies. In the present study, newly designed passive pre-chambers with different nozzle-hole patterns, featuring combinations of radial and axial nozzles, were experimentally investigated in an optically accessible, single-cylinder research engine. The pre-chambers analyzed had a narrow throat geometry to increase the velocity of the ejected jets. In addition to a conventional inductive spark igniter, a nanosecond spark ignition system that promotes faster early burn rates was also investigated. Time-resolved visualization of ignition and combustion processes was accomplished through high-speed hydroxyl radical (OH*) chemiluminescence imaging. Pressure was measured during the engine cycle in both the main chamber and pre-chamber to monitor respective combustion progress. Experimental heat release rates (HRR) calculated from the measured pressure profiles were used as inputs for two different GT-Power 1D simulations to evaluate the pre-chamber jet-exit momentum and penetration distance. The first simulation used both the calculated main-chamber and pre-chamber HRR, while the second used only the main-chamber HRR with the pre-chamber HRR modeled. Results show discrepancies between the models, mainly in the pressurization of the pre-chamber, which in turn affected the jet penetration rate; this highlights the sensitivity of the simulation results to proper input selection. Experimental results further show increased pressurization, with an associated acceleration of jet penetration, when operating with the nanosecond spark ignition system regardless of the pre-chamber tip geometry used.
This paper presents a visualization technique for incorporating eigenvector estimates with geospatial data to create inter-area mode shape maps. For each point of measurement, the method specifies the radius, color, and angular orientation of a circular map marker. These characteristics are determined by the elements of the right eigenvector corresponding to the mode of interest. The markers are then overlaid on a map of the system to create a physically intuitive visualization of the mode shape. This technique serves as a valuable tool for differentiating oscillatory modes that have similar frequencies but different shapes. This work was conducted within the Western Interconnection Modes Review Group (WIMRG) in the Western Electric Coordinating Council (WECC). For testing, we employ the WECC 2021 Heavy Summer base case, which features a high-fidelity, industry standard dynamic model of the North American Western Interconnection. Mode estimates are produced via eigen-decomposition of a reduced-order state matrix identified from simulated ringdown data. The results provide improved physical intuition about the spatial characteristics of the inter-area modes. In addition to offline applications, this visualization technique could also enhance situational awareness for system operators when paired with online mode shape estimates.
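A minimal sketch of the marker construction is given below: marker radius is set by eigenvector magnitude, and color and orientation by eigenvector phase. The coordinates and eigenvector entries are hypothetical placeholders, not WECC data, and the scaling factors are illustrative.

# Minimal sketch of the marker construction (placeholder coordinates and eigenvector
# entries): radius from eigenvector magnitude, color and orientation from eigenvector
# phase, overlaid at each measurement location.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measurement locations (lon, lat) and right-eigenvector entries for one mode.
lon = np.array([-122.3, -118.2, -112.0, -105.0])
lat = np.array([47.6, 34.0, 40.8, 39.7])
v = np.array([0.9 * np.exp(1j * 0.1), 0.7 * np.exp(1j * 0.3),
              0.5 * np.exp(1j * 2.9), 0.6 * np.exp(1j * 3.1)])

mag = np.abs(v)
ang = np.angle(v)

fig, ax = plt.subplots()
sc = ax.scatter(lon, lat, s=2000 * mag**2, c=ang, cmap="hsv",
                vmin=-np.pi, vmax=np.pi, alpha=0.7)            # radius and color from eigenvector
ax.quiver(lon, lat, np.cos(ang), np.sin(ang), scale=15)        # angular orientation of each marker
fig.colorbar(sc, ax=ax, label="mode shape angle (rad)")
ax.set_xlabel("longitude"); ax.set_ylabel("latitude")
plt.show()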
A high-speed, two-color pyrometer was developed and employed to characterize the temperature of the ejecta from pyrotechnic igniters. The pyrometer used a single objective lens, beamsplitter, and two high-speed cameras to maximize the spatial and temporal resolutions. The pyrometer used the integrated intensity of under-resolved particles to maintain a large region of interest to capture more particles. The spectral response of the pyrometer was determined based on the response of each optical component and the total system was calibrated using a black body source to ensure accurate intensity ratios over the range of interest.
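For context, gray-body two-color (ratio) pyrometry under Wien's approximation reduces to a closed-form temperature from the measured intensity ratio; the snippet below shows that standard relation with illustrative filter wavelengths, and is not the instrument's actual spectral response or calibration.

# Standard gray-body two-color ratio pyrometry under Wien's approximation; wavelengths
# and intensity ratio below are illustrative, not the instrument's calibration.
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(intensity_ratio, lam1, lam2):
    """Temperature (K) from I(lam1)/I(lam2), with lam1 < lam2 in meters, gray body assumed."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(intensity_ratio))

lam1, lam2 = 700e-9, 900e-9                    # hypothetical filter center wavelengths
print(ratio_temperature(0.565, lam1, lam2))    # ~2500 K for this example ratio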
Soft error rates (SER) are characterized for 5-nm bulk FinFET D flip-flops for alpha particles, thermal neutrons, and high-energy neutrons as a function of supply voltage. At nominal operating voltage, the 5-nm node has higher SER than the 7-nm node for all three particle types, with increases of 148%, 168%, and 26%, respectively. The overall SER for the 5-nm node was ~2X greater than that of the 7-nm node because the reduction in critical charge was greater than that in collected charge. For alpha particle exposures, temperature effects on SER were more prominent for the 5-nm node than for both the 7-nm and 16-nm nodes. The relative contribution of alpha-particle SER increases with scaling, accounting for 13% of the overall SER at the 5-nm node.
For the protection engineer, it is often the case that full coverage, and thus perfect selectivity, of the system is not an option for protection devices, because perfect selectivity requires protection devices on every line section of the network. Due to cost limitations, relays may not be placed on each branch of a network; therefore, a method is needed to allow for optimal coordination of relays with sparse relay placement. In this paper, methods for optimal coordination of networks with sparse relay placement introduced in prior work are applied to a system where both overcurrent and distance relays are present. Additionally, a method for defining primary (Zone 1) and secondary (Zone 2) protection zones for the distance relays in such a sparse system is proposed. The proposed method is applied to the IEEE 123-bus test case and is found to successfully coordinate the system while also limiting the maximum relay operating time to 1.78 s, which approaches the theoretical lower bound of 1.75 s.
Analog in-memory computing is a method to improve the efficiency of deep neural network inference by orders of magnitude by utilizing analog properties of a nonvolatile memory. This places new requirements on the memory device, which must physically represent neural network weights as analog states. By carefully considering the algorithm implications when mapping weights to physical states, it is possible to achieve precision very close to that of a digital accelerator using a 40nm embedded SONOS.
The Multi-Fidelity Toolkit (MFTK) is a simulation tool being developed at Sandia National Laboratories for aerodynamic predictions of compressible flows over a range of physics fidelities and computational speeds. These models include the Reynolds-Averaged Navier–Stokes (RANS) equations, the Euler equations, and modified Newtonian aerodynamics (MNA) equations, and they can be invoked independently or coupled with hierarchical Kriging to interpolate between high-fidelity simulations using lower-fidelity data. However, as with any new simulation capability, verification and validation are necessary to gather credibility evidence. This work describes formal model validation with uncertainty considerations that leverages experimental data from the HIFiRE-1 wind tunnel tests. The geometry is a multi-conic shape that produces complex flow phenomena under hypersonic conditions. A thorough treatment of the validation comparison with prediction error and validation uncertainty is also presented.
Snow and ice accumulation on photovoltaic (PV) panels is a recognized, but poorly quantified, contributor to PV performance loss, not only in geographic areas that see persistent snow in winter but also at lower latitudes, where frozen precipitation and 'snowmageddon' events can wreak havoc with solar infrastructure. In addition, research on the impact of snow and cold on PV systems has not kept pace with the proliferation of new technologies, the rapid deployment of PV in northern latitudes, and experiences with long-term field performance. This paper describes the value of a dedicated outdoor research facility for longitudinal performance and reliability studies of emerging technologies in cold climates.
In this work we introduce Bootstrapped Paired Neural Networks (BPNN), a semi-supervised, low-shot model with uncertainty quantification (UQ). BPNN can be used for classification and target detection problems commonly encountered when working with aerospace imagery data, such as hyperspectral imagery (HSI) data. Aerospace imaging collections often produce large amounts of data that are costly to label, so we would like to supplement the labeled data with the vast unlabeled data (often > 90% of the data) available; we do this using semi-supervised techniques (Exponential Average Adversarial Training). It is often difficult and costly to obtain the sample size necessary to train a deep learning model on a new class or target; using paired neural networks (PNN), our model is generalized to low- and no-shot learning by learning an embedding space in which the underlying data population lives, so that additional labeled data may not be necessary to detect targets or classes that were not originally trained on. Finally, by bootstrapping the PNN, the BPNN model gives an uncertainty score on predicted classifications with minimal statistical distributional assumptions. Uncertainty is necessary in the high-consequence problems that many aerospace applications face. The model's ability to provide uncertainty for its own predictions can be used to reduce false alarm rates, provide explainability to black-box models, and help design efficient future data collection campaigns. Although models exist that combine two of these three qualities, to our knowledge no model contains all three: semi-supervised learning, low-shot learning, and uncertainty quantification. We generate an HSI scene using a high-fidelity data simulator that gives us ground-truth radiance spectra, allowing us to fully assess the quality of our model and compare to other common models. When applying BPNN to our HSI scene, it outperforms classic methods in target detection, such as the Adaptive Cosine Estimator (ACE), simple deep learning models without low-shot or semi-supervised capabilities, and models using only low-shot learning techniques such as the regular PNN. When extending to targets not originally trained on, the model again outperforms the regular PNN. Using the UQ of predictions, we create 'high confidence sets' which contain predictions that are reliably correct and can help suppress false alarms. This is shown by the higher performance of the 'high confidence set' at particular constant false alarm rates. These sets also provide an avenue for automation, while other predictions in high-consequence situations might need to be analyzed further. BPNN is a powerful new predictive model that could be used to maximize the data collected by aerial assets while instilling confidence in model predictions for high-consequence situations and being flexible enough to find previously unobserved targets.
The application of traveling wave principles for fault detection in distribution systems is challenging because of multiple reflections from the laterals and other lumped elements, particularly when we consider communication-free applications. We propose and explore the use of Shapelets to characterize fault signatures and a data-driven machine learning model to accurately classify the faults based on their distance. Studies of a simple 5-bus system suggest that the use of Shapelets for detecting faults is promising. The application to practical three-phase distribution feeders is the subject of continuing research.
Calibrating a finite element model to test data is often required to accurately characterize a joint, predict its dynamic behavior, and determine fastener fatigue life. In this work, modal testing, model calibration, and fatigue analysis are performed for a bolted structure, and various joint modeling techniques are compared. The structure is designed to test a single bolt to fatigue failure by utilizing an electrodynamic modal shaker to axially force the bolted joint at resonance. Modal testing is done to obtain the dynamic properties, evaluate finite element joint modeling techniques, and assess the effectiveness of a vibration approach to fatigue testing of bolts. Results show that common joint models can be inaccurate in predicting bolt loads, and even when updated using modal test data, linear structural models alone may be insufficient in evaluating fastener fatigue.
Incorrect modeling of control characteristics for inverter-based resources (IBRs) can affect the accuracy of electric power system studies. In many distribution system contexts, the control settings for behind-the-meter (BTM) IBRs are unknown. This paper presents an efficient method for selecting a small number of time series samples from net load meter data that can be used for reconstructing or classifying the control settings of BTM IBRs. Sparse approximation techniques are used to select the time series samples that cause the inversion of a matrix of candidate responses to be as well-conditioned as possible. We verify these methods on 451 actual advanced metering infrastructure (AMI) datasets from loads with BTM IBRs. Selecting 60 15-minute granularity time series samples, we recover BTM control characteristics with a mean error less than 0.2 kVAR.
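A greedy version of the stated selection criterion, choosing time samples (rows) so that the restricted candidate-response matrix stays well conditioned, can be sketched as follows; this is a generic illustration with random data, not the specific sparse approximation algorithm used in the paper.

# Minimal greedy sketch: pick k rows of the candidate-response matrix A so that the
# selected submatrix has a small ratio of largest to smallest singular value.
import numpy as np

def select_samples(A, k):
    """Greedily choose k row indices of A to keep the submatrix well conditioned."""
    m, n = A.shape
    chosen = []
    for _ in range(k):
        best_idx, best_score = None, np.inf
        for i in range(m):
            if i in chosen:
                continue
            sub = A[chosen + [i], :]
            s = np.linalg.svd(sub, compute_uv=False)       # singular values, descending
            score = s[0] / max(s[-1], 1e-12)               # conditioning surrogate for the submatrix
            if score < best_score:
                best_idx, best_score = i, score
        chosen.append(best_idx)
    return chosen

rng = np.random.default_rng(0)
A = rng.normal(size=(96, 4))          # 96 candidate time samples x 4 candidate control responses
print(select_samples(A, k=8))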
Propagating thermal runaway events are a significant threat to utility-scale storage installations. A propagating thermal runaway event is a cascading series of failures in which energy released from a failed cell triggers subsequent failures in nearby cells. Without intervention, propagation can turn an otherwise manageable single cell failure into a full system conflagration. This study presents a method of mitigating the severity of propagating thermal runaway events in utility-scale storage systems by leveraging the capabilities of a module-interfaced power conversion architecture. The method involves strategic depletion of storage modules to delay or arrest propagation, reducing the total thermal energy released in the failure event. The feasibility of the method is assessed through simulations of propagating thermal runaway events in a 160 kW/80 kWh energy storage system.
Variable energy resources (VERs) like wind and solar are the future of electricity generation as we gradually phase out fossil fuels due to environmental concerns. Nations across the globe are making significant strides in integrating VERs into their power grids as we strive toward a greener future. However, integration of VERs leads to several challenges due to their variable nature and low inertia characteristics. In this paper, we discuss the hurdles faced by the power grid due to high penetration of wind power generation and how energy storage systems (ESSs) can be used at the grid level to overcome these hurdles. We propose a new planning strategy with which ESSs can be sized appropriately to provide inertial support as well as aid in variability mitigation, thus minimizing load curtailment. A probabilistic framework is developed for this purpose, which takes into consideration the outage of generators and the replacement of conventional units with wind farms. Wind speed is modeled using an autoregressive moving average technique. The efficacy of the proposed methodology is demonstrated on the WSCC 9-bus test system.
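The wind-speed modeling step can be illustrated with a short ARMA simulation using statsmodels; the coefficients, mean wind speed, and noise scale below are placeholders rather than the parameters identified in the study.

# Illustrative only: simulate a wind-speed deviation series with an ARMA model; the
# AR/MA coefficients and the 9 m/s mean are placeholders.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

ar = np.array([1.0, -0.8, 0.15])     # AR polynomial (statsmodels sign convention: 1 - 0.8L + 0.15L^2)
ma = np.array([1.0, 0.4])            # MA polynomial
process = ArmaProcess(ar, ma)

rng = np.random.default_rng(42)
deviations = process.generate_sample(nsample=8760, scale=0.8, distrvs=rng.standard_normal)
wind_speed = np.clip(9.0 + deviations, 0.0, None)   # hourly series about a 9 m/s mean, clipped at zero
print(wind_speed[:5])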
This work investigates both the avalanche behavior and the failure mechanism of 3 kV GaN-on-GaN vertical P-N diodes that were fabricated and then tested under unclamped inductive switching (UIS) stress. The goal of this study is to use the particular avalanche characteristics and the failure mechanism to identify issues with the field termination and then provide feedback to improve the device design. DC breakdown is measured at different temperatures to confirm avalanche breakdown. The diodes' avalanche robustness is measured on-wafer using a UIS test set-up integrated with a wafer chuck and CCD camera. Post-failure analysis of the diodes is done using SEM and optical microscopy to gain insight into the device failure physics.
Injecting CO2 into a deep geological formation (i.e., geological carbon storage, GCS) can induce earthquakes along preexisting faults in the earth's upper crust. Seismic survey and regional geo-structure analysis are typically employed to map the faults prone to earthquakes prior to injection. However, earthquakes induced by fluid injection from other subsurface energy storage and recovery activities show that systematic evaluation of the potential of induced seismicity associated with GCS is necessary. This study mechanistically investigates how multiphysical interaction among injected CO2, preexisting pore fluids and rock matrix alters stress states on faults and which physical mechanisms can nucleate earthquakes along the faults. Increased injection pressure is needed to overcome capillary entry pressure of the fault zone, driven by the contrast of fluids' wetting characteristics. Accumulated CO2 within the reservoir delays post shut-in reduction in pressure and stress fields along the fault that may enhance the potential for earthquake nucleation after terminating injection operations. Elastic energy generated by coupled processes transfers to low-permeability or hydraulically isolated basement faults, which can initiate slip of the faults. Our findings from generic studies suggest that geomechanical simulations integrated with multiphase flow system are essential to detect deformation-driven signals and mitigate potential seismic hazards associated with CO2 injection.
We investigate the space complexity of two graph streaming problems: MAX-CUT and its quantum analogue, QUANTUM MAX-CUT. Previous work by Kapralov and Krachun [STOC '19] resolved the classical complexity of the classical problem, showing that any (2 - ε)-approximation requires Ω(n) space (a 2-approximation is trivial with O(log n) space). We generalize both of these qualifiers, demonstrating Ω(n) space lower bounds for (2 - ε)-approximating MAX-CUT and QUANTUM MAX-CUT, even if the algorithm is allowed to maintain a quantum state. As the trivial approximation algorithm for QUANTUM MAX-CUT only gives a 4-approximation, we show tightness with an algorithm that returns a (2 + ε)-approximation to the QUANTUM MAX-CUT value of a graph in O(log n) space. Our work resolves the quantum and classical approximability of quantum and classical Max-Cut using o(n) space. We prove our lower bounds through the techniques of Boolean Fourier analysis. We give the first application of these methods to sequential one-way quantum communication, in which each player receives a quantum message from the previous player and can then perform arbitrary quantum operations on it before sending it to the next. To this end, we show how Fourier-analytic techniques may be used to understand the application of a quantum channel.
Structural alloys may experience corrosion when exposed to molten chloride salts due to selective dissolution of active alloying elements. One way to prevent this is to make the molten salt reducing. For the KCl + MgCl2 eutectic salt mixture, pure Mg can be added to achieve this. However, Mg can form intermetallic compounds with nickel at high temperatures, which may cause alloy embrittlement. This study shows that an optimum level of excess Mg could be added to the molten salt which will prevent corrosion of alloys like 316 H, while not forming any detectable Ni-Mg intermetallic phases on Ni-rich alloy surfaces.
This user’s guide documents capabilities in Sierra/SolidMechanics which remain “in-development” and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.4 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
Neural networks (NNs) have been increasingly proposed as surrogates for approximating systems with computationally expensive physics for rapid online evaluation or exploration. As these surrogate models are integrated into larger optimization problems used for decision making, there is a need to verify their behavior to ensure adequate performance over the desired parameter space. We extend the ideas of optimization-based neural network verification to provide guarantees of surrogate performance over the feasible optimization space. In doing so, we present formulations to represent neural networks within decision-making problems, and we develop verification approaches that use model constraints to provide increasingly tight error estimates. We demonstrate the capabilities on a simple steady-state reactor design problem.
For many industries, addressing varied customer needs means producing a family of products that satisfies a range of design requirements. Manufacturers seek to design this family of products while exploiting opportunities for shared components to reduce manufacturing cost and complexity. We present a mixed-integer programming formulation that determines the optimal design for each product, the number and design of shared components, and the allocation of those shared components across the products in the family. This formulation and workflow for product family design have created significant business impact on the industrial design of product families for large-scale commercial HVAC chillers at Carrier Global Corporation. We demonstrate the approach on an open case study based on a transcritical CO2 refrigeration cycle. This case study and our industrial experience show that the formulation is computationally tractable and can significantly reduce engineering time by replacing the manual design process with an automated approach.
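The shared-component trade-off can be expressed as a small mixed-integer program; the toy Pyomo sketch below, with made-up products, candidate sizes, and costs, illustrates the general structure of such a formulation, not the paper's actual chiller or CO2 refrigeration model.

# Toy illustration of the shared-component idea in Pyomo (not the paper's formulation):
# each product is assigned exactly one component design from a candidate set; every design
# used anywhere incurs a fixed cost, so the optimizer trades oversizing penalties against
# the number of distinct designs carried by the family.
import pyomo.environ as pyo

products = ["P1", "P2", "P3"]
designs = [10, 14, 18, 22]                         # candidate component sizes (hypothetical)
requirement = {"P1": 9.0, "P2": 13.5, "P3": 17.0}  # each product's minimum required size
oversize_cost = 1.0                                # penalty per unit of oversizing
design_fixed_cost = 5.0                            # cost of carrying one more distinct design

m = pyo.ConcreteModel()
m.x = pyo.Var(products, designs, domain=pyo.Binary)   # product p uses design d
m.y = pyo.Var(designs, domain=pyo.Binary)             # design d is used somewhere in the family

m.assign = pyo.Constraint(products, rule=lambda m, p: sum(m.x[p, d] for d in designs) == 1)
m.meets_req = pyo.Constraint(
    products, rule=lambda m, p: sum(d * m.x[p, d] for d in designs) >= requirement[p])
m.link = pyo.Constraint(
    products, designs, rule=lambda m, p, d: m.x[p, d] <= m.y[d])

m.obj = pyo.Objective(
    expr=sum(oversize_cost * (d - requirement[p]) * m.x[p, d] for p in products for d in designs)
    + design_fixed_cost * sum(m.y[d] for d in designs),
    sense=pyo.minimize)

# pyo.SolverFactory("cbc").solve(m)   # requires a MIP solver such as CBC or GLPK on the path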
Quantifying gas-surface interactions for hypersonic reentry applications remains a challenging and complex problem where credible models are needed to design and analyze thermal protection systems. A flexible sensitivity analysis approach is demonstrated to analyze finite-rate ablation models to identify reaction parameters and mechanisms of influence on predicted quantities of interest. Simulations of hypersonic flow over a sphere-cone are presented using parameterized Park, Zhluktov and Abe (ZA), and MURI finite-rate models that describe the oxidation and sublimation of carbon. The results presented in this study emphasize the importance of characterizing model inputs that are shown to have a high impact on predicted quantities and build evidence to assess credibility of these models.
In high temperature (HT) environments often encountered in geothermal wells, data transfer rates for downhole instrumentation are relatively limited due to transmission line bandwidth and insertion loss and the processing speed of HT microcontrollers. In previous research, the Sandia National Laboratories Geothermal Department obtained 3.8 Mbps data rates over 1524 m (5000 ft) of single conductor wireline cable with less than a 1×10⁻⁸ bit error rate utilizing low-temperature NI™ (formerly National Instruments™) hardware. Our protocol technique was a combination of orthogonal frequency-division multiplexing and quadrature amplitude modulation across the bandwidth of the single conductor wireline. This showed it is possible to obtain high data rates in low bandwidth wirelines. This paper focuses on commercial HT microcontrollers (µC), rather than low-temperature NI™ modules, to enable high-speed communication in an HT environment. As part of this effort, four devices were evaluated, and an optimal device (SM320F28335-HT) was selected for its high clock rates, floating-point unit, and on-board analog-to-digital converter. A printed circuit board was assembled with the HT µC, an HT resistor digital-to-analog converter, and an HT line driver. The board was tested at the microcontroller's rated maximum temperature (210°C) for a week while transmitting through a 1524 m (5000 ft) wireline. A final test was conducted to the point of failure at elevated temperatures. This paper will discuss communication methods, achieved data rates, and hardware selection. This effort contributes to the enhancement of HT instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates.
The goal of this paper is to present a set of measurements from a benchmark structure containing two bolted joints to support future efforts to predict the damping due to the joints and to model nonlinear coupling between the first two elastic modes. Bolted joints introduce nonlinearities in structures, typically causing a softening in the natural frequency and an increase in damping because of frictional slip between the contact interfaces within the joint. These nonlinearities pose significant challenges when characterizing the response of the structure over a large range of load amplitudes, especially when the modal responses become coupled, causing the effective damping and natural frequency to depend not only on the excitation amplitude of the targeted mode but also on the relative amplitudes of other modes. In this work, two nominally identical benchmark structures, known in some prior works as the S4 beam, are tested to characterize their nonlinear properties for the first two elastic modes. Detailed surface measurements are presented and validated through finite element analysis and reveal distinct contact interactions between the two sets of beams. The free-free test structures are excited with an impact hammer, and the transient response is analyzed to extract the damping and frequency backbone curves. A range of impact amplitudes and drive points is used to isolate a single mode or to excite both modes simultaneously. Differences in the nonlinear response correlate with the relative strength of the modes that are excited, allowing one to characterize mode coupling. Each of the beams shows different nonlinear properties for each mode, which is attributed to the different contact pressure distributions between the parts, although the mode coupling relationship is found to be consistent between the two. The key findings from the test data are presented in this paper, and the supporting data are available in a public repository for interested researchers.
In accident scenarios involving release of tritium during handling and storage, the level of risk to human health is dominated by the extent to which radioactive tritium is oxidized to the water form (T2O or THO). At some facilities, tritium inventories consist of very small quantities stored at sub-atmospheric pressure, which means that tritium release accident scenarios will likely produce concentrations in air that are well below the lower flammability limit. It is known that isotope effects on reaction rates should result in slower oxidation rates for heavier isotopes of hydrogen, but this effect has not previously been quantified for oxidation at concentrations well below the lower flammability limit for hydrogen. This work describes hydrogen isotope oxidation measurements in an atmospheric tube furnace reactor. These measurements consist of five concentration levels between 0.01% and 1% protium or deuterium and two residence times. Oxidation is observed to occur between about 550°C and 800°C, with higher levels of conversion achieved at lower temperatures for protium with respect to deuterium at the same volumetric inlet concentration and residence time. Computational fluid dynamics simulations of the experiments were used to customize reaction orders and Arrhenius parameters in a 1-step oxidation mechanism. The trends in the rates for protium and deuterium are extrapolated based on guidance from literature to produce kinetic rate parameters appropriate for tritium oxidation at low concentrations.
Variables estimated by Battery Management Systems (BMSs) such as the State of Charge (SoC) may be vulnerable to False Data Injection Attacks (FDIAs). Bad actors could use FDIAs to manipulate sensor readings, which could degrade Battery Energy Storage Systems (BESSs) or result in poor system performance. In this paper, we propose a method for accurate SoC estimation for series-connected stacks of batteries and detection of FDIA in cell and stack voltage sensors using physics-based models, an Extended Kalman Filter (EKF), and a Cumulative Sum (CUSUM) algorithm. Utilizing additional sensors in the battery stack allowed the system to remain observable in the event of a single sensor failure, so that states could still be estimated accurately when one sensor at a time was offline. A priori residual data for each voltage sensor was used in the CUSUM algorithm to find the minimum detectable attack (500 μV) with no false positives.
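A minimal sketch of the detection idea, assuming innovation residuals are already available from the EKF; the drift and threshold values below are hypothetical and would in practice be tuned from the a priori residual data described above.

```python
import numpy as np

def cusum_detect(residuals, drift=0.0, threshold=5.0):
    """One-sided CUSUM on sensor residuals (innovations) from a Kalman filter.
    Returns the first sample index at which the statistic exceeds the threshold,
    or None. 'drift' and 'threshold' would be tuned from a-priori residual data."""
    g = 0.0
    for k, r in enumerate(residuals):
        g = max(0.0, g + r - drift)
        if g > threshold:
            return k
    return None

# Illustrative use: residuals from an (assumed) EKF voltage estimate, with a
# hypothetical 500 uV bias injected halfway through the record.
rng = np.random.default_rng(0)
res = rng.normal(0.0, 50e-6, 2000)
res[1000:] += 500e-6
print(cusum_detect(res, drift=100e-6, threshold=2e-3))
```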
Wind turbine wakes are characterized by helical trailing tip vortices that are highly stable initially and act as a shield against mixing with the ambient flow and thereby delay wake recovery until destructive mutual interference of the vortices begins. Delayed wake recovery in turn reduces the power production of downstream turbines that are positioned in the wakes of upstream turbines. The long natural decay length forces wind farms to have large distances between turbines to yield sufficient wake recovery. Herein, we tested a new concept aimed at accelerating the breakdown of wind turbine tip vortices by causing the vortices to interact with one another almost immediately behind the rotor. By adding a spire behind the rotor, essentially a blockage to perturb the paths of the tip vortices, we hypothesized that the altered paths of the tip vortices would cause their destructive interference process to begin sooner. The concept of a nacelle-mounted spire was tested in high-fidelity large-eddy simulations using Nalu-Wind. Four different spires were modeled with wall-resolved meshes behind the rotor of a wind turbine with another turbine five diameters downstream. We compared power and wake data against baseline results to determine whether the spires accelerated wake recovery of the upstream turbine and thereby increased the power of the downstream turbine. The results showed no change in the total power of the two turbines for any spire compared to its respective baseline. These results were further explored by testing at higher spatial resolution and without turbulence in the inflow. The increased spatial resolution increased the apparent stability of the tip vortices while the lack of turbulence did not. We conclude that the spires’ geometry and size were inadequate to alter the helical paths of the trailing tip vortices and that modeling of the formation and decay of tip vortices may be highly sensitive to model parameters.
Identifying the location of faults in a fast and accurate manner is critical for effective protection and restoration of distribution networks. This paper describes an efficient method for detecting, localizing, and classifying faults using advanced signal processing and machine learning tools. The method uses an Isolation Forest technique to detect the fault. Then the Continuous Wavelet Transform (CWT) is used to analyze the traveling waves produced by the faults. The CWT coefficients of the current signals at the time of arrival of the traveling wave present unique characteristics for different fault types and locations. These CWT coefficients are fed into a Convolutional Neural Network (CNN) to train and classify fault events. The results show that for multiple fault scenarios and solar PV conditions, the method is able to determine the fault type and location with high accuracy.
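A rough sketch of the feature-extraction step, using the PyWavelets package to compute CWT coefficients of a fault-current window; the synthetic signal, wavelet choice, and scale range are illustrative assumptions, and the resulting coefficient image is what would be passed to a CNN classifier.

```python
import numpy as np
import pywt

def cwt_features(current, scales=np.arange(1, 64), wavelet='morl'):
    """CWT coefficient magnitude map of a fault-current window; the 2-D image
    would serve as the CNN input in a pipeline like the one described (sketch only)."""
    coeffs, _ = pywt.cwt(current, scales, wavelet)
    return np.abs(coeffs)

# Hypothetical fault-current window around a traveling-wave arrival.
t = np.linspace(0, 1e-3, 1024)
i_fault = np.sin(2*np.pi*60*t) + 0.3*np.exp(-((t - 5e-4)/2e-5)**2)*np.sin(2*np.pi*5e4*t)
img = cwt_features(i_fault)
print(img.shape)   # (n_scales, n_samples) feature image for the CNN
```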
Turbine generator power from simulations using Actuator Line Models (ALM) and Actuator Disk Models (ADM) with a Filtered Lifting Line Correction is compared to field data from a V27 turbine. Preliminary results for the wake characteristics are also presented. Turbine quantities of interest are also presented for traditional ALM and ADM with the Gaussian kernel (ϵ) set at the optimum value for matching power production and with the kernel resolved at all mesh sizes. The atmospheric boundary layer is simulated using Nalu-Wind, a Large Eddy Simulation code that is part of the ExaWind code suite. The effect of mesh resolution on quantities of interest is also examined.
Metasurface lenses are fabricated using membrane projection lithography following a CMOS-compatible process flow. The lenses are 10 mm in diameter and employ three-dimensional unit cells designed to function in the mid-infrared spectral range.
Downtown low-voltage (LV) distribution networks are generally protected with network protectors that detect faults by restricting reverse power flow out of the network. This creates challenges for protecting the system as new smart grid technologies and distributed generation are installed. This report summarizes well-established methods for the control and protection of LV secondary network systems and spot networks, including operating features of network relays. Some current challenges and findings are presented from interviews with three utilities: PHI PEPCO, Oncor Energy Delivery, and Consolidated Edison Company of New York. Opportunities for technical exploration are presented with an assessment of the importance or value and the difficulty or cost. Finally, this leads to some recommendations for research to improve protection in secondary networks.
We present a procedure for randomly generating realistic steady-state contingency scenarios based on the historical outage data from a particular event. First, we divide generation into classes and fit a probability distribution of outage magnitude for each class. Second, we provide a method for randomly synthesizing generator resilience levels in a way that preserves the data-driven probability distributions of outage magnitude. Finally, we devise a simple method of scaling the storm effects based on a single global parameter. We apply our methods using data from historical Winter Storm Uri to simulate contingency events for the ACTIVSg2000 synthetic grid on the footprint of Texas.
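The sketch below illustrates the overall sampling structure under stated assumptions: the generation classes, Beta-distributed outage magnitudes, and single severity parameter are stand-ins for the distributions actually fitted to the Winter Storm Uri data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-class outage-magnitude distributions fitted from historical
# event data (Beta parameters chosen purely for illustration).
outage_params = {"gas": (2.0, 5.0), "wind": (1.5, 3.0), "coal": (3.0, 4.0)}

def sample_contingency(generators, severity=1.0):
    """Draw a derate fraction for each generator from its class distribution,
    scaled by a single global 'severity' parameter for the storm."""
    scenario = {}
    for name, (cls, pmax) in generators.items():
        a, b = outage_params[cls]
        derate = min(1.0, severity * rng.beta(a, b))
        scenario[name] = (1.0 - derate) * pmax   # available capacity in the scenario
    return scenario

gens = {"G1": ("gas", 500.0), "G2": ("wind", 150.0), "G3": ("coal", 800.0)}
print(sample_contingency(gens, severity=1.2))
```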
This paper presents the formulation, implementation, and demonstration of a new, largely phenomenological, model for the damage-free (micro-crack-free) thermomechanical behavior of rock salt. Unlike most salt constitutive models, the new model includes both drag stress (isotropic) and back stress (kinematic) hardening. The implementation utilizes a semi-implicit scheme and a fall-back fully-implicit scheme to numerically integrate the model's differential equations. Particular attention was paid to the initial guesses for the fully-implicit scheme. Of the four guesses investigated, an initial guess that interpolated between the previous converged state and the fully saturated hardening state had the best performance. The numerical implementation was then used in simulations that highlighted the difference between drag stress hardening versus combined drag and back stress hardening. Simulations of multi-stage constant stress tests showed that only combined hardening could qualitatively represent reverse (inverse transient) creep, as well as the large transient strains experimentally observed upon switching from axisymmetric compression to axisymmetric extension. Simulations of a gas storage cavern subjected to high and low gas pressure cycles showed that combined hardening led to substantially greater volume loss over time than drag stress hardening alone.
Neural networks (NN) have become almost ubiquitous with image classification, but in their standard form produce point estimates, with no measure of confidence. Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates through the posterior distribution. As NN are applied in more high-consequence applications, UQ is becoming a requirement. Automating systems can save time and money, but only if the operator can trust what the system outputs. BNN provide a solution to this problem by not only giving accurate predictions and estimates, but also an interval that includes reasonable values within a desired probability. Despite their positive attributes, BNN are notoriously difficult and time consuming to train. Traditional Bayesian methods use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being too slow. The most common method is variational inference (VI) due to its fast computation, but there are multiple concerns with its efficacy. MCMC is the gold standard and given enough time, will produce the correct result. VI, alternatively, is an approximation that converges asymptotically. Unfortunately (or fortunately), high consequence problems often do not live in the land of asymptopia, so solutions like MCMC are preferable to approximations. We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI), where materials of interest can be identified by their unique spectral signature. This is a challenging field, due to the numerous permuting effects practical collection of HSI has on measured spectra. Both models are trained using out-of-the-box tools on a high fidelity HSI target detection scene. Both MCMC- and VI-trained BNN perform well overall at target detection on a simulated HSI scene. Splitting the test set predictions into two classes, high confidence and low confidence predictions, presents a path to automation. For the MCMC-trained BNN, the high confidence predictions have a 0.95 probability of detection with a false alarm rate of 0.05 when considering pixels with target abundance of 0.2. VI-trained BNN have a 0.25 probability of detection for the same, but their performance on high confidence sets matched MCMC for abundances >0.4. However, the VI-trained BNN on this scene required significant expert tuning to get these results while MCMC worked immediately. On neither scene was MCMC prohibitively time consuming, as is often assumed, but the networks we used were relatively small. This paper provides an example of how to utilize the benefits of UQ, but also to increase awareness that different training methods can give different results for the same model. If sufficient computational resources are available, the best approach rather than the fastest or most efficient should be used, especially for high consequence problems.
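The following sketch shows one way a test-set split into high- and low-confidence predictions could be implemented from posterior samples; the 0.9 threshold and the concentration-of-posterior-mass criterion are illustrative assumptions rather than the exact rule used in the study.

```python
import numpy as np

def split_by_confidence(prob_samples, threshold=0.9):
    """Split BNN test predictions into high- and low-confidence sets.
    prob_samples: (n_posterior_draws, n_pixels) detection probabilities from
    MCMC or VI posterior samples. A pixel is 'high confidence' if the posterior
    mass on one side of 0.5 exceeds the threshold (illustrative criterion only)."""
    p_target = (prob_samples > 0.5).mean(axis=0)
    confident = np.maximum(p_target, 1.0 - p_target) >= threshold
    return confident, ~confident

# Synthetic posterior draws: 300 pixels with concentrated mass, 700 ambiguous pixels.
rng = np.random.default_rng(1)
confident_target = rng.beta(8, 2, size=(500, 300))
ambiguous = rng.beta(2, 2, size=(500, 700))
hi, lo = split_by_confidence(np.concatenate([confident_target, ambiguous], axis=1))
print(hi.sum(), lo.sum())
```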
Remote assessment of physiological parameters has enabled patient diagnostics without the need for a medical professional to become exposed to potential communicable diseases. In particular, early detection of oxygen saturation, abnormal body temperature, heart rate, and/or blood pressure could affect treatment protocols. The modeling effort in this work uses an adding-doubling radiative transfer model of a seven-layer human skin structure to describe absorption and reflection of incident light within each layer. The model was validated using both abiotic and biotic systems to understand light interactions associated with surfaces consisting of complex topography as well as multiple illumination sources. Using literature-based property values for human skin thickness, absorption, and scattering, an average deviation of 7.7% between model prediction and experimental reflectivity was observed in the wavelength range of 500-1000 nm.
Meagher, Robert M.; Mangadu, Betty; Velappan, Nileena; Nguyen, Hau B.; Micheva-Viteva, Sofiya; Bedinger, Daniel; Ye, Chunyan; Watts, Austin J.; Bradfute, Steven; Hu, Bin; Waldo, Geoffrey S.; Lillo, Antonietta M.
Here, we describe the isolation of 18 unique anti-SARS-CoV-2 human single-chain antibodies from an antibody library derived from healthy donors. The selection used a combination of phage and yeast display technologies and included counter-selection strategies meant to direct the selection of the receptor-binding motif (RBM) of SARS-CoV-2 spike protein’s receptor binding domain (RBD2). Selected antibodies were characterized in various formats including IgG, using flow cytometry, ELISA, high throughput SPR, and fluorescence microscopy. We report the antibodies’ RBD2 recognition specificity, binding affinity, and epitope diversity, as well as ability to block RBD2 binding to the human receptor angiotensin-converting enzyme 2 (ACE2) and to neutralize authentic SARS-CoV-2 virus infection in vitro. We present evidence supporting that: 1) most of our antibodies (16 out of 18) selectively recognize RBD2; 2) the best performing 8 antibodies target eight different epitopes of RBD2; 3) one of the pairs tested in sandwich assays detects RBD2 with sub-picomolar sensitivity; and 4) two antibody pairs inhibit SARS-CoV-2 infection at low nanomolar half neutralization titers. Based on these results, we conclude that our antibodies have high potential for therapeutic and diagnostic applications. Importantly, our results indicate that readily available non-immune (naïve) antibody libraries obtained from healthy donors can be used to select high-quality monoclonal antibodies, bypassing the need for blood of infected patients, and offering a widely accessible and low-cost alternative to more sophisticated and expensive antibody selection approaches (e.g. single B cell analysis and natural evolution in humanized mice).
Schwering, Paul C.; Winn, Carmen L.; Jaysaval, Piyoosh; Knox, Hunter; Siler, Drew; Hardwick, Christian; Ayling, Bridget; Faulds, James; Mlawsky, Elijah; Mcconville, Emma; Norbeck, Jack; Hinz, Nicholas; Matson, Gabe; Queen, John
Sedimentary-hosted geothermal energy systems are permeable structural, structural-stratigraphic, and/or stratigraphic horizons with sufficient temperature for direct use and/or electricity generation. Sedimentary-hosted (i.e., stratigraphic) geothermal reservoirs may be present in multiple locations across the central and eastern Great Basin of the USA, thereby constituting a potentially large base of untapped, economically accessible energy resources. Sandia National Laboratories has partnered with a multi-disciplinary group of collaborators to evaluate a stratigraphic system in Steptoe Valley, Nevada using both established and novel geophysical imaging techniques. The goal of this study is to inform an optimized strategy for subsequent exploration and development of this and analogous resources. Building from prior Nevada Play Fairway Analysis (PFA), this team is primarily 1) collecting additional geophysical data, 2) employing novel joint geophysical inversion/modeling techniques to update existing 3D geologic models, and 3) integrating the geophysical results to produce a working, geologically constrained thermo-hydrological reservoir model. Prior PFA work highlights Steptoe Valley as a favorable resource basin that likely has both sedimentary and hydrothermal characteristics. However, there remains significant uncertainty on the nature and architecture of the resource(s) at depth, which increases the risk in exploratory drilling. Newly acquired gravity, magnetic, magnetotelluric, and controlled-source electromagnetic data, in conjunction with new and preceding geoscientific measurements and observations, are being integrated and evaluated in this study for efficacy in understanding stratigraphic geothermal resources and mitigating exploration risk. Furthermore, the influence of hydrothermal activity on sedimentary-hosted reservoirs in favorable structural settings (i.e., whether fault-controlled systems may locally enhance temperature and permeability in some deep stratigraphic reservoirs) will also be evaluated. This paper provides details and current updates on the course of this study in-progress.
Community, corporate, and government organizations are being targeted by disinformation attacks at an unprecedented rate. These attacks interrupt the ability of organizations to make high-consequence decisions and can lower their confidence in datasets and analytics. New interdisciplinary research approaches are being actively developed to expand resilience theory applications to organizations, and to determine the metrics and mitigations needed to increase resilience against disinformation. This paper presents initial ideas on adapting resilience methodologies for organizations and disinformation, highlighting key areas that require further exploration in this emerging field of research.
This paper presents Energy Storage-based Packetized Delivery of Electricity (ES-PDE), an operating concept that is radically different from the operation of today's grid. Under ES-PDE, loads are powered by energy storage systems (ESS) most of the time and only receive packets of electricity periodically to power themselves and charge their ESSs. Therefore, grid operators can schedule the delivery of electricity in a manner that utilizes existing grid infrastructure. Since customers are powered by the co-located ESSs, when grid outages occur, they can be self-powered for some time before the grid is fully restored. In this paper, two operating schemes for ES-PDE are proposed. A Mixed-Integer-Linear-Programming (MILP) optimization is developed to find the optimal packet delivery schedule for each operating scheme. A case study is conducted to demonstrate the operation of ES-PDE.
The state of charge (SoC) estimated by Battery Management Systems (BMSs) could be vulnerable to False Data Injection Attacks (FDIAs), which aim to disturb state estimation. Inaccurate SoC estimation, due to attacks or suboptimal estimators, could lead to thermal runaway, accelerated degradation of batteries, and other undesirable events. In this paper, an ambient temperature-dependent model is adopted to represent the physics of a stack of three series-connected battery cells, and an Unscented Kalman Filter (UKF) is utilized to estimate the SoC for each cell. A Cumulative Sum (CUSUM) algorithm is used to detect FDIAs targeting the voltage sensors in the battery stack. The UKF was more accurate in state and measurement estimation than the Extended Kalman Filter (EKF), as measured by Maximum Absolute Error (MAE) and Root Mean Squared Error (RMSE). The CUSUM algorithm described in this paper was able to detect attacks as low as ±1 mV when one or more voltage sensors were attacked under various ambient temperatures and attack injection times.
This manuscript presents the recent advances in Mixed-Integer Nonlinear Programming (MINLP) and Generalized Disjunctive Programming (GDP) with a particular scope for superstructure optimization within Process Systems Engineering (PSE). We present an environment of open-source software packages written in Python and based on the algebraic modeling language Pyomo. These packages include MindtPy, a solver for MINLP that implements decomposition algorithms for such problems, CORAMIN, a toolset for MINLP algorithms providing relaxation generators for nonlinear constraints, Pyomo.GDP, a modeling extension for Generalized Disjunctive Programming that allows users to represent their problem as a GDP natively, and GDPOpt, a collection of algorithms explicitly tailored for GDP problems. Combining these tools has allowed us to solve several problems relevant to PSE, which we have gathered in an easily installable and accessible library, GDPLib. We show two examples of these models and how the flexibility of modeling given by Pyomo.GDP allows for efficient solutions to these complex optimization problems. Finally, we show an example of integrating these tools with the framework IDAES PSE, leading to optimal process synthesis and conceptual design with advanced multi-scale PSE modeling systems.
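As a flavor of the modeling style enabled by Pyomo.GDP, the toy superstructure problem below selects one of two candidate units via a disjunction and then reformulates the model with big-M for a MILP solver; the cost data are made up, and solver availability (GLPK is used here) depends on the local installation. GDPopt could alternatively be applied to the GDP model directly.

```python
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           NonNegativeReals, TransformationFactory,
                           SolverFactory, minimize)
from pyomo.gdp import Disjunct, Disjunction

# Toy superstructure: choose one of two candidate units to meet a demand.
m = ConcreteModel()
m.x = Var(within=NonNegativeReals, bounds=(0, 10))   # processed feed
m.cost = Var(bounds=(0, 100))                        # total cost

m.unit1 = Disjunct()
m.unit1.capacity = Constraint(expr=m.x <= 4)
m.unit1.costing = Constraint(expr=m.cost == 2.0 * m.x + 10)

m.unit2 = Disjunct()
m.unit2.capacity = Constraint(expr=m.x <= 8)
m.unit2.costing = Constraint(expr=m.cost == 3.5 * m.x + 4)

m.select_unit = Disjunction(expr=[m.unit1, m.unit2])  # exactly one unit is built
m.demand = Constraint(expr=m.x >= 5)
m.obj = Objective(expr=m.cost, sense=minimize)

# Reformulate the disjunction (big-M) and solve the resulting MILP.
TransformationFactory('gdp.bigm').apply_to(m)
SolverFactory('glpk').solve(m)
print(m.x.value, m.cost.value)
```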
Grid operating security studies are typically employed to establish operating boundaries, ensuring secure and stable operation for a range of operating conditions under NERC guidelines. However, if these boundaries are violated, the existing system security margins will be largely unknown. As an alternative to the use of complex optimizations over dynamic conditions, this work employs Reinforcement-based Machine Learning to identify a sequence of secure state transitions that place the grid in a higher degree of operating security with greater static and dynamic stability margins. The approach requires the training of a Machine Learning Agent to accomplish this task using modeled data and employs it as a decision support tool under severe, near-blackout conditions.
This paper presents a set of tests on a bolted benchmark structure called the S4 beam with a focus on evaluating coupling between the first two modes due to nonlinearity. Bolted joints are of interest in dynamically loaded structures because frictional slipping at the contact interface can introduce amplitude-dependent nonlinearities into the system, where the frequency of the structure decreases, and the damping increases. The challenge to model this phenomenon is even more difficult if the modes of the structure become coupled, violating a common assumption of mode orthogonality. This work presents a detailed set of measurements in which the nonlinearities of a bolted structure are highly coupled for the first two modes. Two nominally identical bolted structures are excited using an impact hammer test. The nonlinear damping curves for each beam are calculated using the Hilbert transform. Although the two structures have different frequency and damping characteristics, the mode coupling relationship between the first two modes of the structures is shown to be consistent and significant. The data is intended as a challenge problem for interested researchers; all data from these tests are available upon request.
The possibility of estimating the effective resistance at contact points along a seam in a cylindrical vessel is investigated. The vessel is formed from two top-hat structures bolted together at a flange. Aluminum shims at the bolt locations ensure a nearly constant 5-mil gap or slot between the flanges. Cavity modes are excited with a short monopole antenna inside the structure, and external near fields 5 mm away from the slot are probed around the vessel circumference. Comparison of CST and FDTD simulations with measurements reveals that the shape of the field-vs-angle curve is strongly dependent on the contact resistance, indicating that meaningful estimates can be extracted.
This paper uses co-located wind and photovoltaic generation, along with battery energy storage, as a single plant and introduces a method to provide a flexible synthetic inertia (SI) response based on plant-wide settings. The proposed controller accounts for variable resources and correctly adjusts device responses when an inverter-based resource (IBR) may become unavailable to provide a consistent plant level SI response. The flexible SI response is shown to adequately replace the lost synchronous inertial response from equivalent conventional generation when IBR penetration is approximately 25% in a small power system. Furthermore, it is shown that a high gain SI response provided by the combined IBR plant can reduce the rate-of-change-of-frequency magnitude by over 50% relative to the equivalently rated conventional generation response.
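A simplified sketch of a plant-level synthetic-inertia command of the form commonly used for such controllers; the inertia constant, deadband, and available MVA are placeholders, and this is not the specific control law developed in the paper.

```python
def synthetic_inertia_power(df_dt, f_nom=60.0, h_plant=4.0, s_avail_mva=100.0, deadband=0.02):
    """Illustrative plant-level synthetic-inertia command (MW).
    h_plant and s_avail_mva stand in for the plant-wide setting and the MVA
    currently available from online IBRs; all numbers are placeholders."""
    if abs(df_dt) < deadband:
        return 0.0
    return -2.0 * h_plant * s_avail_mva * df_dt / f_nom

# Example: a 0.5 Hz/s frequency decline with 80 MVA of IBR capacity online.
print(synthetic_inertia_power(-0.5, s_avail_mva=80.0))
```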
This chapter deals with experimental dynamic substructures, which are reduced-order models that can be coupled with each other or with finite-element-derived substructures to estimate the system response of the coupled substructures. A unifying theoretical framework in the physical, modal, or frequency domain is reviewed with examples. The major issues that have hindered experimentally derived substructures are addressed. An example is demonstrated with the transmission simulator method, which overcomes the major historical difficulties. Guidelines for transmission simulator design are presented.
Thermal runaway and its propagation are major safety issues in containerized lithium-ion battery energy storage systems. While conduction-driven propagation has received much attention, the thermal hazards associated with propagation via hot gases vented from failing cells are still not fully understood. Vented gases can lead to global safety issues in containerized systems, via heat transfer to other parts of the system and potential combustion hazards. In this work, we validate the characteristics of vented gases from cells undergoing thermal runaway in the thermal propagation model LIM1TR (Lithium-ion Modeling with 1-D Thermal Runaway). In particular, we assess the evolution of vented gases, venting time, and temperature profiles of single-cell and multi-cell arrays based on experiments performed in Archibald et al. (Fire Technology, 2020). While several metrics for estimating the venting time are assessed, a metric based on the CO2 generation results in consistent predictions. Vented gas evolution and venting times predicted by the simulations are consistent with those estimated during the experiments. The simulation resolution and other model parameters, especially the use of an intra-particle diffusion limiter, have a large role in prediction of venting time.
In order to evaluate the time evolution of avalanche breakdown in wide and ultra-wide bandgap devices, we have developed a cable pulser experimental setup that can evaluate the time-evolution of the terminating impedance for a semiconductor device with a time resolution of 130 ps. We have utilized this pulser setup to evaluate the time-to-breakdown of vertical Gallium Nitride and Silicon Carbide diodes for possible use as protection elements in the electrical grid against fast transient voltage pulses (such as those induced by an electromagnetic pulse event). We have found that the Gallium Nitride device demonstrated faster dynamics compared to the Silicon Carbide device, achieving 90% conduction within 1.37 ns compared to the SiC device response time of 2.98 ns. While the Gallium Nitride device did not demonstrate significant dependence of breakdown time on applied voltage, the Silicon Carbide device breakdown time was strongly dependent on applied voltage, ranging from a value of 2.97 ns at 1.33 kV to 0.78 ns at 2.6 kV. The fast response times (< 5 ns) of both the Gallium Nitride and Silicon Carbide devices indicate that both material systems could meet the stringent response time requirements and may be appropriate for implementation as protection elements against electromagnetic pulse transients.
Proceedings of PMBS 2022: Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
We propose a new benchmark for high-performance (HP) computers. Similar to High Performance Conjugate Gradient (HPCG), the new benchmark is designed to rank computers based on how fast they can solve a sparse linear system of equations, exhibiting computational and communication requirements typical in many scientific applications. The main novelty of the new benchmark is that it is now based on the Generalized Minimum Residual method (GMRES) (combined with a Geometric Multi-Grid preconditioner and Gauss-Seidel smoother) and provides the flexibility to utilize lower-precision arithmetic. This is motivated by new hardware architectures that deliver lower-precision arithmetic at higher performance. There are other machines that do not follow this trend. However, using lower-precision arithmetic reduces the required amount of data transfer, which alone could improve solver performance. Considering these trends, an HP benchmark that allows the use of different precisions for solving important scientific problems will be valuable for many different disciplines, and we also hope to promote the design of future HP computers that can utilize mixed-precision arithmetic for achieving high application performance. We present our initial design of the new benchmark, its reference implementation, and the performance of the reference mixed (double and single) precision Geometric Multi-Grid solvers on current top-ranked architectures. We also discuss challenges of designing such a benchmark, along with our preliminary numerical results using 16-bit numerical values (half and bfloat precisions) for solving a sparse linear system of equations.
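To illustrate the mixed-precision idea (not the benchmark's actual GMRES/multigrid/Gauss-Seidel implementation), the sketch below runs SciPy's double-precision GMRES while applying a preconditioner that is factored and applied in single precision; the 1-D Laplacian test matrix and the LU preconditioner are stand-ins chosen purely for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator, splu

# Simple 1-D Laplacian system solved in double precision.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Preconditioner factored and applied entirely in float32 (stand-in for a
# multigrid V-cycle with a Gauss-Seidel smoother in the real benchmark).
lu32 = splu(A.astype(np.float32))
M = LinearOperator((n, n),
                   matvec=lambda r: lu32.solve(r.astype(np.float32)).astype(np.float64))

x, info = gmres(A, b, M=M, restart=30, maxiter=200)
print(info, np.linalg.norm(A @ x - b))   # 0 indicates convergence; residual norm
```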
There are unique challenges associated with protection and self-healing of microgrids energized by multiple inverter-based distributed energy resources. In this study, prioritized undervoltage load shedding and undervoltage-supervised overcurrent (UVOC) protection for fault isolation are demonstrated using PSCAD. The PSCAD implementations of these relays are described in detail, and their operation in a self-healing microgrid is demonstrated.
Area efficient self-correcting flip-flops for use with triple modular redundant (TMR) soft-error hardened logic are implemented in a 12-nm finFET process technology. The TMR flip-flop slave latches self-correct in the clock low phase using Muller C-elements in the latch feedback. These C-elements are driven by the two redundant stored values and not by the slave latch itself, saving area over a similar implementation using majority gate feedback. These flip-flops are implemented as large shift-register arrays on a test chip and have been experimentally tested for their soft-error mitigation in static and dynamic modes of operation using heavy ions and protons. We show how high clock skew can result in susceptibility to soft-errors in the dynamic mode, and explain the potential failure mechanism.
As presented above, because similar existing DOE-managed SNF (DSNF) from previous reactors has been evaluated for disposal pathways, we use this knowledge and experience as a broad reference point for the initial technical bases for preliminary dispositioning of potential AR SNF. The strategy for developing fully formed gap analyses for AR SNF entails first obtaining all the defining characteristics of the AR SNF waste stream from the AR developers. Utilizing specific and accurate information and data to develop the potential disposal inventory to be evaluated is a key starting principle for success. Once the AR SNF waste streams are defined, the initial assessments would be based on comparison to appropriate existing SNF and waste forms previously analyzed (prior experience) to determine the feasibility of direct disposal or the need for further evaluation due to differences specific to the AR SNF. Assessments of criticality potential and controls would also be performed to identify any R&D gaps to be addressed in that regard. Although some AR SNF may need additional treatment for waste form development, these aspects may also be constrained and evaluated within the context of disposal options, including detailed gap analyses to identify further R&D activities to close the gaps.
High-performing teams learn intelligent and efficient communication and coordination strategies to maximize their joint utility. These teams implicitly understand the different roles of heterogeneous team members and adapt their communication protocols accordingly. Multi-Agent Reinforcement Learning (MARL) seeks to develop computational methods for synthesizing such coordination strategies, but formulating models for heterogeneous teams with different state, action, and observation spaces has remained an open problem. Without properly modeling agent heterogeneity, as in prior MARL work that leverages homogeneous graph networks, communication becomes less helpful and can even deteriorate the cooperativity and team performance. We propose Heterogeneous Policy Networks (HetNet) to learn efficient and diverse communication models for coordinating cooperative heterogeneous teams. Building on heterogeneous graph-attention networks, we show that HetNet not only facilitates learning heterogeneous collaborative policies per existing agent-class but also enables end-to-end training for learning highly efficient binarized messaging. Our empirical evaluation shows that HetNet sets a new state of the art in learning coordination and communication strategies for heterogeneous multi-agent teams by achieving an 8.1% to 434.7% performance improvement over the next-best baseline across multiple domains while simultaneously achieving a 200× reduction in the required communication bandwidth.
This paper provides a study of the potential impacts of climate change on intermittent renewable energy resources, battery storage, and resource adequacy in Public Service Company of New Mexico's Integrated Resource Plan for 2020 - 2040. Climate change models and available data were first evaluated to determine uncertainty and potential changes in solar irradiance, temperature, and wind speed in NM in the coming decades. These changes were then implemented in solar and wind energy models to determine impacts on renewable energy resources in NM. Results for the extreme climate-change scenario show that the projected wind power may decrease by ~13% due to projected decreases in wind speed. Projected solar power may decrease by ~4% due to decreases in irradiance and increases in temperature in NM. Uncertainty in these climate-induced changes in wind and solar resources was accommodated in probabilistic models assuming uniform distributions in the annual reductions in solar and wind resources. Uncertainty in battery storage performance was also evaluated based on increased temperature, capacity fade, and degradation in round-trip efficiency. The hourly energy balance was determined throughout the year given uncertainties in the renewable energy resources and energy storage. The loss of load expectation (LOLE) was evaluated for the 2040 No New Combustion portfolio and found to increase from 0 days/year to a median value of ~2 days/year due to potential reductions in renewable energy resources and battery storage performance and capacity. A rank-regression analysis revealed that battery round-trip efficiency was the most significant parameter that impacted LOLE, followed by solar resource, wind resource, and battery fade. An increase in battery storage capacity to ~30,000 MWh from a baseline value of ~14,000 MWh was required to reduce the median value of LOLE to ~0.2 days/year with consideration of potential climate impacts and battery degradation.
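A toy Monte Carlo sketch of the loss-of-load-expectation calculation described above; the uniform distributions on annual wind/solar reductions and battery round-trip efficiency mirror the treatment in the abstract, while the load shape, resource profiles, and dispatch rule are placeholders, not the study's portfolio data.

```python
import numpy as np

rng = np.random.default_rng(7)

def lole_days(hourly_load, wind, solar, storage_mwh, n_draws=100,
              wind_loss=(0.0, 0.13), solar_loss=(0.0, 0.04), rte=(0.80, 0.90)):
    """Median days/year with unserved energy over Monte Carlo draws (illustrative)."""
    results = []
    for _ in range(n_draws):
        w = wind * (1.0 - rng.uniform(*wind_loss))
        s = solar * (1.0 - rng.uniform(*solar_loss))
        eta = rng.uniform(*rte)
        soc, short_days = storage_mwh, set()
        for h, load in enumerate(hourly_load):
            net = w[h] + s[h] - load          # MW surplus (+) or deficit (-) this hour
            if net >= 0.0:
                soc = min(storage_mwh, soc + eta * net)
            elif soc + net >= 0.0:
                soc += net                    # serve the deficit from storage
            else:
                soc = 0.0
                short_days.add(h // 24)       # day with unserved energy
        results.append(len(short_days))
    return float(np.median(results))

# Hypothetical year of hourly data, loosely scaled for illustration only.
hours = 8760
load = np.full(hours, 1800.0)
solar = np.maximum(0.0, 2500.0 * np.sin(np.pi * (np.arange(hours) % 24 - 6) / 12))
wind = rng.uniform(200.0, 1500.0, hours)
print(lole_days(load, wind, solar, storage_mwh=14000.0))
```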
The Multi-Fidelity Toolkit (MFTK) is a simulation tool being developed at Sandia National Laboratories for aerodynamic predictions of compressible flows over a range of physics fidelities and computational speeds. These models include the Reynolds-Averaged Navier–Stokes (RANS) equations, the Euler equations, and modified Newtonian aerodynamics (MNA) equations, and they can be invoked independently or coupled with hierarchical Kriging to interpolate between high-fidelity simulations using lower-fidelity data. However, as with any new simulation capability, verification and validation are necessary to gather credibility evidence. This work describes formal code- and solution-verification activities. Code verification is performed on the MNA model by comparing with an analytical solution for flat-plate and inclined-plate geometries. Solution-verification activities include grid-refinement studies of HIFiRE-1 wind tunnel measurements, which are used for validation, for all model fidelities.
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power loss due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients is greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash test or nameplate short circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle of incidence, and irradiance nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance at ∼3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
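For reference, the short-circuit-current-to-effective-irradiance conversion described above typically takes the following form; the temperature coefficient and example numbers below are illustrative, not the flash-test values used for the plant in this study.

```python
def effective_irradiance(isc_meas, isc_ref_stc, t_cell_c, alpha_isc=0.0005, g_stc=1000.0):
    """Convert a reference-module short-circuit current to effective irradiance
    (W/m^2) by scaling against the flash-test/nameplate Isc and removing the
    temperature dependence. alpha_isc is the Isc temperature coefficient (1/degC);
    values here are illustrative placeholders."""
    return g_stc * isc_meas / (isc_ref_stc * (1.0 + alpha_isc * (t_cell_c - 25.0)))

# Example: 9.3 A measured on a module with a 9.8 A nameplate Isc at 45 degC cell temperature.
print(effective_irradiance(9.3, 9.8, 45.0))
```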
We develop methods that could be used to qualify a training dataset and a data-driven turbulence closure trained on it. By qualify, we mean identify the kind of turbulent physics that could be simulated by the data-driven closure. We limit ourselves to closures for the Reynolds-Averaged Navier–Stokes (RANS) equations. We build on our previous work on assembling feature-spaces, clustering, and characterizing Direct Numerical Simulation datasets that are typically pooled to constitute training datasets. In this paper, we develop an alternative way to assemble feature-spaces and thus check the correctness and completeness of our previous method. We then use the characterization of our training dataset to identify if a data-driven turbulence closure learned on it would generalize to an unseen flow configuration – an impinging jet in our case. Finally, we train a RANS closure architected as a neural network, and develop an explanation, i.e., an interpretable approximation, using generalized linear mixed-effects models and check whether the explanation resembles a contemporary closure from turbulence modeling.
We introduce a technique to automatically convert local boundary conditions into nonlocal volume constraints for nonlocal Poisson’s and peridynamic models. The proposed strategy is based on the approximation of nonlocal Dirichlet or Neumann data with a local solution obtained by using available boundary, local data. The corresponding nonlocal solution converges quadratically to the local solution as the nonlocal horizon vanishes, making the proposed technique asymptotically compatible. The proposed conversion method does not have any geometry or dimensionality constraints, and its computational cost is negligible, compared to the numerical solution of the nonlocal equation. The consistency of the method and its quadratic convergence with respect to the horizon is illustrated by several two-dimensional numerical experiments conducted by meshfree discretization for both Poisson’s problem and the linear peridynamic solid model.
A crucial component of field testing is the utilization of numerical models to better understand the system and the experimental data being collected. Meshing and modeling field tests is a complex and computationally demanding problem. Hexahedral elements cannot always reproduce experimental dimensions, leading to grid orientation or geometric errors. Voronoi meshes can match complex geometries without sacrificing orthogonality. As a result, here we present a high-resolution 3D numerical study for the BATS heater test at the WIPP that compares a standard non-deformed Cartesian mesh with a Voronoi mesh for matching field data collected during a salt heater experiment.
Robertson, Michelle; Su, Jiann-Cherng S.; Kaven, J.O.; Hopp, Chet; Hirakawa, Evan; Gasperikova, Erika; Dobson, Patrick; Schwering, Paul C.; Nakata, Nori; Majer, Ernest L.
The DOE GeoVision study identified that Enhanced Geothermal Systems (EGS) resources have the potential to provide a significant contribution toward achieving the goal of converting the U.S. electricity system to 100% clean energy over the next few decades. To further the implementation of commercial EGS development, DOE's Geothermal Technologies Office (GTO) initiated the Wells of Opportunity (WOO) Amplify program, where unproductive wells in selected geothermal fields are to be stimulated using EGS technologies, resulting in increased power production from these resources. As part of the WOO-Amplify project, GTO assembled the Amplify Monitoring Team (AMT), whose role is to provide in-field and near-field seismic monitoring design, deployment, and data analysis for stimulations under the WOO-Amplify initiative. This team, consisting of scientists and engineers from Lawrence Berkeley National Laboratory (LBNL), Sandia National Laboratories (SNL), and the US Geological Survey (USGS), is working with WOO-Amplify EGS Operators in Nevada to develop and deploy optimized seismic monitoring systems at four geothermal fields where WOO-Amplify well stimulations are planned: Don A. Campbell, Tungsten Mountain and Jersey Valley operated by Ormat Technologies, and Patua operated by Cyrq Patua Acquisition Company LLC. Using geologic and geophysical field data provided by the WOO-Amplify teams, the focus of the AMT is to develop advanced simulation and modeling techniques, design targeted seismic monitoring arrays, develop innovative and cost-effective methodologies for drilling seismic monitoring boreholes, deploy effective seismic instrumentation, and facilitate the use of microseismic data to monitor well stimulation and flow within the geothermal reservoir. Real-time seismic data from the four WOO-Amplify sites will be streamed to a publicly accessible Amplify Monitoring website. AMT's advanced simulations and template matching techniques applied during pre-stimulation phases can help improve understanding of potential seismic hazard and inform the Operator's Induced Seismicity Mitigation Protocol (ISMP). Over the next two years, AMT will be drilling, instrumenting, and recording seismic data at the WOO-Amplify field sites, telemetering the seismic waveform data to AMT's central processing system and providing the processed location data to the WOO-Amplify Operator teams. These data and monitoring systems will be critical for effective monitoring of the effects of planned well stimulation and extended flow tests during the next stage of the WOO-Amplify project.
We measured the Hugoniot, Hugoniot elastic limit (HEL), and spallation strength of laser powder bed fusion (LPBF) AlSi10Mg via uniaxial plate-impact experiments to stresses greater than 13 GPa. Despite its complex anisotropic microstructure, the LPBF AlSi10Mg did not exhibit significant orientation dependence or sample-to-sample variability in these measured quantities. We found that the Hugoniot response of the LPBF AlSi10Mg is similar to that of other Al-based alloys and is well approximated by a linear relationship: u_s = 5.49 + 1.39·u_p. Additionally, the measured HELs ranged from 0.25 to 0.30 GPa and spallation strengths ranged from 1.16 to 1.45 GPa, consistent with values reported in other studies of LPBF AlSi10Mg and Al-based alloys. A strain-rate and stress dependence of the spallation strength was also observed.
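The reported linear Hugoniot can be used directly to estimate shock states; the short sketch below evaluates it together with the Rankine-Hugoniot stress, assuming a nominal initial density of about 2.67 g/cm^3 for AlSi10Mg (an approximate literature value, not taken from this study).

```python
# Evaluate the reported linear Hugoniot (u_s = 5.49 + 1.39*u_p, velocities in km/s)
# and the corresponding Rankine-Hugoniot stress. rho0 is an assumed nominal density.
rho0 = 2.67   # g/cm^3 (assumed)
for up in (0.2, 0.5, 1.0):          # particle velocity, km/s
    us = 5.49 + 1.39 * up           # shock velocity, km/s
    sigma = rho0 * us * up          # stress in GPa (g/cm^3 * (km/s)^2 = GPa)
    print(up, round(us, 2), round(sigma, 2))
```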
Sandia National Laboratories has been tasked to operate and maintain the National Solar Thermal Test Facility (NSTTF), located on Kirtland Air Force Base near Albuquerque, NM. The NSTTF provides established test platforms and experienced researchers and technologists in the field of Concentrating Solar Technologies (CST) and Concentrating Solar Power (CSP). This three-year project seeks to maintain the NSTTF for development, testing, and application of new CSP technologies that are instrumental in advancing the state-of-the-art in support of SunShot and Generation 3 CSP technology goals. In turn, these technologies will form the foundation of the global CSP industry and continue to advance the technology to new levels of efficiency, higher temperatures, lower costs, lower risk, and higher reliability.
Lithium/fluorinated graphite (Li/CFx) primary batteries show great promise for applications in a wide range of energy storage systems due to their high energy density (>2100 Wh kg−1) and low self-discharge rate (<0.5% per year at 25 °C). While the electrochemical performance of the CFx cathode is indeed promising, the discharge reaction mechanism is not thoroughly understood to date. In this article, a multiscale investigation of the CFx discharge mechanism is performed using a novel cathode structure to minimize the carbon and fluorine additives for precise cathode characterization. Titration gas chromatography, X-ray diffraction, Raman spectroscopy, X-ray photoelectron spectroscopy, scanning electron microscopy, cross-sectional focused ion beam, high-resolution transmission electron microscopy, and scanning transmission electron microscopy with electron energy loss spectroscopy are utilized to investigate this system. Results show no metallic lithium deposition or intercalation during the discharge reaction. The discharge products are crystalline lithium fluoride particles, smaller than 10 nm and uniformly distributed within the CFx layers, and carbon with lower sp2 content similar to the hard-carbon structure. This work deepens the understanding of CFx as a high energy density cathode material and highlights the need for future investigations on primary battery materials to advance performance.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static, and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the User's Manual. Many of the constructs in Sierra/SD are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer_notes manual, the user's notes, and the material in the open literature.
The How To Manual supplements the User’s Manual and the Theory Manual. The goal of the How To Manual is to reduce learning time for complex end-to-end analyses. These documents are intended to be used together. See the User’s Manual for a complete list of the options for a solution case. All the examples are part of the Sierra/SD test suite. Each runs as is. The organization is similar to the other documents: How to run, Commands, Solution cases, Materials, Elements, Boundary conditions, and then Contact. The table of contents and index are indispensable. The Geometric Rigid Body Modes section is shared with the User’s Manual.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user's guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.