Robust and accurate unmanned aircraft system (UAS) detection is pivotal in restricted airspace. Deep learning-based object detection has been proposed to identify the presence of UASs, but it introduces two key challenges. Specifically, deep learning detectors (i) provide point estimates at test time with no associated measure of uncertainty, and (ii) easily trigger false positive detections for birds and other aerial wildlife. In this work, we propose a novel detection algorithm, which is capable of providing uncertainty quantification (UQ) metrics at test time while also significantly reducing the false positive rate on natural wildlife. Our proposed method uses an ensemble of object detectors to generate a distributional estimate for each input prediction. In addition, we measure multiple UQ-based scoring metrics for each input to further validate our model's effectiveness. Through evaluation on our custom-generated UAS dataset, consisting of images captured from deployed cameras, we show that our model provides robust UQ estimates, low false positive rates on wildlife, and significantly improved error rates over individual deep learning detection models.
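As a rough illustration of the ensemble-based UQ idea (a minimal sketch, not the authors' implementation; the scores, thresholds, and the assumption that detections are already matched across members are hypothetical), the following snippet aggregates the confidence that several ensemble members assign to one detection and treats member disagreement as an uncertainty metric:

```python
import numpy as np

# Hypothetical confidence scores that five ensemble members assign to the same
# matched detection (cross-member box matching is assumed to be done already).
ensemble_scores = np.array([0.91, 0.88, 0.95, 0.35, 0.90])

mean_conf = ensemble_scores.mean()       # point estimate of the detection confidence
std_conf = ensemble_scores.std(ddof=1)   # simple UQ metric: disagreement between members

# Accept the detection only when the ensemble agrees it is a UAS; strong
# disagreement (high std) is treated as a likely wildlife false positive.
is_uas = (mean_conf > 0.8) and (std_conf < 0.15)
print(f"mean={mean_conf:.2f}, std={std_conf:.2f}, accept={is_uas}")
```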
In this work we introduce Bootstrapped Paired Neural Networks (BPNN), a semi-supervised, low-shot model with uncertainty quantification (UQ). BPNN can be used for classification and target detection problems commonly encountered when working with aerospace imagery data, such as hyperspectral imagery (HSI) data. Aerospace imaging campaigns often collect large amounts of data that are costly to label, so we supplement the labeled data with the vast amount of unlabeled data available (often > 90% of the data) using semi-supervised techniques (Exponential Average Adversarial Training). It is also often difficult and costly to obtain the sample size necessary to train a deep learning model on a new class or target; using paired neural networks (PNN), our model generalizes to low- and no-shot learning by learning an embedding space in which the underlying data population lives, so that additional labeled data may not be necessary to detect targets or classes that were not originally trained on. Finally, by bootstrapping the PNN, the BPNN model gives an uncertainty score on predicted classifications with minimal statistical distributional assumptions. Uncertainty quantification is necessary for the high-consequence problems that many aerospace applications face. The model's ability to provide uncertainty for its own predictions can be used to reduce false alarm rates, provide explainability to black-box models, and help design efficient future data collection campaigns. Although models exist that combine two of these three qualities, to our knowledge no model combines all three: semi-supervision, low-shot learning, and uncertainty quantification. We generate an HSI scene using a high-fidelity data simulator that gives us ground truth radiance spectra, allowing us to fully assess the quality of our model and compare it to other common models. When applied to our HSI scene, BPNN outperforms classic target detection methods, such as the Adaptive Cosine Estimator (ACE), simple deep learning models without low-shot or semi-supervised capabilities, and models using only low-shot learning techniques such as the regular PNN. When extending to targets not originally trained on, the model again outperforms the regular PNN. Using the UQ of predictions, we create 'high confidence sets' containing predictions that are reliably correct and can help suppress false alarms. This is shown by the higher performance of the 'high confidence set' at particular constant false alarm rates. These sets also provide an avenue for automation, while other predictions in high-consequence situations might need to be analyzed further. BPNN is a powerful new predictive model that could be used to maximize the value of data collected by aerial assets while instilling confidence in model predictions for high-consequence situations and remaining flexible enough to find previously unobserved targets.
Dynamical systems subject to intermittent contact are often modeled with piecewise-smooth contact forces. However, the discontinuous nature of the contact can cause inaccuracies in numerical results or failure in numerical solvers. Representing the piecewise contact force with a continuous and smooth function can mitigate these problems, but not all continuous representations may be appropriate for this use. In this work, five representations used by previous researchers (polynomial, rational polynomial, hyperbolic tangent, arctangent, and logarithm-arctangent functions) are studied to determine which ones most accurately capture nonlinear behaviors including super- and subharmonic resonances, multiple solutions, and chaos. The test case is a single-DOF forced Duffing oscillator with freeplay nonlinearity, solved using direct time integration. This work intends to expand on past studies by determining the limits of applicability for each representation and what numerical problems may occur.
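As an illustrative sketch of one such smooth representation, the hyperbolic tangent, applied to a freeplay contact force in a forced Duffing oscillator, the following snippet integrates the equation of motion; the gap, stiffness, damping, and forcing values are made-up placeholders, not the study's actual test case:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters for a single-DOF Duffing oscillator with freeplay (hypothetical values)
delta, k_c, eps = 0.1, 50.0, 200.0      # gap half-width, contact stiffness, smoothing sharpness
m, c, k, k3 = 1.0, 0.05, 1.0, 0.5       # mass, damping, linear and cubic stiffness
F0, omega = 0.3, 1.2                    # forcing amplitude and frequency

def contact_force(x):
    """Hyperbolic-tangent smoothing of the piecewise-linear freeplay contact force."""
    return 0.5 * k_c * ((x - delta) * (1 + np.tanh(eps * (x - delta)))
                        + (x + delta) * (1 - np.tanh(eps * (x + delta))))

def rhs(t, y):
    x, v = y
    return [v, (F0 * np.cos(omega * t) - c * v - k * x - k3 * x**3
                - contact_force(x)) / m]

# Direct time integration; increasing eps sharpens the freeplay corners but stiffens the ODE
sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 0.0], max_step=0.01)
```

Sharpening the smoothing parameter (eps above) trades accuracy of the piecewise force against numerical stiffness, which is exactly the trade-off the representations in this study are evaluated against.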
Performance assessment is an important tool to estimate the long-term safety for a nuclear waste repository. Performance assessment simulations are subject to multiple kinds of uncertainty including stochastic uncertainty, state of knowledge uncertainty, and model uncertainty. Task F1 of the DECOVALEX project involves comparison of the models and methods used in post-closure performance assessment of deep geologic repositories in fractured crystalline rock, providing an opportunity to compare the effects of different sources of uncertainty. A generic reference case for a mined repository in fractured crystalline rock was put together by participating teams, where each team was responsible for determining how best to represent and implement the model. This work presents the preliminary crystalline reference case results for the Department of Energy (DOE) team.
We propose a two-stage scenario-based stochastic optimization problem to determine investments that enhance power system resilience. The proposed optimization problem minimizes the Conditional Value at Risk (CVaR) of load loss to target low-probability, high-impact events. We provide results in the context of generator winterization investments in Texas using winter storm scenarios generated from historical data collected from Winter Storm Uri. Results illustrate how the CVaR metric can be used to minimize the tail of the load-loss distribution and how risk aversion impacts investment decisions.
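For readers unfamiliar with CVaR, the following minimal sketch (with hypothetical scenario losses and probabilities, unrelated to the paper's data or optimization model) computes the empirical CVaR of load loss via the Rockafellar-Uryasev formulation that scenario-based optimization models typically embed:

```python
import numpy as np

# Hypothetical scenario losses (MWh of load shed) with equal probabilities
losses = np.array([0.0, 0.0, 10.0, 50.0, 400.0, 2500.0])
probs = np.full(len(losses), 1.0 / len(losses))
beta = 0.95  # CVaR confidence level

def cvar(losses, probs, beta):
    """Rockafellar-Uryasev form: CVaR = min_a { a + E[(L - a)^+] / (1 - beta) }.
    For a discrete distribution the minimum is attained at one of the loss values."""
    return min(a + np.dot(probs, np.maximum(losses - a, 0.0)) / (1.0 - beta)
               for a in losses)

print(f"Expected loss: {np.dot(probs, losses):.1f} MWh")
print(f"CVaR_{beta}:   {cvar(losses, probs, beta):.1f} MWh")
```

Minimizing the expected loss weighs all scenarios equally, while minimizing CVaR focuses the investment decision on the worst (1 - beta) tail, which is how low-probability, high-impact events are targeted.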
The detonation of explosives produces luminous fireballs often containing particulates such as carbon soot or remnants of partially reacted explosives. The spatial distribution of these particulates is of great interest for the derivation and validation of models. In this work, three ultra-high-speed imaging techniques (diffuse back-illumination extinction, schlieren, and emission imaging) are utilized to investigate the particulate quantity, spatial distribution, and structure in a small-scale fireball. The measurements show the evolution of the particulate cloud in the fireball, identifying possible emission sources and regions of high optical thickness. Extinction measurements performed at two wavelengths show that extinction follows the inverse-wavelength behavior expected of absorptive particles in the Rayleigh scattering regime. The estimated mass from these extinction measurements shows an average soot yield consistent with previous soot collection experiments. The imaging diagnostics discussed in the current work can provide detailed information on the spatial distribution and concentration of soot, crucial for validation opportunities in the future.
A 0.2-2 GHz digitally programmable RF delay element based on a time-interleaved multi-stage switched-capacitor (TIMS-SC) approach is presented. The proposed approach enables hundreds of ns of broadband RF delay by employing sample time expansion in multiple stages of switched-capacitor storage elements. The delay element was implemented in a 45 nm SOI CMOS process and achieves a 2.55-448.6 ns programmable delay range with < 0.12% delay variation across 1.8 GHz of bandwidth at maximum delay, 2.42 ns programmable delay steps, and 330 ns/mm² area efficiency. The device achieves 24 dB gain, 7.1 dB noise figure, and consumes 80 mW from a 1 V supply with an active area of 1.36 mm².
The growing x-ray detection burden for vehicles at Ports of Entry in the US requires the development of efficient and reliable algorithms to assist human operators in detecting contraband. Developing algorithms for large-scale non-intrusive inspection (NII) that both meet operational performance requirements and are extensible for use in an evolving environment requires large volumes and varieties of training data, yet collecting and labeling data for these environments is prohibitively costly and time consuming. Given these constraints, generating synthetic data to augment algorithm training has been a focus of recent research. Here we discuss the use of synthetic imagery in an object detection framework, and describe a simulation-based approach to determining domain-informed threat image projection (TIP) augmentation.
The National Academies of Sciences, Engineering, and Medicine (NASEM) defines reproducibility as 'obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis,' and replicability as 'obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data' [1]. Due to an increasing number of applications of artificial intelligence and machine learning (AI/ML) to fields such as healthcare and digital medicine, there is a growing need for verifiable AI/ML results, and therefore reproducible research and replicable experiments. This paper establishes examples of irreproducible AI/ML applications to medical sciences and quantifies the variance of common AI/ML models (artificial neural network, naive Bayes, and random forest classifiers) for tasks on medical data sets.
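To make the variance quantification concrete, here is a minimal sketch (using a built-in scikit-learn data set as a stand-in for the medical data, with hypothetical model settings, not the paper's experimental protocol) that retrains each model class under different random seeds and reports the spread in test accuracy:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in for a medical data set

models = {
    "RandomForest": lambda s: RandomForestClassifier(random_state=s),
    "NaiveBayes":   lambda s: GaussianNB(),  # deterministic fit; the seed only changes the split
    "ANN":          lambda s: MLPClassifier(max_iter=1000, random_state=s),
}

for name, make in models.items():
    accs = []
    for seed in range(10):  # repeat the full pipeline under different seeds
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
        accs.append(make(seed).fit(Xtr, ytr).score(Xte, yte))
    print(f"{name}: mean accuracy = {np.mean(accs):.3f}, std = {np.std(accs):.4f}")
```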
Non-volatile memory arrays require select devices to ensure accurate programming. The one-selector one-resistor (1S1R) array, in which a two-terminal nonlinear select device is placed in series with a resistive memory element, is attractive due to its high-density data storage; however, the effect of the nonlinear select device on the accuracy of analog in-memory computing has not been explored. This work evaluates the impact of select and memory device properties on the results of analog matrix-vector multiplications. We integrate nonlinear circuit simulations into CrossSim and perform end-to-end neural network inference simulations to study how the select device affects the accuracy of neural network inference. We propose an adjustment to the input voltage that can effectively compensate for the electrical load of the select device. Our results show that for deep residual networks trained on CIFAR-10, a compensation that is uniform across all devices in the system can mitigate these effects over a wide range of values for the select device I-V steepness and memory device On/Off ratio. A realistic I-V curve steepness of 60 mV/dec can yield an accuracy on CIFAR-10 that is within 0.44% of the floating-point accuracy.
Incorrect modeling of control characteristics for inverter-based resources (IBRs) can affect the accuracy of electric power system studies. In many distribution system contexts, the control settings for behind-the-meter (BTM) IBRs are unknown. This paper presents an efficient method for selecting a small number of time series samples from net load meter data that can be used for reconstructing or classifying the control settings of BTM IBRs. Sparse approximation techniques are used to select the time series samples that cause the inversion of a matrix of candidate responses to be as well-conditioned as possible. We verify these methods on 451 actual advanced metering infrastructure (AMI) datasets from loads with BTM IBRs. Selecting 60 time series samples at 15-minute granularity, we recover BTM control characteristics with a mean error of less than 0.2 kVAR.
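The paper's sparse approximation machinery is not reproduced here, but the conditioning objective it targets can be illustrated with a simple greedy sketch (the random candidate-response matrix and sample counts are hypothetical): each step adds the time sample that maximizes the smallest singular value of the selected submatrix, keeping its inversion well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical candidate-response matrix: rows = time samples, columns = candidate IBR settings
A = rng.standard_normal((960, 8))   # e.g., 10 days of 15-minute samples, 8 candidate responses

def greedy_select(A, k):
    """Greedily pick k rows so the selected submatrix is as well-conditioned as possible."""
    selected = []
    for _ in range(k):
        best_row, best_score = None, -np.inf
        for i in range(A.shape[0]):
            if i in selected:
                continue
            sub = A[selected + [i], :]
            score = np.linalg.svd(sub, compute_uv=False)[-1]  # smallest singular value
            if score > best_score:
                best_row, best_score = i, score
        selected.append(best_row)
    return selected

rows = greedy_select(A, 60)
print("condition number of selected samples:", np.linalg.cond(A[rows, :]))
```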
As legacy distance protection schemes begin to transition from impedance-based to traveling wave (TW) time-based approaches, it is important to perform diligent simulations prior to commissioning a TW relay. Since Control-Hardware-In-the-Loop (CHIL) simulations have recently become common practice for power system research, this work aims to illustrate some limitations in the integration of commercially available TW relays in CHIL for transmission-level simulations. The interconnection of Frequency-Dependent (FD) with PI-modeled transmission lines, which is a common practice in CHIL, may lead to sharp reflections that ease the relaying task. However, modeling contiguous lines as FD, or the presence of certain shunt loads, may obscure certain TW reflections. As a consequence, the fault location algorithm in the relay may produce an incorrect result. In this paper, a qualitative comparison of the performance of a commercially available TW relay is carried out to show how the system modeling in CHIL may affect the fault location accuracy.
This paper provides a study of the potential impacts of climate change on intermittent renewable energy resources, battery storage, and resource adequacy in Public Service Company of New Mexico's Integrated Resource Plan for 2020 - 2040. Climate change models and available data were first evaluated to determine uncertainty and potential changes in solar irradiance, temperature, and wind speed in NM in the coming decades. These changes were then implemented in solar and wind energy models to determine impacts on renewable energy resources in NM. Results for the extreme climate-change scenario show that the projected wind power may decrease by ~13% due to projected decreases in wind speed. Projected solar power may decrease by ~4% due to decreases in irradiance and increases in temperature in NM. Uncertainty in these climate-induced changes in wind and solar resources was accommodated in probabilistic models assuming uniform distributions in the annual reductions in solar and wind resources. Uncertainty in battery storage performance was also evaluated based on increased temperature, capacity fade, and degradation in round-trip efficiency. The hourly energy balance was determined throughout the year given uncertainties in the renewable energy resources and energy storage. The loss of load expectation (LOLE) was evaluated for the 2040 No New Combustion portfolio and found to increase from 0 days/year to a median value of ~2 days/year due to potential reductions in renewable energy resources and battery storage performance and capacity. A rank-regression analysis revealed that battery round-trip efficiency was the most significant parameter impacting LOLE, followed by solar resource, wind resource, and battery fade. An increase in battery storage capacity to ~30,000 MWh from a baseline value of ~14,000 MWh was required to reduce the median value of LOLE to ~0.2 days/year with consideration of potential climate impacts and battery degradation.
Density fluctuations in compressible turbulent boundary layers cause aero-optical distortions that affect the performance of optical systems such as sensors and lasers. The development of models for predicting the aero-optical distortions relies on theory and reference data that can be obtained from experiments and time-resolved simulations. This paper reports on wall-modeled large-eddy simulations of turbulent boundary layers over a flat plate at Mach 3.5, 7.87, and 13.64. The conditions for the Mach 3.5 case match those for the DNS presented by Miller et al. [1]. The Mach 7.87 simulation matches conditions inside the Hypersonic Wind Tunnel at Sandia National Laboratories. For the Mach 13.64 case, the conditions inside the Arnold Engineering Development Complex Hypervelocity Tunnel 9 are matched. Overall, adequate agreement of the velocity and temperature as well as Reynolds stress profiles with reference data from direct numerical simulations is obtained for the different Mach numbers. For all three cases, the normalized root-mean-square optical path difference was computed and compared with data obtained from the reference direct numerical simulations and experiments, as well as predictions obtained with a semi-analytical relationship developed at the University of Notre Dame. Above Mach 5, the normalized path difference obtained from the simulations is above the model prediction. This provides motivation for future work aimed at evaluating the assumptions behind the Notre Dame model for hypersonic boundary layer flows.
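As a minimal illustration of how an optical path difference field is obtained from a simulated density field via the Gladstone-Dale relation (the grid, density values, and beam direction below are hypothetical placeholders, not the simulation data), consider:

```python
import numpy as np

K_GD = 2.27e-4        # Gladstone-Dale constant for air, m^3/kg (approximate)
rng = np.random.default_rng(1)

# Hypothetical instantaneous density field rho(x, y, z) on a uniform grid [kg/m^3]
nx, ny, nz, dz = 64, 64, 128, 1.0e-4
rho = 1.2e-2 + 1.0e-3 * rng.standard_normal((nx, ny, nz))

# Optical path length through the boundary layer along the beam (z) direction,
# using n - 1 = K_GD * rho (Gladstone-Dale relation)
opl = K_GD * np.sum(rho, axis=2) * dz
opd = opl - opl.mean()                 # remove the aperture-mean (piston) term
opd_rms = np.sqrt(np.mean(opd**2))
print(f"OPD_rms = {opd_rms:.3e} m")
```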
This paper presents a visualization technique for incorporating eigenvector estimates with geospatial data to create inter-area mode shape maps. For each point of measurement, the method specifies the radius, color, and angular orientation of a circular map marker. These characteristics are determined by the elements of the right eigenvector corresponding to the mode of interest. The markers are then overlaid on a map of the system to create a physically intuitive visualization of the mode shape. This technique serves as a valuable tool for differentiating oscillatory modes that have similar frequencies but different shapes. This work was conducted within the Western Interconnection Modes Review Group (WIMRG) in the Western Electricity Coordinating Council (WECC). For testing, we employ the WECC 2021 Heavy Summer base case, which features a high-fidelity, industry-standard dynamic model of the North American Western Interconnection. Mode estimates are produced via eigen-decomposition of a reduced-order state matrix identified from simulated ringdown data. The results provide improved physical intuition about the spatial characteristics of the inter-area modes. In addition to offline applications, this visualization technique could also enhance situational awareness for system operators when paired with online mode shape estimates.
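A minimal sketch of the marker construction (with hypothetical measurement locations and eigenvector elements, plotted on bare axes rather than an actual basemap) might look like:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measurement sites (lon, lat) and their right-eigenvector elements
lons = np.array([-122.3, -118.2, -112.0, -105.0, -123.1])
lats = np.array([47.6, 34.0, 40.8, 39.7, 44.1])
phi = np.array([0.9 + 0.1j, -0.7 + 0.2j, 0.2 - 0.5j, -0.4 - 0.4j, 0.8 + 0.05j])

mag, ang = np.abs(phi), np.angle(phi, deg=True)
fig, ax = plt.subplots()
# Marker radius from the eigenvector magnitude, color from its phase
sc = ax.scatter(lons, lats, s=2000 * mag, c=ang, cmap="hsv", vmin=-180, vmax=180, alpha=0.7)
for x, y, m, a in zip(lons, lats, mag, ang):
    r = 0.8 * m   # angular orientation drawn as a tick from the marker center
    ax.plot([x, x + r * np.cos(np.radians(a))], [y, y + r * np.sin(np.radians(a))], "k-")
fig.colorbar(sc, label="phase (deg)")
ax.set_xlabel("longitude"); ax.set_ylabel("latitude")
plt.show()
```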
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.1.2 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
A newly developed variable-weight DSMC collision scheme for inelastic collision events is applied to PIC-DSMC modeling of electrical breakdown in one-dimensional helium- and argon-filled gaps. Application of the collision scheme to various inelastic collisional and gas-surface interaction processes (electron-impact ionization, electronic excitation, secondary electron emission) is considered. The collision scheme is shown to reduce the noise in the computed current density compared to the commonly used approach of sampling a single process, while maintaining a comparable level of computational cost and providing less variance in the average number of particles per cell.
We investigate the space complexity of two graph streaming problems: MAX-CUT and its quantum analogue, QUANTUM MAX-CUT. Previous work by Kapralov and Krachun [STOC 19] resolved the classical complexity of the classical problem, showing that any (2 - ε)-approximation requires Ω(n) space (a 2-approximation is trivial with O(log n) space). We generalize both of these results, demonstrating Ω(n) space lower bounds for (2 - ε)-approximating MAX-CUT and QUANTUM MAX-CUT, even if the algorithm is allowed to maintain a quantum state. As the trivial approximation algorithm for QUANTUM MAX-CUT only gives a 4-approximation, we show tightness with an algorithm that returns a (2 + ε)-approximation to the QUANTUM MAX-CUT value of a graph in O(log n) space. Our work resolves the quantum and classical approximability of quantum and classical Max-Cut using o(n) space. We prove our lower bounds through the techniques of Boolean Fourier analysis. We give the first application of these methods to sequential one-way quantum communication, in which each player receives a quantum message from the previous player, and can then perform arbitrary quantum operations on it before sending it to the next. To this end, we show how Fourier-analytic techniques may be used to understand the application of a quantum channel.
Rock salt is being considered as a medium for energy storage and radioactive waste disposal. A Disturbed Rock Zone (DRZ) develops in the immediate vicinity of excavations in rock salt, with an increase in permeability, which alters the migration of gases and liquids around the excavation. When creep occurs adjacent to a stiff inclusion such as a concrete plug, it is expected that the stress state near the inclusion will become more hydrostatic and less deviatoric, promoting healing (permeability reduction) of the DRZ. In this scoping study, we measured the permeability of DRZ rock salt over time adjacent to inclusions (plugs) of varying stiffness to determine how the healing of rock salt, as reflected in the permeability changes, depends on stress and time. Samples were created with three different inclusion materials in a central hole along the axis of a salt core: (i) very soft silicone sealant, (ii) Sorel cement, and (iii) carbon steel. The measured permeabilities are corrected for the gas slippage effect. We observed that the permeability change is a function of the inclusion material. The stiffer the inclusion, the more rapidly the permeability decreases with time.
We present a field-deployable microfluidic immunoassay device in response to the need for sensitive, quantitative, and high-throughput protein detection at point-of-need. The portable microfluidic system facilitates eight magnetic bead-based sandwich immunoassays from raw samples in 45 minutes. An innovative bead actuation strategy was incorporated into the system to automate multiple sample process steps with minimal user intervention. The device is capable of quantitative and sensitive protein analysis with a 10 pg/ml detection limit from interleukin 6-spiked human serum samples. We envision the reported device offering ultrasensitive point-of-care immunoassay tests for timely and accurate clinical diagnosis.
Proceedings of ISAV 2022: IEEE/ACM International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
This paper reports on Catalyst usability and initial adoption by SPARC analysts. The use case approach highlights the analysts' perspective. Impediments to adoption can be due to deficiencies in software capabilities, or analysts may identify mundane inconveniences and barriers that prevent them from fully leveraging Catalyst. Even so, for many analyst tasks Catalyst provides enough relative advantage that analysts have begun applying it in their production work, and they recognize the potential for it to solve problems they currently struggle with. The findings in this report include specific issues and minor bugs in ParaView Python scripting, which are viewed as having straightforward solutions, as well as a broader adoption analysis.
Proceedings of PMBS 2022: Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
We propose a new benchmark for high-performance (HP) computers. Similar to High Performance Conjugate Gradient (HPCG), the new benchmark is designed to rank computers based on how fast they can solve a sparse linear system of equations, exhibiting computational and communication requirements typical of many scientific applications. The main novelty of the new benchmark is that it is based on the Generalized Minimum Residual method (GMRES) (combined with a Geometric Multi-Grid preconditioner and Gauss-Seidel smoother) and provides the flexibility to utilize lower-precision arithmetic. This is motivated by new hardware architectures that deliver lower-precision arithmetic at higher performance. Even on machines that do not follow this trend, using lower-precision arithmetic reduces the required amount of data transfer, which alone could improve solver performance. Considering these trends, an HP benchmark that allows the use of different precisions for solving important scientific problems will be valuable for many different disciplines, and we also hope to promote the design of future HP computers that can utilize mixed-precision arithmetic for achieving high application performance. We present our initial design of the new benchmark, its reference implementation, and the performance of the reference mixed (double and single) precision Geometric Multi-Grid solvers on current top-ranked architectures. We also discuss challenges of designing such a benchmark, along with our preliminary numerical results using 16-bit numerical values (half and bfloat precisions) for solving a sparse linear system of equations.
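To give a feel for the mixed-precision idea, here is a minimal sketch of iterative refinement in which a cheap single-precision inner solve (a dense direct solve standing in for the benchmark's preconditioned GMRES solver) is wrapped by double-precision residual computation and solution accumulation; the test matrix and sizes are hypothetical:

```python
import numpy as np

# 1-D Poisson test matrix as a stand-in for the benchmark's sparse system
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

A32 = A.astype(np.float32)       # low-precision copy used by the inner solver
x = np.zeros(n)
for it in range(20):
    r = b - A @ x                                      # residual in float64
    d = np.linalg.solve(A32, r.astype(np.float32))     # cheap low-precision correction
    x += d.astype(np.float64)                          # accumulate the solution in float64
    rel = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    print(f"refinement step {it}: relative residual = {rel:.2e}")
    if rel < 1e-12:
        break
```

The inner solve touches only single-precision data (half the memory traffic), while the outer loop restores double-precision accuracy, which is the essence of the mixed-precision flexibility the benchmark exposes.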
Based on the rationale presented, nuclear criticality is improbable after salt creep causes compaction of criticality control overpacks (CCOs) disposed at the Waste Isolation Pilot Plant, an operating repository in bedded salt for the disposal of transuranic (TRU) waste from atomic energy defense activities. For most TRU waste, the possibility of post-closure criticality is exceedingly small either because the salt neutronically isolates TRU waste canisters or because closure of a disposal room from salt creep does not sufficiently compact the low mass of fissile material. The criticality potential has been updated here because of the introduction of CCOs, each of which may contain up to 380 fissile gram equivalents of plutonium-239. The criticality potential is evaluated through high-fidelity geomechanical modeling of a disposal room filled with CCOs during two representative conditions: (1) large salt block fall, and (2) gradual salt compaction (without brine seepage and subsequent gas generation to permit maximum room closure). Geomechanical models of rock fall demonstrate three tiers of CCOs are not greatly disrupted. Geomechanical models of gradual room closure from salt creep predict irregular arrays of closely packed CCOs after 1000 years, when room closure has asymptotically approached maximum compaction. Criticality models of spheres and cylinders of 380 fissile gram equivalents of plutonium (as oxide) at the predicted irregular spacing demonstrate that an array of CCOs is not critical when surrounded by salt and magnesium oxide, provided the amount of hydrogenous material shipped in the CCO (usually water and plastics) is controlled or boron carbide (a neutron poison) is mixed with the fissile contents.
Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
Magann, Alicia B.; Mccaul, Gerard; Rabitz, Herschel A.; Bondar, Denys I.
The characterization of mixtures of non-interacting, spectroscopically similar quantum components has important applications in chemistry, biology, and materials science. We introduce an approach based on quantum tracking control that allows for determining the relative concentrations of constituents in a quantum mixture, using a single pulse which enhances the distinguishability of components of the mixture and has a length that scales linearly with the number of mixture constituents. To illustrate the method, we consider two very distinct model systems: mixtures of diatomic molecules in the gas phase, as well as solid-state materials composed of a mixture of components. A set of numerical analyses are presented, showing strong performance in both settings.
Despite their susceptibility to hydrogen-assisted fracture, ferritic steels make up a large portion of the hydrogen infrastructure. It is impractical and too costly to build large-scale components such as pipelines and pressure vessels out of more hydrogen-resistant materials such as austenitic stainless steels. Therefore, it is necessary to understand the fracture behavior of ferritic steels in high-pressure hydrogen environments to manage design margins and reduce costs. Quenched and tempered (Q&T) martensite is the predominant microstructure of high-pressure hydrogen pressure vessels, and higher strength grades of this steel type are more susceptible to hydrogen degradation than lower strength grades. In this study, a single heat of 4340 alloy was heat treated to develop alternative microstructures for evaluation of fracture resistance in hydrogen gas. Several microstructures, such as lower bainite and upper bainite with strength similar to the baseline Q&T martensite, were fracture tested at 21 and 105 MPa H2. Despite a higher MnS inclusion content in the tested 4340 alloy, which reduced the fracture toughness in air, the fracture behavior in hydrogen gas followed a trend similar to other previously tested Q&T martensitic steels. The lower bainite microstructure performed similarly to the Q&T martensite, whereas the upper bainite microstructure performed slightly worse. In this paper, we extend the range of high-strength microstructures evaluated for hydrogen-assisted fracture beyond conventional Q&T martensitic steels.
Driven by the exceedingly high computational demands of simulating mechanical response in complex engineered systems with finely resolved finite element models, there is a critical need to optimally reduce the fidelity of such simulations. The minimum required fidelity is constrained by error tolerances on the simulation results, but error bounds are often impossible to obtain a priori. One such source of error is the variability of material properties within a body due to spatially non-uniform processing conditions and inherent stochasticity in material microstructure. This study seeks to quantify the effects of microstructural heterogeneity on component- and system-scale performance to aid in the choice of an appropriate material model and spatial resolution for finite element analysis.
The Jet Propulsion Laboratory has a keen interest in exploring icy moons in the solar system, particularly Jupiter's Europa. Successful exploration of the moon's surface includes planetary protection initiatives to prevent the introduction of viable organisms from Earth to Europa. To that end, the Europa lander requires a Terminal Sterilization Subsystem (TSS) to rid the lander of viable organisms that would potentially contaminate the moon's environment. Sandia National Laboratories has been developing a TSS architecture, relying heavily on computational models to support TSS development. Sandia's TSS design approach involves using energetic material to thermally sterilize lander components at the end of the mission. A hierarchical modeling approach was used for system development and analysis, where simplified systems were constructed to perform empirical tests for evaluating energetic material formulation development and to assist in developing computational models with multiple tiers of physics fidelity. Computational models have been developed using multiple Sandia-native computational tools. Three experimental systems and corresponding computational models have been developed: Tube, Sub-Box Small, and Sub-Box Large systems. This paper presents an explanation of the application context of the TSS along with an overview description of a small portion of the TSS development from a modeling and simulation perspective, specifically highlighting verification, validation, and uncertainty quantification (VVUQ) aspects of the modeling and simulation work. Multiple VVUQ approaches were implemented during TSS development, including solution verification, calibration, uncertainty quantification, global sensitivity analysis, and validation. This paper is not intended to express the design results or parameter values used to model the TSS but to communicate the approaches used and how the results of the VVUQ efforts were used and interpreted to assist system development.
A forward analytic model is required to rapidly simulate the neutron time-of-flight (nToF) signals that result from magnetized liner inertial fusion (MagLIF) experiments at Sandia’s Z Pulsed Power Facility. Various experimental parameters, such as the burn-weighted fuel-ion temperature and liner areal density, determine the shape of the nToF signal and are important for characterizing any given MagLIF experiment. Extracting these parameters from measured nToF signals requires an appropriate analytic model that includes the primary deuterium-deuterium neutron peak, once-scattered neutrons in the beryllium liner of the MagLIF target, and direct beamline attenuation. Mathematical expressions for this model were derived from the general-geometry time- and energy-dependent neutron transport equation with anisotropic scattering. Assumptions consistent with the time-of-flight technique were used to simplify this linear Boltzmann transport equation into a more tractable form. Models of the uncollided and once-collided neutron scalar fluxes were developed for one of the five nToF detector locations at the Z-Machine. Numerical results from these models were produced for a representative MagLIF problem and found to be in good agreement with similar neutron transport simulations. Twenty experimental MagLIF data sets were analyzed using the forward models, which were determined to be significantly sensitive only to the ion temperature. The results of this work were also found to agree with values obtained separately using a zero-scatter analytic model and a high-fidelity Monte Carlo simulation. Inherent difficulties in this and similar techniques are identified, and a new approach forward is suggested.
Software sustainability is critical for Computational Science and Engineering (CSE) software. Measuring sustainability is challenging because sustainability consists of many attributes. One factor that impacts software sustainability is the complexity of the source code. This paper introduces an approach for utilizing complexity data, with a focus on hotspots of and changes in complexity, to assist developers in performing code reviews and inform project teams about longer-term changes in sustainability and maintainability from the perspective of cyclomatic complexity. We present an analysis of data associated with four real-world pull requests to demonstrate how the metrics may help guide and inform the code review process and how the data can be used to measure changes in complexity over time.
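As a small illustration of how cyclomatic complexity hotspots can be surfaced for a code review (a sketch assuming the third-party radon package; the file name is hypothetical and this is not the paper's tooling), one might do:

```python
from radon.complexity import cc_visit   # assumes the `radon` package is installed

source = open("my_module.py").read()     # hypothetical file under review
blocks = cc_visit(source)                # per-function/method cyclomatic complexity

# Rank functions and methods by cyclomatic complexity to surface hotspots for review;
# running the same analysis on the pre- and post-change revisions shows the change in complexity.
for block in sorted(blocks, key=lambda b: b.complexity, reverse=True)[:10]:
    print(f"{block.name:40s} line {block.lineno:4d}  CC = {block.complexity}")
```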
Energy storage is an extremely flexible grid asset that can provide a wide range of services. Unfortunately, energy storage is often relatively expensive compared to other options. With the emphasis on decarbonization, energy storage is required to buffer the intermittency associated with variable renewable generation. This paper calculates the maximum potential revenue from an energy storage system engaged in day-ahead market arbitrage in the California Independent System Operator (CAISO) region and uses these results to estimate the distribution of break-even capital costs. Break-even cost data is extremely useful as it provides insight into expected market penetration given a target capital cost. This information is also valuable for setting policy related to energy storage incentives as well as for setting price targets for research and development initiatives. The potential annual revenue of a generic battery energy storage system (BESS) participating in the CAISO day-ahead energy market was analyzed for 2,145 nodes over a seven-year period (2014-2020). This data was used to estimate the break-even capital cost for each node as well as the cost requirements for several internal rate of return scenarios. Based on the analysis, the capital costs of lithium-ion systems must be reduced by approximately 80% from current levels to enable arbitrage applications to have a reasonable rate of return.
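The maximum-revenue calculation behind such an analysis can be posed as a small linear program; the sketch below (hypothetical prices, ratings, efficiencies, and a perfect-foresight hourly formulation, not the paper's actual methodology) illustrates the idea for a single day at a single node:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
prices = 30 + 20 * np.sin(2 * np.pi * np.arange(24) / 24) + rng.normal(0, 3, 24)  # $/MWh, made up

T = len(prices)
P_max, E_max = 1.0, 4.0          # assumed power (MW) and energy (MWh) ratings
eta_c = eta_d = 0.92             # assumed one-way charge/discharge efficiencies

# Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}] (MW, 1-hour steps)
c_obj = np.concatenate([prices, -prices])          # linprog minimizes, so minimize -revenue
L = np.tril(np.ones((T, T)))                       # cumulative-sum operator
A_soc = np.hstack([eta_c * L, -(1.0 / eta_d) * L]) # state of charge, starting from empty
A_ub = np.vstack([A_soc, -A_soc])                  # enforce 0 <= SOC_t <= E_max
b_ub = np.concatenate([np.full(T, E_max), np.zeros(T)])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P_max)] * (2 * T), method="highs")

print(f"Maximum day-ahead arbitrage revenue: ${-res.fun:.2f}")
```

Summing such daily (or annual) revenues and dividing by an annualized cost factor is the kind of calculation that yields a break-even capital cost per node.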
Expansion techniques are powerful tools that can take a limited measurement set and provide information on responses at unmeasured locations. Expansion techniques are used in dynamic environment specifications, full-field stress measurements, model calibration, and other calculations that require response at locations not measured. However, modal expansion techniques such as SEREP (System Equivalent Reduction Expansion Process) introduce error when projecting from the measured set of degrees of freedom to the expanded degrees of freedom. Empirical evidence has been used in the past to qualitatively determine this error. In recent years, the modal projection error was developed to quantify the error through a projection between different domains. The modal projection error is used in this paper to demonstrate the use of the metric in quantifying the error of the expansion process and to identify which modes of the expansion process are the most important.
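For readers unfamiliar with SEREP, the following minimal numpy sketch (random mode shapes and an arbitrary choice of measured degrees of freedom, purely illustrative) forms the expansion transformation from the pseudo-inverse of the measured partition of the mode shape matrix; the expansion is exact only when the response lies in the space of the retained modes, which is the source of the error discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical FE mode shape matrix: 100 full-model DOFs, 6 retained modes
Phi_full = rng.standard_normal((100, 6))
measured_dofs = np.arange(0, 100, 10)          # 10 instrumented DOFs
Phi_a = Phi_full[measured_dofs, :]             # partition at the measured DOFs

# SEREP transformation: x_full ≈ Phi_full @ pinv(Phi_a) @ x_measured
T = Phi_full @ np.linalg.pinv(Phi_a)

# A "truth" response built from the retained modes, sampled at the measured DOFs only
q = rng.standard_normal(6)                     # modal coordinates
x_true = Phi_full @ q
x_expanded = T @ x_true[measured_dofs]

err = np.linalg.norm(x_expanded - x_true) / np.linalg.norm(x_true)
print(f"expansion error = {err:.2e}")  # near machine precision here; grows when the
                                       # true response contains modes outside the retained set
```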
To meet the challenges of low-carbon power generation, distributed energy resources (DERs) such as solar and wind power generators are now widely integrated into the power grid. Because of the autonomous nature of DERs, ensuring properly regulated output voltages of the individual sources to the grid distribution system poses a technical challenge to grid operators. Stochastic, model-free voltage regulation methods such as deep reinforcement learning (DRL) have proven effective in the regulation of DER output voltages; however, deriving an optimal voltage control policy using DRL over a large state space has a large computational time complexity. In this paper we illustrate a computationally efficient method for deriving an optimal voltage control policy using a parallelized DRL ensemble. Additionally, we illustrate the resiliency of the control ensemble when random noise is introduced by a cyber adversary.
We present optical metrology at the Sandia fog chamber facility. Repeatable and well characterized fogs are generated under different atmospheric conditions and applied for light transport model validation and computational sensing development.
The Spectral Physics Environment for Advanced Remote Sensing (SPEARS) application programming interface (API) is a Python-based, line-by-line, local thermal equilibrium (LTE) spectral modeling code that is optimized for simultaneously synthesizing optical spectra from any combination of fundamental spectroscopic databases. In this article, we contribute two novel spectral modeling techniques to the scientific literature. First, we describe how SPEARS integrates a physics-based collisional model for calculating pressure broadening in the absence of available broadening coefficients. With this collisional model implementation, a generalized approach to fundamental spectroscopic databases can be achieved across multiple databases. We also detail our adaptive grid mesh algorithm developed to make the code scalable for simulating large spectral bandwidths at high spectral fidelity using intuitive grid parameters. We present comparisons to other modeling tools and experiments, and provide a discussion of the SPEARS user interface.
Type 2 high-pressure hydrogen vessels for storage at hydrogen refueling stations are designed assuming a predefined operational pressure cycle and targeted autofrettage conditions. However, the resulting finite life depends significantly on variables associated with the autofrettage process and the pressure cycles actually realized during service, which often do not span the full design pressure range. Clear guidance for cycle counting is lacking; therefore, industry often defaults to counting every repressurization as a full-range pressure cycle, which is an overly conservative approach. Using in-service pressure cycles to predict the growth of cracks in operational pressure vessels results in significantly longer predicted life, since most in-service pressure cycles are only a fraction of the full design pressure range. Fatigue crack growth rates can vary widely for a given pressure range depending on the details of the residual strains imparted during the autofrettage process because of their influence on crack driving forces. Small changes in variables associated with the autofrettage process, e.g., the target autofrettage overburden pressure, can result in large changes in the residual stress profile, leading to possibly degraded fatigue life. In this paper, computational simulation was used for sensitivity studies to evaluate the effect of both operating conditions and autofrettage conditions on fatigue life for Type 2 high-pressure hydrogen vessels. The analysis in this paper explores these sensitivities, and the results are used to provide guidance on cycle counting. In particular, we identify the pressure cycle ranges that can be ignored over the life of the vessel as having negligible effect on fatigue life. This study also examines the sensitivity of design life to the autofrettage process and the impact on life if the targeted residual strain is not achieved during manufacturing.
This paper presents a type-IV wind turbine generator (WTG) model developed in MATLAB/Simulink. An aerodynamic model is used to improve an electromagnetic transient model, which is further developed by incorporating a single-mass model of the turbine and including generator torque control from the aerodynamic model. The model is validated using field data collected from an actual WTG located in the Scaled Wind Farm Technology (SWiFT) facility. The model takes the nacelle wind speed as an input. To ensure the model and the SWiFT WTG field data are compared accurately, the wind speed is estimated using a Kalman filter. Simulation results show that using a single-mass model instead of a two-mass model for aerodynamic torque, including the generator torque control from SWiFT, estimating the wind speed via the Kalman filter, and tuning the synchronous generator accurately reproduce the generator torque, speed, and power compared to the SWiFT WTG field data.
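As a toy illustration of wind speed estimation with a Kalman filter (a scalar random-walk filter with made-up signal and noise levels, not the paper's filter design), consider:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "true" wind speed and noisy nacelle anemometer measurements (m/s)
t = np.arange(0, 600, 1.0)
true_ws = 8.0 + 1.5 * np.sin(2 * np.pi * t / 300.0)
meas = true_ws + rng.normal(0, 0.8, t.size)

# Scalar random-walk Kalman filter: the state is the wind speed itself
Q, R = 1e-3, 0.8**2          # assumed process and measurement noise variances
x_hat, P = meas[0], 1.0
estimates = []
for z in meas:
    P = P + Q                        # predict
    K = P / (P + R)                  # Kalman gain
    x_hat = x_hat + K * (z - x_hat)  # update with the new measurement
    P = (1.0 - K) * P
    estimates.append(x_hat)

rmse = np.sqrt(np.mean((np.array(estimates) - true_ws) ** 2))
print(f"RMSE of filtered wind speed: {rmse:.2f} m/s (raw measurement std: "
      f"{np.std(meas - true_ws):.2f} m/s)")
```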
The penetration of renewable energy resources (RER) and energy storage systems (ESS) into the power grid has accelerated in recent times due to aggressive emission and RER penetration targets. The integrated resource planning (IRP) framework can help ensure long-term resource adequacy while satisfying RER integration and emission reduction targets in a cost-effective and reliable manner. In this paper, we present pIRP (probabilistic Integrated Resource Planning), an open-source Python-based software tool designed for optimal portfolio planning for an RER- and ESS-rich future grid and for addressing the capacity expansion problem. The tool, which is planned to be released publicly, combines ESS and RER modeling capabilities with enhanced uncertainty handling, making it one of the more advanced non-commercial IRP tools currently available. Additionally, the tool is equipped with an intuitive graphical user interface and expansive plotting capabilities. Impacts of uncertainties in the system are captured using Monte Carlo simulations, and the tool lets users analyze hundreds of scenarios with detailed scenario reports. A linear programming-based architecture is adopted that ensures sufficiently fast solution times while considering hundreds of scenarios and characterizing profile risks at varying RER and ESS penetration levels. Results for a test case using data from parts of the Eastern Interconnection are provided in this paper to demonstrate the capabilities offered by the tool.
This work presents an experimental investigation of the deformation and breakup of water drops behind conical shock waves. A conical shock is generated by firing a bullet at Mach 4.5 past a vertical column of drops with a mean initial diameter of 192 µm. The time-resolved drop position and maximum transverse dimension are characterized using backlit stereo videos taken at 500 kHz. A Reynolds-Averaged Navier Stokes (RANS) simulation of the bullet is used to estimate the gas density and velocity fields experienced by the drops. Classical correlations for breakup times derived from planar-shock/drop interactions are evaluated. Predicted drop breakup times are found to be in error by a factor of three or more, indicating that existing correlations are inadequate for predicting the response to the three-dimensional relaxation of the velocity and thermodynamic properties downstream of the conical shock. Next, the Taylor Analogy Breakup (TAB) model, which solves a transient equation for drop deformation, is evaluated. TAB predictions for drop diameter calculated using a dimensionless constant of C2 = 2, as compared to the accepted value of C2 = 2/3, are found to agree within the confidence bounds of the ensemble averaged experimental values for all drops studied. These results suggest the three-dimensional relaxation effects behind conical shock waves alter the drop response in comparison to a step change across a planar shock, and that future models describing the interaction between a drop and a non-planar shock wave should account for flow field variations.
Proceedings of SPIE - The International Society for Optical Engineering
Fredricksen, C.J.; Peale, R.E.; Dhakal, N.; Barrett, C.L.; Boykin II, O.; Maukonen, D.; Davis, L.; Ferarri, B.; Chernyak, L.; Zeidan, O.A.; Hawkins, Samuel D.; Klem, John F.; Krishna, Sanjay; Kazemi, Alireza; Schuler-Sandy, Ted
Effects of gamma and proton irradiation, and of forward bias minority carrier injection, on minority carrier diffusion and photoresponse were investigated for long-wave (LW) and mid-wave (MW) infrared detectors with engineered majority-carrier barriers. The LWIR detector was a type-II GaSb/InAs strained-layer superlattice pBiBn structure. The MWIR detector was an InAsSb/AlAsSb nBp structure without superlattices. Room temperature gamma irradiations degraded the minority carrier diffusion length of the LWIR structure, and minority carrier injections caused dramatic improvements, though there was little effect from either treatment on photoresponse. For the MWIR detector, effects of room temperature gamma irradiation and injection on minority carrier diffusion and photoresponse were negligible. Subsequently, both types of detectors were subjected to gamma irradiation at 77 K. In-situ photoresponse was unchanged for the LWIR detectors, while that for the MWIR ones decreased by 19% after a cumulative dose of ~500 krad(Si). Minority carrier injection had no effect on photoresponse for either. The LWIR detector was then subjected to 4 Mrad(Si) of 30 MeV proton irradiation at 77 K, and showed a 35% decrease in photoresponse, but again no effect from forward bias injection. These results suggest that photoresponse of the LWIR detectors is not limited by minority carrier diffusion.
This paper presents a novel approach for fault location and classification based on combining mathematical morphology (MM) with Random Forests (RF). The MM stage of the method is used to pre-process voltage and current data. Signal vector norms on the output signals of the MM stage are then used as the input features for an RF machine learning classifier and regressor. The data used as input for the proposed approach comprise only a window of 50 µs before and after the fault is detected. The proposed method is tested with noisy data from a small simulated system. The results show 100% accuracy for the classification task and prediction errors averaging ~13 m for the fault location task.
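To sketch how morphological pre-processing can feed a Random Forest (the window construction, structuring-element size, feature choices, and synthetic signals below are all hypothetical stand-ins, not the paper's pipeline):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def mm_features(window, size=5):
    """Morphological gradient of a voltage/current window plus simple vector norms."""
    grad = grey_dilation(window, size=size) - grey_erosion(window, size=size)
    return [np.linalg.norm(grad, 1), np.linalg.norm(grad, 2), np.max(np.abs(grad))]

def make_window(has_fault):
    """Crude synthetic waveform window; a decaying step stands in for a fault transient."""
    w = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.02 * rng.standard_normal(100)
    if has_fault:
        w[50:] += 0.5 * np.exp(-np.arange(50) / 10.0)
    return w

X = np.array([mm_features(make_window(k % 2)) for k in range(200)])
y = np.array([k % 2 for k in range(200)])          # 0 = no fault, 1 = fault

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

A parallel RF regressor trained on the same features would play the role of the fault-location estimator.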
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power loss due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients are greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash test or nameplate short circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle-of-incidence, and irradiance nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance as ~3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
Proceedings - 2022 IEEE International Symposium on Software Reliability Engineering Workshops, ISSREW 2022
Ketterer, Austin; Shekar, Asha; Yi, Edgardo B.; Bagchi, Saurabh; Clements, Abraham A.
Firmware emulation is useful for finding vulnerabilities, performing debugging, and testing functionality. However, the process of enabling firmware to execute in an emulator (i.e., re-hosting) is difficult. Each piece of the firmware may depend on hardware peripherals outside the microcontroller that are inaccessible during emulation. Current practices involve painstakingly disentangling these dependencies or replacing them with developed models that emulate functions interacting with hardware. Unfortunately, both are highly manual and error-prone. In this paper, we introduce a systematic graph-based approach to analyze firmware binaries and determine which functions need to be replaced. Our approach is customizable to balance the fidelity of the emulation and the amount of effort it would take to achieve the emulation by modeling functions. We run our algorithm across a number of firmware binaries and show its ability to capture and remove a large majority of hardware dependencies.
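To illustrate the flavor of a graph-based dependency analysis (with a hypothetical toy call graph and function names, not the paper's algorithm or any real binary), the following sketch marks every function whose call tree reaches a hardware access; choosing where to cut within that set is the fidelity-versus-effort trade-off described above:

```python
import networkx as nx

# Hypothetical call graph recovered from a firmware binary: an edge u -> v
# means function u calls function v.
calls = [("main", "init_board"), ("main", "app_loop"),
         ("init_board", "uart_init"), ("uart_init", "mmio_write"),
         ("app_loop", "read_sensor"), ("read_sensor", "mmio_read"),
         ("app_loop", "compute"), ("compute", "memcpy")]
G = nx.DiGraph(calls)

# Functions that touch hardware directly (e.g., MMIO accesses found in the binary)
hw_accessors = {"mmio_write", "mmio_read"}

# Any function whose call tree reaches a hardware accessor depends on peripherals
hw_dependent = {f for f in G if any(nx.has_path(G, f, h) for h in hw_accessors)}
print("hardware-dependent functions:", sorted(hw_dependent - hw_accessors))
```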
The Arroyo Seco Improvement Program (ASIP) is intended to provide active channel improvements and stream zone management activities that will reduce current flood and erosion risk while providing additional and improved habitat for critical species that may use the Arroyo Seco at the United States Department of Energy (DOE), Sandia National Laboratories, California (SNL/CA) location. The objectives of the ASIP are to correct existing channel stability problems associated with existing arroyo structures (i.e., bridges, security grates, utility crossings, and drain structures); correct bank erosion and provide protection against future erosion; reduce the risk of future flooding; and provide habitat improvement and creation of a mitigation credit for site development and management activities.
Communication-assisted adaptive protection can improve the speed and selectivity of the protection system. However, in the event that communication from the centralized adaptive protection system to the relays is disrupted, predicting the local relay protection settings is a viable alternative. This work evaluates the potential for machine learning to overcome these challenges by using the Prophet algorithm programmed into each relay to individually predict the time-dial (TDS) and pickup current (IPICKUP) settings. A modified IEEE 123 feeder was used to generate the data needed to train and test the Prophet algorithm to individually predict the TDS and IPICKUP settings. The models were evaluated using the mean absolute percentage error (MAPE) and the root mean squared error (RMSE) as metrics. The results show that the algorithms could accurately predict the IPICKUP setting with an average MAPE accuracy of 99.961%, and the TDS setting with an average MAPE accuracy of 94.32%, which is sufficient for protection parameter prediction.
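A minimal sketch of this kind of setting forecast with the open-source Prophet library (the history below is a made-up hourly IPICKUP series, not the IEEE 123 feeder data, and assumes the `prophet` package is installed):

```python
import pandas as pd
from prophet import Prophet   # assumes the `prophet` package is installed

# Hypothetical history of a relay's pickup-current setting, hourly, with a daily pattern;
# in the paper's scheme each relay would keep its own history of settings received
# from the centralized adaptive protection system.
hours = pd.date_range("2023-01-01", periods=24 * 30, freq="H")
history = pd.DataFrame({
    "ds": hours,
    "y": [110.0 if 8 <= h.hour <= 20 else 100.0 for h in hours],  # made-up daytime/nighttime settings
})

m = Prophet(daily_seasonality=True)
m.fit(history)
future = m.make_future_dataframe(periods=24, freq="H")
forecast = m.predict(future)

# Predicted IPICKUP values the relay could fall back on if communication is lost
print(forecast[["ds", "yhat"]].tail(24))
```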
This paper describes how the performance of motion primitive-based planning algorithms can be improved using reinforcement learning. Specifically, we describe and evaluate a framework that autonomously improves the performance of a primitive-based motion planner. The improvement process consists of three phases: exploration, extraction, and reward updates. This process can be iterated continuously to provide successive improvement. The exploration step generates new trajectories, and the extraction step identifies new primitives from these trajectories. These primitives are then used to update rewards for continued exploration. This framework required novel shaping rewards, development of a primitive extraction algorithm, and modification of the Hybrid A* algorithm. The framework is tested on a navigation task using a nonlinear F-16 model. The framework autonomously added 91 motion primitives to the primitive library and reduced average path cost by 21.6 s, or 35.75% of the original cost. The learned primitives are applied to an obstacle field navigation task, which was not used in training, and reduced path cost by 16.3 s, or 24.1%. Additionally, two heuristics for the modified Hybrid A* algorithm are designed to improve the effective branching factor.
Multi-model Monte Carlo methods have been shown to be an efficient and accurate alternative to standard Monte Carlo (MC) in the model-based propagation of uncertainty in entry, descent, and landing (EDL) applications. These multi-model MC methods fuse predictions from low-fidelity models with the high-fidelity EDL model of interest to produce unbiased statistics at a fraction of the computational cost. The accuracy and efficiency of the multi-model MC methods depend upon the magnitude of the correlations of the low-fidelity models with the high-fidelity model, but also upon the correlation amongst the low-fidelity models and their relative computational cost. Because of this layer of complexity, the question of how to optimally select the set of low-fidelity models has remained open. In this work, methods for optimal model construction and tuning are investigated as a means to increase the speed and precision of trajectory simulation for EDL. Specifically, the focus is on the inclusion of low-fidelity model tuning within the sample allocation optimization that accompanies multi-model MC methods. Results indicate that low-fidelity model tuning can significantly improve the efficiency and precision of trajectory simulations and provide an increased edge to multi-model MC methods when compared to standard MC.
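The fusion of low- and high-fidelity predictions can be illustrated with a two-model control-variate sketch (the analytic stand-in models, sample sizes, and input distribution are hypothetical, not an EDL simulation or the paper's allocation scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for EDL models: f_hi plays the expensive high-fidelity trajectory model,
# f_lo a cheap, correlated low-fidelity model (both hypothetical analytic functions).
f_hi = lambda x: np.sin(x) + 0.05 * x**2
f_lo = lambda x: np.sin(x)

N_hi, N_lo = 100, 10_000                 # few expensive runs, many cheap runs
x_hi = rng.normal(0.0, 1.0, N_hi)        # samples of the uncertain input
x_lo = rng.normal(0.0, 1.0, N_lo)

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)                 # low-fidelity model evaluated on the same inputs
y_lo = f_lo(x_lo)

# Control-variate weight estimated from the paired evaluations
C = np.cov(y_hi, y_lo_paired)
alpha = C[0, 1] / C[1, 1]

# Multi-model estimate: high-fidelity mean corrected by the low-fidelity discrepancy
estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo_paired.mean())
print(f"standard MC with {N_hi} runs: {y_hi.mean():.4f}")
print(f"multi-model estimate:        {estimate:.4f}")
```

The variance reduction grows with the correlation between the two models, which is why both model selection and low-fidelity tuning matter.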
The effects of passive pre-chamber (PC) geometry and nozzle pattern, as well as the use of either a conventional spark or a non-equilibrium plasma PC ignition system, on knocking events were studied in an optically accessible single-cylinder gasoline research engine. The equivalence ratio of the charge in the main chamber (MC) was maintained at 0.94 at a constant engine speed of 1300 rpm and a constant engine load of 3.5 bar indicated mean effective pressure for all operating conditions. MC pressure profiles were collected and analyzed to infer the amplitude and the frequency of pressure oscillations that resulted in knocking events. The combustion process in the MC was investigated utilizing high-speed excited methylidyne radical (CH*) chemiluminescence images. The collected results highlighted that PC volume and nozzle pattern substantially affected the knock intensity (KI), while the non-equilibrium plasma ignition system exhibited lower KI compared to the PC equipped with a conventional inductive ignition system. It was also identified that knocking events were likely not generated by conventional end-gas auto-ignition, but by jet-related phenomena as well as jet-flame wall quenching. The relation between these phenomena and PC geometry, nozzle pattern, and ignition system has also been highlighted and discussed.
Variable energy resources (VERs) like wind and solar are the future of electricity generation as we gradually phase out fossil fuels due to environmental concerns. Nations across the globe are also making significant strides in integrating VERs into their power grids as we strive toward a greener future. However, integration of VERs leads to several challenges due to their variable nature and low inertia characteristics. In this paper, we discuss the hurdles faced by the power grid due to high penetration of wind power generation and how energy storage systems (ESSs) can be used at the grid level to overcome these hurdles. We propose a new planning strategy by which ESSs can be sized appropriately to provide inertial support as well as aid in variability mitigation, thus minimizing load curtailment. A probabilistic framework is developed for this purpose, which takes into consideration the outage of generators and the replacement of conventional units with wind farms. Wind speed is modeled using an autoregressive moving average technique. The efficacy of the proposed methodology is demonstrated on the WSCC 9-bus test system.
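As a concrete illustration of the wind-speed modeling step mentioned above, the sketch below fits an autoregressive moving average model to an hourly wind-speed series and simulates a synthetic trace. The ARMA(2,1) order, the synthetic input series, and the statsmodels-based workflow are illustrative assumptions, not the parameters used in the paper.

```python
# Illustrative sketch only: fit an ARMA model to a wind-speed series and
# simulate synthetic traces for probabilistic adequacy studies.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = 24 * 365
# Stand-in for measured hourly wind speed (m/s) with a crude diurnal cycle.
wind_speed = 8.0 + 2.0 * np.sin(2 * np.pi * np.arange(hours) / 24) \
             + rng.normal(0.0, 1.5, hours)

model = ARIMA(wind_speed, order=(2, 0, 1)).fit()   # ARMA(2,1) with a constant
synthetic = model.simulate(nsimulations=hours)     # one synthetic year of wind speed
print(model.params)
```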
Coupled Mode Theory (CMT) is a classic model that addresses many diverse problems in physics and electrical engineering. Although CMT is well-established in areas such as filter and directional coupler design, its significance in antenna engineering is less well-recognized. Recently, Characteristic Mode Analysis has shown how CMT quantitatively models a wide variety of broadband, multi-mode patch structures. In this paper, we review these results and discuss the similarities and differences between these radiators through the lens of CMT. With these principles in mind, multi-mode radiators may be designed in a straightforward manner and new ideas for broadband, multi-mode patches are stimulated.
This work evaluates the benefits of using a 'smart' network interface card (SmartNIC) as a compute accelerator for the example of the MiniMD molecular dynamics proxy application. The accelerator is NVIDIA's BlueField-2 card, which includes an 8-core Arm processor along with a small amount of DRAM and storage. We test the networking and data movement performance of these cards compared to a standard Intel server host using microbenchmarks and MiniMD. In MiniMD, we identify two distinct classes of computation, namely core computation and maintenance computation, which are executed in sequence. We restructure the algorithm and code to weaken this dependence and increase task parallelism, thereby making it possible to increase utilization of the BlueField-2 concurrently with the host. We evaluate our implementation on a cluster consisting of 16 dual-socket Intel Broadwell host nodes with one BlueField-2 per host-node. Our results show that while the overall compute performance of BlueField-2 is limited, using them with a modified MiniMD algorithm allows for up to 20% speedup over the host CPU baseline with no loss in simulation accuracy.
A high-throughput experimental setup was used to characterize initiation threshold and growth to detonation in the explosives hexanitrostilbene (HNS) and pentaerythritol tetranitrate (PETN). The experiment sequentially launched an array of laser-driven flyers to shock samples arranged in a 96-well microplate geometry, with photonic Doppler velocimetry diagnostics to characterize flyer velocity and particle velocity at the explosive-substrate interface. Vapor-deposited films of HNS and PETN were used to provide numerous samples with various thicknesses, enabling characterization of the evolution of growth to detonation. One-dimensional hydrocode simulations were performed with reactions disabled to illustrate where the experimental data deviate from the predicted inert response. Prompt initiation was observed in 144 μm thick HNS films at flyer velocities near 3000 m/s and in 125 μm thick PETN films at flyer velocities near 2400 m/s. This experimental setup enables rapid quantification of the growth of reactions in explosive materials that can reach detonation at sub-millimeter length scales. These data can subsequently be used for parameterizing reactive burn models in hydrocode simulations, as discussed in Paper II [D. E. Kittell, R. Knepper, and A. S. Tappan, J. Appl. Phys. 131, 154902 (2022)].
Synthetic Aperture Radar (SAR) projects a 3-D scene's reflectivity into a 2-D image. In doing so, it generally focuses the image to a surface, usually a ground plane. Consequently, scatterers above or below the focal/ground plane typically exhibit some degree of distortion, manifesting as geometric distortion and misfocusing or smearing. Limits to acceptable misfocusing define a Height of Focus (HOF), analogous to Depth of Field in optical systems. Misfocusing may be exacerbated by the radar's flightpath during the synthetic aperture data collection, and HOF is highly dependent on that flightpath. Some flightpaths, such as straight and level flight, have very large HOF limits. Other flightpaths, especially those exhibiting large out-of-plane motion, have very small HOF limits, perhaps even small fractions of a meter. This paper explores the impact of various flightpaths on HOF and discusses the conditions for increasing or decreasing HOF. We note also that HOF might be exploited for target height estimation, and we offer insight into other height estimation techniques.
In March 2021, a functional area drill was held at the Remote Sensing Laboratory–Nellis that focused on using CBRNResponder and the Digital Field Monitoring (DFM) tablets for sample hotline operations and the new paper Sample Control Forms (SCFs) for sample collection. Participants included staff trained and billeted as sample control specialists and Consequence Management Response Team (CMRT) field monitoring personnel. Teams were able to successfully gather and transfer samples to the sample control hotline staff through the manual process, though there were several noted areas for improvement. In July and October 2021, two additional functional area drills were held at Sandia National Laboratories that focused on field sample collection and custody transfer at the sample control hotline for the Consequence Management (CM) Radiological Assistance Program (RAP). The overarching goal of the drills was to evaluate the current CM process for sample collection, sample drop off, and sample control using the CBRNResponder mobile and web-based applications. The July 2021 drill had an additional focus of having a subset of samples analyzed by the local analytical laboratory, the Radiation Protection Sample Diagnostics (RPSD) laboratory, to evaluate the Laboratory Access portal on CBRNResponder. All three drills accomplished their objectives; however, several issues were noted (Observations: 25 Urgent, 29 Important, and 22 Improvement Opportunities). The observations were prioritized according to their impact on the mission as well as categorized to align with the programmatic functional area required to address the issue. This report provides additional detail on each observation for skillset/program leads and software developers to consider for future improvement or mandatory efforts.
This paper applies sensitivity and uncertainty analysis to compare two model alternatives for fuel matrix degradation for performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of this choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and differences in epistemic uncertainty between the alternatives.
This paper describes an efficient reverse-mode differentiation algorithm for contraction operations for arbitrary and unconventional tensor network topologies. The approach leverages the tensor contraction tree of Evenbly and Pfeifer (2014), which provides an instruction set for the contraction sequence of a network. We show that this tree can be efficiently leveraged for differentiation of a full tensor network contraction using a recursive scheme that exploits (1) the bilinear property of contraction and (2) the property that trees have a single path from root to leaves. While differentiation of tensor-tensor contraction is already possible in most automatic differentiation packages, we show that exploiting these two additional properties in the specific context of contraction sequences can improve efficiency. Following a description of the algorithm and a computational complexity analysis, we investigate its utility for gradient-based supervised learning for low-rank function recovery and for fitting real-world unstructured datasets. We demonstrate improved performance over alternating least-squares optimization approaches and the capability to handle heterogeneous and arbitrary tensor network formats. When compared to alternating minimization algorithms, we find that the gradient-based approach requires a smaller oversampling ratio (the number of samples relative to the number of model parameters) for recovery. This increased efficiency extends to fitting unstructured data of varying dimensionality and when employing a variety of tensor network formats. Here, we show improved learning using the hierarchical Tucker method over the tensor-train in high-dimensional settings on a number of benchmark problems.
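The bilinear property exploited by the recursive scheme can be illustrated on a single pairwise contraction: the cotangent of one operand is obtained by contracting the upstream cotangent with the other operand. The sketch below is a minimal illustration of that property with NumPy einsum; the tensor shapes and index labels are arbitrary assumptions, and it is not the paper's tree-based algorithm.

```python
# Minimal sketch of the bilinear property of contraction used in reverse mode:
# for C = contract(A, B), the pullback onto A contracts dL/dC with B (and
# symmetrically for B).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

C = np.einsum("ij,jk->ik", A, B)           # forward contraction
dL_dC = np.ones_like(C)                    # upstream cotangent of L = sum(C)

dL_dA = np.einsum("ik,jk->ij", dL_dC, B)   # reverse-mode pullback onto A
dL_dB = np.einsum("ij,ik->jk", A, dL_dC)   # reverse-mode pullback onto B

# Check against the analytic gradients of L = sum(A @ B).
assert np.allclose(dL_dA, np.ones((3, 5)) @ B.T)
assert np.allclose(dL_dB, A.T @ np.ones((3, 5)))
```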
Intracellular transport by kinesin motors moving along their associated cytoskeletal filaments, microtubules, is essential to many biological processes. This active transport system can be reconstituted in vitro with the surface-adhered motors transporting the microtubules across a planar surface. In this geometry, the kinesin-microtubule system has been used to study active self-assembly, to power microdevices, and to perform analyte detection. Fundamental to these applications is the ability to characterize the interactions between the surface tethered motors and microtubules. Fluorescence Interference Contrast (FLIC) microscopy can illuminate the height of the microtubule above a surface, which, at sufficiently low surface densities of kinesin, also reveals the number, locations, and dynamics of the bound motors.
This article presents a notable advance toward the development of a new method of increasing the single-axis tracking photovoltaic (PV) system power output by improving the determination and near-term prediction of the optimum module tilt angle. The tilt angle of the plane receiving the greatest total irradiance changes with Sun position and atmospheric conditions including cloud formation and movement, aerosols, and particulate loading, as well as varying albedo within a module's field of view. In this article, we present a multi-input convolutional neural network that can create a profile of plane-of-array irradiance versus surface tilt angle over a full 180° arc from horizon to horizon. As input, the neural network uses the calculated solar position and clear-sky irradiance values, along with sky images. The target irradiance values are provided by the multiplanar irradiance sensor (MPIS). In order to account for varying irradiance conditions, the MPIS signal is normalized by the theoretical clear-sky global horizontal irradiance. Using this information, the neural network outputs an N-dimensional vector, where N is the number of points used to approximate the MPIS curve via Fourier resampling. The output vector of the model is smoothed with a Gaussian kernel to account for error in the downsampling and subsequent upsampling steps, as well as to smooth the unconstrained output of the model. These profiles may be used to perform near-term prediction of angular irradiance, which can then inform the movement of a PV tracker.
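A minimal sketch of the output post-processing step described above follows: the network's N-dimensional tilt-angle irradiance profile is smoothed with a Gaussian kernel before the optimum tilt is selected. The profile length, angle grid, smoothing width, and stand-in model output are assumptions for illustration only.

```python
# Sketch: smooth an N-dimensional irradiance-vs-tilt profile with a Gaussian
# kernel and pick a candidate tracker setpoint.
import numpy as np
from scipy.ndimage import gaussian_filter1d

tilt_angles = np.linspace(-90.0, 90.0, 37)              # horizon to horizon (assumed grid)
raw_profile = np.random.default_rng(2).random(37)       # stand-in for the network output
smoothed_profile = gaussian_filter1d(raw_profile, sigma=1.5)

best_tilt = tilt_angles[np.argmax(smoothed_profile)]    # candidate optimum module tilt
print(f"optimum tilt estimate: {best_tilt:.1f} degrees")
```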
Hu, Xuan; Walker, Benjamin W.; Garcia-Sanchez, Felipe; Edwards, Alexander J.; Zhou, Peng; Incorvia, Jean A.C.; Paler, Alexandru; Frank, Michael P.; Friedman, Joseph S.
Magnetic skyrmions are nanoscale whirls of magnetism that can be propagated with electrical currents. The repulsion between skyrmions inspires their use for reversible computing based on the elastic billiard ball collisions proposed for conservative logic in 1982. In this letter, we evaluate the logical and physical reversibility of this skyrmion logic paradigm, as well as the limitations that must be addressed before dissipation-free computation can be realized.
Conference Proceedings of the Society for Experimental Mechanics Series
Singh, Aabhas; Wielgus, Kayla M.; Dimino, Ignazio; Kuether, Robert J.; Allen, Matthew S.
Morphing wings have great potential to dramatically improve the efficiency of future generations of aircraft and to reduce noise and emissions. Among many camber morphing wing concepts, shape changing fingerlike mechanisms consist of components, such as torsion bars, bushings, bearings, and joints, all of which exhibit damping and stiffness nonlinearities that are dependent on excitation amplitude. These nonlinearities make the dynamic response difficult to model accurately with traditional simulation approaches. As a result, at high excitation levels, linear finite element models may be inaccurate, and a nonlinear modeling approach is required to capture the necessary physics. This work seeks to better understand the influence of nonlinearity on the effective damping and natural frequency of the morphing wing through the use of quasi-static modal analysis and model reduction techniques that employ multipoint constraints (i.e., spider elements). With over 500,000 elements and 39 frictional contact surfaces, this represents one of the most complicated models to which these methods have been applied to date. The results to date are summarized and lessons learned are highlighted.
For the resiliency of both small and large distribution systems, the concept of microgrids is arising. The ability for sections of the distribution system to be 'self-sufficient' and operate under their own energy generation is a desirable concept. This would allow only small sections of the system to be without power after being affected by abnormal events such as a fault or a natural disaster, and allow a greater number of consumers to go about their lives as normal. Research is needed to determine how different forms of generation will perform in a microgrid, as well as how to properly protect an islanded system. While synchronous generators are well understood and generally accepted amongst utility operators, inverter-based resources (IBRs) are less common. An IBR's fault characteristic varies between manufacturers and is heavily based on the internal control scheme. Additionally, because of internal protections that prevent damage to the switching components, IBRs are usually limited to only 1.1-2.5 p.u. of rated current, depending on the technology. This results in traditional protection methods such as overcurrent devices being unable to 'trip' in a microgrid with high IBR penetration. Moreover, grid-following inverters (commonly used for photovoltaic systems) require a voltage source to synchronize with before operating. Also, these inverters do not provide any inertia to a system. On the other hand, grid-forming inverters can operate as a primary voltage source and provide an 'emulated inertia' to the system. This study will look at a small islanded system with a grid-forming inverter and a grid-following inverter subjected to a line-to-ground fault.
Measurements that occur within the internal layers of a quantum circuit—midcircuit measurements—are a useful quantum-computing primitive, most notably for quantum error correction. Midcircuit measurements have both classical and quantum outputs, so they can be subject to error modes that do not exist for measurements that terminate quantum circuits. Here we show how to characterize midcircuit measurements, modeled by quantum instruments, using a technique that we call quantum instrument linear gate set tomography (QILGST). We then apply this technique to characterize a dispersive measurement on a superconducting transmon qubit within a multiqubit system. By varying the delay time between the measurement pulse and subsequent gates, we explore the impact of residual cavity photon population on measurement error. QILGST can resolve different error modes and quantify the total error from a measurement; in our experiment, for delay times above 1000ns we measure a total error rate (i.e., half diamond distance) of ϵ⋄=8.1±1.4%, a readout fidelity of 97.0±0.3%, and output quantum-state fidelities of 96.7±0.6% and 93.7±0.7% when measuring 0 and 1, respectively.
Frequent changes in penetration levels of distributed energy resources (DERs) and grid control objectives have caused the maintenance of accurate and reliable grid models for behind-the-meter (BTM) photovoltaic (PV) system impact studies to become an increasingly challenging task. At the same time, high adoption rates of advanced metering infrastructure (AMI) devices have improved load modeling techniques and have enabled the application of machine learning algorithms to a wide variety of model calibration tasks. Therefore, we propose that these algorithms can be applied to improve the quality of the input data and grid models used for PV impact studies. In this paper, these potential improvements were assessed for their ability to improve the accuracy of locational BTM PV hosting capacity analysis (HCA). Specifically, the voltage- and thermal-constrained hosting capacities of every customer location on a distribution feeder (1,379 in total) were calculated every 15 minutes for an entire year before and after each calibration algorithm or load modeling technique was applied. Overall, the HCA results were found to be highly sensitive to the various modeling deficiencies under investigation, illustrating the opportunity for more data-centric/model-free approaches to PV impact studies.
Large scale non-intrusive inspection (NII) of commercial vehicles is being adopted in the U.S. at a pace and scale that will result in a commensurate growth in adjudication burdens at land ports of entry. The use of computer vision and machine learning models to augment human operator capabilities is critical in this sector to ensure the flow of commerce and to maintain efficient and reliable security operations. The development of models for this scale and speed requires novel approaches to object detection and novel adjudication pipelines. Here we propose a notional combination of existing object detection tools using a novel ensembling framework to demonstrate the potential for hierarchical and recursive operations. Further, we explore the combination of object detection with image similarity as an adjacent capability to provide post-hoc oversight to the detection framework. The experiments described herein, while notional and intended for illustrative purposes, demonstrate that the judicious combination of diverse algorithms can result in a resilient workflow for the NII environment.
This paper demonstrates that a faster Automatic Generation Control (AGC) response provided by Inverter-Based Resources (IBRs) can improve a performance-based regulation (PBR) metric. The improvement in performance has a direct effect on operational income. The PBR metric used in this work was obtained from a California ISO (CAISO) example and is fully described herein. A single generator in a modified three area IEEE 39 bus system was replaced with a group of co-located IBRs to present possible responses using different plant controls and variable resource conditions. We show how a group of IBRs that rely on variable resources may negatively affect the described PBR metric of all connected areas if adequate plant control is not employed. However, increasing the dispatch rate of internal plant controls may positively affect the PBR metric of all connected areas despite variable resource conditions.
This chapter focuses on explosives-based threats, the challenges they present, and various means by which these challenges can be overcome. It begins with an introduction to explosive threats, detailing statistics regarding their use and some overarching challenges associated with properly mitigating the risks they present, before delving deeper into different areas of response by government agencies. These response areas are broadly categorized as deter, prevent, detect, delay/protect, and respond/analyze. Deterrence refers to trying to discourage people from becoming malefactors, with a focus on anti-radicalization programs and ways by which people can be dissuaded from joining extremist movements. The section on prevention discusses means by which access to explosive precursor materials and information can be controlled, with a focus on policies and regulations. This includes examples of current regulations, discussion of why specific chemicals are on controlled chemicals lists, and information campaigns to raise awareness of IED threats. The following section gives a brief understanding of the important aspects to consider in detection and describes different explosives detection methods used. Approaches to delaying the use or impact of an explosive threat, as well as those that provide some form of protection against the effects of an explosive threat, are then described. Lastly, current approaches to responding to explosive threats, either before or after detonation, and the importance of analysis are discussed before summarizing the chapter and providing a near-future outlook.
For the model-based control of low-voltage microgrids, state and parameter information is required. Different optimal estimation techniques can be employed for this purpose. However, these estimation techniques require knowledge of the noise covariances (process and measurement noise). Incorrect values of the noise covariances can deteriorate the estimator performance, which in turn can reduce the overall controller performance. This paper presents a method to identify noise covariances for voltage dynamics estimation in a microgrid. The method is based on the autocovariance least squares technique. A simulation study of a simplified 100 kVA, 208 V microgrid system in MATLAB/Simulink validates the method. Results show that the estimates are close to the actual values for Gaussian noise, while non-Gaussian noise yields slightly larger errors.
Many methods have been suggested to choose between distributions. Relatively little study has examined whether these methods accurately recover the distributions being studied. Hence, this research compares several popular distribution selection methods through a Monte Carlo simulation study and identifies which are robust for several types of discrete probability distributions. In addition, we study whether it matters that the distribution selection method does not accurately pick the correct probability distribution by calculating the expected distance, which is the amount of information lost for each distribution selection method compared to the generating probability distribution.
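The sketch below illustrates the kind of Monte Carlo study described above: data are drawn from a known discrete distribution, two candidate models are fit, and the recovery rate of the generating distribution is tallied. The use of AIC as the selection rule, the Poisson/geometric candidate pair, and the sample sizes are assumptions chosen for illustration, not the set of methods compared in the paper.

```python
# Hedged sketch: how often does a selection rule (here AIC) recover the
# generating discrete distribution (here Poisson) over repeated trials?
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_trials, n_samples, lam = 500, 100, 4.0
correct = 0
for _ in range(n_trials):
    x = rng.poisson(lam, n_samples)
    # Poisson fit (1 parameter): MLE of the mean.
    ll_pois = stats.poisson.logpmf(x, x.mean()).sum()
    aic_pois = 2 * 1 - 2 * ll_pois
    # Geometric fit (1 parameter): support shifted so counts start at 0.
    p_hat = 1.0 / (x.mean() + 1.0)
    ll_geom = stats.geom.logpmf(x + 1, p_hat).sum()
    aic_geom = 2 * 1 - 2 * ll_geom
    correct += aic_pois < aic_geom
print(f"recovery rate of the generating distribution: {correct / n_trials:.2f}")
```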
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support to the system has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLIs) (most commonly photovoltaic inverters), it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs) (used for energy storage systems, standalone systems, and as uninterruptable power supplies), these requirements are either not yet documented or require a more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desired. With the proper control schemes, a GFMI can help maintain grid stability through fast response compared to rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviations, as well as the GFMI operating as a standalone system subjected to various changes in loads.
Geothermal energy has been underutilized in the U.S., primarily due to the high cost of drilling in the harsh environments encountered during the development of geothermal resources. Drilling depths can approach 5,000 m with temperatures reaching 170 C. In situ geothermal fluids are up to ten times more saline than seawater and highly corrosive, and hard rock formations often exceed 240 MPa compressive strength. This combination of extreme conditions pushes the limits of most conventional drilling equipment. Furthermore, enhanced geothermal systems are expected to reach depths of 10,000 m and temperatures more than 300 °C. To address these drilling challenges, Sandia developed a proof-of-concept tool called the auto indexer under an annual operating plan task funded by the Geothermal Technologies Program (GTP) of the U.S. Department of Energy Geothermal Technologies Office. The auto indexer is a relatively simple, elastomer-free motor that was shown previously to be compatible with pneumatic hammers in bench-top testing. Pneumatic hammers can improve penetration rates and potentially reduce drilling costs when deployed in appropriate conditions. The current effort, also funded by DOE GTP, increased the technology readiness level of the auto indexer, producing a scaled prototype for drilling larger diameter boreholes using pneumatic hammers. The results presented herein include design details, modeling and simulation results, and testing results, as well as background on percussive hammers and downhole rotation.
2022 IEEE Power and Energy Conference at Illinois, PECI 2022
Weaver, Wayne W.; Robinett, Rush D.; Wilson, David G.; Matthews, Ronald C.
The world's oceans hold a tremendous amount of energy and are a promising resource of renewable energy. Wave Energy Converters (WECs) are a technology being developed to extract the energy from the ocean efficiently and economically. The main components of a WEC include a buoy, an electric machine, an energy storage system, and a connection to the onshore grid. Since the absorption of the energy in the ocean's waves is a complex hydrodynamic process, a power-take-off (PTO) mechanism must be used to convert the mechanical motion of the buoy into usable electric energy. This conversion can be done by using a rack-and-pinion gear system to transform the linear velocity of the buoy into a rotational velocity that is used to turn the electric machine. To extract the most energy from the ocean waves, a controller must be implemented on the electric machine to make the buoy resonate with the frequency of the waves. For irregular wave climates, a multi-resonance controller can be utilized to resonate with the wave spectrum and optimize the power output of the WEC.
Software is ubiquitous in society, but understanding it, especially without access to source code, is both non-trivial and critical to security. A specialized group of cyber defenders conducts reverse engineering (RE) to analyze software. The expertise-driven process of software RE is not well understood, especially from the perspective of workflows and automated tools. We conducted a task analysis to explore the cognitive processes that analysts follow when using static techniques on binary code. Experienced analysts were asked to statically find a vulnerability in a small binary that could allow for unverified access to root privileges. Results show a highly iterative process with commonly used cognitive states across participants of varying expertise, but little standardization in process order and structure. A goal-centered analysis offers a different perspective about dominant RE states. We discuss implications about the nature of RE expertise and opportunities for new automation to assist analysts using static techniques.
As presented above, because similar existing DOE-managed SNF (DSNF) from previous reactors has been evaluated for disposal pathways, we use this knowledge/experience as a broad reference point for initial technical bases for preliminary dispositioning of potential AR SNF. The strategy for developing fully-formed gap analyses for AR SNF entails the primary step of first obtaining all the defining characteristics of the AR SNF waste stream from the AR developers. Utilizing specific and accurate information/data for developing the potential disposal inventory to be evaluated is a key starting principle for success. Once the AR SNF waste streams are defined, the initial assessments would be based on comparison to appropriate existing SNF/waste forms previously analyzed (prior experience) to make a determination on the feasibility of direct disposal, or the need for further evaluation due to differences specific to the AR SNF. Assessments of criticality potential and controls would also be performed to assess any R&D gaps to be addressed in that regard as well. Although some AR SNF may need additional treatment for waste form development, these aspects may also be constrained and evaluated within the context of disposal options, including detailed gap analysis to identify further R&D activities to close the gaps.
We computationally explore the optical and elastic modes necessary for acoustoelectrically enhanced Brillouin interactions. The large simulated piezoelectric (k2 ≈ 6%) and optomechanical (|g0| ≈ 8000 (rad/s)√m) coupling theoretically predicts a performance enhancement of several orders of magnitude in Brillouin-based photonic technologies.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
In transit visualization offers a desirable approach to performing in situ visualization by decoupling the simulation and visualization components. This decoupling requires that the data be transferred from the simulation to the visualization, which is typically done using some form of aggregation and redistribution. As the data distribution is adjusted to match the visualization’s parallelism during redistribution, the data transport layer must have knowledge of the input data structures to partition or merge them. In this chapter, we will discuss an alternative approach suitable for quickly integrating in transit visualization into simulations without incurring significant overhead or aggregation cost. Our approach adopts an abstract view of the input simulation data and works only on regions of space owned by the simulation ranks, which are sent to visualization clients on demand.
We propose the use of balanced iterative reducing and clustering using hierarchies (BIRCH) combined with linear regression to predict the reduced Young's modulus and hardness of highly heterogeneous materials from a set of nanoindentation experiments. We first use BIRCH to cluster the dataset according to its mineral compositions, which are derived from the spectral matching of energy-dispersive spectroscopy data through the modular automated processing system (MAPS) platform. We observe that grouping our dataset into five clusters yields the best accuracy as well as a reasonable representation of mineralogy in each cluster. Subsequently, we test four types of regression models, namely linear regression, support vector regression, Gaussian process regression, and extreme gradient boosting regression. The linear regression and Gaussian process regression provide the most accurate predictions, and the proposed framework yields R2 = 0.93 for the test set. Although a more comprehensive study is needed, our results show that machine learning methods such as linear regression or Gaussian process regression can be used to accurately estimate mechanical properties with an appropriate number of clusters based on compositional data.
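A minimal sketch of the cluster-then-regress pipeline described above follows, using scikit-learn's Birch and LinearRegression on synthetic stand-in data. The feature dimensions, coefficients, and number of samples are assumptions; only the overall structure (BIRCH clustering by composition, one linear regression per cluster) mirrors the abstract.

```python
# Sketch: cluster indents by composition with BIRCH, then fit a per-cluster
# linear regression that predicts reduced modulus from composition.
import numpy as np
from sklearn.cluster import Birch
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
composition = rng.random((300, 6))          # stand-in for EDS-derived mineral fractions
modulus = composition @ np.array([40, 25, 60, 10, 5, 30]) + rng.normal(0, 2, 300)

clusters = Birch(n_clusters=5).fit_predict(composition)

models = {}
for c in np.unique(clusters):
    mask = clusters == c
    models[c] = LinearRegression().fit(composition[mask], modulus[mask])
    print(f"cluster {c}: R^2 = {models[c].score(composition[mask], modulus[mask]):.3f}")
```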
This paper presents an investigation into sampling strategies for reducing the computational expense of creating error models for steady hypersonic flow surrogate models. The error model describes the quantity-of-interest error between a reduced-order model prediction and a full-order model solution. The sampling strategies are separated into three categories: distinct training sets, a single training set, and an augmented single training set for the reduced-order model and the error model. Using a distinct training set, three sampling strategies are investigated: Latin hypercube sampling, Latin hypercube sampling with a maximin criterion, and a D-Optimal design. It was found that the D-Optimal design was the most effective at producing an accurate error model with the fewest training points. When using a single training set, the leave-one-out cross validation (LOOCV) approach was used on the D-Optimal design training set. This produced an error model with an R2 value greater than 0.8, but it had some outliers due to high nonlinearities in the space. Augmenting the training points of the error model helped improve its accuracy. Using a D-Optimal design with distinct training sets cut the computational cost of creating the error model by 15%, and using the LOOCV approach with the D-Optimal design cut the cost by 64%.
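The single-training-set strategy can be illustrated with a short leave-one-out cross-validation loop: each reduced-order-model training point is held out in turn, an error model is fit on the rest, and the held-out error is predicted. The Gaussian process regressor, the toy quantity-of-interest error function, and the training-set size below are assumptions for illustration, not the paper's models.

```python
# Sketch: build an error model from the ROM training set via leave-one-out
# cross validation, then score its predictive quality.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
X = rng.uniform(size=(30, 2))                         # flow-condition training points
qoi_error = np.sin(4 * X[:, 0]) * X[:, 1] + rng.normal(0, 0.01, 30)  # toy ROM error

loo_pred = np.empty(len(X))
for train_idx, test_idx in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor().fit(X[train_idx], qoi_error[train_idx])
    loo_pred[test_idx] = gp.predict(X[test_idx])

ss_res = np.sum((qoi_error - loo_pred) ** 2)
ss_tot = np.sum((qoi_error - qoi_error.mean()) ** 2)
print(f"LOOCV R^2 = {1 - ss_res / ss_tot:.3f}")
```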
Structural alloys may experience corrosion when exposed to molten chloride salts due to selective dissolution of active alloying elements. One way to prevent this is to make the molten salt reducing. For the KCl + MgCl2 eutectic salt mixture, pure Mg can be added to achieve this. However, Mg can form intermetallic compounds with nickel at high temperatures, which may cause alloy embrittlement. This study shows that an optimum level of excess Mg could be added to the molten salt which will prevent corrosion of alloys like 316 H, while not forming any detectable Ni-Mg intermetallic phases on Ni-rich alloy surfaces.
We present a distributed Brillouin fiber sensor that operates by exciting a series of discrete lasing modes. This approach provides inherently wide dynamic range (5m) while the narrow linewidth lasing modes enable low noise (8n/Hz)
This work presents a high-speed laser-absorption-spectroscopy diagnostic capable of measuring temperature, pressure, and nitric oxide (NO) mole fraction in shock-heated air at a measurement rate of 500 kHz. This diagnostic was demonstrated in the High-Temperature Shock Tube (HST) facility at Sandia National Laboratories. The diagnostic utilizes a quantum-cascade laser to measure the absorbance spectra of two rovibrational transitions near 5.06 µm in the fundamental vibration bands (v" = 0 and 1) of NO in its ground electronic state (X2 Π1/2 ). Gas properties were determined using scanned-wavelength direct absorption and a recently established fitting method that utilizes a modified form of the time-domain molecular free-induction-decay signal (m-FID). This diagnostic was applied to acquire measurements in shock-heated air in the HST at temperatures ranging from approximately 2500 to 5500 K and pressures of 3 to 12 atm behind both incident and reflected shocks. The measurements agree well with the temperature predicted by NASA CEA and the pressure measured simultaneously using PCB pressure sensors. The measurements presented demonstrate that this diagnostic is capable of resolving the formation of NO in shock-heated air and the associated temperature change at the conditions studied.
Area efficient self-correcting flip-flops for use with triple modular redundant (TMR) soft-error hardened logic are implemented in a 12-nm finFET process technology. The TMR flip-flop slave latches self-correct in the clock low phase using Muller C-elements in the latch feedback. These C-elements are driven by the two redundant stored values and not by the slave latch itself, saving area over a similar implementation using majority gate feedback. These flip-flops are implemented as large shift-register arrays on a test chip and have been experimentally tested for their soft-error mitigation in static and dynamic modes of operation using heavy ions and protons. We show how high clock skew can result in susceptibility to soft-errors in the dynamic mode, and explain the potential failure mechanism.
Reno, Matthew J.; Blakely, Logan; Trevizan, Rodrigo D.; Pena, Bethany D.; Lave, Matthew S.; Azzolini, Joseph A.; Yusuf, Jubair; Jones, Christian B.; Furlani Bastos, Alvaro F.; Chalamala, Rohit; Korkali, Mert; Sun, Chih-Che; Donadee, Jonathan; Stewart, Emma M.; Donde, Vaibhav; Peppanen, Jouni; Hernandez, Miguel; Deboever, Jeremiah; Rocha, Celso; Rylander, Matthew; Siratarnsophon, Piyapath; Grijalva, Santiago; Talkington, Samuel; Gomez-Peces, Cristian; Mason, Karl; Vejdan, Sadegh; Khan, Ahmad U.; Mbeleg, Jordan S.; Ashok, Kavya; Divan, Deepak; Li, Feng; Therrien, Francis; Jacques, Patrick; Rao, Vittal; Francis, Cody; Zaragoza, Nicholas; Nordy, David; Glass, Jim
This report summarizes the work performed under a project funded by U.S. DOE Solar Energy Technologies Office (SETO) to use grid edge measurements to calibrate distribution system models for improved planning and grid integration of solar PV. Several physics-based data-driven algorithms are developed to identify inaccuracies in models and to bring increased visibility into distribution system planning. This includes phase identification, secondary system topology and parameter estimation, meter-to-transformer pairing, medium-voltage reconfiguration detection, determination of regulator and capacitor settings, PV system detection, PV parameter and setting estimation, PV dynamic models, and improved load modeling. Each of the algorithms is tested using simulation data and demonstrated on real feeders with our utility partners. The final algorithms demonstrate the potential for future planning and operations of the electric power grid to be more automated and data-driven, with more granularity, higher accuracy, and more comprehensive visibility into the system.
ASHRAE and IBPSA-USA Building Simulation Conference
Villa, Daniel V.; Carvallo, Juan P.; Bianchi, Carlo; Lee, Sang H.
Heat waves are increasing in severity, duration, and frequency, making historical weather patterns insufficient for assessments of building resilience. This work introduces a stochastic weather generator called the multi-scenario extreme weather simulator (MEWS) that produces credible future heat waves. MEWS calculates statistical parameters from historical weather data and then shifts them using climate projections of increasing severity and frequency. MEWS is demonstrated using the EnergyPlus medium office prototype model for climate zone 4B with five climate scenarios to 2060. The results show how changes in climate and heat waves affect electric loads, peak loads, and thermal comfort with uncertainty.
With machine learning (ML) technologies rapidly expanding to new applications and domains, users are collaborating with artificial intelligence-assisted diagnostic tools to an ever-greater extent. But what impact does ML aid have on cognitive performance, especially when the ML output is not always accurate? Here, we examined the cognitive effects of the presence of simulated ML assistance, including both accurate and inaccurate output, on two tasks (a domain-specific nuclear safeguards task and a domain-general visual search task). Patterns of performance varied across the two tasks for both the presence of ML aid and the category of ML feedback (e.g., false alarm). These results indicate that differences such as domain could influence users' performance with ML aid, and suggest the need to test the effects of ML output (and associated errors) in the specific context of use, especially when the stimuli of interest are vague or ill-defined.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) and level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton’s method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic ℎ-adaptivity and dynamic load balancing are some of Aria’s more advanced capabilities.
Snow and ice accumulation on photovoltaic (PV) panels is a recognized, but poorly quantified, contributor to PV performance loss, not only in geographic areas that see persistent snow in winter but also at lower latitudes, where frozen precipitation and 'snowmageddon' events can wreak havoc with the solar infrastructure. In addition, research on the impact of snow and cold on PV systems has not kept pace with the proliferation of new technologies, the rapid deployment of PV in northern latitudes, and experiences with long-term field performance. This paper describes the value of a dedicated outdoor research facility for longitudinal performance and reliability studies of emerging technologies in cold climates.
In order to evaluate the time evolution of avalanche breakdown in wide and ultra-wide bandgap devices, we have developed a cable pulser experimental setup that can evaluate the time-evolution of the terminating impedance for a semiconductor device with a time resolution of 130 ps. We have utilized this pulser setup to evaluate the time-to-breakdown of vertical Gallium Nitride and Silicon Carbide diodes for possible use as protection elements in the electrical grid against fast transient voltage pulses (such as those induced by an electromagnetic pulse event). We have found that the Gallium Nitride device demonstrated faster dynamics compared to the Silicon Carbide device, achieving 90% conduction within 1.37 ns compared to the SiC device response time of 2.98 ns. While the Gallium Nitride device did not demonstrate significant dependence of breakdown time with applied voltage, the Silicon Carbide device breakdown time was strongly dependent on applied voltage, ranging from a value of 2.97 ns at 1.33 kV to 0.78 ns at 2.6 kV. The fast response time (< 5 ns) of both the Gallium Nitride and Silicon Carbide devices indicate that both materials systems could meet the stringent response time requirements and may be appropriate for implementation as protection elements against electromagnetic pulse transients.
The state of charge (SoC) estimated by Battery Management Systems (BMSs) could be vulnerable to False Data Injection Attacks (FDIAs), which aim to disturb state estimation. Inaccurate SoC estimation, due to attacks or suboptimal estimators, could lead to thermal runaway, accelerated degradation of batteries, and other undesirable events. In this paper, an ambient temperature-dependent model is adopted to represent the physics of a stack of three series-connected battery cells, and an Unscented Kalman Filter (UKF) is utilized to estimate the SoC for each cell. A Cumulative Sum (CUSUM) algorithm is used to detect FDIAs targeting the voltage sensors in the battery stack. The UKF was more accurate in state and measurement estimation than the Extended Kalman Filter (EKF) in terms of Maximum Absolute Error (MAE) and Root Mean Squared Error (RMSE). The CUSUM algorithm described in this paper was able to detect attacks as small as ±1 mV when one or more voltage sensors were attacked under various ambient temperatures and attack injection times.
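For illustration of the detection step, the sketch below implements a generic two-sided CUSUM statistic on voltage-measurement residuals (measured minus filter-predicted voltage) and flags a 1 mV bias injection. The drift and threshold values, noise level, and injection point are assumptions for this toy example, not the paper's tuned parameters.

```python
# Illustrative two-sided CUSUM detector on voltage residuals (in volts).
import numpy as np

def cusum(residuals, drift=0.5e-3, threshold=3e-3):
    """Return the first sample index at which either CUSUM statistic alarms."""
    s_pos = s_neg = 0.0
    for k, r in enumerate(residuals):
        s_pos = max(0.0, s_pos + r - drift)
        s_neg = max(0.0, s_neg - r - drift)
        if s_pos > threshold or s_neg > threshold:
            return k
    return None

rng = np.random.default_rng(6)
residuals = rng.normal(0.0, 0.2e-3, 2000)   # nominal sensor/estimator noise
residuals[1200:] += 1e-3                    # +1 mV bias injected at sample 1200
print("alarm at sample:", cusum(residuals))
```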
The Fusion Energy Sciences office supported “A Pilot Program for Research Traineeships to Broaden and Diversify Fusion Energy Sciences” at Sandia National Laboratories during the summer of 2021. This pilot project was motivated in part by the Fusion Energy Sciences Advisory Committee report observation that “The multidisciplinary workforce needed for fusion energy and plasma science requires that the community commit to the creation and maintenance of a healthy climate of diversity, equity, and inclusion, which will benefit the community as a whole and the mission of FES”. The pilot project was designed to work with North Carolina A&T (NCAT) University and leverage SNL efforts in FES to engage underrepresented students in developing and accessing advanced material solutions for plasma facing components in fusion systems. The intent was to create an environment conducive to the development of a sense of belonging amongst participants, foster a strong sense of physics identity among the participants, and provide financial support to enable students to advance academically while earning money. The purpose of this assessment is to review what worked well and lessons that can be learned. We reviewed implementation and execution of the pilot, describe successes and areas for improvement and propose a no-cost extension of the pilot project to apply these lessons and continue engagement activities in the summer of 2022.
Kawahara, Hajime; Kawashima, Yui; Masuda, Kento; Crossfield, Ian J.M.; Pannier, Erwan; van den Bekerom, Dirk C.
We present an autodifferentiable spectral modeling of exoplanets and brown dwarfs. This model enables a fully Bayesian inference of high-dispersion data to fit the ab initio line-by-line spectral computation to the observed spectrum by combining it with Hamiltonian Monte Carlo in recent probabilistic programming languages. An open-source code, ExoJAX (https://github.com/HajimeKawahara/exojax), developed in this study, was written in Python using the GPU/TPU compatible package for automatic differentiation and accelerated linear algebra, JAX. We validated the model by comparing it with existing opacity calculators and a radiative transfer code and found reasonable agreement in the output. As a demonstration, we analyzed the high-dispersion spectrum of a nearby brown dwarf, Luhman 16 A, and found that a model including water, carbon monoxide, and H2/He collision-induced absorption was well fitted to the observed spectrum (R = 10^5 and 2.28-2.30 μm). As a result, we found T0 = 1295 (+35/-32) K at 1 bar and C/O = 0.62 ± 0.03, which is slightly higher than the solar value. This work demonstrates the potential of full Bayesian analysis of brown dwarfs and exoplanets as observed by high-dispersion spectrographs, and also of directly imaged exoplanets as observed by high-dispersion coronagraphy.
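To illustrate why an autodifferentiable forward model enables gradient-based and HMC inference, the generic JAX sketch below writes a toy line-by-line spectrum and differentiates a chi-square misfit with respect to temperature. This is not the ExoJAX API; the line positions, strengths, widths, and temperature scaling are illustrative assumptions only.

```python
# Generic sketch of an autodifferentiable spectral forward model in JAX.
import jax
import jax.numpy as jnp

line_centers = jnp.array([2.281, 2.289, 2.296])      # microns (assumed)
line_strengths = jnp.array([1.0, 0.6, 0.8])          # relative strengths (assumed)

def model_spectrum(wav, T):
    # Crude temperature scaling of line depth with fixed Gaussian line widths.
    scale = jnp.exp(-1500.0 / T)
    profiles = jnp.exp(-0.5 * ((wav[:, None] - line_centers) / 5e-4) ** 2)
    return 1.0 - scale * (profiles * line_strengths).sum(axis=1)

wav = jnp.linspace(2.28, 2.30, 400)
observed = model_spectrum(wav, 1300.0)                # synthetic "data"

def chi2(T):
    return jnp.sum((model_spectrum(wav, T) - observed) ** 2)

# Gradient w.r.t. temperature, the ingredient HMC samplers need.
print(jax.grad(chi2)(1250.0))
```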
Modern distribution systems can accommodate different topologies through controllable tie lines to increase the reliability of the system. Estimating the prevailing circuit topology or configuration is of particular importance at the substation for different applications to properly operate and control the distribution system. One of the applications of circuit configuration estimation is adaptive protection. An adaptive protection system relies on the communication system infrastructure to identify the latest status of the power system. However, when the communication links to some of the equipment are out of service, the adaptive protection system may lose its awareness of the status of the system. Therefore, it is necessary to estimate the circuit status using the available, healthy communicated data. This paper proposes the use of machine learning algorithms at the substation to estimate the circuit configuration when the communication to the tie breakers is compromised. In doing so, the adaptive protection system can identify the correct protection settings corresponding to the estimated circuit topology. The effectiveness of the proposed approach is verified on the IEEE 123-bus test system.
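A minimal sketch of the substation-side estimator described above follows, using synthetic stand-in data: a classifier maps the available (healthy) measurements to the most likely tie-breaker configuration so that the matching protection settings can be selected. The feature set, the number of candidate configurations, and the random-forest choice are assumptions, not the paper's specific algorithm.

```python
# Sketch: classify the circuit configuration from available substation measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_cases, n_meas = 2000, 12
topology = rng.integers(0, 4, n_cases)                      # 4 candidate configurations
measurements = rng.normal(0, 1, (n_cases, n_meas)) + topology[:, None] * 0.3

X_tr, X_te, y_tr, y_te = train_test_split(measurements, topology, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("configuration estimation accuracy:", clf.score(X_te, y_te))
```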
Nonlinear force appropriation is an extension of its linear counterpart where sinusoidal excitation is applied to a structure with a modal shaker and phase quadrature is achieved between the excitation and response. While a standard practice in modal testing, modal shaker excitation has the potential to alter the dynamics of the structure under test. Previous studies have been conducted to address several concerns, but this work specifically focuses on a shaker-structure interaction phenomenon which arises during the force appropriation testing of a nonlinear structure. Under pure-tone sinusoidal forcing, a nonlinear structure may respond not only at the fundamental harmonic but also potentially at sub- or superharmonics, or it can even produce aperiodic and chaotic motion in certain cases. Shaker-structure interaction occurs when the response physically pushes back against the shaker attachment, producing non-fundamental harmonic content in the force measured by the load cell, even for pure tone voltage input to the shaker. This work develops a model to replicate these physics and investigates their influence on the response of a nonlinear normal mode of the structure. Experimental evidence is first provided that demonstrates the generation of harmonic content in the measured load cell force during a force appropriation test. This interaction is replicated by developing an electromechanical model of a modal shaker attached to a nonlinear, three-mass dynamical system. Several simulated experiments are conducted both with and without the shaker model in order to identify which effects are specifically due to the presence of the shaker. The results of these simulations are then compared to the undamped nonlinear normal modes of the structure under test to evaluate the influence of shaker-structure interaction on the identified system's dynamics.
Two techniques were developed to allow users of microfabricated surface ion traps to detect RF breakdown as soon as it happens, without needing to remove devices from vacuum and look at them with a microscope.
A new small-scale pressure vessel with a 5×5 fuel assembly and axially truncated PWR hardware was created to simulate commercial vacuum drying processes. This test assembly, known as the Dashpot Drying Apparatus (DDA), was built to focus on the drying of a single PWR dashpot and surrounding fuel. Drying operations were simulated for three tests with the DDA based on the pressure and temperature histories observed in the HBDP. All three tests were conducted with an empty guide tube. One test was performed with deionized water as the fill fluid. The other two tests used 0.2 M boric acid as the fill fluid to accurately simulate spent fuel pool conditions. These tests proved the capability of the DDA to mimic commercial drying processes on a limited scale and to detect the presence of bulk and residual water. Furthermore, for all tests, pressure remained below the 0.4 kPa (3 Torr) rebound threshold for the final evacuation step in the drying procedure. Results indicate that after bulk fluid is removed from the pressure vessel, residual water is verifiably measured through confirmatory measurements of pressure and water content using a mass spectrometer. The final pressure rebound behaviors for the three tests conducted were well below the established regulatory limit of less than 0.4 kPa (3 Torr) within 30 minutes of isolation. The water content measurements across all tests showed that despite high water content within the DDA vessel at the beginning of the vacuum isolations, the water content drastically drops to below 1,200 ppmv after the isolations were conducted. The data and operational experience from these tests will guide the next evolution of experiments on a prototypic-length scale with multiple surrogate rods in a full 17×17 PWR assembly. The insight gained through these investigations is expected to support the technical basis for the continued safe storage of spent nuclear fuel into long-term operations.
This report describes recommended abuse testing procedures for rechargeable energy storage systems (RESSs) for electric vehicles. This report serves as a revision to the USABC Electrical Energy Storage System Abuse Test Manual for Electric and Hybrid Electric Vehicle Applications (SAND99-0497).
Calibrating a finite element model to test data is often required to accurately characterize a joint, predict its dynamic behavior, and determine fastener fatigue life. In this work, modal testing, model calibration, and fatigue analysis are performed for a bolted structure, and various joint modeling techniques are compared. The structure is designed to test a single bolt to fatigue failure by utilizing an electrodynamic modal shaker to axially force the bolted joint at resonance. Modal testing is done to obtain the dynamic properties, evaluate finite element joint modeling techniques, and assess the effectiveness of a vibration approach to fatigue testing of bolts. Results show that common joint models can be inaccurate in predicting bolt loads, and even when updated using modal test data, linear structural models alone may be insufficient in evaluating fastener fatigue.
Modern day processes depend heavily on data-driven techniques that use large datasets clustered into relevant groups to help them achieve higher efficiency, better utilization of the operation, and improved decision making. However, building these datasets and clustering by similar products is challenging in research environments that produce many novel and highly complex low-volume technologies. In this work, the author develops an algorithm that calculates the similarity between multiple low-volume products from a research environment using a real-world data set. The algorithm is applied to data from pulse power operations, which routinely perform novel experiments for inertial confinement fusion, radiation effects, and nuclear stockpile stewardship. The author shows that the algorithm successfully calculates similarity between experiments of varying complexity such that comparable shots can be used for further analysis. Furthermore, it has been able to identify experiments not traditionally seen as identical.
This study presents a method that can be used to gain information relevant to determining the corrosion risk for spent nuclear fuel (SNF) canisters during extended dry storage. Currently, it is known that stainless steel canisters are susceptible to chloride-induced stress corrosion cracking (CISCC). However, the rate of CISCC degradation and the likelihood that it could lead to a through-wall crack is unknown. This study uses well-developed computational fluid dynamics and particle-tracking tools and applies them to SNF storage to determine the rate of deposition on canisters. The deposition rate is determined for a vertical canister system and a horizontal canister system, at various decay heat rates with a uniform particle size distribution, ranging from 0.25 to 25 µm, used as an input. In all cases, most of the dust entering the overpack passed through without depositing. Most of what was retained in the overpack was deposited on overpack surfaces (e.g., inlet and outlet vents); only a small fraction was deposited on the canister itself. These results are provided for generalized canister systems with a generalized input; as such, this technical note is intended to demonstrate the technique. This study is a part of an ongoing effort funded by the U.S. Department of Energy, Nuclear Energy Office of Spent Fuel Waste Science and Technology, which is tasked with doing research relevant to developing a sound technical basis for ensuring the safe extended storage and subsequent transport of SNF. This work is being presented to demonstrate a potentially useful technique for SNF canister vendors, utilities, regulators, and stakeholders to utilize and further develop for their own designs and site-specific studies.
Proceedings of the Nuclear Criticality Safety Division Topical Meeting, NCSD 2022 - Embedded with the 2022 ANS Annual Meeting
Salazar, Alex
The postclosure criticality safety assessment for the direct disposal of dual-purpose canisters (DPCs) in a geologic repository includes considerations of transient criticality phenomena. The power pulse from a hypothetical transient criticality event in an unsaturated alluvial repository is evaluated for a DPC containing 37 spent pressurized water reactor (PWR) assemblies. The scenario assumes that the conditions for baseline criticality are achieved through flooding with groundwater and progressive failure of neutron absorbing media. A preliminary series of steady-state criticality calculations is conducted to characterize reactivity feedback due to absorber degradation, Doppler broadening, and thermal expansion. These feedback coefficients are used in an analysis with a reactor kinetics code to characterize the transient pulse given a positive reactivity insertion for a given length of time. The time-integrated behavior of the pulse can be used to model effects on the DPC and surrounding barriers in future studies and determine if transient criticality effects are consequential.
Sandia provided technical assistance to Kit Carson Electric Cooperative (KCEC) to assess the technical merits of a proposed community resilience microgrid project in the Village of El Rito, New Mexico (NM), centered on the campus of Northern New Mexico College (NNMC). A conceptual microgrid analysis was performed, considering both a campus-only and a community-wide approach. The analysis produced conceptual microgrid configurations, optimized according to the defined performance metrics. The campus microgrid was studied independently, and several conceptual microgrid solutions that met the performance requirements were provided. Because the existing 1.5 MW PV system on campus far exceeds the simulated campus peak load and energy demand, a small battery installation was deemed sufficient to support the campus microgrid goals. Following the analysis and consultation, it was determined that the core Resilient El Rito team will need to further investigate the results, including additional economic and environmental considerations, to identify the best approach for their goals and needs.
To keep pace with the demand for innovation through scientific computing, modern scientific software development is increasingly reliant upon a rich and diverse ecosystem of software libraries and toolchains. Research software engineers (RSEs) responsible for that infrastructure perform highly integrative work, acting as a bridge between the hardware, the needs of researchers, and the software layers situated between them; relatively little, however, has been written about the role played by RSEs in that work and what support they need to thrive. To that end, we present a two-part report on the development of half-precision floating point support in the Kokkos Ecosystem. Half-precision computation is a promising strategy for increasing performance in numerical computing and is particularly attractive for emerging application areas (e.g., machine learning), but developing practicable, portable, and user-friendly abstractions is a nontrivial task. In the first half of the paper, we conduct an engineering study on the technical implementation of the Kokkos half-precision scalar feature and showcase experimental results; in the second half, we offer an experience report on the challenges and lessons learned during feature development by the first author. We hope our study provides a holistic view on scientific library development and surfaces opportunities for future studies into effective strategies for RSEs engaged in such work.
Femtosecond laser electronic excitation tagging (FLEET) is a powerful unseeded velocimetry technique typically used to measure one component of velocity along a line, or two or three components from a dot. In this Letter, we demonstrate a dotted-line FLEET technique which combines the dense profile capability of a line with the ability to perform two-component velocimetry with a single camera on a dot. Our set-up uses a single beam path to create multiple simultaneous spots, more than previously achieved in other FLEET spot configurations. We perform dotted-line FLEET measurements downstream of a highly turbulent, supersonic nitrogen free jet. Dotted-line FLEET is created by focusing light transmitted by a periodic mask with rectangular slits of 1.6 × 40 mm² and an edge-to-edge spacing of 0.5 mm, then focusing the imaged light at the measurement region. Up to seven symmetric dots spaced approximately 0.9 mm apart, with mean full-width at half maximum diameters between 150 and 350 µm, are simultaneously imaged. Both streamwise and radial velocities are computed and presented in this Letter.
This paper focuses on the development and testing of spoofing detection and localization techniques that rely only on clock deviations to identify threat signals. Detection methods that rely on dynamic receiver geometries to triangulate threat locations or signal geometry to identify spoofing are not considered here. Instead this paper focuses on single antenna receivers and assumes the receiver tracks only the inauthentic signal. The quality of the receiver clock has a significant impact on the performance of the receiver tracking loops. Low quality clocks have frequency instabilities that inherently limit the sensitivity of the receiver to slow growing errors. Some clocks provide better frequency stabilities but have a higher white frequency noise that can induce false detections. Because of these trends, various detection methods are tested with four types of receiver and transmitter clocks of varying quality.
The visualization community has invested decades of research and development into producing large-scale production visualization tools. Although in situ is a paradigm shift for large-scale visualization, many of the same algorithms and operations apply regardless of whether the visualization is run post hoc or in situ. Thus, there is a great benefit to taking the large-scale code originally designed for post hoc use and leveraging it for use in situ. This chapter describes two in situ libraries, Libsim and Catalyst, that are based on mature visualization tools, VisIt and ParaView, respectively. Because they are based on fully-featured visualization packages, they each provide a wealth of features. For each of these systems we outline how the simulation and visualization software are coupled, what the runtime behavior and communication between these components are, and how the underlying implementation works. We also provide use cases demonstrating the systems in action. Both of these in situ libraries, as well as the underlying products they are based on, are made freely available as open-source products. The overviews in this chapter provide a toehold to the practical application of in situ visualization.
The paper proposes an implementation of Graph Neural Networks (GNNs) for distribution power system Traveling Wave (TW)-based protection schemes. Simulated faults on the IEEE 34 system are processed using the Karrenbauer Transform and the Stationary Wavelet Transform (SWT), and the energy of the resulting signals is calculated using Parseval's theorem. This data is used to train Graph Convolutional Networks (GCNs) to perform fault zone location. Several levels of measurement noise are considered for comparison. The results show strong performance, with accuracy above 90% for the most developed models, and outline a fast, reliable, asynchronous, and distributed protection scheme for distribution-level networks.
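As a rough illustration of the signal-processing front end described above, the sketch below applies a Karrenbauer-style modal transform to a three-phase record, decomposes one aerial mode with a stationary wavelet transform, and collects per-band energies (via Parseval's relation) as features a graph convolutional network could consume. The wavelet family, decomposition level, and the synthetic record are illustrative assumptions, not details taken from the paper.

```python
# Sketch (assumptions): Karrenbauer-style modal transform + SWT band energies as
# per-bus features for a downstream graph neural network. Wavelet choice, level,
# and the synthetic record below are illustrative, not taken from the paper.
import numpy as np
import pywt

# Karrenbauer modal transformation (ground mode and two aerial modes).
T_KARRENBAUER = (1.0 / 3.0) * np.array([[1.0,  1.0,  1.0],
                                        [1.0, -2.0,  1.0],
                                        [1.0,  1.0, -2.0]])

def modal_components(v_abc: np.ndarray) -> np.ndarray:
    """Transform phase quantities (3 x N samples) into modal components."""
    return T_KARRENBAUER @ v_abc

def swt_band_energies(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Per-band energies of a stationary wavelet decomposition.

    By Parseval's relation, the sum of squared coefficients in a band is the
    energy that band contributes to the signal.
    """
    n = (len(signal) // 2**level) * 2**level      # pywt.swt needs a multiple of 2**level
    coeffs = pywt.swt(signal[:n], wavelet, level=level)
    energies = [float(np.sum(detail**2)) for _, detail in coeffs]
    energies.append(float(np.sum(coeffs[0][0]**2)))  # coarsest approximation band
    return np.array(energies)

# Hypothetical fault-record window at one bus; stacking these feature vectors over
# all buses, together with the network adjacency, would form the GCN input.
v_abc = np.random.randn(3, 1024)
aerial_mode = modal_components(v_abc)[1]
print(swt_band_energies(aerial_mode))
```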
Monitoring cavern leaching after each calendar year of oil sales is necessary to support cavern stability efforts and long-term availability for oil drawdowns in the U.S. Strategic Petroleum Reserve. Modeling results from the SANSMIC code and recent sonar surveys are compared to show projected changes in cavern geometry due to leaching from raw-water injections. This report aims to give background on the importance of monitoring cavern leaching and to provide a detailed explanation of the process used to create the leaching plots. In the past, generating leaching plots for each cavern in a given leaching year was done manually, and every cavern had to be processed individually. A Python script, compatible with Earth Volumetric Studio, was created to automate most of the process. The script makes a total of 26 plots per cavern to show leaching history, an axisymmetric representation of leaching, and SANSMIC modeling of future leaching. The current run time for the script is one hour, replacing the 40-50 hours previously required to generate the plots manually.
The resonant plate shock test is a dynamic test of a mid-field pyroshock environment in which a projectile is struck against a plate. The structure undergoing the simulated field shock is mounted to the plate. The plate resonates when struck and provides a two-sided shock that is representative of the shock observed in the field. The test simulates a shock in a single coordinate direction for components that must provide evidence that they will survive a similar or lesser shock when deployed in their operating environment. However, testing one axis at a time presents many challenges. The true environment is a multi-axis environment. The test environment exhibits strong off-axis motion when motion in only one axis is desired. Multiple fixtures are needed for a single test series. It would be advantageous if a single test could be developed that exercises the multi-axis environment simultaneously. To design such a test, a model must be developed and validated. The model can be iterated in design and configuration until the specified multi-axis environment is met. The test can then execute the model-driven test design. This report discusses the resonant plate model needed to design future tests and the steps and methods used to obtain the model. It also details aspects of the resonant plate test, discovered during model development, that aid in our understanding of the test.
A primary objective of repository modeling is identification and assessment of features and processes providing safety performance. Sensitivity analyses typically provide information on how input parameters affect performance, not features and processes. To quantify the effects of features and processes, tracers can be introduced virtually in model simulations and tracked in informative ways. This paper describes five ways virtual tracers can be used to directly measure the relative importance of several features, processes, and combinations of features and processes in repository performance assessment modeling.
We present a procedure for randomly generating realistic steady-state contingency scenarios based on the historical outage data from a particular event. First, we divide generation into classes and fit a probability distribution of outage magnitude for each class. Second, we provide a method for randomly synthesizing generator resilience levels in a way that preserves the data-driven probability distributions of outage magnitude. Finally, we devise a simple method of scaling the storm effects based on a single global parameter. We apply our methods using data from historical Winter Storm Uri to simulate contingency events for the ACTIVSg2000 synthetic grid on the footprint of Texas.
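A minimal sketch of that three-step procedure is shown below: fit a per-class distribution of outage magnitude, draw per-generator resilience levels via inverse-CDF sampling so the fitted distributions are preserved, and scale the event with a single global severity parameter. The Beta family, class labels, and scaling rule are illustrative assumptions rather than the paper's actual choices.

```python
# Sketch (assumptions): data-driven contingency scenario generation in the spirit
# of the three-step procedure above. The Beta distribution family, class labels,
# and linear severity scaling are illustrative choices only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Step 1: fit a per-class distribution of outage magnitude (fraction of capacity
# lost) from historical event data (placeholder samples below).
historical_outages = {
    "gas":  rng.beta(2.0, 5.0, size=200),
    "wind": rng.beta(5.0, 2.0, size=200),
}
fitted = {cls: stats.beta(*stats.beta.fit(x, floc=0, fscale=1)[:2])
          for cls, x in historical_outages.items()}

# Step 2: synthesize per-generator resilience levels so the implied outage
# magnitudes reproduce the fitted distributions (inverse-CDF sampling).
def sample_outage_fraction(cls: str, n_units: int) -> np.ndarray:
    resilience = rng.uniform(size=n_units)
    return fitted[cls].ppf(resilience)

# Step 3: scale the whole event with a single global severity parameter.
def scenario(class_counts: dict, severity: float = 1.0) -> dict:
    return {cls: np.clip(severity * sample_outage_fraction(cls, n), 0.0, 1.0)
            for cls, n in class_counts.items()}

print(scenario({"gas": 10, "wind": 8}, severity=0.8))
```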
Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and the necessary security controls, is a powerful tool for ensuring due diligence in the evaluation and acquisition of mission-critical algorithms. This paper describes the Seascape system and its place in such a process.
In the high temperature (HT) environments often encountered in geothermal wells, data transfer rates for downhole instrumentation are relatively limited due to transmission line bandwidth and insertion loss and the processing speed of HT microcontrollers. In previous research, the Sandia National Laboratories Geothermal Department obtained 3.8 Mbps data rates over 1524 m (5000 ft) of single conductor wireline cable with less than a 1×10⁻⁸ bit error rate utilizing low temperature NI™ hardware (formerly National Instruments™). Our protocol technique was a combination of orthogonal frequency-division multiplexing and quadrature amplitude modulation across the bandwidth of the single conductor wireline. This showed it is possible to obtain high data rates in low bandwidth wirelines. This paper focuses on commercial HT microcontrollers (µC), rather than low temperature NI™ modules, to enable high-speed communication in an HT environment. As part of this effort, four devices were evaluated, and an optimal device (SM320F28335-HT) was selected for its high clock rates, floating-point unit, and on-board analog-to-digital converter. A printed circuit board was assembled with the HT µC, an HT resistor digital-to-analog converter, and an HT line driver. The board was tested at the microcontroller's rated maximum temperature (210°C) for a week while transmitting through a 1524 m (5000 ft) wireline. A final test was conducted to the point of failure at elevated temperatures. This paper will discuss communication methods, achieved data rates, and hardware selection. This effort contributes to the enhancement of HT instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates.
Conference Proceedings of the Society for Experimental Mechanics Series
Saunders, Brian E.; Vasconcellos, Rui M.G.; Kuether, Robert J.; Abdelkefi, Abdessattar
Dynamical systems containing contact/impact between parts can be modeled as piecewise-smooth reduced-order models. The most common example is freeplay, which can manifest as a loose support, worn hinges, or backlash. Freeplay causes very complex, nonlinear responses in a system that range from isolated resonances to grazing bifurcations to chaos. This can be an issue because classical solution methods, such as direct time integration (e.g., Runge-Kutta) or harmonic balance methods, can fail to accurately detect some of the nonlinear behavior or fail to run altogether. To deal with this limitation, researchers often approximate piecewise freeplay terms in the equations of motion using continuous, fully smooth functions. While this strategy can be convenient, it may not always be appropriate for use. For example, past investigation on freeplay in an aeroelastic control surface showed that, compared to the exact piecewise representation, some approximations are not as effective at capturing freeplay behavior as others. Another potential issue is the effectiveness of continuous representations at capturing grazing contacts and grazing-type bifurcations. These can cause the system to transition to high-amplitude responses with frequent contact/impact and be particularly damaging. In this work, a bifurcation study is performed on a model of a forced Duffing oscillator with freeplay nonlinearity. Various representations are used to approximate the freeplay, including polynomial, absolute value, and hyperbolic tangent representations. Bifurcation analysis results for each type are compared to results using the exact piecewise-smooth representation computed using MATLAB® Event Location. The effectiveness of each representation is compared and ranked in terms of numerical accuracy, ability to capture multiple response types, ability to predict chaos, and computation time.
A critical parameter for well integrity in geothermal storage and production wells subjected to frequent thermal cycling is the interface between the steel casing and the cement sheath. In geothermal energy storage and production wells, an insulating cement sheath is also necessary to minimize heat losses to cooler rock formations with high thermal conductivity. A team from Sandia and Brookhaven National Laboratories is evaluating special cement formulations intended to withstand severe and repeated thermal cycling in geothermal wells; this paper reports recent findings for these more recently developed cements. For this portion of the laboratory study, we report preliminary results from subjecting this cement to high temperature (T > 200°C) at a confining pressure of 13.8 MPa and a pore water pressure of 10.4 MPa. Building on previous work, we studied two sample types: solid cement, and a steel cylinder sheathed with cement. In the first sample type, we measured fluid flow at increasingly elevated temperatures and pressure. In the second, we rapidly flowed water through the inside of the steel cylinder to develop an inner-to-outer thermal gradient using this specialized test geometry. We report water permeability estimates at elevated temperatures and the results of rapid thermal cycling of a steel/cement interface. Post-test observations of the steel-cement interface provide insight into the nature of the steel/cement bond.
This is an addendum to the Sierra/SolidMechanics 5.4 User’s Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State’s International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra/SolidMechanics 5.4 User’s Guide should be referenced for most general descriptions of code capability and use.
Propagating thermal runaway events are a significant threat to utility-scale storage installations. A propagating thermal runaway event is a cascading series of failures in which energy released from a failed cell triggers subsequent failures in nearby cells. Without intervention, propagation can turn an otherwise manageable single cell failure into a full system conflagration. This study presents a method of mitigating the severity of propagating thermal runaway events in utility-scale storage systems by leveraging the capabilities of a module-interfaced power conversion architecture. The method involves strategic depletion of storage modules to delay or arrest propagation, reducing the total thermal energy released in the failure event. The feasibility of the method is assessed through simulations of propagating thermal runaway events in a 160 kW/80 kWh energy storage system.
Numerical simulations of pressure-shear loading of a granular material are performed using the shock physics code CTH. A simple mesoscale model for the granular material is used that consists of a randomly packed arrangement of solid circular or spherical grains of uniform size separated by vacuum. The grain material is described by a simple shock equation of state, elastic perfectly plastic strength model, and fracture model with baseline parameters for WC taken from previous mesoscale modeling work. Simulations using the baseline material parameters are performed at the same initial conditions of pressure-shear experiments on dry WC powders. Except for some localized flow regions appearing in simulations with an approximate treatment of sliding interfaces among grains, the samples respond elastically during shear, which is in contrast to experimental observations. By extending the simulations to higher shear wave amplitudes, macroscopic shear failure of the simulated samples is observed with the shear strength increasing with increasing stress confinement. The shear strength is also found to be strongly dependent on the grain interface treatment and on the fracture stress of the grains, though the variation in shear strength due to fracture stress decreases with increasing stress confinement. At partial compactions, the transverse velocity histories show strain-hardening behavior followed by formation of a shear interface that extends through the transverse dimensions of the sample. Near full compaction, no strain hardening is observed and, instead, the sample transitions sharply from an elastic response to formation of an internal shear interface. Agreement with experiment is shown to worsen with increasing confinement stress with simulations overpredicting the shear strengths measured in experiment. The source of the disagreement can be ultimately attributed to the Eulerian nature of the simulations, which do not treat contact and fracture realistically.
In the summer of 2020, the National Aeronautics and Space Administration (NASA) launched a spacecraft as part of the Mars 2020 mission. The rover on the spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. The MMRTG uses radioactive plutonium dioxide. NASA prepared a Supplemental Environmental Impact Statement (SEIS) for the mission in accordance with the National Environmental Policy Act. The SEIS provides information related to updates to the potential environmental impacts associated with the Mars 2020 mission as outlined in the Final Environmental Impact Statement (FEIS) for the Mars 2020 Mission issued in 2014 and associated Record of Decision (ROD) issued in January 2015. The Nuclear Risk Assessment (NRA) 2019 Update includes new and updated Mars 2020 mission information since the publication of the 2014 FEIS and the updates to the Launch Approval Process with the issuance of Presidential Memorandum on Launch of Spacecraft Containing Space Nuclear Systems, National Security Presidential Memorandum 20 (NSPM-20). The NRA 2019 Update addresses the responses of the MMRTG to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks discussed in the SEIS. This paper provides a summary of the methods and results used in the NRA 2019 Update.
This paper presents a run-to-run (R2R) controller for mechanical serial sectioning (MSS). MSS is a destructive material analysis process which repeatedly removes a thin layer of material and images the exposed surface. The images are then used to gain insight into the material properties and often to construct a 3-dimensional reconstruction of the material sample. Currently, an experienced human operator selects the parameters of the MSS to achieve the desired thickness. The proposed R2R controller will automate this process while improving the precision of the material removal. The proposed R2R controller solves an optimization problem designed to minimize the variance of the material removal subject to achieving the expected target removal. This optimization problem was embedded in an R2R framework to provide iterative feedback for disturbance rejection and convergence to the target removal amount. Since an analytic model of the MSS system is unavailable, we adopted a data-driven approach to synthesize our R2R controller from historical data. The proposed R2R controller is demonstrated through simulations. Future work will empirically demonstrate the proposed R2R controller through experiments with a real MSS system.
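For orientation, the sketch below shows a conventional EWMA-style run-to-run update of the kind such controllers build on: an exponentially weighted estimate of the process disturbance corrects the next recipe so the expected removal hits the target, while the filter weight damps run-to-run variance. The process gain, disturbance, noise level, and filter weight are illustrative assumptions and are not the paper's data-driven design.

```python
# Sketch (assumptions): an EWMA-style run-to-run controller for material removal.
# The plant gain, disturbance, noise level, and filter weight below are
# illustrative; the paper synthesizes its controller from historical MSS data.
import numpy as np

rng = np.random.default_rng(1)

target = 2.0           # desired removal per run (um)
gain_est = 0.95        # assumed removal per unit recipe setting (fit from history)
lam = 0.4              # EWMA weight: responsiveness vs. run-to-run variance
bias_est = 0.0         # running estimate of the process disturbance
u = target / gain_est  # initial recipe setting

removals = []
for run in range(30):
    # Unknown true plant: slightly different gain plus an offset and noise.
    removal = 1.05 * u + 0.10 + 0.05 * rng.standard_normal()
    removals.append(removal)
    # EWMA update of the disturbance estimate from the observed model error.
    bias_est = lam * (removal - gain_est * u) + (1.0 - lam) * bias_est
    # Next-run recipe: hit the target given the current disturbance estimate.
    u = (target - bias_est) / gain_est

print(f"mean removal over last 10 runs: {np.mean(removals[-10:]):.3f} um (target {target})")
```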
Deep operator learning has emerged as a promising tool for reduced-order modelling and PDE model discovery. Leveraging the expressive power of deep neural networks, especially in high dimensions, such methods learn the mapping between functional state variables. While proposed methods have assumed noise only in the dependent variables, experimental and numerical data for operator learning typically exhibit noise in the independent variables as well, since both variables represent signals that are subject to measurement error. In regression on scalar data, failure to account for noisy independent variables can lead to biased parameter estimates. With noisy independent variables, linear models fitted via ordinary least squares (OLS) will show attenuation bias, wherein the slope will be underestimated. In this work, we derive an analogue of attenuation bias for linear operator regression with white noise in both the independent and dependent variables, showing that the norm upper bound of the operator learned via OLS decreases with increasing noise in the independent variable. In the nonlinear setting, we computationally demonstrate underprediction of the action of the Burgers operator in the presence of noise in the independent variable. We propose error-in-variables (EiV) models for two operator regression methods, MOR-Physics and DeepONet, and demonstrate that these new models reduce bias in the presence of noisy independent variables for a variety of operator learning problems. Considering the Burgers operator in 1D and 2D, we demonstrate that EiV operator learning robustly recovers operators in high-noise regimes that defeat OLS operator learning. We also introduce an EiV model for time-evolving PDE discovery and show that OLS and EiV perform similarly in learning the Kuramoto-Sivashinsky evolution operator from corrupted data, suggesting that the effect of bias in OLS operator learning depends on the regularity of the target operator.
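The attenuation-bias effect that motivates this work is easy to reproduce in the scalar setting; the synthetic example below shows the OLS slope shrinking toward zero by roughly the classical factor σ_x²/(σ_x² + σ_ε²) when the independent variable is observed with white noise. All values are synthetic.

```python
# Sketch: attenuation bias in scalar OLS regression with a noisy independent
# variable, the effect the paper generalizes to linear operator regression.
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 100_000, 2.0
sigma_x, sigma_noise = 1.0, 0.5

x_clean = sigma_x * rng.standard_normal(n)
y = true_slope * x_clean + 0.1 * rng.standard_normal(n)
x_noisy = x_clean + sigma_noise * rng.standard_normal(n)   # measured with error

ols_slope = x_noisy @ y / (x_noisy @ x_noisy)
attenuation = true_slope * sigma_x**2 / (sigma_x**2 + sigma_noise**2)
print(f"OLS slope {ols_slope:.3f} vs. classical attenuation prediction {attenuation:.3f}")
# Both are ~1.6, well below the true slope of 2.0: the estimate is biased low.
```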
Proceedings of Correctness 2022: 6th International Workshop on Software Correctness for HPC Applications, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
Iterative methods for solving linear systems serve as a basic building block for computational science. The computational cost of these methods can be significantly influenced by the round-off errors that accumulate as a result of their implementation in finite precision. In the extreme case, round-off errors that occur in practice can completely prevent an implementation from satisfying the accuracy and convergence behavior prescribed by its underlying algorithm. In the exascale era where cost is paramount, a thorough and rigorous analysis of the delay of convergence due to round-off should not be ignored. In this paper, we use a small model problem and the Jacobi iterative method to demonstrate how the Coq proof assistant can be used to formally specify the floating-point behavior of iterative methods, and to rigorously prove the accuracy of these methods.
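For readers unfamiliar with the method itself, a plain floating-point Jacobi iteration is sketched below; the paper's contribution is a formal Coq proof bounding the round-off behavior of such an iteration on a model problem, not the implementation. The test system here is an arbitrary strictly diagonally dominant matrix, chosen so convergence is guaranteed.

```python
# Sketch: the Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k) in ordinary
# floating point. The example matrix is strictly diagonally dominant, which
# guarantees convergence of the iteration.
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=10_000):
    D = np.diag(A)                      # diagonal entries
    R = A - np.diag(D)                  # off-diagonal part
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(x, iters, np.allclose(A @ x, b))
```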
Given a graph, finding the distance-2 maximal independent set (MIS-2) of the vertices is a problem that is useful in several contexts such as algebraic multigrid coarsening or multilevel graph partitioning. Such multilevel methods rely on finding the independent vertices so they can be used as seeds for aggregation in a multilevel scheme. We present a parallel MIS-2 algorithm to improve performance on modern accelerator hardware. This algorithm is implemented using the Kokkos programming model to enable performance portability. We demonstrate the portability of the algorithm and the performance on a variety of architectures (x86/ARM CPUs and NVIDIA/AMD GPUs). The resulting algorithm is also deterministic, producing an identical result for a given input across all of these platforms. The new MIS-2 implementation outperforms implementations in state-of-the-art libraries like CUSP and ViennaCL by 3-8x while producing similar quality results. We further demonstrate the benefits of this approach by developing parallel graph coarsening schemes for two different use cases. First, we develop an algebraic multigrid (AMG) aggregation scheme using parallel MIS-2 and demonstrate the benefits as opposed to previous approaches used in the MueLu multigrid package in Trilinos. We also describe an approach for implementing a parallel multicolor 'cluster' Gauss-Seidel preconditioner using this MIS-2 coarsening, and demonstrate better performance with an efficient, parallel, multicolor Gauss-Seidel algorithm.
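As a reference for what the MIS-2 property requires, the short sequential sketch below greedily builds a distance-2 maximal independent set: a vertex joins the set only if no already-selected vertex lies within two hops. It deliberately does not attempt the paper's deterministic parallel Kokkos formulation.

```python
# Sketch: greedy sequential distance-2 maximal independent set (MIS-2).
# The paper's contribution is a deterministic, performance-portable parallel
# variant; this reference version only illustrates the MIS-2 property itself.
def mis2(adjacency: dict) -> set:
    selected, blocked = set(), set()
    for v in adjacency:                     # any fixed vertex order is fine
        if v in blocked:
            continue
        selected.add(v)
        # Block every vertex within distance 2 of the newly selected vertex.
        for u in adjacency[v]:
            blocked.add(u)
            blocked.update(adjacency[u])
    return selected

# Path graph 0-1-2-3-4: selecting 0 blocks 1 and 2, so 3 is the next seed.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(mis2(graph))   # {0, 3}
```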
This work explores deriving transmissibility functions for a missile from a measured location at the base of the fairing to a desired location within the payload. A pressure on the outside of the fairing and the rocket motor's excitation create accelerations at a measured location and at a desired location. Typically, the desired location is not measured. In fact, it is typical that the payload may change, but the measured acceleration at the base of the fairing is generally similar to previous test flights. Given this knowledge, it is desired to use a finite-element model to create a transmissibility function which relates acceleration from the previous test flight's measured location at the base of the fairing to acceleration at a location in the new payload. Four methods are explored for deriving this transmissibility, with the goal of finding an appropriate transmissibility when both the pressure and rocket motor excitation are equally present. These methods are assessed using transient results from a simple example problem, and it is found that one of the methods gives good agreement with the transient results for the full range of loads considered.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) and level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton’s method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic ℎ-adaptivity and dynamic load balancing are some of Aria’s more advanced capabilities.
This paper presents the formulation, implementation, and demonstration of a new, largely phenomenological, model for the damage-free (micro-crack-free) thermomechanical behavior of rock salt. Unlike most salt constitutive models, the new model includes both drag stress (isotropic) and back stress (kinematic) hardening. The implementation utilizes a semi-implicit scheme and a fall-back fully-implicit scheme to numerically integrate the model's differential equations. Particular attention was paid to the initial guesses for the fully-implicit scheme. Of the four guesses investigated, an initial guess that interpolated between the previous converged state and the fully saturated hardening state had the best performance. The numerical implementation was then used in simulations that highlighted the difference between drag stress hardening versus combined drag and back stress hardening. Simulations of multi-stage constant stress tests showed that only combined hardening could qualitatively represent reverse (inverse transient) creep, as well as the large transient strains experimentally observed upon switching from axisymmetric compression to axisymmetric extension. Simulations of a gas storage cavern subjected to high and low gas pressure cycles showed that combined hardening led to substantially greater volume loss over time than drag stress hardening alone.
An array of Wave Energy Converters (WECs) is required to supply a significant power level to the grid. However, the control and optimization of such an array remain an open research question. This paper analyzes two aspects that have a significant impact on the power production. First, the spacing of the buoys in a WEC array is analyzed to determine the optimal shift between the buoys. Then, the wave force interacting with the buoys is angled to create additional sequencing between the electrical signals. A cost function is proposed to minimize the power variation and energy storage while maximizing the energy delivered to the onshore point of common coupling to the electrical grid.
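A plausible generic form of such a cost function, with all symbols and weights hypothetical rather than taken from the paper, is

\[ J = \alpha\,\operatorname{Var}\!\big(P_{\mathrm{PCC}}(t)\big) + \beta\, E_{\mathrm{storage}} - \gamma \int_{0}^{T} P_{\mathrm{PCC}}(t)\,dt, \]

where \(P_{\mathrm{PCC}}\) is the power delivered to the onshore point of common coupling, \(E_{\mathrm{storage}}\) is the required energy storage capacity, and \(\alpha,\beta,\gamma>0\) weight power variation, storage, and delivered energy; the buoy spacing and the wave-force angle are the decision variables that shift the phasing between the converters' electrical outputs.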
This paper presents the Symbolic Linear Covariance Analysis Tool (SLiC), a Python framework capable of simplifying the construction, verification, and analysis of aerospace systems using linear covariance analysis techniques. The framework leverages open-source libraries to enable symbolic manipulation and object-oriented abstraction to remove many of the barriers to linear covariance analysis when compared to other methods. The benefits of linear covariance analysis with Monte Carlo verification are addressed and the framework design is described. The framework is validated against existing literature results and demonstrated for a sample aerospace use case of a hypersonic entry system.
This paper studies a novel mixed-integer linear programming (MILP) formulation on the application of mobile energy storage (MES) to assist with black-start restoration following the full blackout of an electrical network. By synthesizing techniques in the literature to model generator black start and MES activity, the formulation is the first to integrate the two concepts. Furthermore, it recognizes that the manner in which MES facilitates black-start (BS) restoration may differ depending on what component damages occurred during the event that induced the blackout. Within the IEEE 14-Bus System, testing of the formulation has not only confirmed its efficacy but also underscored circumstances where BS restoration could especially benefit from MES intervention in practice. With an MES sized at 2.59% of total MW generation capacity, in certain damage configuration categories the median load energy unserved is reduced by as much as 45.52 MWh (8.26%), and the median final load supplied is raised by as much as 22.98 MW (10.39%).
Measurements of gas-phase pressure and temperature in hypersonic flows are important to understanding fluid–structure interactions on vehicle surfaces, and to develop compressible flow turbulence models. To achieve this measurement capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied at Sandia National Laboratories’ hypersonic wind tunnel. After excitation of rotational Raman transitions by a broadband femtosecond laser pulse, two probe pulses are used: one at an early time where the collisional environment has largely not affected the Raman coherence, and another at a later time after the collisional environment has led to significant J-dependent dephasing of the Raman coherence. CARS spectra from the early probe are fit for temperature, while the later CARS spectra are fit for pressure. Challenges related to implementing fs CARS in cold-flow hypersonic facilities are discussed. Excessive fs pump energy can lead to flow perturbations. The output of a second-harmonic bandwidth compressor (SHBC) is spectrally filtered using a volume Bragg grating to provide the narrowband ps probe pulses and enable single-shot CARS measurements at 1 kHz. Measurements are demonstrated at temperatures and pressures relevant to cold-flow hypersonic wind tunnels in a low-pressure cryostat with an initial demonstration in the hypersonic wind tunnel.
There is a need to perform offline anomaly detection in count data streams to identify both systemic changes and outliers simultaneously. We propose a new algorithmic method, called the Anomaly Detection Pipeline, which leverages common statistical process control procedures in a novel way to accomplish this. The proposed method does not require user-defined control limits or phase I training data, automatically identifying regions of stability for improved parameter estimation to support change point detection. The method does not require data to be normally distributed, and it detects outliers relative to the regimes in which they occur. Our proposed method performs comparably to state-of-the-art change point detection methods, provides additional capabilities, and extends to a larger set of possible data streams than known methods.
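A minimal sketch of the kind of building blocks involved is shown below: estimate baseline parameters from an assumed stable region, flag individual outliers against a Poisson (c-chart style) limit, and scan for a sustained level shift with a one-sided CUSUM after damping the flagged outliers. The baseline window, thresholds, and winsorizing step are illustrative; the paper's pipeline selects stable regions automatically and differs in detail.

```python
# Sketch (assumptions): outlier and change-point flagging for a count stream using
# standard SPC building blocks (Poisson control limit + one-sided CUSUM). The
# baseline window, thresholds, and outlier damping below are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
counts = np.concatenate([rng.poisson(5, 200), rng.poisson(9, 100)])  # level shift at t=200
counts[50] = 25                                                      # isolated outlier

baseline = counts[:100]                  # assumed stable region
lam = baseline.mean()
ucl = lam + 3.0 * np.sqrt(lam)           # c-chart style upper control limit

outliers = np.flatnonzero(counts > ucl)  # includes t=50 (and points after the shift)

# One-sided CUSUM for a sustained upward shift; cap values at the control limit
# first so a single outlier does not trigger the change-point alarm by itself.
k, h = 0.5 * np.sqrt(lam), 5.0 * np.sqrt(lam)
capped = np.minimum(counts, ucl)
s, change_point = 0.0, None
for t, c in enumerate(capped):
    s = max(0.0, s + (c - lam) - k)
    if s > h and change_point is None:
        change_point = t

print("first flagged outlier:", outliers[0], "| change detected near t =", change_point)
```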
In this paper, we address the problem of convergence of the sequential variational inference filter (VIF) through the application of a robust variational objective and an H∞-norm-based correction for a linear Gaussian system. As the dimension of the state or parameter space grows, performing the full Kalman update with the dense covariance matrix for a large-scale system requires increased storage and computational complexity, making it impractical. The VIF approach, based on mean-field Gaussian variational inference, reduces this burden through a variational approximation to the covariance, usually in the form of a diagonal covariance approximation. The challenge is to retain convergence and correct for biases introduced by the sequential VIF steps. We desire a framework that improves feasibility while still maintaining reasonable proximity to the optimal Kalman filter as data is assimilated. To accomplish this goal, an H∞-norm-based optimization perturbs the VIF covariance matrix to improve robustness. This yields a novel VIF-H∞ recursion that employs consecutive variational inference and H∞-based optimization steps. We explore the development of this method and investigate a numerical example to illustrate the effectiveness of the proposed filter.
Grid operating security studies are typically employed to establish operating boundaries, ensuring secure and stable operation over a range of operating conditions under NERC guidelines. However, if these boundaries are violated, the remaining system security margins are largely unknown. As an alternative to complex optimizations over dynamic conditions, this work employs reinforcement-learning-based Machine Learning to identify a sequence of secure state transitions that place the grid in a higher degree of operating security with greater static and dynamic stability margins. The approach requires training a Machine Learning agent to accomplish this task using modeled data and employs it as a decision support tool under severe, near-blackout conditions.
This paper studies the differences in a synthetic inertia controller when using two different feedback measurements: (i) an estimate of the rate of change of frequency from local voltage measurements, and (ii) the acceleration of a remote machine from a generator near the actuator. The device that provides the synthetic inertia action is a converter interfaced generator (CIG). The paper carries out analysis in the frequency domain, using Bode plots, to show that synthetic inertia control using frequency estimates is more prone to instabilities than the case where a machine speed is used. The paper then proposes a controller (or a filter) to mitigate these effects. In addition, the paper shows the effects that a delay in the machine speed signal of the nearby generator has on the synthetic inertia control of the system and how a controller is also needed in this case. Finally, the paper shows the difference in performance of a synthetic inertia controller when using these different measurement signals through time-domain simulations on an electromagnetic transient program platform.
The proper coordination of power system protective devices is essential for maintaining grid safety and reliability but requires precise knowledge of fault current contributions from generators like solar photovoltaic (PV) systems. PV inverter fault response is known to change with atmospheric conditions, grid conditions, and inverter control settings, but this time-varying behavior may not be fully captured by conventional static fault studies that are used to evaluate protection constraints in PV hosting capacity analyses. To address this knowledge gap, hosting capacity protection constraints were evaluated on a simplified test circuit using both a time-series fault analysis and a conventional static fault study approach. A PV fault contribution model was developed and utilized in the test circuit after being validated by hardware experiments under various irradiances, fault voltages, and advanced inverter control settings. While the results were comparable for certain protection constraints, the time-series fault study identified additional impacts that would not have been captured with the conventional static approach. Overall, while conducting full time-series fault studies may become prohibitively burdensome, these findings indicate that existing fault study practices may be improved by including additional test scenarios to better capture the time-varying impacts of PV on hosting capacity protection constraints.
Ultra-Wide-Bandgap semiconductors hold great promise for future power conversion applications. Figures of Merit (FOMs) are often used as a first means to understand the impact of semiconductor material parameters on power semiconductor performance, and in particular the Unipolar (or Baliga) FOM is often cited for this purpose. However, several factors of importance for Ultra-Wide-Bandgap semiconductors are not considered in the standard treatment of this FOM. For example, the Critical Field approximation has many shortcomings, and alternative transport mechanisms and incomplete dopant ionization are typically neglected. This paper presents the results of a study aimed at incorporating some of these effects into more realistic FOM calculations.
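For reference, in its standard critical-field form the unipolar (Baliga) figure of merit and the corresponding ideal specific on-resistance of a unipolar drift region are

\[ \mathrm{BFOM} = \varepsilon_s\,\mu_n\,E_C^{3}, \qquad R_{\mathrm{on,sp}} = \frac{4\,V_B^{2}}{\varepsilon_s\,\mu_n\,E_C^{3}}, \]

where \(\varepsilon_s\) is the semiconductor permittivity, \(\mu_n\) the electron mobility, \(E_C\) the critical electric field, and \(V_B\) the target breakdown voltage. The effects examined in the paper, such as incomplete dopant ionization, alternative transport mechanisms, and the limits of the critical-field approximation itself, change the effective values of \(\mu_n\) and \(E_C\) that should enter these expressions.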
This user’s guide documents capabilities in Sierra/SolidMechanics which remain “in-development” and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.4 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
Integrating recent advancements in resilient algorithms and techniques into existing codes is a singular challenge in fault tolerance - in part due to the underlying complexity of implementing resilience in the first place, but also due to the difficulty introduced when integrating the functionality of a standalone new strategy with the preexisting resilience layers of an application. We propose that the answer is not to build integrated solutions for users, but runtimes designed to integrate into a larger comprehensive resilience system and thereby enable the necessary jump to multi-layered recovery. Our work designs, implements, and verifies one such comprehensive system of runtimes. Utilizing Fenix, a process resilience tool with integration into preexisting resilience systems as a design priority, we update Kokkos Resilience and the use pattern of VeloC to support application-level integration of resilience runtimes. Our work shows that designing integrable systems rather than integrated systems allows for user-designed optimization and upgrading of resilience techniques while maintaining the simplicity and performance of all-in-one resilience solutions. More application-specific choice in resilience strategies allows for better long-term flexibility, performance, and - importantly - simplicity.
Neural networks (NNs) have been increasingly proposed as surrogates for the approximation of systems with computationally expensive physics, enabling rapid online evaluation or exploration. As these surrogate models are integrated into larger optimization problems used for decision making, there is a need to verify their behavior to ensure adequate performance over the desired parameter space. We extend the ideas of optimization-based neural network verification to provide guarantees of surrogate performance over the feasible optimization space. In doing so, we present formulations to represent neural networks within decision-making problems, and we develop verification approaches that use model constraints to provide increasingly tight error estimates. We demonstrate the capabilities on a simple steady-state reactor design problem.
Numerical simulations of Greenland and Antarctic ice sheets involve the solution of large-scale highly nonlinear systems of equations on complex shallow geometries. This work is concerned with the construction of Schwarz preconditioners for the solution of the associated tangent problems, which are challenging for solvers mainly because of the strong anisotropy of the meshes and wildly changing boundary conditions that can lead to poorly constrained problems on large portions of the domain. Here, two-level generalized Dryja-Smith-Widlund (GDSW)-type Schwarz preconditioners are applied to different land ice problems, i.e., a velocity problem, a temperature problem, as well as the coupling of the former two problems. We employ the message passing interface (MPI)- parallel implementation of multilevel Schwarz preconditioners provided by the package FROSch (fast and robust Schwarz) from the Trilinos library. The strength of the proposed preconditioner is that it yields out-of-the-box scalable and robust preconditioners for the single physics problems. To the best of our knowledge, this is the first time two-level Schwarz preconditioners have been applied to the ice sheet problem and a scalable preconditioner has been used for the coupled problem. The preconditioner for the coupled problem differs from previous monolithic GDSW preconditioners in the sense that decoupled extension operators are used to compute the values in the interior of the subdomains. Several approaches for improving the performance, such as reuse strategies and shared memory OpenMP parallelization, are explored as well. In our numerical study we target both uniform meshes of varying resolution for the Antarctic ice sheet as well as nonuniform meshes for the Greenland ice sheet. We present several weak and strong scaling studies confirming the robustness of the approach and the parallel scalability of the FROSch implementation. Among the highlights of the numerical results are a weak scaling study for up to 32 K processor cores (8 K MPI ranks and 4 OpenMP threads) and 566 M degrees of freedom for the velocity problem as well as a strong scaling study for up to 4 K processor cores (and MPI ranks) and 68 M degrees of freedom for the coupled problem.
The Multi-Fidelity Toolkit (MFTK) is a simulation tool being developed at Sandia National Laboratories for aerodynamic predictions of compressible flows over a range of physics fidelities and computational speeds. These models include the Reynolds-Averaged Navier–Stokes (RANS) equations, the Euler equations, and modified Newtonian aerodynamics (MNA) equations, and they can be invoked independently or coupled with hierarchical Kriging to interpolate between high-fidelity simulations using lower-fidelity data. However, as with any new simulation capability, verification and validation are necessary to gather credibility evidence. This work describes formal code- and solution-verification activities. Code verification is performed on the MNA model by comparing with an analytical solution for flat-plate and inclined-plate geometries. Solution-verification activities include grid-refinement studies, at all model fidelities, of simulations of the HIFiRE-1 wind tunnel experiments, which are also used for validation.
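For context, the sketch below shows the standard grid-refinement calculation that underlies such solution-verification studies: the observed order of accuracy and a Richardson-extrapolated estimate computed from solutions on three systematically refined grids. The quantity and values are illustrative, not MFTK results.

```python
# Sketch: observed order of accuracy and Richardson extrapolation from three
# systematically refined grids (constant refinement ratio r), the standard
# machinery behind grid-refinement solution-verification studies.
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from coarse/medium/fine solutions."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate of the grid-converged value from the two finest solutions."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical integrated quantity (e.g., a drag coefficient) on grids refined by 2x.
f_coarse, f_medium, f_fine = 1.250, 1.210, 1.198
p = observed_order(f_coarse, f_medium, f_fine, r=2.0)
f_star = richardson_extrapolate(f_medium, f_fine, r=2.0, p=p)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_star:.4f}")
```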
Ship emissions can form linear cloud structures, or ship tracks, when atmospheric water vapor condenses on aerosols in the ship exhaust. These structures are of interest because they are observable and traceable examples of marine cloud brightening (MCB), a mechanism that has been studied as a potential approach for solar climate intervention. Ship tracks can be observed throughout the diurnal cycle via space-borne assets like the Advanced Baseline Imagers on the National Oceanic and Atmospheric Administration (NOAA) Geostationary Operational Environmental Satellites, the GOES-R series. Due to complex atmospheric dynamics, it can be difficult to track these aerosol perturbations over space and time to precisely characterize how long a single emission source can significantly contribute to indirect radiative forcing. We propose an optical flow approach to estimate the trajectories of ship-emitted aerosols after they begin mixing with low boundary layer clouds using GOES-17 satellite imagery. Most optical flow estimation methods have only been used to estimate large-scale atmospheric motion. We demonstrate the ability of our approach to precisely isolate the movement of ship tracks in low-lying clouds from the movement of large swaths of high clouds that often dominate the scene. This efficient approach shows that ship tracks persist as visible, linear features beyond 9 h and sometimes longer than 24 h.
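The basic displacement-field idea is illustrated below with OpenCV's dense Farnebäck optical flow on a synthetic pair of frames, where a smoothed random "cloud field" is advected by a known shift; the paper's estimator, its parameters, and its separation of low clouds from overlying high clouds are more involved and are not reproduced here.

```python
# Sketch (assumptions): dense Farneback optical flow between two synthetic frames
# emulating an advected cloud field. The flow parameters and synthetic data are
# illustrative; the paper's method for isolating ship tracks differs in detail.
import cv2
import numpy as np

rng = np.random.default_rng(0)
base = (255 * rng.random((256, 256))).astype(np.uint8)
frame_t0 = cv2.GaussianBlur(base, (0, 0), 5)                 # smooth "cloud" texture
frame_t1 = np.roll(frame_t0, shift=(2, 3), axis=(0, 1))      # advect by (dy, dx) = (2, 3)

# Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(frame_t0, frame_t1, None,
                                    0.5, 3, 31, 3, 7, 1.5, 0)

# Average displacement over the interior (flow[..., 0] is dx, flow[..., 1] is dy).
dx = flow[32:-32, 32:-32, 0].mean()
dy = flow[32:-32, 32:-32, 1].mean()
print(f"estimated displacement: dx ~ {dx:.1f}, dy ~ {dy:.1f} (true shift: 3, 2)")
```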
Heterogeneous computing is becoming common in the HPC world. The fast-changing hardware landscape is pushing programmers and developers to rely on performance-portable programming models to rewrite old and legacy applications and develop new ones. While this approach is suitable for individual applications, outstanding challenges still remain when multiple applications are combined into complex workflows. One critical difficulty is the exchange of data between communicating applications where performance constraints imposed by heterogeneous hardware advantage different data layouts. We attempt to solve this problem by exploring asynchronous data layout conversions for applications requiring different memory access patterns for shared data. We implement the proposed solution within the DataSpaces data staging service, extending it to support heterogeneous application workflows across a broad spectrum of programming models. In addition, we integrate heterogeneous DataSpaces with the Kokkos programming model and propose the Kokkos Staging Space as an extension of the Kokkos data abstraction. This new abstraction enables us to express data on a virtual shared space for multiple Kokkos applications, thus guaranteeing the portability of each application when assembling them into an efficient heterogeneous workflow. We present performance results for the Kokkos Staging Space using a synthetic workflow emulator and three different scenarios representing access frequency and use patterns in shared data. The results show that the Kokkos Staging Space is a superior solution in terms of time-to-solution and scalability compared to existing file-based Kokkos data abstractions for inter-application data exchange.
As transistors have been scaled over the past decade, modern systems have become increasingly susceptible to faults. Increased transistor densities and lower capacitances make a particle strike more likely to cause an upset. At the same time, complex computer systems are increasingly integrated into safety-critical systems such as autonomous vehicles. These two trends make the study of system reliability and fault tolerance essential for modern systems. To analyze and improve system reliability early in the design process, new tools are needed for RTL fault analysis. This paper proposes Eris, a novel framework to identify vulnerable components in hardware designs through fault injection and fault propagation tracking. Eris builds on ESSENT - a fast C/C++ RTL simulation framework - to provide fault injection, fault tracking, and control-flow deviation detection capabilities for RTL designs. To demonstrate Eris' capabilities, we analyze the reliability of the open-source Rocket Chip SoC by randomly injecting faults during thousands of runs on four microbenchmarks. As part of this analysis we measure the sensitivity of different hardware structures to faults based on the likelihood of a random fault causing silent data corruption, unrecoverable data errors, program crashes, and program hangs. We detect control flow deviations and determine whether or not they are benign. Additionally, using Eris' novel fault-tracking capabilities we are able to find 78% more vulnerable components in the same number of simulations compared to RTL-based fault injection techniques without these capabilities. We will release Eris as an open-source tool to aid future research into processor reliability and hardening.
Demonstration of broadband nanosecond output from a burst-mode-pumped noncolinear optical parametric oscillator (NOPO) has been achieved at 40 kHz. The NOPO is pumped by 355-nm output at 50 mJ/pulse for 45 pulses. A bandwidth of 540 cm⁻¹ was achieved from the OPO with a conversion efficiency of 10% for 5 mJ/pulse. Higher bandwidths up to 750 cm⁻¹ were readily achievable at reduced performance and beam quality. The broadband NOPO output was used for a planar BOXCARS phase matching scheme for N2 CARS measurements in a near adiabatic H2/air flame. Single-shot CARS measurements were taken for equivalence ratios of φ = 0.52-0.86 for temperatures up to 2200 K.
Interest in the application of DC Microgrids to distribution systems has been spurred by the continued rise of renewable energy resources and the dependence on DC loads. However, in comparison to AC systems, the lack of a natural zero crossing in DC Microgrids makes the interruption of fault currents with fuses and circuit breakers more difficult. DC faults can cause severe damage to voltage-source converters within a few milliseconds; hence, faults need to be detected and isolated quickly. In this paper, the potential for five different Machine Learning (ML) classifiers to identify fault type and fault resistance in a DC Microgrid is explored. The ML algorithms are trained using simulated fault data recorded from a 750 VDC Microgrid modeled in PSCAD/EMTDC. The performance of the trained algorithms is tested using real fault data gathered from an operational DC Microgrid located on Kirtland Air Force Base. Of the five ML algorithms, three could detect the fault and determine the fault type with at least 99% accuracy, and only one could estimate the fault resistance with at least 99% accuracy. By performing self-learning monitoring and decision-making analysis, protection relays equipped with ML algorithms can quickly detect and isolate faults to improve protection operations on DC Microgrids.
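A minimal sketch of how such classifier comparisons are typically set up is shown below: several standard scikit-learn models are trained on features extracted from simulated fault records and scored on separately collected field records. The feature names, model choices, and random placeholder data are illustrative assumptions, not the paper's classifiers or datasets.

```python
# Sketch (assumptions): training classifiers on simulated DC-microgrid fault
# features and scoring them on field data. Features, models, and the random
# placeholder arrays below are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical per-event features: [di/dt, dv/dt, steady-state current, voltage dip]
X_sim = rng.standard_normal((500, 4))
y_sim = rng.integers(0, 3, size=500)     # e.g., 0: pole-pole, 1: pole-ground, 2: no fault
X_field = rng.standard_normal((60, 4))   # stand-in for measured field records
y_field = rng.integers(0, 3, size=60)

models = [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
          ("k-NN", KNeighborsClassifier(n_neighbors=5)),
          ("SVM", SVC(kernel="rbf"))]
for name, clf in models:
    clf.fit(X_sim, y_sim)
    print(f"{name}: accuracy on field data = {clf.score(X_field, y_field):.2f}")
```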
Time-resolved, absolute number densities of metastable N2(A³Σu⁺, v = 0, 1) molecules, ground state N2 and H atoms, and rotational-translational temperature have been measured by tunable diode laser absorption spectroscopy and two-photon absorption laser-induced fluorescence in diffuse N2 and N2-H2 plasmas during and after a nanosecond pulse discharge burst. Comparison of the measurement results with the kinetic modeling predictions, specifically the significant reduction of the N2(A³Σu⁺) populations and the rate of N atom generation during the burst, suggests that these two trends are related. The slow N atom decay in the afterglow, on a time scale longer than the discharge burst, demonstrates that the latter trend is not affected by N atom recombination, diffusion to the walls, or convection with the flow. This leads to the conclusion that the energy pooling in collisions of N2(A³Σu⁺) molecules is a major channel of N2 dissociation in electric discharges where a significant fraction of the input energy goes to electronic excitation of N2. Additional measurements in a 1% H2-N2 mixture demonstrate a further significant reduction of N2(A³Σu⁺, v = 0, 1) populations, due to the rapid quenching by H atoms accumulating in the plasma. Comparison with the modeling predictions suggests that the N2(A³Σu⁺) molecules may be initially formed in the highly vibrationally excited states. The reduction of the N2(A³Σu⁺) number density also diminishes the contribution of the energy pooling process into N2 dissociation, thus reducing the N atom number density. The rate of N atom generation during the burst also decreases, due to its strong coupling to N2(A³Σu⁺, v) populations. On the other hand, the rate of H atom generation, produced predominantly by the dissociative quenching of the excited electronic states of N2 by H2, remains about the same during the burst, resulting in a nearly linear rise in the H atom number density. Comparison of the kinetic model predictions with the experimental results suggests that the yield of H atoms during the quenching of the excited electronic state of N2 by molecular H2 is significantly less than 100%. The present results quantify the yield of N and H atoms in high-pressure H2-N2 plasmas, which have significant potential for ammonia generation using plasma-assisted catalysis.
We used a micro-fabricated fused silica light guide plate to uniformly illuminate a GaAs photovoltaic array with a fiber-coupled 808 nm laser. Greater than 1 Watt of galvanically-isolated electrical power was generated from this compact edge-illuminated monochromatic photovoltaic module.