Event-based sensors are a novel sensing technology that captures the dynamics of a scene via pixel-level change detection. This technology operates with high speed (>10 kHz), low latency (10 µs), low power consumption (<1 W), and high dynamic range (120 dB). Compared to conventional, frame-based architectures that continuously report data for each pixel at a given frame rate, event-based sensor pixels only report data if a change in pixel intensity occurred. This affords the possibility of dramatically reducing the data reported in bandwidth-limited environments (e.g., remote sensing), and thus the data that must be processed, while still recovering significant events. Degraded visual environments, such as those generated by fog, often hinder situational awareness by decreasing optical resolution and transmission range via random scattering of light. To respond to this challenge, we present the deployment of an event-based sensor in a controlled, experimentally generated, well-characterized degraded visual environment (a fog analogue) for detection of a modulated signal, and a comparison of data collected from an event-based sensor and from a traditional framing sensor.
We consider the intersection between nonrepeating random FM (RFM) waveforms and practical forms of optimal mismatched filtering (MMF). Specifically, the spectrally-shaped inverse filter (SIF) is a well-known approximation to the least-squares MMF (LS-MMF) that provides significant computational savings. Given that nonrepeating waveforms likewise require unique nonrepeating MMFs, this efficient form is an attractive option. Moreover, both RFM waveforms and the SIF rely on spectrum shaping, which establishes a relationship between the goodness of a particular waveform and the mismatch loss (MML) the corresponding filter can achieve. Both simulated and open-air experimental results are shown to demonstrate performance.
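The SIF idea can be illustrated in a few lines: the filter spectrum is a desired window divided by the waveform spectrum, so the filtered response inherits the window's shape by construction. The FFT size, Gaussian window, and random-phase-walk waveform below are illustrative assumptions, not the paper's designs.

```python
import numpy as np

def sif(s, window):
    """Spectrally-shaped inverse filter sketch: H(f) = W(f) / S(f), so the
    filtered response of waveform s has spectrum W(f) by construction.
    (Illustrative only; practical SIFs add regularization near spectral nulls.)"""
    S = np.fft.fft(s, len(window))
    return window / S

# a nonrepeating random FM stand-in: unit-modulus samples, random phase walk
rng = np.random.default_rng(0)
s = np.exp(1j * np.cumsum(rng.uniform(-1.0, 1.0, 256)))

N = 1024                                              # zero-padded filter length
window = np.exp(-0.5 * ((np.arange(N) - N / 2) / (N / 8)) ** 2)
y = np.fft.ifft(sif(s, window) * np.fft.fft(s, N))    # mismatched-filter output
```

Because both the waveform and the filter shape the same spectrum, a poorly shaped waveform forces a more aggressive inverse filter and thus a larger mismatch loss.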
Sandia National Laboratories and Idaho National Laboratory deployed state-of-the-art cybersecurity technologies within a virtualized, cyber-physical wind energy site to demonstrate their impact on security and resilience. This work was designed to better quantify cost-benefit tradeoffs and risk reductions when layering different security technologies on wind energy operational technology networks. Standardized step-by-step attack scenarios were drafted for adversaries with remote and local access to the wind network. Then, the team investigated the impact of encryption, access control, intrusion detection, security information and event management, and security orchestration, automation, and response (SOAR) tools on multiple metrics, including physical impacts to the power system and termination of the adversary kill chain. We found that, once programmed, the intrusion detection systems could detect attacks and the SOAR system was able to effectively and autonomously quarantine the adversary prior to power system impacts. Cyber and physical metrics indicated network and endpoint visibility were essential to provide human defenders the situational awareness to maintain system resilience. Certain hardening technologies, like encryption, reduced adversary access, but recognition and response were also critical to maintaining wind site operations. Lastly, a cost-benefit analysis was performed to estimate payback periods for deploying cybersecurity technologies based on projected breach costs.
Multiple Input Multiple Output (MIMO) vibration testing provides the capability to expose a system to a field environment in a laboratory setting, saving both time and money by mitigating the need to perform multiple and costly large-scale field tests. However, MIMO vibration test design is not straightforward, oftentimes relying on engineering judgment and multiple test iterations to determine the proper selection of response Degree of Freedom (DOF) and input locations that yield a successful test. This work investigates two DOF selection techniques for MIMO vibration testing to assist with test design: an iterative algorithm introduced in previous work and an Optimal Experiment Design (OED) approach. The iterative approach downselects the control set by removing the DOF that have the smallest impact on overall error given a target Cross Power Spectral Density (CPSD) matrix and a laboratory Frequency Response Function (FRF) matrix. The OED approach is formulated with the laboratory FRF matrix as a convex optimization problem and solved with a gradient-based optimization algorithm that seeks a set of weighted measurement DOF that minimizes a measure of model prediction uncertainty. The DOF selection approaches are used to design MIMO vibration tests using candidate finite element models and simulated target environments. The results are generalized and compared to exemplify the quality of the MIMO test using the selected DOF.
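The iterative downselection can be sketched as a greedy loop: given a laboratory FRF matrix and a target CPSD, repeatedly drop the control DOF whose removal least degrades reproduction of the target. The function names, error metric, and single-frequency-line simplification below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def greedy_dof_downselect(H, Sxx_target, n_keep):
    """Greedy control-DOF downselection sketch.
    H          : (n_resp, n_in) laboratory FRF matrix at one frequency line
    Sxx_target : (n_resp, n_resp) target CPSD matrix
    n_keep     : number of control DOF to retain."""
    active = list(range(H.shape[0]))

    def recon_error(idx):
        Hs = H[idx, :]
        Hp = np.linalg.pinv(Hs)
        # least-squares input CPSD that reproduces the target on subset idx
        Sff = Hp @ Sxx_target[np.ix_(idx, idx)] @ Hp.conj().T
        # resulting response CPSD over the *full* DOF set
        Sxx = H @ Sff @ H.conj().T
        return np.linalg.norm(Sxx - Sxx_target)

    while len(active) > n_keep:
        # drop the DOF whose removal increases the error least
        errs = [recon_error([d for d in active if d != c]) for c in active]
        active.pop(int(np.argmin(errs)))
    return active
```

In practice the error would be accumulated over all frequency lines of interest rather than a single line as shown here.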
In this study, we develop an end-to-end deep learning-based inverse design approach to determine the scatterer shape necessary to achieve a target acoustic field. This approach integrates non-uniform rational B-splines (NURBS) into a convolutional autoencoder (CAE) architecture while concurrently leveraging (in a weak sense) the governing physics of the acoustic problem. By utilizing prior physical knowledge and NURBS parameterization to regularize the ill-posed inverse problem, this method does not require enforcing any geometric constraint on the inverse design space, hence allowing the determination of scatterers with potentially any arbitrary shape (within the set allowed by NURBS). A numerical study is presented to showcase the ability of this approach to identify physically-consistent scatterer shapes capable of producing user-defined acoustic fields.
Thiagarajan, Raghav S.; Subramaniam, Akshay; Kolluri, Suryanarayana; Garrick, Taylor R.; Preger, Yuliya; De Angelis, Valerio; Lim, Jin H.; Subramanian, Venkat R.
Lithium-ion batteries are typically modeled using porous electrode theory coupled with various transport and reaction mechanisms, along with suitable discretization or approximations for the solid-phase diffusion equation. The solid-phase diffusion equation represents the main computational burden for typical pseudo-2-dimensional (p2D) models since these equations in the pseudo r-dimension must be solved at each point in the computational grid. This substantially increases the complexity of the model as well as the computational time. Traditional approaches towards simplifying solid-phase diffusion possess certain significant limitations, especially in modeling emerging electrode materials which involve phase changes and variable diffusivities. A computationally efficient representation for solid-phase diffusion is discussed in this paper based on symmetric polynomials using Orthogonal Collocation and Galerkin formulation (weak form). A systematic approach is provided to increase the accuracy of the approximation (p form in finite element methods) to enable efficient simulation with a minimal number of semi-discretized equations, ensuring mass conservation even for non-linear diffusion problems involving variable diffusivities. These methods are then demonstrated by incorporation into the full p2D model, illustrating their advantages in simulating high C-rates and short-time dynamic operation of lithium-ion batteries.
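For context, the simplest member of this family of low-order surrogates is the classical two-parameter parabolic-profile approximation for a sphere with constant diffusivity: a mass balance for the average concentration plus an algebraic closure for the surface value. This is a far simpler relative of the symmetric-polynomial collocation/Galerkin schemes the paper develops; the sign convention (j > 0 = outward, delithiation flux) and parameter values below are illustrative assumptions.

```python
def parabolic_profile_step(c_avg, j, R, D):
    """Two-parameter polynomial (parabolic-profile) approximation for
    solid-phase diffusion in a sphere with constant diffusivity D and
    radius R under surface flux j. Returns (dc_avg/dt, c_surf)."""
    dc_avg_dt = -3.0 * j / R            # particle-level mass balance
    c_surf = c_avg - j * R / (5.0 * D)  # parabolic-profile closure
    return dc_avg_dt, c_surf

# explicit-Euler time march at constant flux (illustrative parameter values)
c_avg, j, R, D, dt = 1.0, 1e-9, 1e-5, 1e-14, 1.0
for _ in range(10):
    dcdt, c_surf = parabolic_profile_step(c_avg, j, R, D)
    c_avg += dt * dcdt
```

The mass balance guarantees conservation by construction for constant D; it is precisely this property that becomes nontrivial, and must be enforced, for the variable-diffusivity problems the paper addresses.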
This study investigated the durability of four high temperature coatings for use as a Gardon gauge foil coating. Failure modes and effects analysis has identified the Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity properties were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest rapid high temperature cycling did not significantly impact coating optical properties and physical state. In contrast, prolonged exposure of coatings to high temperatures degraded coating optical properties and physical state. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6-24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provides the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest flux gauge foil coatings could benefit from long duration high temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high flux and high temperature applications.
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and Light Water Reactor Sustainability Programs, have conducted testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 91.4 cm in depth) and more complex three-dimensional (circular cross sections of longer lengths up to 9.1 m and changes in direction) opening configurations. The primary impact of this effort is to define the physical designs through which an adversary could successfully pass, including potentially complex openings, as well as the designs that an adversary would not be expected to successfully traverse. These data can then be used to support risk-informed decision making.
7th IEEE Electron Devices Technology and Manufacturing Conference: Strengthen the Global Semiconductor Research Collaboration After the Covid-19 Pandemic, EDTM 2023
This paper presents an assessment of electrical device measurements using functional data analysis (FDA) on a test case of Zener diode devices. We employ three techniques from FDA to quantify the variability in device behavior, primarily due to production lot, and demonstrate that this variability has a significant effect in our data set. We also argue for the expanded use of FDA methods in providing principled, quantitative analysis of electrical device data.
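One common FDA technique for quantifying curve-to-curve variability is functional principal component analysis, which can be sketched as an SVD of mean-centered sampled curves. The synthetic "device curves" and two-lot structure below are placeholders, not the paper's Zener diode data or its specific three techniques.

```python
import numpy as np

def functional_pca(curves, n_components=2):
    """Functional PCA sketch: rows are devices, columns are a shared
    measurement grid; returns the mean curve, principal variation modes,
    per-device scores, and the fraction of variance each mode explains."""
    mean = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # per-device scores
    modes = Vt[:n_components]                         # principal modes
    var_explained = (s**2 / (s**2).sum())[:n_components]
    return mean, modes, scores, var_explained

# two synthetic 'production lots' whose curves differ by a constant offset
rng = np.random.default_rng(1)
v = np.linspace(0.0, 1.0, 50)
lot_a = np.exp(v) + 0.01 * rng.standard_normal((10, 50))
lot_b = np.exp(v) + 0.5 + 0.01 * rng.standard_normal((10, 50))
mean, modes, scores, ve = functional_pca(np.vstack([lot_a, lot_b]))
```

When lot-to-lot variation dominates, the first mode captures it and the two lots separate cleanly in the score plot, which is the kind of principled, quantitative evidence the abstract argues for.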
The block version of GMRES (BGMRES) is most advantageous over the single right-hand side (RHS) counterpart when the cost of communication is high while the cost of floating point operations is not. This is particularly the case on modern graphics processing units (GPUs), while it is generally not the case on traditional central processing units (CPUs). In this paper, experiments on both GPUs and CPUs are shown that compare the performance of BGMRES against GMRES as the number of RHS increases, with a particular focus on GPU performance. The experiments indicate that there are many cases in which BGMRES is slower than GMRES on CPUs, but faster on GPUs. Furthermore, when varying the number of RHS on the GPU, there is an optimal number of RHS at which BGMRES is most clearly advantageous over GMRES. A computational model for the GPU is developed using hardware-specific parameters, providing insight into how the qualitative behavior of BGMRES changes as the number of RHS increases; this model also helps explain the phenomena observed in the experiments.
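The existence of an interior optimal block size can be seen with a toy per-RHS cost model: each block iteration pays a fixed kernel-launch/latency cost plus throughput costs for the SpMM with s right-hand sides and the O(s²) block orthogonalization, amortized over the s RHS. The parameter values below are illustrative stand-ins, not the paper's calibrated, hardware-specific model.

```python
def time_per_rhs(s, alpha=1e-3, beta=1e-10, nnz=1_000_000, n=100_000, iters=200):
    """Toy amortized cost of BGMRES with block size s:
    alpha = fixed latency per iteration (kernel launches, synchronization),
    beta  = time per floating point operation,
    nnz*s = SpMM work, 2*n*s*s = block orthogonalization work."""
    t_iter = alpha + beta * (nnz * s + 2 * n * s * s)
    return iters * t_iter / s   # amortized over the s right-hand sides

# amortized cost first falls with s (latency amortization), then rises as
# the O(s^2) work dominates, giving an interior optimal block size
best_s = min(range(1, 65), key=time_per_rhs)
```

On a CPU, alpha is small relative to the flop cost, the latency-amortization benefit vanishes, and the model predicts s = 1 is best, mirroring the CPU/GPU contrast observed in the experiments.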
The generalized Dryja-Smith-Widlund (GDSW) preconditioner is a two-level overlapping Schwarz domain decomposition (DD) preconditioner that couples a classical one-level overlapping Schwarz preconditioner with an energy-minimizing coarse space. When used to accelerate the convergence rate of Krylov subspace iterative methods, the GDSW preconditioner provides robustness and scalability for the solution of sparse linear systems arising from the discretization of a wide range of partial differential equations. In this paper, we present FROSch (Fast and Robust Schwarz), a domain decomposition solver package which implements GDSW-type preconditioners for both CPU and GPU clusters. To improve the solver performance on GPUs, we use a novel decomposition to run multiple MPI processes on each GPU, reducing both the solver's computational and storage costs and potentially improving the convergence rate. This allowed us to obtain competitive or faster performance using GPUs compared to using CPUs alone. We demonstrate the performance of FROSch on the Summit supercomputer with NVIDIA V100 GPUs, where we used NVIDIA Multi-Process Service (MPS) to implement our decomposition strategy. The solver has a wide variety of algorithmic and implementation choices, which poses both opportunities and challenges for its GPU implementation. We conduct a thorough experimental study with different solver options, including the exact or inexact solution of the local overlapping subdomain problems on a GPU. We also discuss the effect of using the iterative variant of the incomplete LU factorization and sparse-triangular solve as the approximate local solver, and of using lower precision for computing the whole FROSch preconditioner. Overall, the solve time was reduced by factors of about 2× using GPUs, while the GPU acceleration of the numerical setup time depends on the solver options and the local matrix sizes.
This presentation describes a new effort to better understand insulator flashover in high current, high voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashover events that initiate at the anode triple junction (anode-vacuum-dielectric).
With increasing penetration of variable renewable generation, battery energy storage systems (BESS) are becoming important for power system stability due to their operational flexibility. In this paper, we propose a method for determining the minimum BESS rated power that guarantees security constraints in a grid subject to disturbances induced by variable renewable generation. The proposed framework leverages sensitivity-based inverse uncertainty propagation where the dynamical responses of the states are parameterized with respect to random variables. Using this approach, the original nonlinear optimization problem for finding the security-constrained uncertainty interval may be formulated as a quadratically-constrained linear program. The resulting estimated uncertainty interval is utilized to find the BESS rated power required to satisfy grid stability constraints.
This work developed a methodology for transmission line modeling of cable installations to predict the propagation of conducted high altitude electromagnetic pulses in a substation or generating plant. The methodology was applied to a termination cabinet example that was modeled with SPICE transmission line elements with information from electromagnetic field modeling and with validation using experimental data. The experimental results showed reasonable agreement with the modeled propagating pulse, and the methodology can be applied to other installation structures in the future.
We study both conforming and non-conforming versions of the practical DPG method for the convection-reaction problem. We determine that the most common approach for DPG stability analysis - construction of a local Fortin operator - is infeasible for the convection-reaction problem. We then develop a line of argument based on a direct proof of discrete stability; we find that employing a polynomial enrichment for the test space does not suffice for this purpose, motivating the introduction of a (two-element) subgrid mesh. The argument combines mathematical analysis with numerical experiments.
Soft magnetic composites (SMCs) offer a promising alternative to electrical steels and soft ferrites in high performance motors and power electronics. They are ideal for incorporation into passive electronic components such as inductors and transformers, which require a non-permanent magnetic core to rapidly switch magnetization. As a result, there is a need for materials with the right combination of low coercivity, low magnetic remanence, high relative permeability, and high saturation magnetization to achieve these goals. Iron nitride is an attractive soft magnetic material for incorporation into an amine/epoxy resin matrix. This permits the synthesis of net-shaped SMCs using a “bottom-up” approach for overcoming the limitations of current state-of-the-art SMCs made via conventional powder metal processing techniques. In this work we present the fabrication of various net-shaped, iron nitride-based SMCs using two different amine/epoxy resin systems and their magnetic characterization. The maximum volume loading of iron nitride reached was ∼77% via hot pressing, which produced SMCs with a saturation magnetic polarization (Js) of ∼0.9 T, roughly 2–3 times the Js of soft ferrites.
The current interest in hypersonic flows and the growing importance of plasma applications necessitate the development of diagnostics for high-enthalpy flow environments. Reliable and novel experimental data at relevant conditions will drive engineering and modeling efforts forward significantly. This study demonstrates the usage of nanosecond Coherent Anti-Stokes Raman Scattering (CARS) to measure temperature in an atmospheric, high-temperature (> 5500 K) air plasma. The experimental configuration is of interest as the plasma is close to thermodynamic equilibrium and the setup is a test-bed for heat shield materials. The determination of the non-resonant background at such high-temperatures is explored and rotational-vibrational equilibrium temperatures of the N2 ground state are determined via fits of the theory to measured spectra. Results show that the accuracy of the temperature measurements is affected by slow periodic variations in the plasma, causing sampling error. Moreover, depending on the experimental configuration, the measurements can be affected by two-beam interaction, which causes a bias towards lower temperatures, and stimulated Raman pumping, which causes a bias towards higher temperatures. The successful demonstration of CARS at the present conditions, and the exploration of its sensitivities, paves the way towards more complex measurements, e.g. close to interfaces in high-enthalpy plasma flows.
Sandia National Laboratories has conducted geomechanical analysis to evaluate the performance of the Strategic Petroleum Reserve by modeling the viscoplastic, or creep, behavior of the salt in which their oil-storage caverns reside. The operation-driven imbalance between fluid pressure within the salt cavern and in-situ stress acting on the surrounding salt can cause the salt to creep, potentially leading to a loss of the cavern volume and consequently deformation of borehole casings. Therefore, a greater understanding of the effect of salt creep on borehole casings is needed to inform cavern operations decisions. To evaluate potential casing damage mechanisms with variation in geological constraints (e.g. material characteristics of salt or caprock) or physical mechanisms of cavern leakage, we developed a generic model with a layered and domal geometry including nine caverns, rather than use a specific field-site model, to save computational costs. The geomechanical outputs, such as cavern volume changes, vertical strain along the dome and caprock above the cavern, and vertical displacement at the surface or cavern top, quantify the impact of material parameters and cavern locations, as well as multiple operations in multiple caverns, on an individual cavern's stability.
We present a design paradigm based on topological charge splitting for creating nearly-degenerate, high-quality factor (Q) states with arbitrary polarization states in all-dielectric metasurfaces.
A comprehensive control strategy is necessary to safely and effectively operate particle-based concentrating solar power (CSP) technologies. Particle-based CSP with thermal energy storage (TES) is an emerging technology with potential to decarbonize power and process heat applications. The high-temperature nature of particle-based CSP technologies and daily solar transients present challenges for system control to prevent equipment damage and ensure operator safety. An operational controls strategy for a tower-based particle CSP system during steady-state and transient conditions, with safety interlocks, is described in this paper. Control of a solar-heated particle recirculation loop, TES, and a supercritical carbon dioxide (sCO2) cooling loop designed to reject 1 MW of thermal power are considered, and the associated operational limitations and their influence on control strategy are discussed.
This paper presents a die-embedded glass interposer with minimal warpage for 5G/6G applications. The interposer achieves high integration with low-loss interconnects by embedding multiple chips in the same glass substrate and interconnecting the chips through redistribution layers (RDL). Novel processes for cavity creation, multi-die embedding, carrier-less RDL build-up, and heat spreader attachment are proposed and demonstrated in this work. Performance of the interposer from 1 GHz to 110 GHz is evaluated. This work provides an advanced packaging solution for low-loss die-to-die and die-to-package interconnects, which is essential to high performance wireless system integration.
Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.
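The warm-starting step amounts to behavior cloning: fit a policy to state/action pairs logged from a classical guidance controller before any RL updates. A minimal sketch, assuming a linear policy and a hypothetical linear expert (the paper's policy network, guidance law, and PPO fine-tuning stage are not reproduced here):

```python
import numpy as np

def warm_start_policy(states, expert_actions, ridge=1e-3):
    """Behavior-cloning warm start sketch: fit a linear policy a = s @ W
    to logged expert state/action pairs via ridge-regularized least
    squares. RL fine-tuning (e.g. PPO) would then start from these
    weights instead of a random initialization."""
    S = np.asarray(states)
    A = np.asarray(expert_actions)
    W = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ A)
    return W

# hypothetical linear 'classical guidance' expert for demonstration
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 4))          # assumed expert gains
S = rng.standard_normal((500, 6))        # logged states
W = warm_start_policy(S, S @ K)          # cloned policy recovers the expert
```

Starting PPO from a policy that already imitates competent guidance places early exploration in a region of state space where the sparse success signal is reachable, which is the mechanism the success-rate comparison above reflects.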
A comprehensive study of the mechanical response of a 316 stainless steel is presented. The split-Hopkinson bar technique was used to evaluate the mechanical behavior at dynamic strain rates of 500 s−1, 1500 s−1, and 3000 s−1 and temperatures of 22 °C and 300 °C under tension and compression loading, while the Drop-Hopkinson bar was used to characterize the tension behavior at an intermediate strain rate of 200 s−1. The experimental results show that the tension and compression flow stress are reasonably symmetric, exhibit positive strain rate sensitivity, and are inversely dependent on temperature. The true failure strain was determined by measuring the minimum diameter of the post-test tension specimen. The 316 stainless steel exhibited a ductile response, and the true failure strain increased with increasing temperature and decreased with increasing strain rate.
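The true failure strain from a necked tension specimen follows the standard area-reduction relation, which for an axisymmetric specimen reduces to a diameter ratio. The diameters below are illustrative, not the paper's measurements.

```python
import math

def true_failure_strain(d0, df):
    """True (logarithmic) failure strain from initial and post-test minimum
    diameters of an axisymmetric tension specimen:
    eps_f = ln(A0/Af) = 2*ln(d0/df)."""
    return 2.0 * math.log(d0 / df)

eps_f = true_failure_strain(5.0, 2.5)   # e.g. 5 mm necked down to 2.5 mm
```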
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests the buy-down of risk by considering the fireball is minimal when considering the blast hazards. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
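The flux-to-offset linkage can be illustrated with a point-source radiation screening model, a common first-cut tool (not the specific assessment utilities compared in the study): the incident flux falls off with the inverse square of distance, and inverting it at an ignition-threshold flux gives a standoff distance. The radiative fraction and transmissivity values below are assumptions.

```python
import math

def ignition_offset_distance(q_fire_kw, e_crit_kw_m2, chi_r=0.3, tau=1.0):
    """Point-source radiation model: incident flux E = tau*chi_r*Q/(4*pi*R^2),
    solved for the standoff R at which E equals the ignition-threshold flux.
    chi_r (radiative fraction) and tau (transmissivity) are assumed values."""
    return math.sqrt(tau * chi_r * q_fire_kw / (4.0 * math.pi * e_crit_kw_m2))

# e.g. a 100 MW pool fire against a 5 kW/m^2 ignition threshold
r_offset = ignition_offset_distance(100_000.0, 5.0)
```

Solid-flame and fireball-specific models refine this picture near the fire, but the inverse-square scaling is what makes ignition and blast offset distances directly comparable in the analysis above.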
Pulsed dielectric barrier discharges (DBD) in He-H2O and He-H2O-O2 mixtures are studied in near atmospheric conditions using temporally and spatially resolved quantitative 2D imaging of the hydroxyl radical (OH) and hydrogen peroxide (H2O2). The primary goal was to detect and quantify the production of these strongly oxidative species in water-laden helium discharges in a DBD jet configuration, which is of interest for biomedical applications such as disinfection of surfaces and treatment of biological samples. Hydroxyl profiles are obtained by laser-induced fluorescence (LIF) measurements using 282 nm laser excitation. Hydrogen peroxide profiles are measured by photo-fragmentation LIF (PF-LIF), which involves photo-dissociating H2O2 into OH with a 212.8 nm laser sheet and detecting the OH fragments by LIF. The H2O2 profiles are calibrated by measuring PF-LIF profiles in a reference mixture of He seeded with a known amount of H2O2. OH profiles are calibrated by measuring OH-radical decay times and comparing these with predictions from a chemical kinetics model. Two different burst discharge modes with five and ten pulses per burst are studied, both with a burst repetition rate of 50 Hz. In both cases, dynamics of OH and H2O2 distributions in the afterglow of the discharge are investigated. Gas temperatures determined from the OH-LIF spectra indicate that gas heating due to the plasma is insignificant. The addition of 5% O2 in the He admixture decreases the OH densities and increases the H2O2 densities. The increased coupled energy in the ten-pulse discharge increases OH and H2O2 mole fractions, except for the H2O2 in the He-H2O-O2 mixture which is relatively insensitive to the additional pulses.
There is a global interest in decarbonizing the existing natural gas infrastructure by blending the natural gas with hydrogen. However, hydrogen is known to embrittle pipeline and pressure vessel steels used in gas transportation and storage applications. Thus, assessing the structural integrity of vintage pipelines (pre-1970s) in the presence of gaseous hydrogen is a critical step towards successful implementation of hydrogen blending into existing infrastructure. To this end, fatigue crack growth (FCG) behavior and fracture resistance of several vintage X52 pipeline steels were evaluated in high purity gaseous hydrogen environments at pressures of 210 bar (3,000 psi) and 34 bar (500 psi). The base metal and seam weld microstructures were characterized using optical microscopy, scanning electron microscopy (SEM), and Vickers hardness mapping. The base metals consisted of ferrite-pearlite banded microstructures, whereas the weld regions contained ferrite and martensite. In one case, a hook-like crack was observed in an electric resistance (seam) weld, whereas hard spots were observed near the bond line of a double-submerged arc (seam) weld. For a given hydrogen gas pressure, comparable FCG rates were observed for the different base metal and weld microstructures. Generally, the higher strength microstructures had lower fracture resistance in hydrogen. In particular, lower fracture resistance was measured when local hard spots were observed in the approximate region of the crack plane of the weld. Samples tested in lower H2 pressure (34 bar) exhibited lower FCG rates (in the lower ∆K regime) and greater fracture resistance when compared to the respective high-pressure (210 bar) hydrogen tests. The hydrogen-assisted fatigue and fracture surfaces were qualitatively characterized using SEM to rationalize the influence of microstructure on the dominant fracture mechanisms in gaseous hydrogen environment.
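Measured FCG rates of the kind reported here are typically reduced to a Paris-law form, da/dN = C(ΔK)^m, which can then be integrated to estimate fatigue life for a given flaw. The sketch below integrates this generic relation numerically; the coefficients are placeholders, not values fitted to the X52 data.

```python
import math

def cycles_to_grow(a0, af, dsigma, C, m, Y=1.0, n_steps=10_000):
    """Midpoint integration of the Paris relation da/dN = C*(dK)^m with
    dK = Y*dsigma*sqrt(pi*a), for a crack growing from a0 to af under
    constant-amplitude stress range dsigma (geometry factor Y)."""
    a, N = a0, 0.0
    da = (af - a0) / n_steps
    for _ in range(n_steps):
        dK = Y * dsigma * math.sqrt(math.pi * (a + 0.5 * da))
        N += da / (C * dK ** m)
        a += da
    return N

# hypothetical coefficients (MPa*sqrt(m) units): crack growing 1 mm -> 10 mm
n_cycles = cycles_to_grow(1e-3, 1e-2, dsigma=100.0, C=1e-12, m=4.0)
```

In such an assessment, the hydrogen-pressure dependence observed in the tests enters through C and m, which is why accurate environment-specific FCG data are central to integrity management of blended pipelines.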
Despite state-of-the-art deep learning-based computer vision models achieving high accuracy on object recognition tasks, x-ray screening of baggage at checkpoints is largely performed by hand. Part of the challenge in automation of this task is the relatively small amount of available labeled training data. Furthermore, realistic threat objects may have forms or orientations that do not appear in any training data, and radiographs suffer from high amounts of occlusion. Using deep generative models, we explore data augmentation techniques to expand the intra-class variation of threat objects synthetically injected into baggage radiographs using openly available baggage x-ray datasets. We also benchmark the performance of object detection algorithms on raw and augmented data.
Austenitic stainless steels have been extensively tested in hydrogen environments; however, limited information exists for the effects of hydrogen on the fatigue life of high-strength grades of austenitic stainless steels. Moreover, fatigue life testing of finished product forms (such as tubing and welds) is challenging. A novel test method for evaluating the influence of internal hydrogen on fatigue of orbital tube welds was reported, where a cross hole in a tubing specimen is used to establish a stress concentration analogous to circumferentially notched bar fatigue specimens for constant-load, axial fatigue testing. In that study (Kagay et al., ASME PVP2020-8576), annealed 316L tubing with a cross hole displayed similar fatigue performance as more conventional materials test specimens. A similar cross-hole tubing geometry is adopted here to evaluate the fatigue crack initiation and fatigue life of XM-19 austenitic stainless steel with a high concentration of internal hydrogen. XM-19 is a nitrogen-strengthened Fe-Cr-Ni-Mn austenitic stainless steel that offers higher strength than conventional 3XX series stainless steels. A uniform hydrogen concentration in the test specimen is achieved by thermal precharging (exposure to high-pressure hydrogen at elevated temperature for two weeks) prior to testing in air to simulate the equilibrium hydrogen concentration near a stress concentration in gaseous hydrogen service. Specimens are also instrumented for direct current potential difference measurements to identify crack initiation. After accounting for the strengthening associated with thermal precharging, the fatigue crack initiation and fatigue life of XM-19 tubing were virtually unchanged by internal hydrogen.
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own very-closely-spaced towers avoid these disadvantages but create a significant disadvantage: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to design a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions from paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with hub-to-hub separation distances of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
Helium or neopentane can be used as surrogate gas fills for deuterium (D2) or deuterium-tritium (DT) in laser-plasma interaction studies. Surrogates are convenient to avoid flammability hazards or the integration of cryogenics in an experiment. To test the degree of equivalency between deuterium and helium, experiments were conducted in the Pecos target chamber at Sandia National Laboratories. Observables such as laser propagation and signatures of laser-plasma instabilities (LPI) were recorded for multiple laser and target configurations. It was found that some observables can differ significantly despite the apparent similarity of the gases with respect to molecular charge and weight. While the qualitative behaviour of the interaction may well be studied by finding a suitable compromise of laser absorption, electron density, and LPI cross sections, a quantitative investigation of expected values for deuterium fills at high laser intensities is not likely to succeed with surrogate gases.
Mann, James B.; Mohanty, Debapriya P.; Kustas, Andrew B.; Stiven Puentes Rodriguez, B.; Issahaq, Mohammed N.; Udupa, Anirudh; Sugihara, Tatsuya; Trumble, Kevin P.; M'Saoubi, Rachid; Chandrasekar, Srinivasan
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with suitable properties and quality for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented from metal systems of varied workability, and strip product scale in terms of size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for production of commercial strip for electric motor applications and battery electrodes are discussed.
Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximation algorithms for MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approximation approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that make use of randomness similarly to natural brains, we hypothesize that the intrinsic randomness in microelectronics devices could be turned into a valuable component of a neuromorphic architecture enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path for scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computational architectures.
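The stochastic-sampling idea referenced above can be illustrated in software (purely as a sketch of the general technique, not the neuromorphic circuits the abstract describes) with a randomized local search for MAXCUT, where random node flips play the role that intrinsic device randomness plays in the hardware:

```python
import random

def maxcut_stochastic(edges, n, steps=2000, seed=1):
    """Randomized local search for MAXCUT: each node holds a binary
    'spin'; a random flip is kept when it does not reduce the cut."""
    rng = random.Random(seed)
    spin = [rng.randint(0, 1) for _ in range(n)]

    def cut_value(s):
        # An edge is cut when its endpoints sit on opposite sides.
        return sum(1 for u, v in edges if s[u] != s[v])

    best = cut_value(spin)
    for _ in range(steps):
        i = rng.randrange(n)   # stochastic element: pick a random node
        spin[i] ^= 1           # tentatively flip it
        c = cut_value(spin)
        if c >= best:
            best = c           # keep improving (or plateau) flips
        else:
            spin[i] ^= 1       # revert worsening flips
    return best, spin

# 4-cycle graph: the optimal cut crosses all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best, spin = maxcut_stochastic(edges, 4)
```

Accepting plateau moves lets the walk escape equal-value states, which is one simple way random sampling drives the search toward better cuts.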
Surrogate construction is an essential component for all non-deterministic analyses in science and engineering. The efficient construction of easier and cheaper-to-run alternatives to a computationally expensive code paves the way for outer-loop workflows for forward and inverse uncertainty quantification and optimization. Unfortunately, the accurate construction of a surrogate still remains a task that often requires a prohibitive number of computations, making the approach unattainable for large-scale and high-fidelity applications. Multifidelity approaches offer the possibility to lower the computational expense requirement on the high-fidelity code by fusing data from additional sources. In this context, we have demonstrated that multifidelity Bayesian Networks (MFNets) can efficiently fuse information derived from models with an underlying complex dependency structure. In this contribution, we expand on our previous work by adopting a basis adaptation procedure for the selection of the linear model representing each data source. Our numerical results demonstrate that this procedure is computationally advantageous because it can maximize the use of limited data to learn and exploit the important structures shared among models. Two examples are considered to demonstrate the benefits of the proposed approach: an analytical problem and a nuclear fuel finite element assembly. From these two applications, a lower dependency of MFNets on the model graph has also been observed.
The design of thermal protection systems (TPS), including heat shields for reentry vehicles, relies more and more on computational simulation tools for design optimization and uncertainty quantification. Since high-fidelity simulations are computationally expensive for full vehicle geometries, analysts primarily use reduced-physics models instead. Recent work has shown that projection-based reduced-order models (ROMs) can provide accurate approximations of high-fidelity models at a lower computational cost. ROMs are preferable to alternative approximation approaches for high-consequence applications due to the presence of rigorous error bounds. The following paper extends our previous work on projection-based ROMs for ablative TPS by considering hyperreduction methods, which yield further reductions in computational cost, and by demonstrating the approach for simulations of a three-dimensional flight vehicle. We compare the accuracy and potential performance of several different hyperreduction methods and mesh sampling strategies. This paper shows that with the correct implementation, hyperreduction can make ROMs up to 1-3 orders of magnitude faster than the full-order model by evaluating the residual at only a small fraction of the mesh nodes.
The V31 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is 24 lb (11 kg) TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lb (11 kg) of Composition C-4 (30 lb [14 kg] TNT equivalent). This test was considered the maximum load case, based on modeling and simulation methods performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge, located central to the vessel interior of 19.2 lb (8.72 kg) of Composition C-4 (24 lb [11 kg] TNT equivalent). Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb (908 g) each, distributed evenly inside the vessel (totaling 19.2 lb [8.72 kg] of C-4, or 24 lb [11 kg] TNT equivalent). All vessel acceptance criteria were met.
Puerto Rico faced a double strike from hurricanes Irma and Maria in 2017. The resulting damage required a comprehensive rebuild of electric infrastructure. There are plans and pilot projects to rebuild with microgrids to increase resilience. This paper provides a techno-economic analysis technique and case study of a potential future community in Puerto Rico that combines probabilistic microgrid design analysis with tiered circuits in building energy modeling. Tiered circuits in buildings allow electric load reduction via remote disconnection of non-critical circuits during an emergency. When coupled to a microgrid, tiered circuitry can reduce the chances of a microgrid's storage and generation resources being depleted. The analysis technique is applied to show 1) approximate cost savings due to a tiered circuit structure and 2) approximate cost savings gained by simultaneously considering resilience and sustainability constraints in the microgrid optimization. The analysis technique uses a resistive-capacitive thermal model with load profiles for four tiers (tiers 1-3 and non-critical loads). Three analyses were conducted using 1) open-source software called Tiered Energy in Buildings and 2) the Microgrid Design Toolkit. For a fossil-fuel-based microgrid, tiered circuits yielded savings of 30% on total microgrid costs of 1.18 million USD, where the non-tiered case keeps all loads 99.9% available and the tiered case keeps tier 1 at 99.9%, tier 2 at 95%, and tier 3 at 80% availability, with no requirement on non-critical loads. The same comparison for a sustainable microgrid showed 8% cost savings on a 5.10 million USD microgrid due to tiered circuits. The results also showed 6-7% cost savings when our analysis technique optimizes sustainability and resilience simultaneously in comparison to doing microgrid resilience analysis and renewables net present value analysis independently.
Though highly specific to our case study, similar assessments using our analysis technique can elucidate the value of tiered circuits and of simultaneous consideration of sustainability and resilience in other locations.
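As a quick back-of-envelope check, the savings fractions and microgrid costs reported in the abstract above imply the following absolute savings (figures are from the abstract; the arithmetic is ours):

```python
# Tiered-circuit savings reported in the abstract (all costs in USD).
fossil_cost = 1.18e6            # total cost, fossil-fuel-based microgrid
fossil_savings_frac = 0.30      # 30% savings attributed to tiered circuits
sustainable_cost = 5.10e6       # total cost, sustainable microgrid
sustainable_savings_frac = 0.08 # 8% savings attributed to tiered circuits

fossil_savings = fossil_cost * fossil_savings_frac            # ~354,000 USD
sustainable_savings = sustainable_cost * sustainable_savings_frac  # ~408,000 USD
```

Note that although the sustainable microgrid's savings fraction is smaller, its larger total cost makes the absolute savings slightly larger.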
This paper introduces a new microprocessor-based system that is capable of detecting faults via the Traveling Wave (TW) generated by a fault event. The fault detection system comprises a commercially available Digital Signal Processing (DSP) board capable of accurately sampling signals at high speeds and performing the Discrete Wavelet Transform (DWT) decomposition to extract features from the TW, together with a detection algorithm that uses the extracted features to determine whether a fault has occurred. Results show that this inexpensive fault detection system's performance is comparable to commercially available TW relays, as accurate sampling and fault detection are achieved within 150 microseconds. A detailed analysis of the execution times of each part of the process is provided.
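The paper does not publish its detection code, but the general DWT-based detection idea can be sketched as follows: a single level of the Haar DWT splits a sampled signal into approximation and detail coefficients, and the detail coefficients (which respond to sharp transients like a traveling-wave arrival) are thresholded. The function names and threshold here are illustrative assumptions, not the paper's implementation:

```python
import math

def haar_dwt_level(x):
    """One level of the Haar DWT: returns (approximation, detail)
    coefficients for an even-length sample sequence."""
    s = 1 / math.sqrt(2)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def detect_fault(signal, threshold):
    """Flag a possible traveling-wave arrival when any detail
    coefficient exceeds the threshold."""
    _, detail = haar_dwt_level(signal)
    return any(abs(d) > threshold for d in detail)

# A flat waveform produces zero detail coefficients (no detection);
# a step transient produces large detail coefficients (detection).
flat = [1.0] * 8
step = [1.0, 1.0, 1.0, 5.0, 5.0, 1.0, 1.0, 1.0]
```

A real relay would apply multiple decomposition levels and calibrated thresholds, but the core feature extraction has this shape.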
Inverse problems constrained by partial differential equations (PDEs) play a critical role in model development and calibration. In many applications, there are multiple uncertain parameters in a model that must be estimated. However, high dimensionality of the parameters and computational complexity of the PDE solves make such problems challenging. A common approach is to reduce the dimension by fixing some parameters (which we will call auxiliary parameters) to a best estimate and use techniques from PDE-constrained optimization to estimate the other parameters. In this article, hyper-differential sensitivity analysis (HDSA) is used to assess the sensitivity of the solution of the PDE-constrained optimization problem to changes in the auxiliary parameters. Foundational assumptions for HDSA require satisfaction of the optimality conditions, which is not always practically feasible as a result of ill-posedness in the inverse problem. We introduce novel theoretical and computational approaches to justify and enable HDSA for ill-posed inverse problems by projecting the sensitivities onto likelihood-informed subspaces and defining a posteriori updates. Our proposed framework is demonstrated on a nonlinear multiphysics inverse problem motivated by estimation of spatially heterogeneous material properties in the presence of spatially distributed parametric modeling uncertainties.
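In broad strokes, the post-optimality sensitivity that HDSA studies can be sketched via the implicit function theorem (the notation here is generic, not the article's): for an optimization problem in the estimated parameters $z$ with auxiliary parameters $\theta$,

```latex
\min_{z}\; J(z,\theta)
\quad\Longrightarrow\quad
\nabla_z J\bigl(z^\star(\theta),\theta\bigr) = 0,
\qquad
\frac{\mathrm{d} z^\star}{\mathrm{d}\theta}
  = -\bigl(\nabla_{zz} J\bigr)^{-1}\,\nabla_{z\theta} J
  \,\Big|_{(z^\star(\theta),\,\theta)} .
```

The first-order optimality condition must hold for this derivative to be well defined, and ill-posedness makes $\nabla_{zz} J$ nearly singular, which is precisely the difficulty motivating the likelihood-informed subspace projection introduced in the article.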
Open Charge Point Protocol (OCPP) 1.6 is widely used in the electric vehicle (EV) charging industry to communicate between Charging System Management Services (CSMSs) and Electric Vehicle Supply Equipment (EVSE). Unlike OCPP 2.0.1, OCPP 1.6 uses unencrypted websocket communications to exchange information between EVSE devices and an on-premise or cloud-based CSMS. In this work, we first demonstrate two machine-in-the-middle attacks on OCPP sessions to terminate charging sessions and gain root access to the EVSE equipment via remote code execution. Second, we demonstrate a malicious firmware update with a code-injection payload to compromise an EVSE. Lastly, we demonstrate two methods to deny availability of the EVSE or CSMS. One of these, originally reported by SaiFlow, prevents traffic to legitimate EVSE equipment using a DoS-like attack on CSMSs by repeatedly connecting and authenticating several charge points (CPs) with the same identities as the legitimate CP. These vulnerabilities were demonstrated with proof-of-concept exploits in a virtualized Cyber Range at Wright State University and/or with a 350 kW Direct Current Fast Charger at Idaho National Laboratory. The team found that OCPP 1.6 could be protected from these attacks by adding secure shell tunnels to the protocol, if upgrading to OCPP 2.0.1 was not an option.
This paper describes the methodology of designing a replacement blade tip and winglet for a wind turbine blade to demonstrate the potential of additive manufacturing for wind energy. The team will later field-demonstrate this additive-manufactured, system-integrated tip (AMSIT) on a wind turbine. The blade tip aims to reduce the cost of wind energy by improving aerodynamic performance and reliability, while reducing transportation costs. This paper focuses on the design and modeling of a winglet for increased power production while maintaining acceptable structural loads of the original Vestas V27 blade design. A free-wake vortex model, WindDVE, was used for the winglet design analysis. A summary of the aerodynamic design process is presented along with a case study of a specific design.
Geomagnetic disturbances (GMDs) give rise to geomagnetically induced currents (GICs) on the earth's surface, which find their way into power systems via grounded transformer neutrals. The quasi-dc nature of the GICs causes half-cycle saturation of power grid transformers, which in turn can lead to transformer failure, life reduction, and other adverse effects. Therefore, transformers need to be more resilient to dc excitation. This paper sets forth dc immunity metrics for transformers. Furthermore, this paper sets forth a novel transformer architecture and a design methodology which employs the dc immunity metrics to make the transformer more resilient to dc excitation. This is demonstrated using a time-stepping 2D finite element analysis (FEA) simulation. It was found that a relatively small change in the core geometry significantly increases transformer resiliency with respect to dc excitation.
Recently, stochastic control methods such as deep reinforcement learning (DRL) have proven to be efficient and quickly converging methods for providing localized grid voltage control. Because of the random dynamical characteristics of grid reactive loads and bus voltages, such stochastic control methods are particularly useful for accurately predicting future voltage levels and for minimizing associated cost functions. Although DRL is capable of quickly inferring future voltage levels given specific voltage control actions, it is prone to high variance when the learning rate or discount factors are set for rapid convergence in the presence of bus noise. Evolutionary learning is also capable of minimizing a cost function and can be leveraged for localized grid control, but it does not infer future voltage levels given specific control inputs and instead simply selects the control actions that result in the best voltage control. For this reason, evolutionary learning is better suited than DRL for voltage control in noisy grid environments. To illustrate this, using a cyber adversary to inject random noise, we compare the use of evolutionary learning and DRL in autonomous voltage control (AVC) under noisy control conditions and show that a genetic algorithm (GA) can achieve high mean voltage control. We further show that the GA can provide superior AVC to DRL with comparable computational efficiency, and we illustrate that the superior noise immunity of evolutionary learning makes it a good choice for implementing AVC in noisy environments or in the presence of random cyber-attacks.
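The evolutionary-learning idea above can be illustrated with a toy genetic algorithm (a sketch of the general technique only; the plant model, parameters, and function names below are hypothetical, not the paper's setup): evolve a scalar control action that keeps a noisy bus voltage measurement near 1.0 p.u., selecting directly on measured deviation rather than inferring future voltages.

```python
import random

def ga_voltage_control(measure, pop_size=20, gens=30, seed=0):
    """Toy GA: evolve a scalar control action minimizing the measured
    deviation of a noisy bus voltage from 1.0 p.u."""
    rng = random.Random(seed)
    pop = [rng.uniform(-0.2, 0.2) for _ in range(pop_size)]
    for _ in range(gens):
        # Selection: rank by noisy measured deviation, keep best quarter.
        scored = sorted(pop, key=lambda a: abs(measure(a) - 1.0))
        elite = scored[: pop_size // 4]
        # Mutation: refill the population with perturbed elites.
        pop = elite + [
            rng.choice(elite) + rng.gauss(0, 0.01)
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=lambda a: abs(measure(a) - 1.0))

# Hypothetical plant: bus sits at 0.95 p.u. plus the control action,
# plus small measurement noise (the ideal action is therefore ~+0.05).
def measure(action, rng=random.Random(42)):
    return 0.95 + action + rng.gauss(0, 0.002)

best_action = ga_voltage_control(measure)
```

Because selection uses only the noisy measurement itself, small zero-mean noise tends to wash out across generations, which is the noise-immunity property the abstract highlights.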
This work investigates the low- and high-temperature ignition and combustion processes, applied to the Engine Combustion Network Spray A flame, combining advanced optical diagnostics and large-eddy simulations (LES). Simultaneous high-speed (50 kHz) formaldehyde (CH2O) planar laser-induced fluorescence (PLIF) and line-of-sight OH* chemiluminescence imaging were used to measure the low- and high-temperature flame, during ignition as well as during quasi-steady combustion. While tracking the cool flame at the laser sheet plane, the present experimental setup allows detection of distinct ignition spots and dynamic fluctuations of the lift-off length over time, which overcomes limitations for flame tracking when using schlieren imaging [Sim et al., Proc. Combust. Inst. 38 (4) (2021) 5713–5721]. After significant development to improve LES prediction of the low- and high-temperature flame position, both during the ignition processes and quasi-steady combustion, the simulations were analyzed to gain understanding of the mixture variance and how this variance affects formation/consumption of CH2O. Analysis of the high-temperature ignition period shows that a key improvement in the LES is the ability to predict heterogeneous ignition sites, not only in the head of the jet, but in shear layers at the jet edge close to the position where flame lift-off eventually stabilizes. The LES analysis also shows concentrated pockets of CH2O, in the center of the jet and at 20 mm downstream of the injector (in regions where the equivalence ratio is greater than 6), that are of similar length scale and frequency as the experiment (approximately 5–6 kHz). The periodic oscillations of CH2O match the frequency of pressure waves generated during auto-ignition and reflected within the constant-volume vessel throughout injection.
The ability of LES to capture the periodic appearance and destruction of CH2O is particularly important because these structures travel downstream and become rich premixed flames that affect soot production.
Despite state-of-the-art deep learning-based computer vision models achieving high accuracy on object recognition tasks, x-ray screening of baggage at checkpoints is largely performed by hand. Part of the challenge in automation of this task is the relatively small amount of available labeled training data. Furthermore, realistic threat objects may have forms or orientations that do not appear in any training data, and radiographs suffer from high amounts of occlusion. Using deep generative models, we explore data augmentation techniques to expand the intra-class variation of threat objects synthetically injected into baggage radiographs using openly available baggage x-ray datasets. We also benchmark the performance of object detection algorithms on raw and augmented data.
Simulation of the interaction of light with matter, including at the few-photon level, is important for understanding the optical and optoelectronic properties of materials and for modeling next-generation nonlinear spectroscopies that use entangled light. At the few-photon level the quantum properties of the electromagnetic field must be accounted for with a quantized treatment of the field, and then such simulations quickly become intractable, especially if the matter subsystem must be modeled with a large number of degrees of freedom, as can be required to accurately capture many-body effects and quantum noise sources. Motivated by this, we develop a quantum simulation framework for simulating such light-matter interactions on platforms with controllable bosonic degrees of freedom, such as vibrational modes in the trapped-ion platform. The key innovation in our work is a scheme for simulating interactions with a continuum field using only a few discrete bosonic modes, which is enabled by a Green's function (response function) formalism. We develop the simulation approach, sketch how the simulation can be performed using trapped ions, and then illustrate the method with numerical examples. Our work expands the reach of quantum simulation to important light-matter interaction models and illustrates the advantages of extracting dynamical quantities such as response functions from quantum simulations.
High-pressure, ultra-zero air is being evaluated as a potential replacement for SF6 in a strategic focus to move away from environmentally damaging insulating gases. Many unknowns remain about the dominant breakdown mechanisms of ultra-zero air in the high-pressure regime, and the classical equations for Paschen curves appear not to be valid above 500 psig. To better understand the phenomena of gas breakdown in the high-pressure regime, Sandia National Laboratories is evaluating the basic gas breakdown physics using nonuniform-field electrode designs. Recent data have been collected at SNL to study breakdown in this high-pressure regime in the range of 300-1500 psi, with gaps on the order of 0.6-1 cm and different electrode designs. The self-breakdown voltages range from 200-900 kV, with pulse-charge rise times of 200-300 ns and discharge currents from 25-60 kA. This research investigates the phenomenon of high-pressure breakdown, highlights the data collected, and presents a few of the mechanisms that dominate in the high-pressure regime for electronegative gases.
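For reference, the "classical equations for Paschen curves" mentioned above refers to the standard textbook relation for the breakdown voltage $V_b$ of a uniform-field gap at pressure $p$ and gap distance $d$:

```latex
V_b \;=\; \frac{B\,p\,d}{\ln(A\,p\,d) \;-\; \ln\!\left[\,\ln\!\left(1 + \frac{1}{\gamma_{\mathrm{se}}}\right)\right]}
```

where $A$ and $B$ are gas-dependent constants and $\gamma_{\mathrm{se}}$ is the secondary electron emission coefficient of the cathode. The abstract's observation is that this scaling breaks down above roughly 500 psig, where other mechanisms dominate.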
As the path towards Urban Air Mobility (UAM) continues to take shape, there are outstanding technical challenges to achieving safe and effective air transportation operations under this new paradigm. To inform and guide technology development for UAM, NASA is investigating the current state-of-the-art in key technology areas including traffic management, detect-and-avoid, and autonomy. In support of this effort, a new perception testbed was developed at NASA Ames Research Center to collect data from an array of sensing systems representative of those that could be found on a future UAM vehicle. This testbed, featuring a Light-Detection-and-Ranging (LIDAR) instrument, a long-wave infrared sensor, and a visible spectrum camera, was deployed for a multi-day test campaign in the Fog Chamber at Sandia National Laboratories (SNL) in Albuquerque, New Mexico. During the test campaign, fog conditions were created for tests with targets including a human, a resolution chart, and a small unmanned aerial vehicle (sUAV). This paper describes in detail the developed perception testbed, the experimental setup in the fog chamber, and the resulting data, and presents an initial result from analysis of the data: an evaluation of methods to increase contrast through filtering techniques.
Sandia National Laboratories (SNL) and Oak Ridge National Laboratory (ORNL) have collaborated to develop a capability to test the epithermal/intermediate cross sections of materials at the SNL critical experiments facility using the Seven Percent Critical Experiment (7uPCX) fuel. The Sandia Critical Experiments Program provides a specialized facility for performing water moderated and reflected critical experiments with UO2 fuel rod arrays. The facility offers the ability to modify the core configuration and reactor tank to evaluate various reactor cores for pitch, moderator characteristics, and other criteria. A history of safe operations and flexibility in reactor core configuration has resulted in the completion of nine sets of critical benchmark experiments that have been documented in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. The experiment described here is expected to be evaluated for inclusion in the 2024 edition of the ICSBEP Handbook.