The interplay between hydrogen and dislocations (e.g., its effects on core and elastic energies and on dislocation–dislocation interactions) has implications for hydrogen embrittlement but is poorly understood. Continuum models of hydrogen-enhanced local plasticity have not considered the effect of hydrogen on dislocation core energies. Energy-minimization atomistic simulations can resolve dislocation core energies only in hydrogen-free systems because hydrogen motion is omitted, so hydrogen atmospheres cannot form. Additionally, previous studies have focused more on face-centered-cubic than on body-centered-cubic metals. Discrete dislocation dynamics studies of hydrogen–dislocation interactions assume isotropic elasticity, but the validity of this assumption has not been established. Here, we perform time-averaged molecular dynamics simulations to study the effect of hydrogen on dislocation energies in body-centered-cubic iron for several dislocation character angles. We observe atmosphere formation and obtain highly converged dislocation energies. We find that hydrogen reduces dislocation core energies but can increase or decrease the elastic energies of isolated dislocations and the dislocation–dislocation interaction energies depending on character angle. We also find that isotropic elasticity can be well fitted to dislocation energies obtained from simulations if the isotropic elastic constants are not constrained to their anisotropic counterparts. These results are relevant to ongoing efforts in understanding hydrogen embrittlement and provide a foundation for future work in this field.
In this work, we investigate the potential of on-board liquid hydrogen (LH2) storage in Class-8 heavy-duty trucks to resolve many of the range, weight, volume, refueling-time, and cost issues associated with 350- or 700-bar compressed H2 storage in Type-3 or Type-4 composite tanks. We present and discuss conceptual storage system configurations capable of supplying H2 to fuel cells at 5 bar with or without on-board LH2 pumps. Structural aspects of storing LH2 in double-walled, vacuum-insulated, low-pressure Type-1 tanks are investigated. Structural materials and insulation methods are discussed for service at cryogenic temperatures and for mitigation of heat leak to prevent LH2 boiloff. Failure modes of the liner and shell are identified and analyzed using the regulatory codes and detailed finite element (FE) methods. The conceptual systems are subjected to a failure modes and effects analysis (FMEA) and a safety, codes, and standards (SCS) review to rank failures and identify safety gaps. The results indicate that the conceptual systems can reach 19.6% usable gravimetric capacity, 40.9 g-H2/L usable volumetric capacity, and $174-183/kg-H2 cost (2016 USD) at a manufacturing volume of 100,000 systems annually.
High-altitude electromagnetic pulse events are a growing concern for electric power grid vulnerability assessments and mitigation planning, and accurate modeling of surge arrester mitigations installed on the grid is necessary to predict pulse effects on existing equipment and to plan future mitigation. While some models of surge arresters at high frequency have been proposed, experimental backing for any given model has not been shown. This work examines a ZnO lightning surge arrester modeling approach previously developed for accurate prediction of nanosecond-scale pulse response. Four ZnO metal-oxide varistor pucks with different sizes and voltage ratings were tested for voltage and current response on a conducted electromagnetic pulse testbed. The measured clamping response was compared with SPICE circuit-model predictions of the electromagnetic pulse response to validate model accuracy. Results showed good agreement between simulations and experimental measurements after accounting for a stray testbed inductance of 100 to 250 nH.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published this very same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics, such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: Supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10^13 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10^18 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is actually quite remarkable that—apart from the change in semantics for the parallelization—this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
The challenge of cyberattack detection can be illustrated by the complexity of the MITRE ATT&CK™ matrix, which catalogues >200 attack techniques (most with multiple sub-techniques). To reliably detect cyberattacks, we propose an evidence-based approach which fuses multiple cyber events over varying time periods to help differentiate normal from malicious behavior. We use Bayesian Networks (BNs), probabilistic graphical models consisting of a set of variables and their conditional dependencies, for fusion and classification due to their interpretable nature, ability to tolerate sparse or imbalanced data, and resistance to overfitting. Our technique utilizes a small collection of expert-informed cyber intrusion indicators to create a hybrid detection system that combines data-driven training with expert knowledge to form a host-based intrusion detection system (HIDS). We demonstrate a software pipeline for efficiently generating and evaluating various BN classifier architectures for specific datasets and discuss the explainability benefits thereof.
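To make the evidence-fusion idea concrete, the following minimal Python sketch (not the authors' pipeline; the indicator names, priors, and likelihoods are illustrative placeholders) computes a posterior probability of malicious activity from a few binary indicators using a naive-Bayes-structured BN, the simplest form of such a network.

```python
# Minimal sketch (not the authors' pipeline): posterior fusion of binary cyber
# indicators with a naive-Bayes-structured Bayesian Network. Indicator names,
# priors, and likelihoods are illustrative placeholders.
PRIOR_MALICIOUS = 0.01

# P(indicator observed | malicious), P(indicator observed | benign)
LIKELIHOODS = {
    "failed_logins_burst": (0.70, 0.05),
    "new_service_install": (0.40, 0.02),
    "off_hours_activity":  (0.60, 0.20),
}

def posterior_malicious(observed):
    """Fuse observed indicators (dict name -> bool) into P(malicious | evidence)."""
    p_mal, p_ben = PRIOR_MALICIOUS, 1.0 - PRIOR_MALICIOUS
    for name, seen in observed.items():
        lm, lb = LIKELIHOODS[name]
        p_mal *= lm if seen else (1.0 - lm)
        p_ben *= lb if seen else (1.0 - lb)
    return p_mal / (p_mal + p_ben)

print(posterior_malicious({"failed_logins_burst": True,
                           "new_service_install": True,
                           "off_hours_activity": False}))
```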
A method is presented to detect clear-sky periods in plane-of-array, time-averaged irradiance data, based on the algorithm originally described by Reno and Hansen. We show that this new method improves on the state of the art by providing accurate detection at longer data intervals and by detecting clear periods in plane-of-array data, which is itself novel. We illustrate how accurate determination of clear-sky conditions helps to eliminate data noise and bias in the assessment of long-term performance of PV plants.
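For illustration only, the sketch below implements a toy version of the Reno-Hansen idea: compare a sliding window of measured irradiance against a clear-sky model using simple window statistics. The statistics and thresholds shown are placeholders, not the criteria or tuning used in this work.

```python
import numpy as np

def detect_clear_windows(measured, modeled, window=10, mean_tol=75.0, line_tol=10.0):
    """Toy clear-sky flagger in the spirit of Reno & Hansen: compare a sliding
    window of measured irradiance against a clear-sky model using the window
    mean difference and a simplified 'line length' difference.
    Thresholds and statistics are placeholders, not the paper's criteria."""
    n = len(measured)
    clear = np.zeros(n, dtype=bool)
    for i in range(n - window + 1):
        m, c = measured[i:i + window], modeled[i:i + window]
        mean_ok = abs(m.mean() - c.mean()) < mean_tol
        line_ok = abs(np.sum(np.abs(np.diff(m))) - np.sum(np.abs(np.diff(c)))) < line_tol
        if mean_ok and line_ok:
            clear[i:i + window] = True  # flag every sample in a passing window
    return clear
```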
This study investigated the durability of four high-temperature coatings for use as a Gardon gauge foil coating. A failure modes and effects analysis identified the Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high-intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity properties were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest rapid high-temperature cycling did not significantly impact coating optical properties and physical state. In contrast, prolonged exposure of coatings to high temperatures degraded coating optical properties and physical state. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6-24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provides the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest flux gauge foil coatings could benefit from long-duration, high-temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high-flux and high-temperature applications.
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
While research in multiple-input/multiple-output (MIMO) random vibration testing techniques, control methods, and test design has been increasing in recent years, research into specifications for these types of tests has not kept pace. This is perhaps due to the very particular requirement for most MIMO random vibration control specifications – they must be narrowband, fully populated cross-power spectral density matrices. This requirement puts constraints on the specification derivation process and restricts the application of many of the traditional techniques used to define single-axis random vibration specifications, such as averaging or straight-lining. This requirement also restricts the applicability of MIMO testing by requiring a very specific and rich field test data set to serve as the basis for the MIMO test specification. Here, frequency-warping and channel averaging techniques are proposed to soften the requirements for MIMO specifications with the goal of expanding the applicability of MIMO random vibration testing and enabling tests to be run in the absence of the necessary field test data.
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
Measurements of gas-phase temperature and pressure in hypersonic flows are important for understanding gas-phase fluctuations, which can drive dynamic loading on model surfaces, and for studying fundamental compressible flow turbulence. To achieve this capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied in Sandia National Laboratories' cold-flow hypersonic wind tunnel facility. Measurements were performed for tunnel freestream temperatures of 42–58 K and pressures of 1.5–2.2 Torr. The CARS measurement volume was translated in the flow direction during a 30-second tunnel run using a single computer-controlled translation stage. After broadband femtosecond laser excitation, the rotational Raman coherence was probed twice: once at an early time, before the collisional environment has affected the Raman coherence, and again at a later time, after the collisional environment has led to significant dephasing of the Raman coherence. The gas-phase temperature was obtained primarily from the early-probe CARS spectra, while the gas-phase pressure was obtained primarily from the late-probe CARS spectra. Challenges in implementing fs CARS in this facility, such as changes in the nonresonant spectrum at different measurement locations, are discussed.
Due to their increased levels of reliability, meshed low-voltage (LV) grid and spot networks are common topologies for supplying power to dense urban areas and critical customers. Protection schemes for LV networks often use highly sensitive reverse current trip settings to detect faults in the medium-voltage system. As a result, interconnecting even low levels of distributed energy resources (DERs) can impact the reliability of the protection system and cause nuisance tripping. This work analyzes the possibility of modifying the reverse current relay trip settings to increase the DER hosting capacity of LV networks without impacting fault detection performance. The results suggest that adjusting relay settings can significantly increase DER hosting capacity on LV networks without adverse effects, and that existing guidance on connecting DERs to secondary networks, such as that contained in IEEE Std 1547-2018, could potentially be modified to allow higher DER deployment levels.
Despite state-of-the-art deep learning-based computer vision models achieving high accuracy on object recognition tasks, x-ray screening of baggage at checkpoints is largely performed by hand. Part of the challenge in automation of this task is the relatively small amount of available labeled training data. Furthermore, realistic threat objects may have forms or orientations that do not appear in any training data, and radiographs suffer from high amounts of occlusion. Using deep generative models, we explore data augmentation techniques to expand the intra-class variation of threat objects synthetically injected into baggage radiographs using openly available baggage x-ray datasets. We also benchmark the performance of object detection algorithms on raw and augmented data.
Puerto Rico faced a double strike from hurricanes Irma and Maria in 2017. The resulting damage required a comprehensive rebuild of electric infrastructure. There are plans and pilot projects to rebuild with microgrids to increase resilience. This paper provides a techno-economic analysis technique and case study of a potential future community in Puerto Rico that combines probabilistic microgrid design analysis with tiered circuits in building energy modeling. Tiered circuits in buildings allow electric load reduction via remote disconnection of non-critical circuits during an emergency. When coupled to a microgrid, tiered circuitry can reduce the chances of a microgrid's storage and generation resources being depleted. The analysis technique is applied to show 1) approximate cost savings due to a tiered circuit structure and 2) approximate cost savings gained by simultaneously considering resilience and sustainability constraints in the microgrid optimization. The analysis technique uses a resistive-capacitive thermal model with load profiles for four tiers (tiers 1-3 and non-critical loads). The analyses were conducted using 1) open-source software called Tiered Energy in Buildings and 2) the Microgrid Design Toolkit. For a fossil-fuel-based microgrid, tiered circuits yielded cost savings of 30% of the total microgrid cost of 1.18 million USD, where the non-tiered case keeps all loads 99.9% available and the tiered case keeps tier 1 at 99.9%, tier 2 at 95%, and tier 3 at 80% availability, with no requirement on non-critical loads. The same comparison for a sustainable microgrid showed 8% cost savings on a 5.10 million USD microgrid due to tiered circuits. The results also showed 6-7% cost savings when our analysis technique optimizes sustainability and resilience simultaneously in comparison to doing microgrid resilience analysis and renewables net-present-value analysis independently. Though highly specific to our case study, similar assessments using our analysis technique can elucidate the value of tiered circuits and of simultaneously considering sustainability and resilience in other locations.
The V31 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is 24 lb (11 kg) TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lb (11 kg) of Composition C-4 (30 lb [14 kg] TNT equivalent). This test was considered the maximum load case, based on modeling and simulation methods performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge, located central to the vessel interior of 19.2 lb (8.72 kg) of Composition C-4 (24 lb [11 kg] TNT equivalent). Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb (908 g) each, distributed evenly inside the vessel (totaling 19.2 lb [8.72 kg] of C-4, or 24 lb [11 kg] TNT equivalent). All vessel acceptance criteria were met.
Adler, James H.; He, Yunhui; Hu, Xiaozhe; Maclachlan, Scott; Ohm, Peter
Advanced finite-element discretizations and preconditioners for models of poroelasticity have attracted significant attention in recent years. The equations of poroelasticity offer significant challenges in both areas, due to the potentially strong coupling between unknowns in the system, saddle-point structure, and the need to account for wide ranges of parameter values, including limiting behavior such as incompressible elasticity. This paper was motivated by an attempt to develop monolithic multigrid preconditioners for the discretization developed in [C. Rodrigo et al., Comput. Methods App. Mech. Engrg, 341 (2018), pp. 467-484]; we show here why this is a difficult task and, as a result, we modify the discretization in [Rodrigo et al.] through the use of a reduced-quadrature approximation, yielding a more “solver-friendly” discretization. Local Fourier analysis is used to optimize parameters in the resulting monolithic multigrid method, allowing a fair comparison between the performance and costs of methods based on Vanka and Braess-Sarazin relaxation. Numerical results are presented to validate the local Fourier analysis predictions and demonstrate efficiency of the algorithms. Finally, a comparison to existing block-factorization preconditioners is also given.
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own very-closely-spaced towers avoid these disadvantages but create a significant disadvantage; for some wind directions the wake turbulence of a rotor enters the swept area of a very close downwind rotor causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to design a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code Exa-Wind/Nalu-Wind is used to simulate the wake interactions from paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with hub-to-hub separation distances of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
Austenitic stainless steels are used in high-pressure hydrogen containment infrastructure for their resistance to hydrogen embrittlement. Applications for the use of austenitic stainless steels include pressure vessels, tubing, piping, valves, fittings and other piping components. Despite their resistance to brittle behavior in the presence of hydrogen, austenitic stainless steels can exhibit degraded fracture performance. The mechanisms of hydrogen-assisted fracture, however, remain elusive, which has motivated continued research on these alloys. There are two principal approaches to evaluate the influence of gaseous hydrogen on mechanical properties: internal hydrogen and external hydrogen. The austenite phase has high solubility and low diffusivity of hydrogen at room temperature, which enables introduction of hydrogen into the material through thermal precharging at elevated temperature and pressure, a condition referred to as internal hydrogen. H-precharged material can subsequently be tested in ambient conditions. Alternatively, mechanical testing can be performed while test coupons are immersed in gaseous hydrogen, thereby evaluating the effects of external hydrogen on property degradation. The slow diffusivity of hydrogen in austenite at room temperature can often be a limiting factor in external hydrogen tests and may not properly characterize lower-bound fracture behavior in components exposed to hydrogen for long time periods. In this study, the differences between internal and external hydrogen environments are evaluated in the context of fracture resistance measurements. Fracture testing was performed on two different forged austenitic stainless steel alloys (304L and XM-11) in three different environments: 1) non-charged and tested in gaseous hydrogen at a pressure of 1,000 bar (external H2), 2) hydrogen precharged and tested in air (internal H), and 3) hydrogen precharged and tested in 1,000 bar H2 (internal H + external H2). For all environments, elastic-plastic fracture measurements were conducted to establish J-R curves following the methods of ASTM E1820. Following fracture testing, fracture surfaces were examined to reveal predominant fracture mechanisms for the different conditions and to characterize differences (and similarities) in the macroscale fracture processes associated with these environmental conditions.
High reliability (Hi-Rel) electronics for mission critical applications are handled with extreme care; stress testing upon full assembly can increase the likelihood of degrading these systems before their deployment. Moreover, novel material parts, such as wide bandgap semiconductor devices, tend to have more complicated fabrication processing needs, which can ultimately result in larger part-to-part variability or potential defects. Therefore, an intelligent screening and inspection technique for electronic parts, in particular gallium nitride (GaN) power transistors, is presented in this paper. We present a machine-learning-based non-intrusive technique that can enhance part-selection decisions by categorizing part samples against the population's expected electrical characteristics. This technique provides relevant information about GaN HEMT device characteristics without having to operate all of these devices in the high-current region of the transfer and output characteristics, lowering the risk of damaging the parts prematurely. The proposed non-intrusive technique injects a small-signal pulse width modulation (PWM) waveform at various frequencies, ranging from 10 kHz to 500 kHz, into the transistor terminals; the corresponding output signals are observed and used as the training dataset. Unsupervised clustering with K-means and feature dimensionality reduction through principal component analysis (PCA) are used to correlate a population of GaN HEMT transistors to the expected mean of the devices' electrical characteristic performance.
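A minimal sketch of the unsupervised portion of such a screening flow is shown below, assuming a generic feature matrix extracted from the PWM responses; the data, component count, and cluster count are placeholders rather than the study's actual settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# X: one row per GaN device, columns are features extracted from the measured
# responses to PWM injections at several frequencies (placeholder random data).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 24))

X_std = StandardScaler().fit_transform(X)          # normalize features
scores = PCA(n_components=3).fit_transform(X_std)  # reduce dimensionality
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))  # population of each device cluster
```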
Complex-angle theory can offer new fundamental insights into refraction at absorptive interfaces. In this work, we propose a new method to induce isofrequency opening via the addition of scattering in a dual-interface system.
A challenge for TW-class accelerators, such as Sandia's Z machine, is efficient power coupling, which is limited by current loss in the final power feed. It is also important to understand how such losses will scale to larger next-generation pulsed power (NGPP) facilities. While modeling efforts are studying these power flow losses, it is important to have diagnostics that can experimentally measure plasmas under these conditions and help inform simulations. The plasmas formed in the power flow region can be challenging to diagnose due to both limited lines of sight and significantly lower temperatures and densities than typical plasmas studied on Z. This necessitates special diagnostic development to accurately measure the power flow plasma on Z.
Kolmogorov's theory of turbulence assumes that the small-scale turbulent structures in the energy cascade are universal and are determined by the energy dissipation rate and the kinematic viscosity alone. However, thermal fluctuations, absent from the continuum description, terminate the energy cascade near the Kolmogorov length scale. Here, we propose a simple superposition model to account for the effects of thermal fluctuations on small-scale turbulence statistics. For compressible Taylor-Green vortex flow, we demonstrate that the superposition model in conjunction with data from direct numerical simulation of the Navier-Stokes equations yields spectra and structure functions that agree with the corresponding quantities computed from the direct simulation Monte Carlo method of molecular gas dynamics, verifying the importance of thermal fluctuations in the dissipation range.
The error detection performance of cyclic redundancy check (CRC) codes combined with bit framing in digital serial communication systems is evaluated. Advantages and disadvantages of the combined method are treated in light of the probability of undetected errors. It is shown that bit framing can increase the burst-error detection capability of the CRC, but it can also adversely affect CRC random-error detection performance. To quantify the effect of bit framing on CRC error detection, the concept of error "exposure" is introduced. Our investigations lead us to propose resilient generator polynomials that, when combined with bit framing, can result in improved CRC error detection performance at no additional implementation cost. Example results are generated for short codewords showing that proper choice of CRC generator polynomial can improve error detection performance when combined with bit framing. The implication is that CRC combined with bit framing can reduce the probability of undetected errors even under high error rate conditions.
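As background for readers unfamiliar with CRC arithmetic, the short sketch below computes a CRC remainder by binary long division with a generator polynomial; the polynomial and message are illustrative and are not the resilient polynomials proposed in the paper.

```python
def crc_remainder(message_bits, poly_bits):
    """Compute the CRC remainder of a bit list using long division by the
    generator polynomial (MSB first). Example polynomial is illustrative."""
    degree = len(poly_bits) - 1
    work = list(message_bits) + [0] * degree  # append zeros for the check bits
    for i in range(len(message_bits)):
        if work[i]:  # only divide when the leading bit is 1
            for j, p in enumerate(poly_bits):
                work[i + j] ^= p
    return work[-degree:]

# CRC-3 example with generator x^3 + x + 1 -> bits 1011
msg = [1, 0, 1, 1, 0, 0, 1]
print(crc_remainder(msg, [1, 0, 1, 1]))  # -> [0, 1, 1]
```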
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be in reasonable agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests that the additional risk buy-down from considering the fireball is minimal once the blast hazards are considered. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
Event-based sensors are a novel sensing technology which capture the dynamics of a scene via pixel-level change detection. This technology operates with high speed (>10 kHz), low latency (10 µs), low power consumption (<1 W), and high dynamic range (120 dB). Compared to conventional, frame-based architectures that report data for every pixel at a fixed frame rate, event-based sensor pixels report data only when a change in pixel intensity occurs. This affords the possibility of dramatically reducing the data reported in bandwidth-limited environments (e.g., remote sensing), and thus the data that must be processed, while still recovering significant events. Degraded visual environments, such as those generated by fog, often hinder situational awareness by decreasing optical resolution and transmission range via random scattering of light. To respond to this challenge, we present the deployment of an event-based sensor in a controlled, experimentally generated, well-characterized degraded visual environment (a fog analogue) for detection of a modulated signal and comparison of data collected from an event-based sensor and from a traditional framing sensor.
This paper presents a die-embedded glass interposer with minimum warpage for 5G/6G applications. The interposer achieves high integration with low-loss interconnects by embedding multiple chips in the same glass substrate and interconnecting the chips through redistribution layers (RDL). Novel processes for cavity creation, multi-die embedding, carrier-less RDL build-up, and heat spreader attachment are proposed and demonstrated in this work. Performance of the interposer from 1 GHz to 110 GHz is evaluated. This work provides an advanced packaging solution for low-loss die-to-die and die-to-package interconnects, which is essential to high-performance wireless system integration.
Compared with traditional base-excitation vibration qualification testing, multi-axis vibration testing methods can be significantly faster and more accurate. Here, a 12-shaker multiple-input/multiple-output (MIMO) test method called intrinsic connection excitation (ICE) is developed and assessed for use on an example aerospace component. In this study, the ICE technique utilizes 12 shakers (one for each boundary-condition attachment degree of freedom on the component), specially designed fixtures, and MIMO control to provide an accurate set of loads and boundary conditions during the test. Acceleration, force, and voltage control provide insight into the viability of this testing method. System field test and ICE test results are compared to traditional single-degree-of-freedom specification development and testing. Results indicate the multi-shaker ICE test provided a much more accurate replication of the system field test response compared with single-degree-of-freedom testing.
Modern Industrial Control Systems (ICS) attacks evade existing tools by using knowledge of ICS processes to blend their activities with benign Supervisory Control and Data Acquisition (SCADA) operation, causing physical-world damage. We present Scaphy to detect ICS attacks in SCADA by leveraging the unique execution phases of SCADA to identify the limited set of legitimate behaviors used to control the physical world in different phases, which differentiates them from an attacker's activities. For example, it is typical for SCADA to set up ICS device objects during initialization, but anomalous during process-control. To extract unique behaviors of SCADA execution phases, Scaphy first leverages open ICS conventions to generate a novel physical process dependency and impact graph (PDIG) to identify disruptive physical states. Scaphy then uses PDIG to inform a physical process-aware dynamic analysis, whereby code paths of SCADA process-control execution are induced to reveal API call behaviors unique to legitimate process-control phases. Using this established behavior, Scaphy selectively monitors an attacker's physical-world-targeted activities that violate legitimate process-control behaviors. We evaluated Scaphy at a U.S. national lab ICS testbed environment. Using diverse ICS deployment scenarios and attacks across 4 ICS industries, Scaphy achieved 95% accuracy and 3.5% false positives (FP), compared to 47.5% accuracy and 25% FP for existing work. We analyze Scaphy's resilience to futuristic attacks where the attacker knows our approach.
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or from the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the lifetime decay of the phosphor Gd2O2S:Tb for temperature sensitivity after excitation from a pulsed x-ray source. These results are compared to the lifetime decays found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable temperature sensitivity for both excitation sources over a temperature range of 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
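Lifetime-based phosphor thermometry typically extracts the decay constant by fitting an exponential model to the measured luminescence trace. The sketch below shows a generic single-exponential fit with synthetic data; the actual analysis in this study may use different decay models or fitting windows.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    """Single-exponential luminescence decay model I(t) = A*exp(-t/tau) + C."""
    return amplitude * np.exp(-t / tau) + offset

# Synthetic decay trace standing in for a measured phosphor signal.
t = np.linspace(0.0, 3e-3, 500)                           # seconds
signal = decay(t, 1.0, 6e-4, 0.02)
signal += np.random.default_rng(1).normal(0, 0.01, t.size)  # measurement noise

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3, 0.0))
print(f"fitted lifetime tau = {popt[1] * 1e6:.1f} microseconds")
```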
The Information Harm Triangle (IHT) is a novel approach that aims to adapt intuitive engineering concepts to simplify defense in depth for instrumentation and control (I&C) systems at nuclear power plants. This approach combines digital harm, real-world harm, and unsafe control actions (UCAs) into a single graph named “Information Harm Triangle.” The IHT is based on the postulation that the consequences of cyberattacks targeting I&C systems can be expressed in terms of two orthogonal components: a component representing the magnitude of data harm (DH) (i.e., digital information harm) and a component representing physical information harm (PIH) (i.e., real-world harm, e.g., an inadvertent plant trip). The magnitude of the severity of the physical consequence is the aspect of risk that is of concern. The sum of these two components represents the total information harm. The IHT intuitively informs risk-informed cybersecurity strategies that employ independent measures that either act to prevent, reduce, or mitigate DH or PIH. Another aspect of the IHT is that the DH can result in cyber-initiated UCAs that result in severe physical consequences. The orthogonality of DH and PIH provides insights into designing effective defense in depth. The IHT can also represent cyberattacks that have the potential to impede, evade, or compromise countermeasures from taking appropriate action to reduce, stop, or mitigate the harm caused by such UCAs. Cyber-initiated UCAs transform DH to PIH.
The structure-property linkage is, alongside the process-structure linkage, one of the two most important relationships in materials science, especially for metals and polycrystalline alloys. The stochastic nature of microstructures calls for a robust approach to reliably address this linkage. As such, uncertainty quantification (UQ) plays an important role in this regard and cannot be ignored. To probe the structure-property linkage, many multi-scale integrated computational materials engineering (ICME) tools have been proposed and developed over the last decade to accelerate the material design process in the spirit of the Materials Genome Initiative (MGI), notably crystal plasticity finite element models (CPFEM) and phase-field simulations. Machine learning (ML) methods, including deep learning and physics-informed/-constrained approaches, can also be conveniently applied to approximate the computationally expensive ICME models, allowing one to navigate efficiently in both structure and property spaces. Since UQ also plays a crucial role in verification and validation for both ICME and ML models, it is important to include UQ in the picture. In this paper, we summarize a few of our recent research efforts addressing UQ aspects of homogenized properties using CPFEM in a big-picture context.
Mann, James B.; Mohanty, Debapriya P.; Kustas, Andrew B.; Stiven Puentes Rodriguez, B.; Issahaq, Mohammed N.; Udupa, Anirudh; Sugihara, Tatsuya; Trumble, Kevin P.; M'Saoubi, Rachid; Chandrasekar, Srinivasan
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with suitable properties and quality for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented from metal systems of varied workability and strip product scale in terms of size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for production of commercial strip for electric motor applications and battery electrodes are discussed.
Multiple Input Multiple Output (MIMO) vibration testing provides the capability to expose a system to a field environment in a laboratory setting, saving both time and money by mitigating the need to perform multiple and costly large-scale field tests. However, MIMO vibration test design is not straightforward, often relying on engineering judgment and multiple test iterations to determine the proper selection of response Degrees of Freedom (DOF) and input locations that yield a successful test. This work investigates two DOF selection techniques for MIMO vibration testing to assist with test design: an iterative algorithm introduced in previous work and an Optimal Experiment Design (OED) approach. The iterative approach downselects the control set by removing the DOF that have the smallest impact on overall error given a target Cross Power Spectral Density matrix and a laboratory Frequency Response Function (FRF) matrix. The OED approach is formulated with the laboratory FRF matrix as a convex optimization problem and solved with a gradient-based optimization algorithm that seeks a set of weighted measurement DOF that minimize a measure of model prediction uncertainty. The DOF selection approaches are used to design MIMO vibration tests using candidate finite element models and simulated target environments. The results are generalized and compared to exemplify the quality of the MIMO test using the selected DOF.
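The iterative algorithm's structure can be conveyed by a generic backward-elimination loop, sketched below; the error function, which in this application would be built from the target CPSD and the laboratory FRF matrix, is left abstract, so this is only a schematic of the approach rather than the authors' implementation.

```python
def greedy_downselect(dof_ids, error_fn, n_keep):
    """Backward elimination: repeatedly drop the DOF whose removal yields the
    smallest error, until n_keep DOF remain. error_fn maps a DOF subset to a
    scalar reconstruction-error metric (problem-specific, supplied by the user)."""
    selected = list(dof_ids)
    while len(selected) > n_keep:
        best_err, best_drop = None, None
        for dof in selected:
            trial = [d for d in selected if d != dof]
            err = error_fn(trial)          # evaluate error without this DOF
            if best_err is None or err < best_err:
                best_err, best_drop = err, dof
        selected.remove(best_drop)         # drop the least-impactful DOF
    return selected
```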
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe—routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface which envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capability to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front face chamfer angle has the largest influence on stability, with low angles being more stable.
The widespread adoption of residential solar PV requires distribution system studies to ensure the addition of solar PV at a customer location does not violate system constraints; the maximum PV that can be added without violations is referred to as the locational hosting capacity (HC). These model-based analyses are prone to error due to their dependence on the accuracy of the system information. Model-free approaches to estimate the solar PV hosting capacity for a customer can be a good alternative because their accuracy does not depend on detailed system information. In this paper, an Adaptive Boosting (AdaBoost) algorithm is deployed that utilizes the statistical properties (mean, minimum, maximum, and standard deviation) of the customer's historical data (real power, reactive power, voltage) as inputs to estimate the voltage-constrained PV HC for the customer. A baseline comparison approach is also built that utilizes just the maximum voltage of the customer to predict PV HC. The results show that the ensemble-based AdaBoost algorithm outperformed the proposed baseline approach. The developed methods are also compared against and validated with existing state-of-the-art model-free PV HC estimation methods.
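A hedged sketch of the described setup is given below: summary statistics of customer measurements serve as features for an AdaBoost regressor predicting hosting capacity. The data and hyperparameters are synthetic placeholders, not those used in the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder feature matrix: per-customer mean/min/max/std of real power,
# reactive power, and voltage (12 statistics), with a synthetic HC target (kW).
X = rng.normal(size=(500, 12))
y = 5.0 + X[:, 0] - 0.5 * X[:, 8] + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = AdaBoostRegressor(n_estimators=200, learning_rate=0.5, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out customers:", round(model.score(X_te, y_te), 3))
```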
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Foulk, James W.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted on a wave-tank model of a bottom-raised oscillating surge wave energy converter (OSWEC) in regular waves. The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for the added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study, in which system parameters, including damping and added mass values, are varied. The differences between predictions and experiments are attributed to tank reflections, standing waves that can occur in long, narrow wave tanks, and the thin-plate assumption employed in the analytical approach.
Computational simulation allows scientists to explore, observe, and test physical regimes thought to be unattainable. Validation and uncertainty quantification play crucial roles in extrapolating the use of physics-based models. Bayesian analysis provides a natural framework for incorporating the uncertainties that undeniably exist in computational modeling. However, the ability to perform quality Bayesian and uncertainty analyses is often limited by the computational expense of first-principles physics models. In the absence of a reliable low-fidelity physics model, phenomenological surrogate or machine learned models can be used to mitigate this expense; however, these data-driven models may not adhere to known physics or properties. Furthermore, the interactions of complex physics in high-fidelity codes lead to dependencies between quantities of interest (QoIs) that are difficult to quantify and capture when individual surrogates are used for each observable. Although this is not always problematic, predicting multiple QoIs with a single surrogate preserves valuable insights regarding the correlated behavior of the target observables and maximizes the information gained from available data. A method of constructing a Gaussian Process (GP) that emulates multiple QoIs simultaneously is presented. As an exemplar, we consider Magnetized Liner Inertial Fusion, a fusion concept that relies on the direct compression of magnetized, laser-heated fuel by a metal liner to achieve thermonuclear ignition. Magneto-hydrodynamics (MHD) codes calculate diagnostics to infer the state of the fuel during experiments, which cannot be measured directly. The calibration of these diagnostic metrics is complicated by sparse experimental data and the expense of high-fidelity neutron transport models. The development of an appropriate surrogate raises long-standing issues in modeling and simulation, including calibration, validation, and uncertainty quantification. The performance of the proposed multi-output GP surrogate model, which preserves correlations between QoIs, is compared to the standard single-output GP for a 1D realization of the MagLIF experiment.
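As a simplified illustration of emulating several QoIs with one GP, the sketch below fits a scikit-learn GaussianProcessRegressor to a two-output toy problem. This simple form shares a kernel across outputs but does not model cross-QoI correlations, which is the key additional feature of the multi-output surrogate developed here; the data and kernel are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy training set: 2 simulation inputs -> 2 related QoIs (placeholders).
X = rng.uniform(0.0, 1.0, size=(40, 2))
Y = np.column_stack([
    np.sin(2 * np.pi * X[:, 0]) + 0.1 * X[:, 1],
    0.5 * np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2,
])

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)  # one column of predictions per QoI
print(mean.shape)
```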
This paper describes the methodology of designing a replacement blade tip and winglet for a wind turbine blade to demonstrate the potential of additive manufacturing for wind energy. The team will later field-demonstrate this additively manufactured, system-integrated tip (AMSIT) on a wind turbine. The blade tip aims to reduce the cost of wind energy by improving aerodynamic performance and reliability while reducing transportation costs. This paper focuses on the design and modeling of a winglet for increased power production while maintaining acceptable structural loads on the original Vestas V27 blade design. A free-wake vortex model, WindDVE, was used for the winglet design analysis. A summary of the aerodynamic design process is presented along with a case study of a specific design.
This presentation describes a new effort to better understand insulator flashover in high-current, high-voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashover events that initiate at the anode triple junction (anode-vacuum-dielectric).
A high altitude electromagnetic pulse (HEMP) or other similar geomagnetic disturbance (GMD) has the potential to severely impact the operation of large-scale electric power grids. By introducing low-frequency common-mode (CM) currents, these events can impact the performance of key system components such as large power transformers. In this work, a solid-state transformer (SST) that can replace susceptible equipment and improve grid resiliency by safely absorbing these CM insults is described. An overview of the proposed SST power electronics and controls architecture is provided, a system model is developed, and the performance of the SST in response to a simulated CM insult is evaluated. Compared to a conventional magnetic transformer, the SST is found to recover quickly from the insult while maintaining nominal ac input/output behavior.
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electric signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology. Therefore, a wide range of analyses are required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated structure was studied using high-fidelity simulations. Structural dynamic simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics. Subsequent multi-physics simulations are discussed that relate the contact mechanics associated with the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized with data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in both the simulation and experimental approaches, so that the relationship between the two could be established.
Prescriptive approaches for the cybersecurity of digital nuclear instrumentation and control (I&C) systems can be cumbersome and costly. These considerations are of particular concern for advanced reactors that implement digital technologies for monitoring, diagnostics, and control. A risk-informed performance-based approach is needed to enable the efficient design of secure digital I&C systems for nuclear power plants. This paper presents a tiered cybersecurity analysis (TCA) methodology as a graded approach for cybersecurity design. The TCA is a sequence of analyses that align with the plant, system, and component stages of design. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant's safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Tier 3 is not performed in this analysis because of the design maturity required for this tier of analysis.
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
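For readers unfamiliar with LAS, the standard Beer-Lambert absorbance relation and its two-zone specialization (two uniform path segments whose absorbances add) are written below in generic notation; the symbols are standard conventions, not reproduced from the paper.

```latex
% Spectral absorbance over a uniform path of length L (Beer-Lambert law):
\alpha(\nu) = -\ln\!\left(\frac{I_t(\nu)}{I_0(\nu)}\right)
            = \sum_j S_j(T)\,\phi_j(\nu;T,P)\,P\,x_{\mathrm{CO}}\,L
% Dual-zone model: the line of sight is split into two uniform segments
% (e.g., a hot outer shell "1" and an inner core "2"), whose absorbances add:
\alpha(\nu) = \sum_j S_j(T_1)\,\phi_j(\nu;T_1,P_1)\,P_1\,x_{\mathrm{CO},1}\,L_1
            + \sum_j S_j(T_2)\,\phi_j(\nu;T_2,P_2)\,P_2\,x_{\mathrm{CO},2}\,L_2
```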
The DevOps movement, which aims to accelerate the continuous delivery of high-quality software, has taken a leading role in reshaping the software industry. Likewise, there is growing interest in applying DevOps tools and practices in the domains of computational science and engineering (CSE) to meet the ever-growing demand for scalable simulation and analysis. Translating insights from industry to research computing, however, remains an ongoing challenge; DevOps for science and engineering demands adaptation and innovation in those tools and practices. There is a need to better understand the challenges faced by DevOps practitioners in CSE contexts in bridging this divide. To that end, we conducted a participatory action research study to collect and analyze the experiences of DevOps practitioners at a major US national laboratory through the use of storytelling techniques. We share lessons learned and present opportunities for future investigation into DevOps practice in the CSE domain.
We present the SEU sensitivity and SEL results from proton and heavy ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
Conference Record of the IEEE Photovoltaic Specialists Conference
Hobbs, William B.; Black, Chloe L.; Holmgren, William F.; Anderson, Kevin S.
Subhourly changes in solar irradiance can lead to energy models being biased high if realistic distributions of irradiance values are not reflected in the resource data and model. This is particularly true in solar facility designs with high inverter loading ratios (ILRs). When resource data with sufficient temporal and spatial resolution is not available for a site, synthetic variability can be added to the data that is available in an attempt to address this issue. In this work, we demonstrate the use of anonymized commercial resource datasets with synthetic variability and compare results with previous estimates of model bias due to inverter clipping and increasing ILR.
Here we examine models for particle curtain dispersion using drag-based formalisms and their connection to streamwise pressure difference closures. Focusing on drag models, we specifically demonstrate that scaling arguments developed in DeMauro et al. [1] using early-time drag modeling can be extended to include late-time particle curtain dispersion behavior by weighting the dynamic portion of the drag relative velocity (Formula Presented) by the inverse of the particle volume fraction to the 1/4 power. The additional parameter α introduced in this scaling is related to the model drag parameters by employing an early-time/late-time matching argument. Comparison with the scaled measurements of DeMauro et al. suggests that the proposed modification is an effective formalism. Next, the connection between drag-based models and streamwise pressure difference-based expressions is explored by formulating simple analytical models that verify an empirical upstream-downstream expression (Daniel and Wagner [2]). Though simple, these models provide physics-based approaches for describing shock-particle curtain interaction behavior.
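As a reading aid, the weighting described above can be restated in generic symbols (ours, not the paper's; the "(Formula Presented)" placeholder in the source stands for the relative-velocity term):

```latex
% Dynamic portion of the drag relative velocity, scaled by the inverse of the
% particle volume fraction \phi_p to the 1/4 power (generic notation):
\left(u_g - u_p\right) \;\longrightarrow\; \frac{u_g - u_p}{\phi_p^{1/4}}
% The additional parameter \alpha is then fixed by matching early-time and
% late-time curtain-dispersion behavior.
```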
Geomagnetic disturbances (GMDs) give rise to geomagnetically induced currents (GICs) on the earth's surface which find their way into power systems via grounded transformer neutrals. The quasi-dc nature of the GICs results in half-cycle saturation of the power grid transformers which in turn results in transformer failure, life reduction, and other adverse effects. Therefore, transformers need to be more resilient to dc excitation. This paper sets forth dc immunity metrics for transformers. Furthermore, this paper sets forth a novel transformer architecture and a design methodology which employs the dc immunity metrics to make it more resilient to dc excitation. This is demonstrated using a time-stepping 2D finite element analysis (FEA) simulation. It was found that a relatively small change in the core geometry significantly increases transformer resiliency with respect to dc excitation.
Spent nuclear fuel repository simulations are currently not able to incorporate detailed fuel matrix degradation (FMD) process models due to their computational cost, especially when large numbers of waste packages breach. The current paper uses machine learning to develop artificial neural network and k-nearest neighbor regression surrogate models that approximate the detailed FMD process model while being computationally much faster to evaluate. Using fuel cask temperature, dose rate, and the environmental concentrations of CO3^2−, O2, Fe^2+, and H2 as inputs, these surrogates show good agreement with the FMD process model predictions of the UO2 degradation rate for conditions within the range of the training data. A demonstration in a full-scale shale repository reference case simulation shows that the incorporation of the surrogate models captures local and temporal environmental effects on fuel degradation rates while retaining good computational efficiency.
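A minimal sketch of the two surrogate types named above, trained on the six listed inputs to predict a degradation rate, is shown below; the training data are synthetic placeholders and the architectures and hyperparameters are not those of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder training table: temperature, dose rate, [CO3^2-], [O2], [Fe^2+], [H2]
X = rng.uniform(size=(2000, 6))
y = np.log10(1e-6 + X[:, 0] * X[:, 1] + 0.1 * X[:, 3])  # synthetic degradation rate

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                 random_state=0)).fit(X, y)
knn = make_pipeline(StandardScaler(),
                    KNeighborsRegressor(n_neighbors=8)).fit(X, y)

x_query = rng.uniform(size=(1, 6))
print(ann.predict(x_query), knn.predict(x_query))  # fast surrogate evaluations
```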
The design of thermal protection systems (TPS), including heat shields for reentry vehicles, relies increasingly on computational simulation tools for design optimization and uncertainty quantification. Since high-fidelity simulations are computationally expensive for full vehicle geometries, analysts primarily use reduced-physics models instead. Recent work has shown that projection-based reduced-order models (ROMs) can provide accurate approximations of high-fidelity models at a lower computational cost. ROMs are preferable to alternative approximation approaches for high-consequence applications due to the presence of rigorous error bounds. The following paper extends our previous work on projection-based ROMs for ablative TPS by considering hyperreduction methods, which yield further reductions in computational cost, and by demonstrating the approach for simulations of a three-dimensional flight vehicle. We compare the accuracy and potential performance of several different hyperreduction methods and mesh sampling strategies. This paper shows that, with the correct implementation, hyperreduction can make ROMs up to 1-3 orders of magnitude faster than the full-order model by evaluating the residual at only a small fraction of the mesh nodes.