A comprehensive study of the mechanical response of a 316 stainless steel is presented. The split-Hopkinson bar technique was used to evaluate the mechanical behavior at dynamic strain rates of 500 s−1, 1500 s−1, and 3000 s−1 and temperatures of 22 °C and 300 °C under tension and compression loading, while the Drop-Hopkinson bar was used to characterize the tension behavior at an intermediate strain rate of 200 s−1. The experimental results show that the tension and compression flow stress are reasonably symmetric, exhibit positive strain rate sensitivity, and are inversely dependent on temperature. The true failure strain was determined by measuring the minimum diameter of the post-test tension specimen. The 316 stainless steel exhibited a ductile response, and the true failure strain increased with increasing temperature and decreased with increasing strain rate.
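As a brief worked illustration of the true-failure-strain calculation described above, the sketch below computes the logarithmic strain from the initial and post-test minimum diameters under the usual assumption of plastic incompressibility; the diameter values are hypothetical and not taken from the study.

```python
import math

def true_failure_strain(d0: float, d_f: float) -> float:
    """True (logarithmic) failure strain from the initial and post-test minimum
    diameters of a round tension specimen, assuming plastic incompressibility:
    eps_f = ln(A0 / Af) = 2 * ln(d0 / d_f)."""
    return 2.0 * math.log(d0 / d_f)

# Hypothetical diameters (mm) for illustration only, not values from the study
print(f"true failure strain = {true_failure_strain(3.0, 1.8):.2f}")
```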
Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximation algorithms for MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that make use of randomness similarly to natural brains, we hypothesize that the intrinsic randomness in microelectronics devices could be turned into a valuable component of a neuromorphic architecture enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path for scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computational architectures.
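For readers unfamiliar with the stochastic-sampling workload referred to above, the sketch below is a minimal conventional software heuristic for MAXCUT driven by random bit flips; it is not the neuromorphic circuit itself, only an illustration of the kind of random-sampling loop such hardware is intended to accelerate.

```python
import math
import random

def stochastic_maxcut(adj, n, sweeps=2000, beta=2.0, seed=0):
    """Toy stochastic MAXCUT heuristic: vertices are flipped with a probability
    that depends on the change in cut value, loosely analogous to the random
    sampling a neuromorphic solver would perform in hardware."""
    rng = random.Random(seed)
    side = [rng.choice((0, 1)) for _ in range(n)]

    def gain(v):
        # net increase in cut edges if vertex v switches sides
        return sum(1 if side[v] == side[u] else -1 for u in adj[v])

    for _ in range(sweeps):
        v = rng.randrange(n)
        g = gain(v)
        # always accept improving flips; accept worsening flips stochastically
        if g >= 0 or rng.random() < math.exp(beta * g):
            side[v] ^= 1
    cut = sum(1 for v in adj for u in adj[v] if u > v and side[u] != side[v])
    return side, cut

# Example: a 4-cycle, whose maximum cut is 4
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(stochastic_maxcut(adj, 4))
```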
We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders and skip connections and trained with multifidelity data. Besides requiring significantly fewer trainable parameters than equivalent fully connected networks, encoder, decoder, encoder-decoder or decoder-encoder architectures can learn the mapping from inputs to outputs of arbitrary dimensionality. We demonstrate their accuracy when trained on a few high-fidelity and many low-fidelity data generated from models ranging from one-dimensional functions to Poisson equation solvers in two dimensions. We finally discuss a number of implementation choices that improve the reliability of the uncertainty estimates generated by Monte Carlo DropBlocks, and compare uncertainty estimates among low-, high- and multifidelity approaches.
Computational modeling frequently generates sets of related simulation runs, known as ensembles. These simulations often output 3D surface mesh data, where the geometry and variable values of the mesh change with each time step. Comparing these ensembles depends on comparing not only geometric properties, but also associated field data. In this paper, we propose a new metric for comparing mesh geometry combined with field data variables. Our measure is a generalization of the well-known Metro algorithm used in mesh simplification. The Metro algorithm can compare two meshes but does not consider field variables. Our metric evaluates a single variable in combination with the mesh geometry. Combining our metric with multidimensional scaling, we visualize a low-dimensional representation of all the time steps from a set of example ensembles to demonstrate the effectiveness of this approach.
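A minimal sketch of the kind of combined geometry-plus-field distance described above, followed by a multidimensional-scaling embedding of the pairwise distances; this vertex-sampled version (with an assumed field weight w) is only an approximation of a true Metro-style surface-sampled metric.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.manifold import MDS

def mesh_field_distance(vA, fA, vB, fB, w=1.0):
    """One-sided Metro-style distance between two meshes, generalized to include
    a scalar field: for each vertex of A, the nearest vertex of B is found and
    the geometric gap is combined with the field difference.
    (Sketch only: true Metro samples surfaces, not just vertices.)"""
    d_geo, idx = cKDTree(vB).query(vA)
    d_field = np.abs(fA - fB[idx])
    return np.sqrt(d_geo**2 + (w * d_field)**2).max()

def symmetric_distance(a, b, w=1.0):
    return max(mesh_field_distance(*a, *b, w=w), mesh_field_distance(*b, *a, w=w))

def embed(meshes, w=1.0):
    """Embed all time steps (list of (vertices, field) pairs) in 2D with MDS
    applied to the precomputed pairwise distance matrix."""
    n = len(meshes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = symmetric_distance(meshes[i], meshes[j], w)
    return MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
```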
Austenitic stainless steels have been extensively tested in hydrogen environments; however, limited information exists for the effects of hydrogen on the fatigue life of high-strength grades of austenitic stainless steels. Moreover, fatigue life testing of finished product forms (such as tubing and welds) is challenging. A novel test method for evaluating the influence of internal hydrogen on fatigue of orbital tube welds was reported, where a cross hole in a tubing specimen is used to establish a stress concentration analogous to circumferentially notched bar fatigue specimens for constant-load, axial fatigue testing. In that study (Kagay et al., ASME PVP2020-8576), annealed 316L tubing with a cross hole displayed fatigue performance similar to that of more conventional materials test specimens. A similar cross-hole tubing geometry is adopted here to evaluate the fatigue crack initiation and fatigue life of XM-19 austenitic stainless steel with a high concentration of internal hydrogen. XM-19 is a nitrogen-strengthened Fe-Cr-Ni-Mn austenitic stainless steel that offers higher strength than conventional 3XX series stainless steels. A uniform hydrogen concentration in the test specimen is achieved by thermal precharging (exposure to high-pressure hydrogen at elevated temperature for two weeks) prior to testing in air to simulate the equilibrium hydrogen concentration near a stress concentration in gaseous hydrogen service. Specimens are also instrumented for direct current potential difference measurements to identify crack initiation. After accounting for the strengthening associated with thermal precharging, the fatigue crack initiation and fatigue life of XM-19 tubing were virtually unchanged by internal hydrogen.
Prescriptive approaches for the cybersecurity of digital nuclear instrumentation and control (I&C) systems can be cumbersome and costly. These considerations are of particular concern for advanced reactors that implement digital technologies for monitoring, diagnostics, and control. A risk-informed performance-based approach is needed to enable the efficient design of secure digital I&C systems for nuclear power plants. This paper presents a tiered cybersecurity analysis (TCA) methodology as a graded approach for cybersecurity design. The TCA is a sequence of analyses that align with the plant, system, and component stages of design. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant's safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Tier 3 is not performed in this analysis because of the design maturity required for this tier of analysis.
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
Diffusion bonding of two immiscible, binary metallic systems, Cu-Ta and Cu-W, was employed to make repeatable and predictable dual-layer impactors for shock-reshock experiments. The diffusion bonded impactors were characterized using ultrasonic imaging and optical microscopy to ensure bonding and the absence of excessive Cu grain coarsening. The diffusion bonded impactors were launched via a two-stage gas gun at [100] LiF windows instrumented with multiple interferometry probes spanning nearly the entire impactor area. Consistent interferometry data were obtained from all experiments with no evidence of release prior to recompression, indicating a uniform bond. Comparisons to hydrocode simulations show excellent agreement for all experiments, facilitating easy application of these impactors to future experiments.
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface which envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes’ capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front face chamfer angle has the largest influence on stability, with low angles being more stable.
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own very-closely-spaced towers avoid these disadvantages but create a significant disadvantage; for some wind directions the wake turbulence of a rotor enters the swept area of a very close downwind rotor causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to design a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code Exa-Wind/Nalu-Wind is used to simulate the wake interactions from paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with hub-to-hub separation distances of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
Simulation of the interaction of light with matter, including at the few-photon level, is important for understanding the optical and optoelectronic properties of materials and for modeling next-generation nonlinear spectroscopies that use entangled light. At the few-photon level the quantum properties of the electromagnetic field must be accounted for with a quantized treatment of the field, and then such simulations quickly become intractable, especially if the matter subsystem must be modeled with a large number of degrees of freedom, as can be required to accurately capture many-body effects and quantum noise sources. Motivated by this we develop a quantum simulation framework for simulating such light-matter interactions on platforms with controllable bosonic degrees of freedom, such as vibrational modes in the trapped ion platform. The key innovation in our work is a scheme for simulating interactions with a continuum field using only a few discrete bosonic modes, which is enabled by a Green's function (response function) formalism. We develop the simulation approach, sketch how the simulation can be performed using trapped ions, and then illustrate the method with numerical examples. Our work expands the reach of quantum simulation to important light-matter interaction models and illustrates the advantages of extracting dynamical quantities such as response functions from quantum simulations.
Bao, Jichao; Lee, Jonghyun; Yoon, Hongkyu Y.; Pyrak-Nolte, Laura
Characterization of geologic heterogeneity at an enhanced geothermal system (EGS) is crucial for cost-effective stimulation planning and reliable heat production. With recent advances in computational power and sensor technology, large-scale fine-resolution simulations of coupled thermal-hydraulic-mechanical (THM) processes have become available. However, traditional large-scale inversion approaches have limited utility for sites with complex subsurface structures unless one can afford the high, often computationally prohibitive, cost. The key computational burdens are predominantly associated with the large number of large-scale coupled numerical simulations and the large dense matrix multiplications arising from fine discretization of the field site domain and a large number of THM and chemical (THMC) measurements. In this work, we present deep-generative model-based Bayesian inversion methods for the computationally efficient and accurate characterization of EGS sites. Deep generative models are used to learn the approximate subsurface property (e.g., permeability, thermal conductivity, and elastic rock properties) distribution from multipoint geostatistics-derived training images or discrete fracture network models as a prior, and accelerated stochastic inversion is performed on the low-dimensional latent space in a Bayesian framework. Numerical examples with synthetic permeability fields with fracture inclusions and THM data sets based on the Utah FORGE geothermal site will be presented to test the accuracy, speed, and uncertainty quantification capability of our proposed joint data inversion method.
This work developed a methodology for transmission line modeling of cable installations to predict the propagation of conducted high altitude electromagnetic pulses in a substation or generating plant. The methodology was applied to a termination cabinet example that was modeled with SPICE transmission line elements using information from electromagnetic field modeling and validated with experimental data. The experimental results showed reasonable agreement with the modeled propagating pulse, and the approach can be applied to other installation structures in the future.
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms, allowing for measuring algorithm improvement over time. An absence of such tests contributes to the proliferation of fitting methods and inhibits achieving consensus on best practices. Benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
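As an illustration of the "simulated curves with known parameter solutions" idea, the sketch below generates a synthetic IV curve from a single-diode model and adds Gaussian measurement noise; the model choice and parameter values are assumptions for illustration, not the benchmark's actual definitions.

```python
import numpy as np
from scipy.optimize import brentq

def iv_curve(IL=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, n=1.3, Ns=60, T=298.15, points=100):
    """Simulate a single-diode-model IV curve with known parameters, the kind of
    synthetic curve a fitting benchmark can score against (illustrative values)."""
    k, q = 1.380649e-23, 1.602176634e-19
    nNsVth = n * Ns * k * T / q

    def current(V):
        # implicit single-diode equation solved for the terminal current
        f = lambda I: IL - I0 * (np.exp((V + I * Rs) / nNsVth) - 1) - (V + I * Rs) / Rsh - I
        return brentq(f, -IL, 2 * IL)

    Voc_guess = nNsVth * np.log(IL / I0 + 1)
    V = np.linspace(0.0, Voc_guess, points)
    I = np.array([current(v) for v in V])
    return V, I

V, I = iv_curve()
# add simulated measurement error, as in the benchmark's noisy test cases
I_noisy = I + np.random.default_rng(0).normal(0.0, 0.01, I.shape)
```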
The error detection performance of cyclic redundancy check (CRC) codes combined with bit framing in digital serial communication systems is evaluated. Advantages and disadvantages of the combined method are treated in light of the probability of undetected errors. It is shown that bit framing can increase the burst error detection capability of the CRC, but it can also adversely affect CRC random error detection performance. To quantify the effect of bit framing on CRC error detection, the concept of error "exposure" is introduced. Our investigations lead us to propose resilient generator polynomials that, when combined with bit framing, can result in improved CRC error detection performance at no additional implementation cost. Example results are generated for short codewords showing that proper choice of CRC generator polynomial can improve error detection performance when combined with bit framing. The implication is that CRC combined with bit framing can reduce the probability of undetected errors even under high error rate conditions.
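To make the undetected-error notion concrete, here is a small bitwise CRC sketch: an error pattern escapes detection exactly when it is a multiple of the generator polynomial, and framing-bit insertion or slip would be modeled by transforming the codeword before this check. The toy CRC-3 generator is chosen for brevity and is not one of the polynomials proposed in the paper.

```python
def crc_remainder(bits, poly):
    """Bitwise CRC long division: divide the bit list by the generator
    polynomial and return the remainder (last len(poly)-1 bits)."""
    bits = bits[:]
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

def undetected(codeword, error_pattern, poly):
    """An error pattern goes undetected iff the corrupted word still has a
    zero CRC remainder, i.e. the pattern is a multiple of the generator."""
    corrupted = [b ^ e for b, e in zip(codeword, error_pattern)]
    return not any(crc_remainder(corrupted, poly))

# Toy CRC-3 generator x^3 + x + 1 -> 1011
poly = [1, 0, 1, 1]
msg = [1, 0, 1, 1, 0, 1]
codeword = msg + crc_remainder(msg + [0, 0, 0], poly)
print(undetected(codeword, [1] + [0] * 8, poly))                 # False: single-bit error caught
print(undetected(codeword, [1, 0, 1, 1, 0, 0, 0, 0, 0], poly))   # True: multiple of g(x) slips through
```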
We study both conforming and non-conforming versions of the practical DPG method for the convection-reaction problem. We determine that the most common approach for DPG stability analysis - construction of a local Fortin operator - is infeasible for the convection-reaction problem. We then develop a line of argument based on a direct proof of discrete stability; we find that employing a polynomial enrichment for the test space does not suffice for this purpose, motivating the introduction of a (two-element) subgrid mesh. The argument combines mathematical analysis with numerical experiments.
Filamentous fungi can synthesize a variety of nanoparticles (NPs), a process referred to as mycosynthesis that requires little energy input, does not require the use of harsh chemicals, occurs at near-neutral pH, and does not produce toxic byproducts. While NP synthesis involves reactions between metal ions and exudates produced by the fungi, the chemical and biochemical parameters underlying this process remain poorly understood. Here, the role of fungal species and precursor salt in the mycosynthesis of zinc oxide (ZnO) NPs is investigated. These data demonstrate that all five fungal species tested are able to produce ZnO structures that can be morphologically classified into i) well-defined NPs, ii) coalesced/dissolving NPs, and iii) micron-sized square plates. Further, species-dependent preferences for these morphologies are observed, suggesting potential differences in the profile or concentration of the biochemical constituents in their individual exudates. These data also demonstrate that mycosynthesis of ZnO NPs is independent of the anion species, with nitrate, sulfate, and chloride showing no effect on NP production. Finally, these results enhance the understanding of factors controlling the mycosynthesis of ceramic NPs, supporting future studies that can enable control over the physical and chemical properties of NPs formed through this “green” synthesis method.
This paper presents the uncertainty propagation of turbulent coefficients for the Spalart–Allmaras (SA) turbulence model using projection-based reduced-order models (ROMs). ROMs are used instead of Reynolds-averaged Navier–Stokes (RANS) solvers and stochastic collocation/Galerkin and Monte Carlo methods because they are computationally inexpensive and tend to offer more accuracy than a polynomial surrogate. The uncertainty propagation is performed on two benchmark RANS cases documented on NASA’s turbulence modeling resource. Uncertainty propagation of the SA turbulent coefficients using a ROM is shown to compare well against uncertainty propagation performed using only RANS and using a Gaussian process regression (GP) model. The ROM is shown to be more robust to the size and spread of the training data compared to a GP model.
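The GP comparison baseline mentioned above can be sketched as follows: a Gaussian process surrogate is trained on a handful of evaluations of a quantity of interest as a function of two SA coefficients, and coefficient uncertainty is then propagated by Monte Carlo sampling through the surrogate. The quantity-of-interest function and coefficient ranges here are made-up stand-ins, not the paper's RANS cases.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-in for a RANS quantity of interest (e.g., a drag value)
# as a function of two SA coefficients; in practice each sample is a full solve.
def qoi(theta):
    cb1, sigma = theta
    return 0.02 + 0.5 * (cb1 - 0.1355) ** 2 + 0.1 * (sigma - 2.0 / 3.0)

rng = np.random.default_rng(0)
train = rng.uniform([0.12, 0.6], [0.15, 0.75], size=(20, 2))      # training designs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.01, 0.05])).fit(
    train, [qoi(t) for t in train])

# Propagate coefficient uncertainty by Monte Carlo through the surrogate
samples = rng.uniform([0.12, 0.6], [0.15, 0.75], size=(10000, 2))
mean, std = gp.predict(samples, return_std=True)
print(mean.mean(), mean.std())
```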
A challenge for TW-class accelerators, such as Sandia's Z machine, is efficient power coupling due to current loss in the final power feed. It is also important to understand how such losses will scale to larger next generation pulsed power (NGPP) facilities. While modeling is being used to study these power flow losses, it is important to have diagnostics that can experimentally measure plasmas in these conditions and help inform simulations. The plasmas formed in the power flow region can be challenging to diagnose due to both limited lines of sight and temperatures and densities significantly lower than those of typical plasmas studied on Z. This necessitates special diagnostic development to accurately measure the power flow plasma on Z.
We consider the intersection between nonrepeating random FM (RFM) waveforms and practical forms of optimal mismatched filtering (MMF). Specifically, the spectrally-shaped inverse filter (SIF) is a well-known approximation to the least-squares (LS-MMF) that provides significant computational savings. Given that nonrepeating waveforms likewise require unique nonrepeating MMFs, this efficient form is an attractive option. Moreover, both RFM waveforms and the SIF rely on spectrum shaping, which establishes a relationship between the goodness of a particular waveform and the mismatch loss (MML) the corresponding filter can achieve. Both simulated and open-air experimental results are shown to demonstrate performance.
The novel Hydromine harvests energy from flowing water with no external moving parts, resulting in a robust system with minimal environmental impact. Here two deployment scenarios are considered: an offshore floating platform configuration to capture energy from relatively steady ocean currents at megawatt-scale, and a river-based system at kilowatt-scale mounted on a pylon. Hydrodynamic and techno-economic models are developed. The hydrodynamic models are used to maximize the efficiency of the power conversion. The techno-economic models optimize the system size and layout and ultimately seek to minimize the levelized-cost-of-electricity produced. Parametric and sensitivity analyses are performed on the models to optimize performance and reduce costs.
Terrain-relative autonomous navigation is a challenging task. In traditional approaches, an elevation map is carried onboard and compared to measurements of the terrain below the vehicle. These methods are computationally expensive, and it is impractical to store high-quality maps of large swaths of terrain. In this article, we generate position measurements using NeuroGrid, a recently-proposed algorithm for computing position information from terrain elevation measurements. We incorporate NeuroGrid into an inertial navigation scheme using a novel measurement rejection strategy and online covariance computation. Our results show that the NeuroGrid filter provides highly accurate state information over the course of a long trajectory.
This study presents a method for constructing machine learning-based reduced order models (ROMs) that accurately simulate nonlinear contact problems while quantifying epistemic uncertainty. These purely non-intrusive ROMs significantly lower computational costs compared to traditional full order models (FOMs). The technique utilizes adversarial training combined with an ensemble of Barlow twins reduced order models (BT-ROMs) to maximize the information content of the nonlinear reduced manifolds. These lower-dimensional manifolds are equipped with Gaussian error estimates, allowing for quantifying epistemic uncertainty in the ROM predictions. The effectiveness of these ROMs, referred to as UQ-BT-ROMs, is demonstrated in the context of contact between a rigid indenter and a hyperelastic substrate under finite deformations. The ensemble of BT-ROMs improves accuracy and computational efficiency compared to existing alternatives. The relative error between the UQ-BT-ROM and FOM solutions ranges from approximately 3% to 8% across all benchmarks. This level of accuracy is achieved at a significantly reduced computational cost: for instance, the online phase of the UQ-BT-ROM takes only 0.001 seconds, while a single FOM evaluation requires 63 seconds. Furthermore, the error estimate produced by the UQ-BT-ROMs reasonably captures the errors in the ROMs, with increasing accuracy as training data increases.
Advances in laser diagnostics and models have been leveraged to investigate plasmas in two spatial dimensions (2D), but the spatially complex structure in actual plasmas requires techniques that can provide a more complete three-dimensional (3D) picture. To address this limitation, a plasma tomographic optical imaging diagnostic has been developed at Sandia National Labs. The system includes four intensified cameras that can measure eight angular projections of the light source with a temporal resolution of 5 ns. An algebraic reconstruction technique (ART) is used to determine the light intensity at each voxel within the interrogated volume using the method of projections onto convex sets. Initial efforts have focused on 3D optical emission imaging. Development challenges have included reconstruction algorithm development and achieving sufficient 3D spatial and temporal resolution to resolve features of interest.
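A minimal sketch of the algebraic reconstruction technique with a non-negativity projection (one simple instance of projection onto convex sets); the tiny 2x2 example below is purely illustrative and, being underdetermined, recovers a voxel field consistent with the projections rather than a unique answer.

```python
import numpy as np

def art_reconstruct(A, p, n_iters=50, relax=0.25, nonneg=True):
    """Algebraic reconstruction technique (Kaczmarz sweeps): A maps voxel
    intensities x to line-of-sight projections p; each row constraint is
    enforced in turn, with an optional non-negativity projection."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
        if nonneg:
            np.maximum(x, 0.0, out=x)
    return x

# Tiny 2x2 "volume" seen from two angles (rows sum horizontally and vertically)
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]], float)
truth = np.array([1.0, 2.0, 3.0, 4.0])
print(art_reconstruct(A, A @ truth, n_iters=500))
```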
Earth and Space 2022: Space Exploration, Utilization, Engineering, and Construction in Extreme Environments - Selected Papers from the 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments
Partitioned methods allow one to build a simulation capability for coupled problems by reusing existing single-component codes. In so doing, partitioned methods can shorten code development and validation times for multiphysics and multiscale applications. In this work, we consider a scenario in which one or more of the “codes” being coupled are projection-based reduced order models (ROMs), introduced to lower the computational cost associated with a particular component. We simulate this scenario by considering a model interface problem that is discretized independently on two non-overlapping subdomains. We then formulate a partitioned scheme for this problem that allows the coupling between a ROM “code” for one of the subdomains with a finite element model (FEM) or ROM “code” for the other subdomain. The ROM “codes” are constructed by performing proper orthogonal decomposition (POD) on a snapshot ensemble to obtain a low-dimensional reduced order basis, followed by a Galerkin projection onto this basis. The ROM and/or FEM “codes” on each subdomain are then coupled using a Lagrange multiplier representing the interface flux. To partition the resulting monolithic problem, we first eliminate the flux through a dual Schur complement. Application of an explicit time integration scheme to the transformed monolithic problem decouples the subdomain equations, allowing their independent solution for the next time step. We show numerical results that demonstrate the proposed method’s efficacy in achieving both ROM-FEM and ROM-ROM coupling.
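The ROM "code" construction described above, POD of a snapshot ensemble followed by Galerkin projection, can be sketched in a few lines for a linear subdomain operator; the Lagrange-multiplier interface coupling and the dual Schur complement elimination are beyond this minimal example.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Proper orthogonal decomposition of a snapshot ensemble: columns of
    `snapshots` are solution states; return the reduced basis capturing the
    requested fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return U[:, :k]

def galerkin_rom(K, f, Phi):
    """Galerkin projection of a linear operator K x = f onto the POD basis Phi;
    returns the reconstructed full-order approximation Phi @ x_reduced."""
    Kr = Phi.T @ K @ Phi
    fr = Phi.T @ f
    return Phi @ np.linalg.solve(Kr, fr)
```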
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
This paper elaborates the results of the hardware implementation of a traveling wave (TW) protection device (PD) for DC microgrids. The proposed TWPD is implemented on a commercial digital signal processor (DSP) board. In the developed TWPD, the DSP board's Analog to Digital Converter (ADC) first samples the input at a 1 MHz sampling rate. The analog input card of the DSP board measures the pole current at the TWPD location in the DC microgrid. A TW detection algorithm is then applied to the output of the ADC to detect the fault occurrence instant. Once this instant is detected, multi-resolution analysis (MRA) is performed on a 128-sample data buffer that is created around the fault instant. The MRA utilizes the discrete wavelet transform (DWT) to extract the high-frequency signatures of the measured pole current. To quantify the extracted TW features, the Parseval theorem is used to calculate the Parseval energy of the reconstructed wavelet coefficients created by the MRA. These Parseval energy values are then used as inputs to a polynomial linear regression tool to estimate the fault location. The performance of the created TWPD is verified using an experimental testbed.
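A hedged sketch of the feature-extraction and regression chain described above, using PyWavelets for the DWT-based MRA; it computes Parseval energies directly from the detail/approximation coefficients (the paper uses reconstructed wavelet coefficients), and the wavelet choice, decomposition depth, and polynomial degree are assumptions.

```python
import numpy as np
import pywt

def parseval_features(window, wavelet="db4", levels=4):
    """Multi-resolution analysis of the 128-sample current window around the
    detected traveling-wave arrival: decompose with the DWT and return the
    Parseval energy of the coefficients at each level."""
    coeffs = pywt.wavedec(window, wavelet, level=levels)
    return np.array([np.sum(c ** 2) for c in coeffs])

def fit_locator(features, dist_km, degree=2):
    """Hypothetical offline training: each row of `features` comes from a
    simulated fault at the known distance in `dist_km`; fit a polynomial
    regression of distance on the Parseval energies by least squares."""
    X = np.hstack([features ** d for d in range(1, degree + 1)])
    X = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(X, dist_km, rcond=None)
    return coef

def locate(coef, feature_row, degree=2):
    x = np.concatenate([feature_row ** d for d in range(1, degree + 1)] + [[1.0]])
    return float(x @ coef)
```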
Conference Record of the IEEE Photovoltaic Specialists Conference
Hobbs, William B.; Black, Chloe L.; Holmgren, William F.; Anderson, Kevin
Subhourly changes in solar irradiance can lead to energy models being biased high if realistic distributions of irradiance values are not reflected in the resource data and model. This is particularly true in solar facility designs with high inverter loading ratios (ILRs). When resource data with sufficient temporal and spatial resolution is not available for a site, synthetic variability can be added to the data that is available in an attempt to address this issue. In this work, we demonstrate the use of anonymized commercial resource datasets with synthetic variability and compare results with previous estimates of model bias due to inverter clipping and increasing ILR.
Open Charge Point Protocol (OCPP) 1.6 is widely used in the electric vehicle (EV) charging industry to communicate between Charging System Management Services (CSMSs) and Electric Vehicle Supply Equipment (EVSE). Unlike OCPP 2.0.1, OCPP 1.6 uses unencrypted websocket communications to exchange information between EVSE devices and an on-premise or cloud-based CSMS. In this work, we first demonstrate two machine-in-the-middle attacks on OCPP sessions to terminate charging sessions and gain root access to the EVSE equipment via remote code execution. Second, we demonstrate a malicious firmware update with a code injection payload to compromise an EVSE. Lastly, we demonstrate two methods to prevent availability of the EVSE or CSMS. One of these, originally reported by SaiFlow, prevents traffic to legitimate EVSE equipment using a DoS-like attack on CSMSs by repeatedly connecting and authenticating several charge points (CPs) with the same identities as the legitimate CP. These vulnerabilities were demonstrated with proof-of-concept exploits in a virtualized Cyber Range at Wright State University and/or with a 350 kW Direct Current Fast Charger at Idaho National Laboratory. The team found that OCPP 1.6 could be protected from these attacks by adding secure shell tunnels to the protocol, if upgrading to OCPP 2.0.1 was not an option.
7th IEEE Electron Devices Technology and Manufacturing Conference: Strengthen the Global Semiconductor Research Collaboration After the Covid-19 Pandemic, EDTM 2023
This paper presents an assessment of electrical device measurements using functional data analysis (FDA) on a test case of Zener diode devices. We employ three techniques from FDA to quantify the variability in device behavior, primarily due to production lot, and demonstrate that this variability has a significant effect in our data set. We also argue for the expanded use of FDA methods in providing principled, quantitative analysis of electrical device data.
The block version of GMRES (BGMRES) is most advantageous over the single right hand side (RHS) counterpart when the cost of communication is high while the cost of floating point operations is not. This is the case on modern graphics processing units (GPUs), while it is generally not the case on traditional central processing units (CPUs). In this paper, experiments on both GPUs and CPUs are shown that compare the performance of BGMRES against GMRES as the number of RHS increases, with a particular focus on GPU performance. The experiments indicate that there are many cases in which BGMRES is slower than GMRES on CPUs, but faster on GPUs. Furthermore, when varying the number of RHS on the GPU, there is an optimal number of RHS at which BGMRES is most clearly advantageous over GMRES. A computational model for the GPU is developed using hardware-specific parameters, providing insight into how the qualitative behavior of BGMRES changes as the number of RHS increases; this model also helps explain the phenomena observed in the experiments.
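The flavor of such a GPU cost model can be conveyed with a back-of-the-envelope sketch: the sparse-matrix read (bandwidth-bound) is amortized over the block of right-hand sides while the flop term grows with the block size. All parameter values below are assumptions for illustration, and the real model's orthogonalization terms, which help produce the optimal block size, are omitted.

```python
def gmres_block_speedup(nnz, n, s, bw=2.0e12, flops=2.0e13, kernel_lat=5e-6, iters=200):
    """Illustrative GPU cost model (assumed parameters, not measured values):
    the sparse matrix is streamed once per block SpMV regardless of the number
    of right-hand sides s, so the bandwidth-bound term is amortized, while the
    floating-point term grows linearly with s."""
    bytes_per_spmv = 12 * nnz + 8 * n            # rough CSR values/indices plus vectors
    t_single = iters * s * (bytes_per_spmv / bw + 2 * nnz / flops + kernel_lat)
    t_block = iters * (bytes_per_spmv / bw + s * 2 * nnz / flops + kernel_lat)
    return t_single / t_block

for s in (1, 2, 4, 8, 16, 32):
    print(s, round(gmres_block_speedup(nnz=5_000_000, n=1_000_000, s=s), 2))
```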
Motion primitives (MPs) provide a fundamental abstraction of movement templates that can be used to guide and navigate a complex environment while simplifying the movement actions. These MPs, when utilized as an action space in reinforcement learning (RL), can allow an agent to learn to select a sequence of simple actions to guide a vehicle towards desired complex mission outcomes. This is particularly useful for missions involving high speed aerospace vehicles (HSAVs) (i.e., Mach 1 to 30), where near real time trajectory generation is needed but the computational cost and timeliness of trajectory generation remain prohibitive. This paper demonstrates that when MPs are employed in conjunction with RL, the agent can learn to solve a wider range of problems for HSAV missions. To this end, using both an MP and a non-MP approach, RL is employed to solve the problem of an HSAV arriving at a non-maneuvering moving target at a constant altitude and with an arbitrary, but constant, velocity and heading angle. The MPs for the HSAV consist of multiple pull (flight path angle) and turn (heading angle) commands that are defined for a specific duration based on mission phases, whereas the non-MP approach uses angle of attack and bank angle as the action space for RL. The paper describes details of the HSAV problem formulation, including equations of motion, observation space, telescopic reward function, RL algorithm and hyperparameters, RL curriculum, formation of the MPs, and calculation of the time to execute the MP used for the problem. Our results demonstrate that the non-MP approach is unable to train an agent that succeeds even in the base case of the RL curriculum. The MP approach, however, can train an agent with a success rate of 76.6% in arriving at a target moving with any heading angle at a velocity between 0 and 500 m/s.
We report on a two-step technique for post-bond III-V substrate removal involving precision mechanical milling and selective chemical etching. We show results on GaAs, GaSb, InP, and InAs substrates and from mm-scale chips to wafers.
Modern Industrial Control Systems (ICS) attacks evade existing tools by using knowledge of ICS processes to blend their activities with benign Supervisory Control and Data Acquisition (SCADA) operation, causing physical world damage. We present Scaphy to detect ICS attacks in SCADA by leveraging the unique execution phases of SCADA to identify the limited set of legitimate behaviors used to control the physical world in different phases, which differentiates them from an attacker's activities. For example, it is typical for SCADA to set up ICS device objects during initialization, but anomalous during process-control. To extract unique behaviors of SCADA execution phases, Scaphy first leverages open ICS conventions to generate a novel physical process dependency and impact graph (PDIG) to identify disruptive physical states. Scaphy then uses PDIG to inform a physical process-aware dynamic analysis, whereby code paths of SCADA process-control execution are induced to reveal API call behaviors unique to legitimate process-control phases. Using this established behavior, Scaphy selectively monitors an attacker's physical world-targeted activities that violate legitimate process-control behaviors. We evaluated Scaphy at a U.S. national lab ICS testbed environment. Using diverse ICS deployment scenarios and attacks across 4 ICS industries, Scaphy achieved 95% accuracy and 3.5% false positives (FP), compared to 47.5% accuracy and 25% FP of existing work. We also analyze Scaphy's resilience to futuristic attacks in which the attacker knows our approach.
As the electric grid becomes increasingly cyber-physical, it is important to characterize its inherent cyber-physical interdependencies and explore how that characterization can be leveraged to improve grid operation. It is crucial to investigate what data features are transferred at the system boundaries, how disturbances cascade between the systems, and how planning and/or mitigation measures can leverage that information to increase grid resilience. In this paper, we explore several numerical analysis and graph decomposition techniques that may be suitable for modeling these cyber-physical system interdependencies and for understanding their significance. An augmented WSCC 9-bus cyber-physical system model is used as a small use case to assess these techniques and their ability to characterize different events within the cyber-physical system. These initial results are then analyzed to formulate a high-level approach for characterizing cyber-physical interdependencies.
The Big Hill SPR site has a rich data set consisting of multi-arm caliper (MAC) logs collected from the cavern wells. This data set provides insight into the on-going casing deformation at the Big Hill site. This report summarizes the MAC surveys for each well and presents well longevity estimates where possible. Included in the report is an examination of the well twins for each cavern and a discussion on what may or may not be responsible for the different levels of deformation between some of the well twins. The report also takes a systematic view of the MAC data presenting spatial patterns of casing deformation and deformation orientation in an effort to better understand the underlying causes. The conclusions present a hypothesis suggesting the small-scale variations in casing deformation are attributable to similar scale variations in the character of the salt-caprock interface. These variations do not appear directly related to shear zones or faults.
Compared with traditional base excitation vibration qualification testing, multi-axis vibration testing methods can be significantly faster and more accurate. Here, a 12-shaker multiple-input/multiple-output (MIMO) test method called intrinsic connection excitation (ICE) is developed and assessed for use on an example aerospace component. In this study, the ICE technique utilizes 12 shakers, one for each boundary condition attachment degree of freedom of the component, specially designed fixtures, and MIMO control to provide an accurate set of loads and boundary conditions during the test. Acceleration, force, and voltage control provide insight into the viability of this testing method. System field test and ICE test results are compared to traditional single degree of freedom specification development and testing. Results indicate the multi-shaker ICE test provided a much more accurate replication of the system field test response compared with single degree of freedom testing.
Smoke may be defined as the particulate products from fire and is composed of organics originating from unburnt fuel and soot, which is mostly carbon and is formed in the rich side of the flame. The fire community regularly measures smoke emissions using the cone calorimeter (CC) and the fire propagation analyzer (FPA) devices via laser extinction. Their measurements are conducted over the burn time of the material, generally minutes. Our high-flux exposures from concentrated solar irradiance result in emissions lasting only a few seconds. We have adapted the historical methods to our application to permit similar quantitative assessments of smoke. We illustrate here our modified procedure and present some results of the testing performed by exposing materials to concentrated solar energy. An assessment of the uncertainty in the smoke yield measurements is made. The data are expected to contribute to the body of knowledge on the emissions of smoke from ignitions caused by more unconventional initiating events involving very high heat fluxes.
This presentation describes a new effort to better understand insulator flashover in high current, high voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashovers that initiate at the anode triple junction (anode-vacuum-dielectric).
We demonstrate an InAs-based nonlinear dielectric metasurface, which can generate terahertz (THz) pulses with opposite phase in comparison to an unpatterned InAs layer. It enables binary phase THz metasurfaces for generation and focusing of THz pulses.
A microgrid is characterized by a high R/X ratio, making the voltage more sensitive to active power changes, unlike in bulk power systems where voltage is mostly regulated by reactive power. Because of this sensitivity to active power, the control approach should incorporate active power as well. Thus, the voltage control approach for microgrids is very different from that of conventional power systems. The energy costs associated with active and reactive power are also different. Furthermore, because of diverse generation sources and different components such as distributed energy resources, energy storage systems, etc., model-based control approaches might not perform very well. This paper proposes a reinforcement learning-based voltage support framework for a microgrid in which an agent learns a control policy by interacting with the microgrid without requiring a mathematical model of the system. A MATLAB/Simulink simulation study on a test system from Cordova, Alaska shows a large reduction in voltage deviation (about 2.5-4.5 times). This reduction in voltage deviation can improve the power quality of the microgrid, ensuring a reliable supply, longer equipment lifespan, and stable user operations.
Thiagarajan, Raghav S.; Subramaniam, Akshay; Kolluri, Suryanarayana; Garrick, Taylor R.; Preger, Yuliya P.; De Angelis, Valerio D.; Lim, Jin H.; Subramanian, Venkat R.
Lithium-ion batteries are typically modeled using porous electrode theory coupled with various transport and reaction mechanisms, along with suitable discretization or approximations for the solid-phase diffusion equation. The solid-phase diffusion equation represents the main computational burden for typical pseudo-2-dimensional (p2D) models since these equations in the pseudo r-dimension must be solved at each point in the computational grid. This substantially increases the complexity of the model as well as the computational time. Traditional approaches towards simplifying solid-phase diffusion possess certain significant limitations, especially in modeling emerging electrode materials which involve phase changes and variable diffusivities. A computationally efficient representation for solid-phase diffusion is discussed in this paper based on symmetric polynomials using Orthogonal Collocation and Galerkin formulation (weak form). A systematic approach is provided to increase the accuracy of the approximation (p form in finite element methods) to enable efficient simulation with a minimal number of semi-discretized equations, ensuring mass conservation even for non-linear diffusion problems involving variable diffusivities. These methods are then demonstrated by incorporation into the full p2D model, illustrating their advantages in simulating high C-rates and short-time dynamic operation of Lithium-ion batteries.
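A low-order relative of the symmetric-polynomial approach is the classic two-parameter parabolic-profile approximation for spherical solid-phase diffusion; the sketch below evolves the volume-averaged concentration and recovers the surface concentration algebraically. It assumes a constant diffusivity and illustrative parameter values, unlike the variable-diffusivity, higher-order formulation developed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def parabolic_profile_model(j_of_t, R=5e-6, D=1e-14, c0=25000.0, t_end=600.0):
    """Two-parameter polynomial-profile approximation of spherical solid-phase
    diffusion: integrate the volume-averaged concentration and close the
    surface concentration with the parabolic-profile relation."""
    def rhs(t, y):
        return [-3.0 * j_of_t(t) / R]            # d c_avg / dt for surface flux j (mol/m^2/s)
    sol = solve_ivp(rhs, (0.0, t_end), [c0], max_step=1.0)
    c_avg = sol.y[0]
    j = np.array([j_of_t(t) for t in sol.t])
    c_surf = c_avg - j * R / (5.0 * D)           # parabolic-profile closure
    return sol.t, c_avg, c_surf

# Constant delithiation flux (hypothetical value, illustration only)
t, c_avg, c_surf = parabolic_profile_model(lambda t: 1e-5)
```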
Two-dimensional (2D) layered oxides have recently attracted wide attention owing to the strong coupling among charges, spins, lattice, and strain, which allows great flexibility and opportunities in structure design as well as multifunctionality exploration. In parallel, plasmonic hybrid nanostructures exhibit exotic localized surface plasmon resonance (LSPR), providing a broad range of applications in nanophotonic devices and sensors. A hybrid material platform combining the unique multifunctional 2D layered oxides and plasmonic nanostructures brings optical tuning to a new level. In this work, a novel self-assembled Bi2MoO6 (BMO) 2D layered oxide incorporated with plasmonic Au nanoinclusions has been demonstrated via a one-step pulsed laser deposition (PLD) technique. Comprehensive microstructural characterizations, including scanning transmission electron microscopy (STEM), differential phase contrast imaging (DPC), and STEM tomography, have demonstrated the high epitaxial quality and particle-in-matrix morphology of the BMO-Au nanocomposite film. DPC-STEM imaging clarifies the magnetic domain structures of the BMO matrix. Three different BMO structures, including layered supercell (LSC) and superlattices, have been revealed, which is attributed to the variable strain states throughout the BMO-Au film. Owing to the combination of plasmonic Au and the layered structure of BMO, the nanocomposite film exhibits a typical LSPR in the visible wavelength region and strong anisotropy in terms of its optical and ferromagnetic properties. This study opens a new avenue for developing novel 2D layered complex oxides incorporated with plasmonic metal or semiconductor phases, showing great potential for applications in multifunctional nanoelectronic devices.
The Information Harm Triangle (IHT) is an approach that seeks to simplify the defense-in-depth design of digital instrumentation and control (I&C) systems. The IHT provides a novel framework for understanding how cyber-attacks targeting digital I&C systems can harm the physical process. The utility of the IHT arises from the decomposition of cybersecurity analysis into two orthogonal vectors: data harm and physical information harm. Cyber-attacks on I&C systems can only directly cause data harm. Data harm is then transformed into physical information harm by unsafe control actions (UCAs) identified using Systems-Theoretic Process Analysis (STPA). Because data harm and physical information harm are orthogonal, defense-in-depth can be achieved by identifying control measures that independently limit data harm and physical information harm. This paper furthers the development of the IHT by investigating the defense-in-depth design of cybersecurity measures for sequences of UCAs. The effects of the order and timing of UCAs are examined for several case studies to determine how to represent these sequences using the IHT. These considerations are important for the identification of data harm and physical information harm security measures, and they influence the selection of efficient measures to achieve defense-in-depth. This research enables the benefits of the IHT's simple approach to be realized for increasingly complex cyber-attack scenarios.
Risk and resilience assessments for critical infrastructure focus on myriad objectives, from natural hazard evaluations to optimizing investments. Although research has started to characterize externalities associated with current or possible future states, incorporation of equity priorities at project inception is increasingly being recognized as critical for planning related activities. However, there is no standard methodology that guides development of equity-informed quantitative approaches for infrastructure planning activities. To address this gap, we introduce a logic model that can be tailored to capture nuances about specific geographies and community priorities, effectively incorporating them into different mathematical approaches for quantitative risk assessments. Specifically, the logic model uses a graded, iterative approach to clarify specific equity objectives as well as inform the development of equations being used to support analysis. We demonstrate the utility of this framework using case studies spanning aviation fuel, produced water, and microgrid electricity infrastructures. For each case study, the use of the logic model helps clarify the ways that local priorities and infrastructure needs are used to drive the types of data and quantitative methodologies used in the respective analyses. The explicit consideration of methodological limitations (e.g., data mismatches) and stakeholder engagements serves to increase the transparency of the associated findings as well as effectively integrate community nuances (e.g., ownership of assets) into infrastructure assessments. Such integration will become increasingly important to ensure that planning activities (which occur throughout the lifecycle of the infrastructure projects) lead to long-lasting solutions to meet both energy and sustainable development goals for communities.
We report on a two-step technique for post-bond III-V substrate removal involving precision mechanical milling and selective chemical etching. We show results on GaAs, GaSb, InP, and InAs substrates and from mm-scale chips to wafers.
The near wake flow field associated with hypersonic blunt bodies is characterized by complex physical phenomena resulting in both steady and time dependent pressure loadings on the base of the vehicle. Here, we focus on the unsteady fluid dynamic pressure fluctuation behavior as a vibratory input loading. Typically, these flows are characterized by a locally low-pressure, separated flow region with an unsteady formation of vortical cells that are locally produced and convected downstream into the far-field wake. This periodic production and transport of vortical elements is well known from classical incompressible fluid mechanics and is usually termed the (von) Karman vortex street. While traditionally discussed within the scope of incompressible flow, the periodic vortex shedding phenomenon is known for compressible flows as well. To support vehicle vibratory loading design computations, we examine a suite of analytical and high-fidelity computational models supported by dedicated experimental measurements. While large scale simulation approaches offer very high-quality results, they are impractical for design-level decisions, implying that analytically derived reduced order models are essential. The major portions of this effort include an examination of the DeChant-Smith Power Spectral Density (PSD) [1] model to better understand both the overall Root Mean Square (RMS) magnitude and the functional maximum associated with the critical vortex shedding phenomenon. The critical frequency is examined using computations, experiments, and an analytical shear layer frequency model. Finally, the PSD magnitude maximum is studied using a theory-based approach connecting the PSD to the spatial correlation, which strongly supports the DeChant-Smith PSD model behavior. These results combine to demonstrate that the currently employed PSD models provide plausible reduced order closures for turbulent base pressure fluctuations for high Reynolds number flows over a range of Mach numbers. Access to a reliable base pressure fluctuation model then permits simulation of bluff body vibratory input loading.
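Two of the reduced-order quantities discussed above lend themselves to one-line estimates: the critical shedding frequency from a Strouhal scaling and the RMS pressure from integrating a one-sided PSD. The Strouhal number and flow values below are generic placeholders, not the calibrated constants of the DeChant-Smith model.

```python
import numpy as np

def shedding_frequency(U_edge, base_diameter, strouhal=0.2):
    """Estimate the critical vortex-shedding frequency from a Strouhal-number
    scaling, f = St * U / D (St = 0.2 is a generic bluff-body value)."""
    return strouhal * U_edge / base_diameter

def prms_from_psd(freqs, psd):
    """Root-mean-square pressure obtained by integrating a one-sided PSD."""
    return np.sqrt(np.trapz(psd, freqs))

print(shedding_frequency(U_edge=600.0, base_diameter=0.5))  # ~240 Hz (illustrative)
```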
Plasma sprays can be used to melt particles, which may be deposited on an engineered surface to apply unique properties to the part. Because of the extreme temperatures (>>3000 °C), it is desirable to conduct the process in a way that avoids melting the parts to which the coatings are being applied. A jet of ambient gas is sometimes used to deflect the hot gases while allowing the melted particles to impact and adhere to the substrate. This is known as a plume quench. While plume quenching is done in practice, to our knowledge there have not been any studies on how to apply a plume quench and how it may affect the flows. We have recently adapted our fire simulation tool to simulate argon plasma sprays with a variety of metal particles. Two nozzle conditions are considered, with very different gas flow and power conditions. Two particle types are considered, tantalum and nickel. For the model, the k-epsilon turbulence model is compared to a more dynamic TFNS turbulence model. Limited data comparisons suggest the higher-fidelity TFNS model is significantly more accurate than the k-epsilon model. Additionally, the plume quench is found to have a noticeable effect for the low inlet flow case, but minimal effect on the high flow case. This suggests the effectiveness of a quench relates to the relative momentum of the intersecting gas jets.
Chang, Chun; Nakagawa, Seiji; Kibikas, William M.; Kneafsey, Timothy; Dobson, Patrick; Samuel, Abraham; Otto, Michael; Bruce, Stephen; Kaargeson-Loe, Nils
Although enhancing permeability is vital for successful development of an Enhanced Geothermal System (EGS) reservoir, high-permeability pathways between injection and production wells can lead to short-circuiting of the flow, resulting in inefficient heat exchange with the reservoir rock. For this reason, the permeability of such excessively permeable paths needs to be reduced. Controlling the reservoir permeability away from wells, however, is challenging, because the injected materials need to form solid plugs only after they reach the target locations. To control the timing of the flow-diverter formation, we are developing a technology to deliver one or more components of the diverter-forming chemicals in microparticles (capsules) with a thin polymer shell. The material properties of the shell are designed so that it can withstand moderately high temperatures (up to ~200°C) of the injected fluid for a short period of time (up to ~30 minutes), but thermally degrades and releases the reactants at higher reservoir temperatures. A microfluidic system has been developed that can continuously produce reactant-encapsulating particles. The diameter of the produced particles is in the range of ~250-650 μm, which can be controlled by using capillary tubes with different diameters and by adjusting the flow rates of the encapsulated fluid and the UV-curable epoxy resin for the shell. Preliminary experiments have demonstrated that (1) microcapsules containing chemical activators for flow-diverter (silicate gel or metal silicate) formation can be produced, (2) the durability of the shell can be made to satisfy the required conditions, and (3) thermal degradation of the shell allows for release of the reaction activators and control of reaction kinetics in silica-based diverters.
As the width and depth of quantum circuits implemented by state-of-the-art quantum processors rapidly increase, circuit analysis and assessment via classical simulation are becoming unfeasible. It is crucial, therefore, to develop new methods to identify significant error sources in large and complex quantum circuits. In this work, we present a technique that pinpoints the sections of a quantum circuit that affect the circuit output the most and thus helps to identify the most significant sources of error. The technique requires no classical verification of the circuit output and is thus a scalable tool for debugging large quantum programs in the form of circuits. We demonstrate the practicality and efficacy of the proposed technique by applying it to example algorithmic circuits implemented on IBM quantum machines.
Motion primitives provide an approach to kinodynamic path planning that does not require online solution of the equations of motion, permitting the use of complex high-fidelity models. The path planning problem with motion primitives is a Markov Decision Process (MDP) with the primitives defining the available actions. Uncertainty in the evolution of the primitives means that there is uncertainty in the state resulting from each action. In this work, uncertain motion primitive planning is demonstrated for a high-speed glide vehicle. A nonlinear 6-degree-of-freedom model is used to generate the primitive library, and the motion primitive planning problem is formulated so that the transition probabilities in the MDP may have a functional dependence on the state of the system. Single-query solutions to planning problems incorporating operational envelope constraints and no-fly zones are obtained using AO* under chance constraints on the risk of mission failure.
A high altitude electromagnetic pulse (HEMP) or other similar geomagnetic disturbance (GMD) has the potential to severely impact the operation of large-scale electric power grids. By introducing low-frequency common-mode (CM) currents, these events can impact the performance of key system components such as large power transformers. In this work, a solid-state transformer (SST) that can replace susceptible equipment and improve grid resiliency by safely absorbing these CM insults is described. An overview of the proposed SST power electronics and controls architecture is provided, a system model is developed, and the performance of the SST in response to a simulated CM insult is evaluated. Compared to a conventional magnetic transformer, the SST is found to recover quickly from the insult while maintaining nominal ac input/output behavior.
Bayesian inference is a technique that researchers have recently employed to solve inverse problems in structural dynamics and acoustics. More specifically, this technique can identify the spatial correlation of a distributed set of pressure loads generated during vibroacoustic testing. In this context, Bayesian inference augments the experimenter’s prior knowledge of the acoustic field prior to testing with vibration measurements at several locations on the test article to update these pressure correlations. One method to incorporate prior knowledge is to use a theoretical form of the correlations; however, theoretical forms only exist for a few special cases, e.g., a diffuse field or uncorrelated pressures. For more complex loading scenarios, such as those arising in a direct-field acoustic test, utilizing one of these theoretical priors may not be able to accurately reproduce the acoustic loading generated during the experiment. As such, this work leverages the pressure correlations generated from an acoustic simulation as the Bayesian prior to increase the accuracy of the inference for complex loading scenarios.
In this paper, the potential for time series classifiers to identify faults and their location in a DC Microgrid is explored. Two different classification algorithms are considered. First, a minimally random convolutional kernel transform (MINIROCKET) is applied to the time series fault data. The transformed data is used to train a regularized linear classifier with stochastic gradient descent (SGD). Second, a continuous wavelet transform (CWT) is applied to the fault data, and a convolutional neural network (CNN) is trained to learn the characteristic patterns in the CWT coefficients of the transformed data. The data used for training and testing the models are acquired from multiple fault simulations on a 750 VDC Microgrid modeled in PSCAD/EMTDC. The results from both classification algorithms are presented and compared. For an accurate classification of the fault location, the MINIROCKET and SGD classifier model needed signals/features from several measurement nodes in the system. The CWT and CNN based model accurately identified the fault location with signals from a single measurement node in the system. By performing self-learning monitoring and decision-making analysis, protection relays equipped with time series classification algorithms can quickly detect the location of faults and isolate them to improve protection operations on DC Microgrids.
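To make the first pipeline concrete, the sketch below applies a random convolutional kernel transform followed by a linear classifier trained with stochastic gradient descent. This is a simplified ROCKET-style stand-in rather than the actual MINIROCKET implementation, and the waveforms and labels are synthetic placeholders, not PSCAD/EMTDC fault data.

```python
# Simplified ROCKET-style transform + SGD linear classifier (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_series, series_len, n_kernels = 200, 512, 100

# Synthetic stand-ins for measured voltage/current fault waveforms and labels.
X_raw = rng.standard_normal((n_series, series_len))
y = rng.integers(0, 4, size=n_series)  # hypothetical fault-location classes

# Random convolutional kernels; each contributes two pooled features
# (max response and proportion of positive values), as in ROCKET.
kernels = [rng.standard_normal(int(rng.choice([7, 9, 11]))) for _ in range(n_kernels)]

def transform(X):
    feats = []
    for x in X:
        row = []
        for k in kernels:
            resp = np.convolve(x, k, mode="valid")
            row += [resp.max(), float((resp > 0).mean())]
        feats.append(row)
    return np.asarray(feats)

X = transform(X_raw)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SGDClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```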
Residential solar photovoltaic (PV) systems are interconnected with the distribution grid at low-voltage secondary network locations. However, computational models of these networks are often over-simplified or non-existent, which makes it challenging to determine the operational impacts of new PV installations at those locations. In this work, a model-free locational hosting capacity analysis algorithm is proposed that requires only smart meter measurements at a given location to calculate the maximum PV size that can be accommodated without exceeding voltage constraints. The proposed algorithm was evaluated on two different smart meter datasets measuring over 2,700 total customer locations and was compared against results obtained from conventional model-based methods for the same smart meter datasets. Compared to the model-based results, the model-free algorithm had a mean absolute error (MAE) of less than 0.30 kW, was equally sensitive to measurement noise, and required much less computation time.
The generalized Dryja-Smith-Widlund (GDSW) preconditioner is a two-level overlapping Schwarz domain decomposition (DD) preconditioner that couples a classical one-level overlapping Schwarz preconditioner with an energy-minimizing coarse space. When used to accelerate the convergence rate of Krylov subspace iterative methods, the GDSW preconditioner provides robustness and scalability for the solution of sparse linear systems arising from the discretization of a wide range of partial differential equations. In this paper, we present FROSch (Fast and Robust Schwarz), a domain decomposition solver package which implements GDSW-type preconditioners for both CPU and GPU clusters. To improve the solver performance on GPUs, we use a novel decomposition to run multiple MPI processes on each GPU, reducing both the solver's computational and storage costs and potentially improving the convergence rate. This allowed us to obtain competitive or faster performance using GPUs compared to using CPUs alone. We demonstrate the performance of FROSch on the Summit supercomputer with NVIDIA V100 GPUs, where we used the NVIDIA Multi-Process Service (MPS) to implement our decomposition strategy. The solver has a wide variety of algorithmic and implementation choices, which poses both opportunities and challenges for its GPU implementation. We conduct a thorough experimental study with different solver options, including the exact or inexact solution of the local overlapping subdomain problems on a GPU. We also discuss the effect of using the iterative variant of the incomplete LU factorization and sparse triangular solve as the approximate local solver, and of using lower precision for computing the whole FROSch preconditioner. Overall, the solve time was reduced by factors of about 2× using GPUs, while the GPU acceleration of the numerical setup time depends on the solver options and the local matrix sizes.
A method is presented to detect clear-sky periods for plane-of-array, time-averaged irradiance data that is based on the algorithm originally described by Reno and Hansen. We show this new method improves the state-of-the-art by providing accurate detection at longer data intervals, and by detecting clear periods in plane-of-array data, which is novel. We illustrate how accurate determination of clear-sky conditions helps to eliminate data noise and bias in the assessment of long-term performance of PV plants.
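For reference, the baseline Reno and Hansen clear-sky detection on global horizontal irradiance is available in the open-source pvlib package; the sketch below shows that baseline mechanism, not the plane-of-array, longer-interval extension described above. The site coordinates, time range, and "measured" series are hypothetical placeholders.

```python
# Baseline GHI clear-sky detection (Reno & Hansen) via pvlib -- illustrative only.
import numpy as np
import pandas as pd
import pvlib

loc = pvlib.location.Location(35.05, -106.54, tz="Etc/GMT+7", altitude=1600)
times = pd.date_range("2023-06-01 06:00", "2023-06-01 18:00",
                      freq="1min", tz=loc.tz)

clearsky = loc.get_clearsky(times)                # modeled clear-sky irradiance
measured = clearsky["ghi"] * (1 + 0.01 * np.random.default_rng(0)
                              .standard_normal(len(times)))  # fake measurements

clear_mask = pvlib.clearsky.detect_clearsky(measured, clearsky["ghi"],
                                            times, window_length=10)
print(f"{clear_mask.mean():.0%} of samples flagged as clear")
```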
Molten Salt Reactor (MSR) systems can be divided into two basic categories: liquid-fueled MSRs in which the fuel is dissolved in the salt, and solid-fueled systems such as the Fluoride-salt-cooled High-temperature Reactor (FHR). The molten salt provides an impediment to fission product release because actinides and many fission products are soluble in molten salt. Nonetheless, under accident conditions, some radionuclides may escape the salt by vaporization and aerosol formation, which may lead to release into the environment. We present recent enhancements to MELCOR to represent the transport of radionuclides in the salt and releases from the salt. Some soluble but volatile radionuclides may vaporize and subsequently condense to aerosol. Insoluble fission products can deposit on structures. Thermochimica, an open-source Gibbs Energy Minimization (GEM) code, has been integrated into MELCOR. With the appropriate thermochemical database, Thermochimica provides the solubility and vapor pressure of species as a function of temperature, pressure, and composition, which are needed to characterize the vaporization rate and the state of the salt with fission products. Since thermochemical databases are still under active development for molten salt systems, thermodynamic data for fission product solubility and vapor pressure may be user specified. This enables preliminary assessments of fission product transport in molten salt systems. In this paper, we discuss modeling of soluble and insoluble fission product releases in an MSR with Thermochimica incorporated into MELCOR. Separate-effects experiments performed as part of the Molten Salt Reactor Experiment in which radioactive aerosol was released are discussed, as they are needed for determining the source term.
Puerto Rico faced a double strike from hurricanes Irma and Maria in 2017. The resulting damage required a comprehensive rebuild of electric infrastructure. There are plans and pilot projects to rebuild with microgrids to increase resilience. This paper provides a techno-economic analysis technique and case study of a potential future community in Puerto Rico that combines probabilistic microgrid design analysis with tiered circuits in building energy modeling. Tiered circuits in buildings allow electric load reduction via remote disconnection of non-critical circuits during an emergency. When coupled to a microgrid, tiered circuitry can reduce the chances of a microgrid's storage and generation resources being depleted. The analysis technique is applied to show 1) approximate cost savings due to a tiered circuit structure and 2) approximate cost savings gained by simultaneously considering resilience and sustainability constraints in the microgrid optimization. The analysis technique uses a resistive-capacitive thermal model with load profiles for four tiers (tiers 1-3 and non-critical loads). Three analyses were conducted using: 1) open-source software called Tiered Energy in Buildings and 2) the Microgrid Design Toolkit. For a fossil fuel based microgrid, tiered circuits yielded cost savings of 30% on total microgrid costs of 1.18 million USD, where the non-tiered case keeps all loads 99.9% available and the tiered case keeps tier 1 at 99.9%, tier 2 at 95%, and tier 3 at 80% availability, with no requirement on non-critical loads. The same comparison for a sustainable microgrid showed 8% cost savings on a 5.10 million USD microgrid due to tiered circuits. The results also showed 6-7% cost savings when our analysis technique optimizes sustainability and resilience simultaneously in comparison to doing microgrid resilience analysis and renewables net present value analysis independently. Though highly specific to our case study, similar assessments using our analysis technique can elucidate the value of tiered circuits and of simultaneous consideration of sustainability and resilience in other locations.
Neural ordinary differential equations (NODEs) have recently regained popularity as large-depth limits of a large class of neural networks. In particular, residual neural networks (ResNets) are equivalent to an explicit Euler discretization of an underlying NODE, where the transition from one layer to the next is one time step of the discretization. The relationship between continuous and discrete neural networks has been of particular interest. Notably, analysis from the ordinary differential equation viewpoint can potentially lead to new insights for understanding the behavior of neural networks in general. In this work, we take inspiration from differential equations to define the concept of stiffness for a ResNet via the interpretation of a ResNet as the discretization of a NODE. We then examine the effects of stiffness on the ability of a ResNet to generalize, via computational studies on example problems coming from climate and chemistry models. We find that penalizing stiffness does have a unique regularizing effect, but we see no benefit to penalizing stiffness over L2 regularization (penalization of network parameter norms) in terms of predictive performance.
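One plausible way to penalize stiffness in this setting is sketched below: a residual block is treated as one Euler step x + f(x), and the spread of the singular values of the Jacobian of f is used as a stiffness proxy added to the training loss. This PyTorch sketch is illustrative; the condition-number proxy, penalty weight, and toy data are assumptions and not necessarily the stiffness index or problems used in the work above.

```python
# Penalizing a stiffness proxy of a residual block during training (sketch).
import torch

dim = 8
f = torch.nn.Sequential(torch.nn.Linear(dim, dim), torch.nn.Tanh(),
                        torch.nn.Linear(dim, dim))      # residual branch f(x)

def stiffness_proxy(net, x):
    # Jacobian of the residual branch at x, kept differentiable w.r.t. weights.
    J = torch.autograd.functional.jacobian(net, x, create_graph=True)
    s = torch.linalg.svdvals(J)
    return s.max() / s.min().clamp_min(1e-8)             # condition-number proxy

x = torch.randn(dim)
y_target = torch.randn(dim)                               # toy regression target
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    pred = x + f(x)                                       # one ResNet block = one Euler step
    loss = (pred - y_target).pow(2).mean() + 1e-3 * stiffness_proxy(f, x)
    loss.backward()
    opt.step()
```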
The Information Harm Triangle (IHT) is a novel approach that aims to adapt intuitive engineering concepts to simplify defense in depth for instrumentation and control (I&C) systems at nuclear power plants. This approach combines digital harm, real-world harm, and unsafe control actions (UCAs) into a single graph named the "Information Harm Triangle." The IHT is based on the postulation that the consequences of cyberattacks targeting I&C systems can be expressed in terms of two orthogonal components: a component representing the magnitude of data harm (DH) (i.e., digital information harm) and a component representing physical information harm (PIH) (i.e., real-world harm, e.g., an inadvertent plant trip). The severity of the physical consequence is the aspect of risk that is of primary concern. The sum of these two components represents the total information harm. The IHT intuitively informs risk-informed cybersecurity strategies that employ independent measures that act to prevent, reduce, or mitigate DH or PIH. Another aspect of the IHT is that DH can result in cyber-initiated UCAs that lead to severe physical consequences. The orthogonality of DH and PIH provides insights into designing effective defense in depth. The IHT can also represent cyberattacks that have the potential to impede, evade, or compromise countermeasures, preventing them from taking appropriate action to reduce, stop, or mitigate the harm caused by such UCAs. Cyber-initiated UCAs transform DH into PIH.
Helium or neopentane can be used as surrogate gas fill for deuterium (D2) or deuterium-tritium (DT) in laser-plasma interaction studies. Surrogates are convenient to avoid flammability hazards or the integration of cryogenics in an experiment. To test the degree of equivalency between deuterium and helium, experiments were conducted in the Pecos target chamber at Sandia National Laboratories. Observables such as laser propagation and signatures of laser-plasma instabilities (LPI) were recorded for multiple laser and target configurations. It was found that some observables can differ significantly despite the apparent similarity of the gases with respect to molecular charge and weight. While a qualitative behaviour of the interaction may very well be studied by finding a suitable compromise of laser absorption, electron density, and LPI cross sections, a quantitative investigation of expected values for deuterium fills at high laser intensities is not likely to succeed with surrogate gases.
This work proposes a Traveling Wave (TW) detection and identification method that addresses the demanding time and functional constraints that TW-based protection schemes for power distribution systems require. The high-frequency components of continuously sampled voltage signals are extracted using the Discrete Wavelet Transform, and the designed indicator is monitored to detect the TW arrival time. The limitations of the method are explored, such as the effective range of detection and the exposure to TWs originating from non-fault events. Simulations are conducted on the IEEE 34-node system, which has been adapted to include capacitor bank and small load connection events, as well as transformer energization and de-energization events. After TW detection, a Random Forest classifier is trained to infer whether the TW is due to a fault or another type of transient. The results show that the proposed method is sensitive to nearby faults and that faults can be successfully distinguished from other events.
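The two stages described above can be sketched as follows: (1) flag a TW arrival when the energy of the wavelet detail coefficients exceeds a threshold, and (2) classify the detected transient with a random forest. The wavelet, threshold, features, and signals below are illustrative choices and synthetic placeholders, not the method's actual indicator or the simulated IEEE 34-node data.

```python
# DWT-energy arrival detection + random-forest event classification (sketch).
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def detail_energy(v, wavelet="db4"):
    _, cD = pywt.dwt(v, wavelet)          # single-level DWT; cD holds high-frequency content
    return cD ** 2

def detect_arrival(v, threshold):
    e = detail_energy(v)
    idx = int(np.argmax(e > threshold))
    return None if e[idx] <= threshold else 2 * idx   # DWT roughly halves the index

# Synthetic window: measurement noise plus an injected step transient at sample 5000.
v = 0.01 * rng.standard_normal(10_000)
v[5000:] += 0.5
print("detected arrival near sample:", detect_arrival(v, threshold=0.01))

# Stage 2: classify detected transients from simple detail-energy features.
def features(v):
    e = detail_energy(v)
    return [e.max(), e.sum(), int(e.argmax())]

amps = rng.uniform(0.1, 1.0, 200)
X = np.array([features(0.01 * rng.standard_normal(10_000)
                       + np.r_[np.zeros(5000), a * np.ones(5000)])
              for a in amps])
y = (amps > 0.5).astype(int)              # placeholder labels standing in for fault vs. non-fault
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```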
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electric signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology, so a wide range of analyses is required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated structure was studied using high-fidelity simulations. Structural dynamics simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics. Subsequent multi-physics simulations are discussed to relate the contact mechanics associated with the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized by data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in both the simulation and experimental approaches, so that the relationship between the two could be established.
Polymers are widely used as damping materials in vibration and impact applications. Liquid crystal elastomers (LCEs) are a unique class of polymers that may offer the potential for enhanced energy absorption capacity under impact conditions over conventional polymers due to their ability to align the nematic phase during loading. Being a relatively new material, the high rate compressive properties of LCEs have been minimally studied. Here, we investigated the high strain rate compression behavior of different solid LCEs, including cast polydomain and 3D-printed, preferentially oriented monodomain samples. Direct ink write (DIW) 3D printed samples allow unique sample designs, namely, a specific orientation of mesogens with respect to the loading direction. Loading the sample in different orientations can induce mesogen rotation during mechanical loading and subsequently different stress-strain responses under impact. We also used a reference polymer, bisphenol-A (BPA) cross-linked resin, to contrast LCE behavior with conventional elastomer behavior.
To understand the gas-surface chemistry above the thermal protection system of a hypersonic vehicle, it is necessary to map out the kinetics of key elementary reaction steps. In this work, extensive periodic density functional theory (DFT) calculations are performed to elucidate the interaction of atomic oxygen and nitrogen with both the basal plane and edge sites of highly oriented pyrolytic graphite (HOPG). Reaction energies and barriers are determined for adsorption, desorption, diffusion, recombination, and several reactions. These DFT results are compared with the most recent finite-rate model for air-carbon ablation. Our DFT results corroborated some of the parameters used in the model but suggest that further refinement may be necessary for others. The calculations reported here will help to establish a predictive kinetic model for the complex reaction network present under hypersonic flight conditions.
The DevOps movement, which aims to accelerate the continuous delivery of high-quality software, has taken a leading role in reshaping the software industry. Likewise, there is growing interest in applying DevOps tools and practices in the domains of computational science and engineering (CSE) to meet the ever-growing demand for scalable simulation and analysis. Translating insights from industry to research computing, however, remains an ongoing challenge; DevOps for science and engineering demands adaptation and innovation in those tools and practices. There is a need to better understand the challenges faced by DevOps practitioners in CSE contexts in bridging this divide. To that end, we conducted a participatory action research study to collect and analyze the experiences of DevOps practitioners at a major US national laboratory through the use of storytelling techniques. We share lessons learned and present opportunities for future investigation into DevOps practice in the CSE domain.
High penetrations of residential solar PV can cause voltage issues on low-voltage (LV) secondary networks. Distribution utility planners often utilize model-based power flow solvers to address these voltage issues and accommodate more PV installations without disrupting the customers already connected to the system. These model-based analyses are computationally expensive and often prone to errors. In this paper, two novel deep learning-based model-free algorithms are proposed that can predict the change in voltages for PV installations without any inherent network information of the system. These algorithms use only the real power (P), reactive power (Q), and voltage (V) data from Advanced Metering Infrastructure (AMI) to calculate the change in voltages for an additional PV installation at any customer location in the LV secondary network. Both algorithms are tested on three datasets of two feeders and compared to conventional model-based methods and existing model-free methods. The proposed methods are also applied to estimate the locational PV hosting capacity for both feeders and have shown better accuracies compared to an existing model-free method. Results show that data filtering or pre-processing can improve the model performance if the testing data point exists in the training dataset used for that model.
Here we consider the shock stand-off distance for blunt forebodies using a simplified differential-based approach with extensions for high-enthalpy dissociative chemistry effects. Following Rasmussen [4], self-similar differential equations valid for spherical and cylindrical geometries, modified to focus on the shock-curvature-induced vorticity in the immediate region of the shock, are solved to provide a calorically perfect estimate of the shock stand-off distance that yields good agreement with classical theory. While useful as a limiting case, the strong shock (high enthalpy) calorically perfect results required modification to include the effects of dissociative thermo-chemistry. Using a dissociative ideal gas model for equilibrium behavior combined with shock Hugoniot constraints, we solve for thermodynamic modifications to the shock density jump, thereby sensitizing the simpler result to high-enthalpy effects. The resulting estimates are then compared to high-enthalpy stand-off data from the literature, recent dedicated high-speed shock tunnel measurements, and multi-temperature, partitioned-implementation CFD data sets. Generally, the theoretical results derived here compared well with these data sources, suggesting that the current formulation provides an approximate but useful estimate for shock stand-off distance.
In this study, we develop an end-to-end deep learning-based inverse design approach to determine the scatterer shape necessary to achieve a target acoustic field. This approach integrates non-uniform rational B-spline (NURBS) into a convolutional autoencoder (CAE) architecture while concurrently leveraging (in a weak sense) the governing physics of the acoustic problem. By utilizing prior physical knowledge and NURBS parameterization to regularize the ill-posed inverse problem, this method does not require enforcing any geometric constraint on the inverse design space, hence allowing the determination of scatterers with potentially any arbitrary shape (within the set allowed by NURBS). A numerical study is presented to showcase the ability of this approach to identify physically-consistent scatterer shapes capable of producing user-defined acoustic fields.
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
Penetration of wind energy has increased significantly in the power grid in recent times. Although wind is abundant, environment-friendly, and cheap, it is variable in nature and does not contribute to system inertia as much as conventional synchronous generators. These negative characteristics of wind lead to concerns over the frequency stability of power systems. This paper proposes a planning strategy to improve grid frequency stability by jointly deploying energy storage systems (ESSs) and geographical distribution of wind power. ESSs can provide inertial support to the grid by rapidly injecting active power into the system. At the same time, geographical separation/distribution of wind power can reduce wind power output variability and improve the inertia contribution from wind farms. The ESSs are sized based on the balance inertia needed for frequency stability, obtained using an analytical method and a mixed timing Monte Carlo simulation (MCS) based framework. The effect of the distribution of wind power across geographical regions is incorporated into the framework to study possible reductions in the ESS size while maintaining the system frequency stability. The proposed strategy is implemented on the modified WSCC 9-bus system.
A high bandwidth piezoelectric transducer technology for high data rate communications across metallic barriers is presented and discussed. To properly characterize the channel, a linear time-invariant (LTI) model of the device is obtained using frequency fitting methods on the S-parameter measurements of the communication network. The corresponding impulse response of the channel is derived from the poles and residues used to fit the frequency data. A recursive formulation of the impulse response of complex poles is advanced and analyzed. The channel characteristics were used to estimate the trans-barrier data rate employing orthogonal frequency division multiplex (OFDM) as used in a powerline communication (PLC) standard. An off-the-shelf PLC system is used to communicate through a metallic barrier and data rates exceeding 70 Mbps were achieved as predicted by the model. The methods described here are useful for estimating the physical data rate achievable by trans-barrier communication systems using piezoelectric transducers.
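The pole/residue description of the channel maps directly to a sampled impulse response, and a conjugate pole pair can be generated recursively with one complex multiply per sample. The sketch below illustrates this mechanism with an arbitrary pole, residue, and sample rate; these values are placeholders, not the fitted parameters of the transducer channel described above.

```python
# Sampled impulse response of a complex pole/residue pair, direct vs. recursive.
import numpy as np

p = -2e5 + 2j * np.pi * 1.0e6     # hypothetical pole (rad/s)
r = 1.5e5 + 4e4j                  # hypothetical residue
fs = 20e6                         # sample rate well above the pole frequency
T = 1.0 / fs
n = np.arange(2000)

# Direct evaluation: a conjugate pole pair contributes h(t) = 2*Re(r*exp(p*t)).
h_direct = 2.0 * np.real(r * np.exp(p * n * T))

# Recursive evaluation: s[k] = exp(p*T) * s[k-1], with s[0] = r.
h_rec = np.empty_like(h_direct)
s = r
step = np.exp(p * T)
for k in range(len(n)):
    h_rec[k] = 2.0 * s.real
    s *= step

assert np.allclose(h_direct, h_rec)   # both evaluations agree to round-off
```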
Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has so far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.
In the high-pressure regime above 300-500 psig, voltage-breakdown models such as Paschen's law fail [1]. Below 300 psig, the E/p values suggest that the breakdown mechanism, specifically at high E/p values, is dominated by ionization. As the air pressure is increased, the breakdown mechanism shifts from an ionization-dominated regime to an attachment-dominated regime at E/p values below 30. Thus, current Paschen equations will overpredict the breakdown voltage and the electric field at which high-pressure spark gaps can be operated. Notably, as the attachment mechanism starts to dominate the breakdown physics, the breakdown field in a high-pressure spark gap asymptotes to 1-1.2 MV/cm. Using recent data collected at Sandia National Laboratories, we have implemented corrections to breakdown prediction modeling using COMSOL to predict the breakdown voltage that can be achieved in the high-pressure regime, from 500-1500 psig. This research highlights how these corrections to the breakdown prediction models are implemented, and the results of the simulations are compared to our data as well as other small-gap data. We also compare the model to published literature values and to large-gap breakdown in the 0.6-cm to 1.0-cm regime.
Multiple Input Multiple Output (MIMO) vibration testing provides the capability to expose a system to a field environment in a laboratory setting, saving both time and money by mitigating the need to perform multiple and costly large-scale field tests. However, MIMO vibration test design is not straightforward, oftentimes relying on engineering judgment and multiple test iterations to determine the proper selection of response Degrees of Freedom (DOF) and input locations that yield a successful test. This work investigates two DOF selection techniques for MIMO vibration testing to assist with test design: an iterative algorithm introduced in previous work and an Optimal Experiment Design (OED) approach. The iterative approach downselects the control set by removing the DOF that have the smallest impact on overall error given a target Cross Power Spectral Density (CPSD) matrix and a laboratory Frequency Response Function (FRF) matrix. The OED approach is formulated with the laboratory FRF matrix as a convex optimization problem and solved with a gradient-based optimization algorithm that seeks a set of weighted measurement DOF that minimize a measure of model prediction uncertainty. The DOF selection approaches are used to design MIMO vibration tests using candidate finite element models and simulated target environments. The results are generalized and compared to exemplify the quality of the MIMO test using the selected DOF.
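A greedy sketch of the iterative down-selection idea is shown below: at each step, drop the control DOF whose removal least increases the error between the target response CPSD and the response predicted when inputs are estimated from the remaining DOF through the FRF pseudo-inverse. A single frequency line and random placeholder FRF and target matrices are used here; the actual method evaluates this error over the full frequency range, and the stopping size is an assumption.

```python
# Greedy DOF removal for MIMO test design (illustrative, single frequency line).
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_inputs = 12, 4
H = rng.standard_normal((n_dof, n_inputs)) + 1j * rng.standard_normal((n_dof, n_inputs))
A = rng.standard_normal((n_dof, n_dof)) + 1j * rng.standard_normal((n_dof, n_dof))
S_target = A @ A.conj().T                          # Hermitian placeholder target CPSD

def predicted_error(keep):
    Hk = H[keep, :]
    Sk = S_target[np.ix_(keep, keep)]
    Hk_pinv = np.linalg.pinv(Hk)
    Suu = Hk_pinv @ Sk @ Hk_pinv.conj().T          # input CPSD estimated from kept DOF
    S_pred = H @ Suu @ H.conj().T                  # predicted full response CPSD
    return np.linalg.norm(S_pred - S_target)

keep = list(range(n_dof))
while len(keep) > n_inputs + 2:                    # assumed minimum control-set size
    err, drop = min((predicted_error([d for d in keep if d != c]), c) for c in keep)
    keep.remove(drop)
    print(f"dropped DOF {drop:2d}, remaining {len(keep):2d}, error {err:.3f}")
```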
Computational simulations of high-speed flow play an important role in the design of hypersonic vehicles, for which experimental data are scarce; however, high-fidelity simulations of hypersonic flow are computationally expensive. Reduced order models (ROMs) have the potential to make many-query problems, such as design optimization and uncertainty quantification, tractable for this domain. Residual minimization-based ROMs, which formulate the projection onto a reduced basis as an optimization problem, are one promising candidate for model reduction of large-scale fluid problems. This work analyzes whether specific choices of norms and objective functions can improve the performance of ROMs of hypersonic flow. Specifically, we investigate the use of dimensionally consistent inner products and modifications designed for convective problems, including ℓ1 minimization and constrained optimization statements to enforce conservation laws. Particular attention is paid to accuracy for problems with strong shocks, which are common in hypersonic flow and challenging for projection-based ROMs. We demonstrate that these modifications can improve the predictability and efficiency of a ROM, though the impact of such formulations depends on the quantity of interest and problem considered.
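The toy below illustrates the residual-minimization choices discussed above: projecting a state onto a reduced basis by minimizing either the l2 or the l1 norm of a (here linear) full-order residual. The operator, basis, and sharp "shock-like" profile are synthetic stand-ins for the hypersonic problem, and the generic optimizer used for the l1 case is only adequate at this toy size.

```python
# l2 vs. l1 residual minimization over a reduced basis (toy illustration).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k = 200, 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # placeholder full-order operator
x_true = np.tanh(50 * (np.linspace(0, 1, n) - 0.5))   # sharp, shock-like profile
b = A @ x_true

Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))    # random reduced basis
Phi[:, 0] = x_true / np.linalg.norm(x_true)           # ensure the feature is representable

residual = lambda a: A @ (Phi @ a) - b

# l2 (least-squares) residual minimization.
a_l2, *_ = np.linalg.lstsq(A @ Phi, b, rcond=None)

# l1 residual minimization via a generic optimizer, warm-started at the l2 solution.
a_l1 = minimize(lambda a: np.abs(residual(a)).sum(), a_l2, method="Nelder-Mead").x

for name, a in [("l2", a_l2), ("l1", a_l1)]:
    print(name, "state error:", np.linalg.norm(Phi @ a - x_true))
```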
In this work, a modular and open-source platform has been developed for integrating hybrid battery energy storage systems that are intended for grid applications. Alongside integration, this platform will facilitate testing and optimal operation of hybrid storage technologies. Here, a hardware testbed and control software have been designed, where the former comprises commercial lithium iron phosphate (LiFePO4) and lead-acid (Pb-acid) cells, custom-built Dual Active Bridge (DAB) DC-DC converters, and a commercial DC-AC conversion system. In this testbed, the batteries have an operating voltage range of 11-15 V, the DC-AC conversion stage has a DC link voltage of 24 V, and it connects to a 208 V 3-φ grid. The hardware testbed can be scaled up to higher voltages. The control software is developed in Python, and the firmware for all the hardware components is developed in C. This software implements hybrid charge/discharge protocols that are suitable for each battery technology for preventing cell degradation, and performs uninterrupted quality checks on selected battery packs. The developed platform provides flexibility, modularity, safety, and economic benefits to utility-scale storage integration.
TFLN/silicon photonic modulators featuring active silicon photonic components are reported with a Vπ of 3.6 V·cm. This hybrid architecture utilizes the bottom of the buried oxide as the bonding surface, which features minimal topology.
Pulsed dielectric barrier discharges (DBD) in He-H2O and He-H2O-O2 mixtures are studied in near atmospheric conditions using temporally and spatially resolved quantitative 2D imaging of the hydroxyl radical (OH) and hydrogen peroxide (H2O2). The primary goal was to detect and quantify the production of these strongly oxidative species in water-laden helium discharges in a DBD jet configuration, which is of interest for biomedical applications such as disinfection of surfaces and treatment of biological samples. Hydroxyl profiles are obtained by laser-induced fluorescence (LIF) measurements using 282 nm laser excitation. Hydrogen peroxide profiles are measured by photo-fragmentation LIF (PF-LIF), which involves photo-dissociating H2O2 into OH with a 212.8 nm laser sheet and detecting the OH fragments by LIF. The H2O2 profiles are calibrated by measuring PF-LIF profiles in a reference mixture of He seeded with a known amount of H2O2. OH profiles are calibrated by measuring OH-radical decay times and comparing these with predictions from a chemical kinetics model. Two different burst discharge modes with five and ten pulses per burst are studied, both with a burst repetition rate of 50 Hz. In both cases, dynamics of OH and H2O2 distributions in the afterglow of the discharge are investigated. Gas temperatures determined from the OH-LIF spectra indicate that gas heating due to the plasma is insignificant. The addition of 5% O2 in the He admixture decreases the OH densities and increases the H2O2 densities. The increased coupled energy in the ten-pulse discharge increases OH and H2O2 mole fractions, except for the H2O2 in the He-H2O-O2 mixture which is relatively insensitive to the additional pulses.
With the recent surge in big data analytics for hyperdimensional data, there is a renewed interest in dimensionality reduction techniques. In order for these methods to improve performance gains and understanding of the underlying data, a proper metric needs to be identified. This step is often overlooked, and metrics are typically chosen without consideration of the underlying geometry of the data. In this paper, we present a method for incorporating elastic metrics into the t-distributed stochastic neighbour embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP). We apply our method to functional data, which is uniquely characterized by rotations, parameterization and scale. If these properties are ignored, they can lead to incorrect analysis and poor classification performance. Through our method, we demonstrate improved performance on shape identification tasks for three benchmark data sets (MPEG-7, Car data set and Plane data set of Thankoor), where we achieve 0.77, 0.95 and 1.00 F1 score, respectively.
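The mechanism for injecting a non-Euclidean metric into these embeddings is sketched below: build a pairwise distance matrix with a user-supplied distance and pass it to t-SNE as "precomputed" (UMAP accepts a precomputed metric in the same way). The distance used here, L2 after a crude arc-length re-parameterization, is only a placeholder for a true elastic shape metric, and the synthetic curves stand in for the contour data sets named above.

```python
# Plugging a custom distance into t-SNE via a precomputed distance matrix (sketch).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_curves, n_pts = 60, 100
t = np.linspace(0, 2 * np.pi, n_pts)

# Two synthetic "shape classes": circles and ellipses, randomly rotated.
curves = []
for i in range(n_curves):
    a = 1.0 if i % 2 == 0 else 2.0
    theta = rng.uniform(0, 2 * np.pi)
    x, y = a * np.cos(t), np.sin(t)
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    curves.append((R @ np.vstack([x, y])).T)

def placeholder_distance(c1, c2):
    # Stand-in for an elastic distance: compare curves after uniform arc-length resampling.
    def resample(c):
        s = np.r_[0, np.cumsum(np.linalg.norm(np.diff(c, axis=0), axis=1))]
        u = np.linspace(0, s[-1], 50)
        return np.column_stack([np.interp(u, s, c[:, k]) for k in range(2)])
    return np.linalg.norm(resample(c1) - resample(c2))

D = np.array([[placeholder_distance(ci, cj) for cj in curves] for ci in curves])
emb = TSNE(metric="precomputed", init="random", perplexity=15,
           random_state=0).fit_transform(D)
print(emb.shape)   # (60, 2) low-dimensional embedding driven by the supplied metric
```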
Reactive classical molecular dynamics simulations of sodium silicate glasses, xNa2O–(100 − x)SiO2 (x = 10–30), under quasi-static loading were performed to analyze molecular-scale fracture mechanisms. Mechanical properties of the sodium silicate glasses were consistent with experimentally reported values, and the amount of crack propagation varied with reported fracture toughness values. The most crack propagation occurred in NS20 systems (20 mol% Na2O) compared with the other simulated compositions. Dissipation via two mechanisms, the first through sodium migration as a lower activation energy process and the second through structural rearrangement as a higher activation energy process, was calculated and accounted for the energy that was not stored elastically or associated with the formation of new fracture surfaces. A correlation between crack propagation and energy dissipation was identified, with systems exhibiting higher crack propagation showing less energy dissipation. Sodium silicate glass compositions with lower energy dissipation also exhibited the most sodium movement and structural rearrangement within 10 Å of the crack tip during loading. High sodium mobility near the crack tip may therefore enable energy dissipation without requiring the formation of structural defects, and the varying mobilities of network modifiers near crack tips influence the brittleness and the crack growth rate of modified amorphous oxide systems.
Data processing adds substantial soft costs to distributed energy systems. These costs are incurred primarily as labor necessary to collect, normalize, store and communicate data. The open-source Orange Button data exchange standard comprises data taxonomies, common data sources, and interoperable software tools which together can dramatically reduce these costs and thereby accelerate the deployment of distributed energy systems. We describe the data taxonomies and datasets, and the software enabled by these capabilities.
A large-scale numerical computation of five wind farms was performed as a part of the American WAKE experimeNt (AWAKEN). This high-fidelity computation used the ExaWind/AMR-Wind LES solver to simulate a 100 km × 100 km domain containing 541 turbines under unstable atmospheric conditions matching previous measurements. The turbines were represented by Joukowski and OpenFAST coupled actuator disk models. Results of this qualitative comparison illustrate the interactions of wind farms with large-scale ABL structures in the flow, as well as the extent of downstream wake penetration in the flow and blockage effects around wind farms.
Chemistry tabulation is a common approach in practical simulations of turbulent combustion at engineering scales. Linear interpolants have traditionally been used for accessing precomputed multidimensional tables but suffer from large memory requirements and discontinuous derivatives. Higher-degree interpolants address some of these restrictions but are similarly limited to relatively low-dimensional tabulation. Artificial neural networks (ANNs) can be used to overcome these limitations but cannot guarantee the same accuracy as interpolants and introduce challenges in reproducibility and reliable training. These challenges are enhanced as the physics complexity to be represented within the tabulation increases. In this manuscript, we assess the efficiency, accuracy, and memory requirements of Lagrange polynomials, tensor product B-splines, and ANNs as tabulation strategies. We analyze results in the context of nonadiabatic flamelet modeling where higher dimension counts are necessary. While ANNs do not require structuring of data, providing benefits for complex physics representation, interpolation approaches often rely on some structuring of the table. Interpolation using structured table inputs that are not directly related to the variables transported in a simulation can incur additional query costs. This is demonstrated in the present implementation of heat losses. We show that ANNs, despite being difficult to train and reproduce, can be advantageous for high-dimensional, unstructured datasets relevant to nonadiabatic flamelet models. We also demonstrate that Lagrange polynomials show significant speedup for similar accuracy compared to B-splines.
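For concreteness, the sketch below contrasts two structured-table lookup strategies of the kind discussed above, multilinear interpolation and a tensor-product B-spline, over a precomputed 2D table. The tabulated function and the names of the table inputs are arbitrary placeholders, not an actual flamelet or thermochemical table.

```python
# Structured 2D table lookup: multilinear vs. tensor-product B-spline (illustrative).
import numpy as np
from scipy.interpolate import RegularGridInterpolator, RectBivariateSpline

# Precompute a structured table over two "table inputs" (names are illustrative only).
z = np.linspace(0.0, 1.0, 41)
c = np.linspace(0.0, 1.0, 41)
Z, C = np.meshgrid(z, c, indexing="ij")
table = np.sin(np.pi * Z) * np.exp(-3.0 * C)     # placeholder tabulated quantity

linear = RegularGridInterpolator((z, c), table, method="linear")
spline = RectBivariateSpline(z, c, table, kx=3, ky=3)

# Query at off-grid points, as a flow solver would during a simulation.
rng = np.random.default_rng(0)
zq, cq = rng.uniform(0, 1, 10_000), rng.uniform(0, 1, 10_000)
exact = np.sin(np.pi * zq) * np.exp(-3.0 * cq)

err_linear = np.abs(linear(np.column_stack([zq, cq])) - exact).max()
err_spline = np.abs(spline.ev(zq, cq) - exact).max()
print(f"max error  linear: {err_linear:.2e}   B-spline: {err_spline:.2e}")
```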
The current interest in hypersonic flows and the growing importance of plasma applications necessitate the development of diagnostics for high-enthalpy flow environments. Reliable and novel experimental data at relevant conditions will drive engineering and modeling efforts forward significantly. This study demonstrates the usage of nanosecond Coherent Anti-Stokes Raman Scattering (CARS) to measure temperature in an atmospheric, high-temperature (> 5500 K) air plasma. The experimental configuration is of interest as the plasma is close to thermodynamic equilibrium and the setup is a test-bed for heat shield materials. The determination of the non-resonant background at such high-temperatures is explored and rotational-vibrational equilibrium temperatures of the N2 ground state are determined via fits of the theory to measured spectra. Results show that the accuracy of the temperature measurements is affected by slow periodic variations in the plasma, causing sampling error. Moreover, depending on the experimental configuration, the measurements can be affected by two-beam interaction, which causes a bias towards lower temperatures, and stimulated Raman pumping, which causes a bias towards higher temperatures. The successful demonstration of CARS at the present conditions, and the exploration of its sensitivities, paves the way towards more complex measurements, e.g. close to interfaces in high-enthalpy plasma flows.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published this very same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics, such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: Supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (1013 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (1018 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It actually is quite remarkable that, apart from the change in semantics for the parallelization, this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
Springs play important roles in many mechanisms, including critical safety components employed by Sandia National Laboratories. Due to the nature of these safety component applications, serious concerns arise if their springs become damaged or unhook from their posts. Finite element analysis (FEA) is one technique employed to ensure such adverse scenarios do not occur. Ideally, a very fine spring mesh would be used to make the simulation as accurate as possible with respect to mesh convergence. While this method does yield the best results, it is also the most time consuming and therefore most computationally expensive. In some situations, reduced order models (ROMs) can be adopted to lower this cost at the expense of some accuracy. This study quantifies the error present between a fine, solid element mesh and a reduced order spring beam model, with the aim of finding the best balance between low computational cost and high accuracy. Two types of analyses were performed, a quasi-static displacement-controlled pull and a haversine shock. The first used implicit methods to examine basic properties as the elastic limit of the spring material was reached. This analysis was also used to study the convergence and residual tolerance of the models. The second used explicit dynamics methods to investigate spring dynamics and stress/strain properties, as well as examine the impact of the chosen friction coefficient. Both the implicit displacement-controlled pull test and the explicit haversine shock test showed good agreement between the hexahedral and beam meshes. The results were especially favorable when comparing reaction force and stress trends and maximums. However, the equivalent plastic strain (EQPS) results were not quite as favorable. This could be due to differences in how the shear stress is calculated in the two models, and future studies will need to investigate the exact causes. The data indicate that the beam model may be less likely to correctly predict spring failure, defined as inappropriate application of tension and/or compressive forces to a larger assembly. Additionally, this study was able to quantify the computational cost advantage of using a reduced order beam mesh. In the transverse haversine shock case, the hexahedral mesh took over three days with 228 processors to solve, compared to under 10 hours for the ROM using just a single processor. Depending on the required use case for the results, using the beam mesh will significantly improve the speed of workflows, especially when integrated into larger safety component models. However, appropriate use of the ROM should carefully balance these optimized run times with its reduction in accuracy, especially when examining spring failure and outputting variables such as EQPS. Current investigations are broadening the scope of this work to include a validation study comparing the beam ROM to physical testing data.
A comprehensive control strategy is necessary to safely and effectively operate particle-based concentrating solar power (CSP) technologies. Particle-based CSP with thermal energy storage (TES) is an emerging technology with the potential to decarbonize power and process heat applications. The high-temperature nature of particle-based CSP technologies and daily solar transients present challenges for system control to prevent equipment damage and ensure operator safety. An operational control strategy for a tower-based particle CSP system during steady-state and transient conditions, with safety interlocks, is described in this paper. Control of a solar-heated particle recirculation loop, TES, and a supercritical carbon dioxide (sCO2) cooling loop designed to reject 1 MW of thermal power are considered, and the associated operational limitations and their influence on control strategy are discussed.
We demonstrate coherent anti-Stokes Raman scattering (CARS) detection of the CO and N2 molecules in the reaction layer of a graphite material sample exposed to the 5000-6000 K plume of an inductively-coupled plasma torch operating on air. CO is a dominant product in the surface oxidative reaction of graphite and lighter-weight carbon-based thermal-protection-system materials. A standard nanosecond CARS approach using Nd:YAG and a single broadband dye laser with ~200 cm-1 spectral width is employed for demonstration measurements, with the CARS volume located less than 1 mm from an ablating graphite sample. Quantitative measurements of both temperature and the CO/N2 ratio are obtained from model fits to CARS spectra that have been averaged over 5 laser shots. The results indicate that CARS can be used for space- and time-resolved detection of CO in high-temperature ablation tests near atmospheric pressure.
Earth and Space 2022: Space Exploration, Utilization, Engineering, and Construction in Extreme Environments - Selected Papers from the 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments
Analysis of radiation effects on electrical circuits requires computationally efficient compact radiation models. Currently, development of such models is dominated by analytic techniques that rely on empirical assumptions and physical approximations to render the governing equations solvable in closed form. In this paper we demonstrate an alternative numerical approach for the development of a compact delayed photocurrent model for a pn-junction device. Our approach combines a system identification step with a projection-based model order reduction step to obtain a small discrete time dynamical system describing the dynamics of the excess carriers in the device. Application of the model amounts to a few small matrix-vector multiplications having minimal computational cost. We demonstrate the model using a radiation pulse test for a synthetic pn-junction device.
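Once identified, a compact model of this form is applied exactly as the abstract describes: each time step costs a handful of small matrix-vector products. The sketch below shows that application step with stable random placeholder matrices and a hypothetical square dose-rate pulse; it is not the identified pn-junction photocurrent model.

```python
# Applying a reduced discrete-time model x[k+1] = A x[k] + B u[k], y[k] = C x[k].
import numpy as np

rng = np.random.default_rng(0)
r = 6                                               # reduced state dimension

A = rng.standard_normal((r, r))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))    # rescale so the model is stable
B = rng.standard_normal((r, 1))
C = rng.standard_normal((1, r))

def delayed_photocurrent(dose_rate, A=A, B=B, C=C):
    """Propagate the reduced state for an input dose-rate history."""
    x = np.zeros((A.shape[0], 1))
    y = np.empty(len(dose_rate))
    for k, u in enumerate(dose_rate):
        y[k] = (C @ x).item()                       # output: delayed photocurrent sample
        x = A @ x + B * u                           # state update: two small mat-vecs
    return y

# Hypothetical radiation pulse: a short square dose-rate burst.
u = np.zeros(400)
u[10:30] = 1.0
i_delayed = delayed_photocurrent(u)
print("peak response:", i_delayed.max())
```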
Here we examine models for particle curtain dispersion using drag-based formalisms and their connection to streamwise pressure difference closures. Focusing on drag models, we specifically demonstrate that scaling arguments developed in DeMauro et al. [1] using early-time drag modeling can be extended to include late-time particle curtain dispersion behavior by weighting the dynamic portion of the drag relative velocity by the inverse of the particle volume fraction to the 1/4 power. The additional parameter α introduced in this scaling is related to the model drag parameters by employing an early-time/late-time matching argument. Comparison with the scaled measurements of DeMauro et al. suggests that the proposed modification is an effective formalism. Next, the connection between drag-based models and streamwise pressure difference-based expressions is explored by formulating simple analytical models that verify an empirical upstream-downstream expression (Daniel and Wagner [2]). Though simple, these models provide a physics-based approach describing shock-particle curtain interaction behavior.
With increasing penetration of variable renewable generation, battery energy storage systems (BESS) are becoming important for power system stability due to their operational flexibility. In this paper, we propose a method for determining the minimum BESS rated power that guarantees security constraints in a grid subject to disturbances induced by variable renewable generation. The proposed framework leverages sensitivity-based inverse uncertainty propagation where the dynamical responses of the states are parameterized with respect to random variables. Using this approach, the original nonlinear optimization problem for finding the security-constrained uncertainty interval may be formulated as a quadratically-constrained linear program. The resulting estimated uncertainty interval is utilized to find the BESS rated power required to satisfy grid stability constraints.
Power-flow studies on the 30-MA, 100-ns Z facility at Sandia National Labs have shown that plasmas in the facility's magnetically insulated transmission lines can result in a loss of current to the load [1]. During the current pulse, electrode heating causes neutral surface contaminants (water, hydrogen, hydrocarbons, etc.) to desorb, ionize, and form plasmas in the anode-cathode gap [2]. Shrinking typical electrode thicknesses (∼1 cm) to thin foils (5-200 μm) produces observable amounts of plasma on smaller pulsed power drivers (<1 MA) [3]. We suspect that as the electrode bulk thickness decreases relative to the skin depth (50-100 μm for a 100-500-ns pulse in aluminum), the thermal energy delivered to the neutral surface contaminants increases, and thus the contaminants desorb faster from the current-carrying surface.
Event-based sensors are a novel sensing technology that captures the dynamics of a scene via pixel-level change detection. This technology operates with high speed (>10 kHz), low latency (10 µs), low power consumption (<1 W), and high dynamic range (120 dB). Compared to conventional, frame-based architectures that consistently report data for each pixel at a given frame rate, event-based sensor pixels only report data if a change in pixel intensity occurs. This affords the possibility of dramatically reducing the data reported in bandwidth-limited environments (e.g., remote sensing), and thus the data that must be processed, while still recovering significant events. Degraded visual environments, such as those generated by fog, often hinder situational awareness by decreasing optical resolution and transmission range via random scattering of light. To respond to this challenge, we present the deployment of an event-based sensor in a controlled, experimentally generated, well-characterized degraded visual environment (a fog analogue) for detection of a modulated signal, and compare the data collected from the event-based sensor with data from a traditional framing sensor.
7th IEEE Electron Devices Technology and Manufacturing Conference: Strengthen the Global Semiconductor Research Collaboration After the Covid-19 Pandemic, EDTM 2023
Accurate characterization of electrical device behavior is a key component of developing accurate electrical models and assessing reliability. Measurements characterizing an electrical device can be produced from current-voltage (I-V) sweeps. We introduce the pairwise midpoint method (PMM) for estimating the mean of a functional data set and apply it to I-V sweeps from a Zener diode. Comparisons indicate that the PMM is a viable method for describing the mean behavior of a functional data set.
An inherited containment vessel design that has been used in the past to contain items in an environmental testing unit was brought to the Explosives Applications Lab to be analyzed and modified. The goal was to modify the vessel to contain an explosive event of 4g TNT equivalence at least once without failure or significant girth expansion while maintaining a seal. A total of ten energetic tests were performed on multiple vessels. In these tests, the 7075-T6 aluminum vessels were instrumented with thin-film resistive strain gauges and both static and dynamic pressure gauges to study their ability to withstand an oversize explosive charge of 8g. Additionally, high precision girth (pi tape) measurements were taken before and after each test to measure the plastic growth of the vessel due to the event. Concurrent with this explosive testing, hydrocode modeling of the containment vessel and charge was performed. The modeling results were shown to agree with the results measured in the explosive field testing. Based on the data obtained during this testing, this vessel design can be safely used at least once to contain explosive detonations of 8g at the center of the chamber for a charge that will not result in damaging fragments.
Electronic control systems used for quantum computing have become increasingly complex as multiple qubit technologies employ larger numbers of qubits with higher fidelity targets. While the control systems for different technologies share some similarities, parameters such as pulse duration, throughput, real-time feedback, and latency requirements vary widely depending on the qubit type. In this article, we evaluate the performance of modern system-on-chip (SoC) architectures in meeting the control demands associated with performing quantum gates on trapped-ion qubits, particularly focusing on communication within the SoC. A principal focus of this article is the data transfer latency and throughput of several high-speed on-chip mechanisms on Xilinx multiprocessor SoCs, including those that utilize direct memory access (DMA). They are measured and evaluated to determine an upper bound on the time required to reconfigure a gate parameter. Worst-case and average-case bandwidth requirements for a custom gate sequencer core are compared with the experimental results. The lowest-variability, highest-throughput data-transfer mechanism is DMA between the real-time processing unit (RPU) and the programmable logic, where bandwidths up to 19.2 GB/s are possible. For context, this enables the reconfiguration of qubit gates in less than 2 μs, comparable to the fastest gate time. Though this article focuses on trapped-ion control systems, the gate abstraction scheme and measured communication rates are applicable to a broad range of quantum computing technologies.
While significant investments have been made in the exploration of ethics in computation, recent advances in high performance computing (HPC) and artificial intelligence (AI) have reignited a discussion for more responsible and ethical computing with respect to the design and development of pervasive sociotechnical systems within the context of existing and evolving societal norms and cultures. The ubiquity of HPC in everyday life presents complex sociotechnical challenges for all who seek to practice responsible computing and ethical technological innovation. The present paper provides guidelines which scientists, researchers, educators, and practitioners alike, can employ to become more aware of one’s personal values system that may unconsciously shape one’s approach to computation and ethics.
We demonstrate the use of a low-temperature-grown GaAs (LT-GaAs) metasurface as an ultrafast photoconductive switching element gated with 1550 nm laser pulses. The metasurface is designed to enhance the weak two-step photon absorption at 1550 nm, enabling THz pulse detection.
Kolmogorov's theory of turbulence assumes that the small-scale turbulent structures in the energy cascade are universal and are determined by the energy dissipation rate and the kinematic viscosity alone. However, thermal fluctuations, absent from the continuum description, terminate the energy cascade near the Kolmogorov length scale. Here, we propose a simple superposition model to account for the effects of thermal fluctuations on small-scale turbulence statistics. For compressible Taylor-Green vortex flow, we demonstrate that the superposition model in conjunction with data from direct numerical simulation of the Navier-Stokes equations yields spectra and structure functions that agree with the corresponding quantities computed from the direct simulation Monte Carlo method of molecular gas dynamics, verifying the importance of thermal fluctuations in the dissipation range.
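A minimal sketch of a superposition model of this type, assuming the total spectrum is written as the sum of the deterministic Navier-Stokes spectrum and an equilibrium thermal-fluctuation spectrum (notation illustrative, not necessarily that of the paper):
\[ E_{\mathrm{tot}}(k) \;\approx\; E_{\mathrm{NS}}(k) + E_{\mathrm{th}}(k), \qquad E_{\mathrm{th}}(k) \;\propto\; \frac{k_B T}{\rho}\,k^2, \]
where the $k^2$ growth of the thermal contribution reflects equipartition of energy over Fourier modes and indicates why thermal fluctuations dominate the spectrum near and below the Kolmogorov scale.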
Spent nuclear fuel repository simulations are currently not able to incorporate detailed fuel matrix degradation (FMD) process models due to their computational cost, especially when large numbers of waste packages breach. The current paper uses machine learning to develop artificial neural network and k-nearest neighbor regression surrogate models that approximate the detailed FMD process model while being computationally much faster to evaluate. Using fuel cask temperature, dose rate, and the environmental concentrations of CO32−, O2, Fe2+, and H2 as inputs, these surrogates show good agreement with the FMD process model predictions of the UO2 degradation rate for conditions within the range of the training data. A demonstration in a full-scale shale repository reference case simulation shows that the incorporation of the surrogate models captures local and temporal environmental effects on fuel degradation rates while retaining good computational efficiency.
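As a hedged illustration of the surrogate idea described above, the sketch below fits a k-nearest-neighbor regressor on synthetic placeholder data; the input ordering, ranges, and outputs are invented stand-ins for the FMD process-model runs, not the data used in the study.

    # Hedged sketch: k-nearest-neighbor surrogate of a process model.
    # Inputs (assumed ordering): temperature, dose rate, [CO3 2-], [O2], [Fe2+], [H2];
    # output: UO2 degradation rate. All values are synthetic, for illustration only.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.uniform(low=[300.0, 1e-3, 1e-6, 1e-9, 1e-9, 1e-9],
                    high=[500.0, 1e+1, 1e-2, 1e-4, 1e-5, 1e-3],
                    size=(1000, 6))                      # placeholder input samples
    y = rng.lognormal(mean=-8.0, sigma=1.0, size=1000)   # placeholder degradation rates

    surrogate = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
    surrogate.fit(X, y)                                  # would use process-model outputs in practice
    rates = surrogate.predict(X[:3])                     # cheap evaluation inside a repository simulation
    print(rates)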
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests the buy-down of risk by considering the fireball is minimal when considering the blast hazards. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
The National Aeronautics and Space Administration's (NASA) Artemis program seeks to establish the first long-term presence on the Moon as part of a larger goal of sending the first astronauts to Mars. To accomplish this, the Artemis program is designed to develop, test, and demonstrate many technologies needed for deep space exploration and supporting life on another planet. Long-term operations on the lunar base include habitation, science, logistics, and in-situ resource utilization (ISRU). In this paper, a Lunar DC microgrid (LDCMG) structure is the backbone of the energy distribution, storage, and utilization infrastructure. The method used to analyze the LDCMG power distribution network and energy storage system (ESS) design is Hamiltonian surface shaping and power flow control (HSSPFC). The ISRU system will include a networked three-microgrid system with a photovoltaic (PV) array (generation) on one sub-microgrid and water extraction (loads) on the other two microgrids. A reduced-order model (ROM) of the system will be used to create a closed-form analytical model. Ideal ESS devices will be placed alongside each state of the ROM. The ideal ESS devices determine the response needed to conform to a specific operating scenario and system specifications.
IEEE International Symposium on Applications of Ferroelectrics, ISAF 2023, International Symposium on Integrated Functionalities, ISIF 2023 and Piezoresponse Force Microscopy Workshop, PFM 2023, Proceedings
Radio frequency (RF) magnetic devices are key components in RF front ends. However, they are difficult to miniaturize and remain the bulkiest components in RF systems. Acoustically driven ferromagnetic resonance (ADFMR) offers a route towards the miniaturization of RF magnetic devices. The ADFMR literature thus far has focused predominantly on the dynamics of the coupling process, with relatively little work done on the device optimization. In this work, we present an optimized 2 GHz ADFMR device utilizing relaxed SPUDT transducers in lithium tantalate. We report an insertion loss of -13.7 dB and an ADFMR attenuation constant of -71.7 dB/mm, making this device one of the best performing ADFMR devices to date.
Measurements of gas-phase temperature and pressure in hypersonic flows are important for understanding gas-phase fluctuations which can drive dynamic loading on model surfaces and to study fundamental compressible flow turbulence. To achieve this capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied in Sandia National Laboratories' cold-flow hypersonic wind tunnel facility. Measurements were performed for tunnel freestream temperatures of 42–58 K and pressures of 1.5–2.2 Torr. The CARS measurement volume was translated in the flow direction during a 30-second tunnel run using a single computer-controlled translation stage. After broadband femtosecond laser excitation, the rotational Raman coherence was probed twice: once at an early time, before the collisional environment had affected the Raman coherence, and again at a later time, after the collisional environment had led to significant dephasing of the coherence. The gas-phase temperature was obtained primarily from the early-probe CARS spectra, while the gas-phase pressure was obtained primarily from the late-probe CARS spectra. Challenges in implementing fs CARS in this facility, such as changes in the nonresonant spectrum at different measurement locations, are discussed.
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Laros, James H.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted in regular waves on a wave tank model of a bottom-raised oscillating surge wave energy converter (OSWEC). The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed form expressions for added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study, in which system parameters, including damping and added mass values, are varied. The differences between predictions and experiments are likely attributable to tank reflections, standing waves that can occur in long, narrow wave tanks, and the thin-plate assumption employed in the analytical approach.
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP).
Low loss silicon nitride ring resonator reflectors provide feedback to a III/V gain chip, achieving single-mode lasing at 772 nm. The Si3N4 is fabricated in a CMOS-foundry-compatible process that achieves loss values of 0.036 dB/cm.
The Reynolds-averaged Navier–Stokes (RANS) equations remain a workhorse technology for simulating compressible fluid flows of practical interest. Due to model-form errors, however, RANS models can yield erroneous predictions that preclude their use on mission-critical problems. This work presents a data-driven turbulence modeling strategy aimed at improving RANS models for compressible fluid flows. The strategy outlined has three core aspects: (1) prediction for the discrepancy in the Reynolds stress tensor and turbulent heat flux via machine learning (ML), (2) estimating uncertainties in ML model outputs via out-of-distribution detection, and (3) multi-step training strategies to improve feature-response consistency. Results are presented across a range of cases publicly available on NASA’s turbulence modeling resource involving wall-bounded flows, jet flows, and hypersonic boundary layer flows with cold walls. We find that one ML turbulence model is able to provide consistent improvements for numerous quantities-of-interest across all cases.
High-altitude electromagnetic pulse events are a growing concern for electric power grid vulnerability assessments and mitigation planning, and accurate modeling of surge arrester mitigations installed on the grid is necessary to predict pulse effects on existing equipment and to plan future mitigation. While some models of surge arresters at high frequency have been proposed, experimental backing for any given model has not been shown. This work examines a ZnO lightning surge arrester modeling approach previously developed for accurate prediction of nanosecond-scale pulse response. Four ZnO metal-oxide varistor pucks with different sizes and voltage ratings were tested for voltage and current response on a conducted electromagnetic pulse testbed. The measured clamping response was compared to SPICE circuit models to compare the electromagnetic pulse response and validate model accuracy. Results showed good agreement between simulation results and the experimental measurements, after accounting for stray testbed inductance between 100 and 250 nH.
Austenitic stainless steels are used in high-pressure hydrogen containment infrastructure for their resistance to hydrogen embrittlement. Applications for the use of austenitic stainless steels include pressure vessels, tubing, piping, valves, fittings and other piping components. Despite their resistance to brittle behavior in the presence of hydrogen, austenitic stainless steels can exhibit degraded fracture performance. The mechanisms of hydrogen-assisted fracture, however, remain elusive, which has motivated continued research on these alloys. There are two principal approaches to evaluate the influence of gaseous hydrogen on mechanical properties: internal and external hydrogen, respectively. The austenite phase has high solubility and low diffusivity of hydrogen at room temperature, which enables introduction of hydrogen into the material through thermal precharging at elevated temperature and pressure; a condition referred to as internal hydrogen. H-precharged material can subsequently be tested in ambient conditions. Alternatively, mechanical testing can be performed while test coupons are immersed in gaseous hydrogen thereby evaluating the effects of external hydrogen on property degradation. The slow diffusivity of hydrogen in austenite at room temperature can often be a limiting factor in external hydrogen tests and may not properly characterize lower bound fracture behavior in components exposed to hydrogen for long time periods. In this study, the differences between internal and external hydrogen environments are evaluated in the context of fracture resistance measurements. Fracture testing was performed on two different forged austenitic stainless steel alloys (304L and XM-11) in three different environments: 1) non-charged and tested in gaseous hydrogen at pressure of 1,000 bar (external H2), 2) hydrogen precharged and tested in air (internal H), 3) hydrogen precharged and tested in 1,000 bar H2 (internal H + external H2). For all environments, elastic-plastic fracture measurements were conducted to establish J-R curves following the methods of ASTM E1820. Following fracture testing, fracture surfaces were examined to reveal predominant fracture mechanisms for the different conditions and to characterize differences (and similarities) in the macroscale fracture processes associated with these environmental conditions.
A high altitude electromagnetic pulse (HEMP) caused by a nuclear explosion has the potential to severely impact the operation of large-scale electric power grids. This paper presents a top-down mitigation design strategy that considers grid-wide dynamic behavior during a simulated HEMP event - and uses optimal control theory to determine the compensation signals required to protect critical grid assets. The approach is applied to both a standalone transformer system and a demonstrative 3-bus grid model. The performance of the top-down approach relative to conventional protection solutions is evaluated, and several optimal control objective functions are explored. Finally, directions for future research are proposed.
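For context on the objective functions mentioned above, one standard form such an optimal control objective can take is the quadratic cost (a generic sketch; the specific objectives explored in the paper may differ):
\[ J \;=\; \int_0^{T} \left( x^{\top} Q\,x \;+\; u^{\top} R\,u \right) dt, \]
where $x$ collects the grid states to be protected, $u$ is the compensation signal, and the weighting matrices $Q$ and $R$ trade off state deviation against control effort.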
This is an investigation of two experimental datasets of laminar hypersonic flows over a double-cone geometry, acquired in the Calspan-University at Buffalo Research Center's Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could partly be due to mis-specified inlet conditions. The authors of this paper solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset. However, the inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier–Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that deterministic inversion yields inlet conditions that do not agree with what was stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
In recent years, high-altitude infrasound sensing has become more prolific, demonstrating an enormous value especially when utilized over regions inaccessible to traditional ground-based sensing. Similar to ground-based infrasound detectors, airborne sensors take advantage of the fact that impulsive atmospheric events such as explosions can generate low frequency acoustic waves, also known as infrasound. Due to negligible attenuation, infrasonic waves can travel over long distances and provide important clues about their source. Here, we report infrasound detections of the Apollo detonation that was carried out on 29 October 2020 as part of the Large Surface Explosion Coupling Experiment in Nevada, USA. Infrasound sensors attached to solar hot air balloons floating in the stratosphere detected the signals generated by the explosion at distances of 170–210 km. Three distinct arrival phases seen in the signals are indicative of multipathing caused by small-scale perturbations in the atmosphere. We also found that the local acoustic environment at these altitudes is more complex than previously thought.
A wall-modeled large-eddy simulation of a Mach 14 boundary layer flow over a flat plate was carried out for the conditions of the Arnold Engineering Development Complex Hypervelocity Tunnel 9. Adequate agreement of the mean velocity and temperature, as well as Reynolds stress profiles with a reference direct numerical simulation is obtained at much reduced grid resolution. The normalized root-mean-square optical path difference obtained from the present wall-modeled large-eddy simulations and reference direct numerical simulation are in good agreement with each other but below a prediction obtained from a semi-analytical relationship by Notre Dame University. This motivates an evaluation of the underlying assumptions of the Notre Dame model at high Mach number. For the analysis, recourse is taken to previously published wall-modeled large-eddy simulations of a Mach eight turbulent boundary layer. The analysis of the underlying assumptions focuses on the root-mean-square fluctuations of the thermodynamic quantities, on the strong Reynolds analogy, two-point correlations, and the linking equation. It is found that with increasing Mach number, the pressure fluctuations increase and the strong Reynolds analogy over-predicts the temperature fluctuations. In addition, the peak of the correlation length shifts towards the boundary layer edge.
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically-firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface which envelopes the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front face chamfer angle has the largest influence on stability, with low angles being more stable.
Well-skipping radical-radical reactions can provide a chain-propagating pathway for formation of polycyclic radicals implicated in soot inception. Here we use controlled pyrolysis in a microreactor to isolate and examine the role of well-skipping channels in the phenyl (C6H5) + propargyl (C3H3) radical-radical reaction at temperatures of 800–1600 K and pressures near 25 Torr. The temperature and concentration dependence of the closed-shell (C9H8) and radical (C9H7) products are observed using electron-ionization mass spectrometry. The flow in the reactor is simulated using a boundary layer model employing a chemical mechanism based on recent rate coefficient calculations. Comparison between simulation and experiment shows reasonable agreement, within a factor of 3, while suggesting possible improvements to the model. In contrast, eliminating the well-skipping reactions from the chemistry mechanism causes a much larger discrepancy between simulation and experiment in the temperature dependence of the radical concentration, revealing that the well-skipping pathways, especially to form indenyl radical, are significant at temperatures of 1200 K and higher. While most C9H7 forms by well-skipping at 25 Torr, an additional simulation indicates that the well-skipping channels only contribute around 3% of the C9Hx yield at atmospheric pressure, thus indicating a negligible role of the well-skipping pathways at atmospheric and higher pressures.
The Synchronic Web is a highly scalable notary infrastructure that provides tamper-evident data provenance for historical web data. In this document, we describe the applicability of this infrastructure for web archiving across three envisioned stages of adoption. We codify the core mechanism enabling the value proposition: a procedure for splitting and merging cryptographic information fluidly across blockchain-backed ledgers. Finally, we present preliminary performance results that indicate the feasibility of our approach for modern web archiving scales.
Sandia National Laboratories has conducted geomechanical analysis to evaluate the performance of the Strategic Petroleum Reserve by modeling the viscoplastic, or creep, behavior of the salt in which their oil-storage caverns reside. The operation-driven imbalance between fluid pressure within the salt cavern and in-situ stress acting on the surrounding salt can cause the salt to creep, potentially leading to a loss of the cavern volume and consequently deformation of borehole casings. Therefore, the effect of salt creep on borehole casings needs to be better understood to inform cavern operations decisions. To evaluate potential casing damage mechanisms with variation in geological constraints (e.g. material characteristics of salt or caprock) or physical mechanisms of cavern leakage, we developed a generic model with a layered and domal geometry including nine caverns, rather than using a specific field-site model, to save computational cost. The geomechanical outputs, such as cavern volume changes, vertical strain along the dome and caprock above the cavern, and vertical displacement at the surface or cavern top, quantify the impact of material parameters, cavern locations, and multiple operations in multiple caverns on the stability of an individual cavern.
Extreme meteorological events, such as hurricanes and floods, cause significant infrastructure damage and, as a result, prolonged grid outages. To mitigate the negative effect of these outages and enhance the resilience of communities, microgrids consisting of solar photovoltaics (PV), energy storage (ES) technologies, and backup diesel generation are being considered. Furthermore, it is necessary to take into account how the extreme event affects the systems' performance during the outage, often referred to as black-sky conditions. In this paper, an optimization model is introduced to properly size ES and PV technologies to meet various durations of grid outages for selected critical infrastructure while considering black-sky conditions. A case study of the municipality of Villalba, Puerto Rico is presented to identify several potential microgrid configurations that increase the community's resilience. Sensitivity analyses are performed around the grid outage durations and black-sky conditions to better decide what factors should be considered when scoping potential microgrids for community resilience.
Computational engineering models often contain unknown entities (e.g. parameters, initial and boundary conditions) that require estimation from other measured observable data. Estimating such unknown entities is challenging when they involve spatio-temporal fields because such functional variables often require an infinite-dimensional representation. We address this problem by transforming an unknown functional field using Alpert wavelet bases and truncating the resulting spectrum. Hence the problem reduces to the estimation of a few coefficients, which can be performed using common optimization methods. We apply this method to a one-dimensional heat transfer problem where we estimate the heat source field varying in both time and space. The observable data consist of temperatures measured at several thermocouples in the domain. The domain is composed of either copper or stainless steel. The optimization using our method based on wavelets is able to estimate the heat source with an error between 5% and 7%. We analyze the effect of the domain material and the number of thermocouples, as well as the sensitivity to the initial guess of the heat source. Finally, we estimate the unknown heat source using a different approach based on deep learning techniques, where we consider the input and output of a multi-layer perceptron in wavelet form. We find that this deep learning approach is more accurate than the optimization approach, with errors below 4%.
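A minimal sketch of the coefficient-truncation idea described above, assuming a standard Daubechies wavelet (PyWavelets does not provide Alpert multiwavelet bases, so the wavelet here is a stand-in) and a toy one-dimensional heat-source profile:

    # Sketch: represent a 1D field in a wavelet basis and keep only a few coefficients,
    # reducing an infinite-dimensional unknown to a handful of optimization variables.
    # Daubechies 'db4' is used as a stand-in for the Alpert bases of the study.
    import numpy as np
    import pywt

    x = np.linspace(0.0, 1.0, 256)
    field = np.exp(-50 * (x - 0.3) ** 2) + 0.5 * np.exp(-200 * (x - 0.7) ** 2)  # toy source profile

    coeffs = pywt.wavedec(field, "db4", level=4)              # full wavelet spectrum
    arr, slices = pywt.coeffs_to_array(coeffs)

    k = 20                                                    # coefficients retained
    keep = np.argsort(np.abs(arr))[-k:]
    truncated = np.zeros_like(arr)
    truncated[keep] = arr[keep]                               # these k values would be the unknowns to estimate

    recon = pywt.waverec(pywt.array_to_coeffs(truncated, slices, output_format="wavedec"), "db4")
    recon = recon[: field.size]
    print("relative L2 error:", np.linalg.norm(recon - field) / np.linalg.norm(field))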
Physics-Based Reduced Order Models (ROMs) tend to rely on projection-based reduction. This family of approaches utilizes a series of responses of the full-order model to assemble a suitable basis, subsequently employed to formulate a set of equivalent, low-order equations through projection. However, in a nonlinear setting, physics-based ROMs require an additional approximation to circumvent the bottleneck of projecting and evaluating the nonlinear contributions on the reduced space. This scheme is termed hyper-reduction and enables substantial computational time reduction. The aforementioned hyper-reduction scheme implies a trade-off, relying on a necessary sacrifice on the accuracy of the nonlinear terms’ mapping to achieve rapid or even real-time evaluations of the ROM framework. Since time is essential, especially for digital twins representations in structural health monitoring applications, the hyper-reduction approximation serves as both a blessing and a curse. Our work scrutinizes the possibility of exploiting machine learning (ML) tools in place of hyper-reduction to derive more accurate surrogates of the nonlinear mapping. By retaining the POD-based reduction and introducing the machine learning-boosted surrogate(s) directly on the reduced coordinates, we aim to substitute the projection and update process of the nonlinear terms when integrating forward in time on the low-order dimension. Our approach explores a proof-of-concept case study based on a Nonlinear Auto-regressive neural network with eXogenous Inputs (NARX-NN), trying to potentially derive a superior physics-based ROM in terms of efficiency, suitable for (near) real-time evaluations. The proposed ML-boosted ROM (N3-pROM) is validated in a multi-degree of freedom shear frame under ground motion excitation featuring hysteretic nonlinearities.
Accurately measuring aero-optical properties of non-equilibrium gases is critical for characterizing compressible flow dynamics and plasmas. At thermochemical non-equilibrium conditions, excited molecules begin to dissociate, causing optical distortion and non-constant Gladstone-Dale behavior. These regions typically occur behind a strong shock at high temperatures and pressures. Currently, no experimental data exists in the literature due to the small number of facilities capable of reaching such conditions and a lack of diagnostic techniques that can measure index of refraction across large, nearly-discrete gradients. In this work, a quadrature fringe imaging interferometer is applied at the Sandia free-piston high temperature shock tube for high temperature and pressure Gladstone-Dale measurements. This diagnostic resolves high-gradient density changes using a narrowband analog quadrature and broadband reference fringes. Initial simulations for target conditions show large deviations from constant Gladstone-Dale coefficient models and good matches with high temperature and pressure Gladstone-Dale models above 5000 K. Experimental results at 7653 K and 7.87 bar indicate that the index of refraction approaches high temperature and pressure theory, but significant flow bifurcation effects are noted in reflected shock.
Raffaelle, Patrick R.; Wang, George T.; Shestopalov, Alexander A.
The focus of this study was to demonstrate the vapor-phase halogenation of Si(100) and subsequently evaluate the inhibiting ability of the halogenated surfaces toward atomic layer deposition (ALD) of aluminum oxide (Al2O3). Hydrogen-terminated silicon ⟨100⟩ (H−Si(100)) was halogenated using N-chlorosuccinimide (NCS), N-bromosuccinimide (NBS), and N-iodosuccinimide (NIS) in a vacuum-based chemical process. The composition and physical properties of the prepared monolayers were analyzed using X-ray photoelectron spectroscopy (XPS) and contact angle (CA) goniometry. These measurements confirmed that all three reagents were more effective at halogenating H−Si(100) than OH−Si(100) in the vapor phase. The stability of the modified surfaces in air was also tested, with the chlorinated surface showing the greatest resistance to monolayer degradation and silicon oxide (SiO2) generation within the first 24 h of exposure to air. XPS and atomic force microscopy (AFM) measurements showed that the succinimide-derived Hal-Si(100) surfaces exhibited blocking ability superior to that of H−Si(100), a commonly used ALD resist. This halogenation method provides a dry chemistry alternative for creating halogen-based ALD resists on Si(100) in near-ambient environments.
The research investigates novel techniques to enhance supply chain security via addition of configuration management controls to protect Instrumentation and Control (I&C) systems of a Nuclear Power Plant (NPP). A secure element (SE) is integrated into a proof-of-concept testbed by means of a commercially available smart card, which provides tamper resistant key storage and a cryptographic coprocessor. The secure element simplifies setup and establishment of a secure communications channel between the configuration manager and verification system and the I&C system (running OpenPLC). This secure channel can be used to provide copies of commands and configuration changes of the I&C system for analysis.
Previous research has provided strong evidence that CO2 and H2O gasification reactions can provide non-negligible contributions to the consumption rates of pulverized coal (pc) char during combustion, particularly in oxy-fuel environments. Fully quantifying the contribution of these gasification reactions has proven to be difficult, due to the dearth of knowledge of gasification rates at the elevated particle temperatures associated with typical pc char combustion processes, as well as the complex interaction of oxidation and gasification reactions. Gasification reactions tend to become more important at higher char particle temperatures (because of their high activation energy) and they tend to reduce pc oxidation due to their endothermicity (i.e. cooling effect). The work reported here attempts to quantify the influence of the gasification reaction of CO2 in a rigorous manner by combining experimental measurements of the particle temperatures and consumption rates of size-classified pc char particles in tailored oxy-fuel environments with simulations from a detailed reacting porous particle model. The results demonstrate that a specific gasification reaction rate relative to the oxidation rate (within an accuracy of approximately +/- 20% of the pre-exponential value), is consistent with the experimentally measured char particle temperatures and burnout rates in oxy-fuel combustion environments. Conversely, the results also show, in agreement with past calculations, that it is extremely difficult to construct a set of kinetics that does not substantially overpredict particle temperature increase in strongly oxygen-enriched N2 environments. This latter result is believed to result from deficiencies in standard oxidation mechanisms that fail to account for falloff in char oxidation rates at high temperatures.
A wind tunnel test from AEDC Tunnel 9 of a hypersonic turbulent boundary layer is analyzed using several fidelities of numerical simulation including Wall-Modeled Large Eddy Simulation (WMLES), Large Eddy Simulation (LES), and Direct Numerical Simulation (DNS). The DNS was forced to transition to turbulence using a broad spectrum of planar, slow acoustic waves based on the freestream spectrum measured in the tunnel. Results show that the flow transitions through a reasonably natural process, developing into turbulent flow. This is due to several second-mode wave packets advecting downstream and eventually breaking down into turbulence with modest friction Reynolds numbers. The surface shear stress and heat flux agree well with a transitional RANS simulation. Comparisons of DNS data to experimental data show reasonable agreement with regard to mean surface quantities as well as amplitudes of boundary layer disturbances. The DNS does show early transition relative to the experimental data. Several interesting aspects of the DNS and other numerical simulations are discussed. The DNS data are also analyzed through several common methods such as cross-correlations and coherence of the fluctuating surface pressure.
This study investigated the durability of four high temperature coatings for use as Gardon gauge foil coatings. Failure modes and effects analysis has identified the Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity properties were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest rapid high temperature cycling did not significantly impact coating optical properties and physical state. In contrast, prolonged exposure of coatings to high temperatures degraded coating optical properties and physical state. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6-24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provides the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest flux gauge foil coatings could benefit from long duration high temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high flux and high temperature applications.
Recently, stochastic control methods such as deep reinforcement learning (DRL) have proven to be efficient, quickly converging approaches for providing localized grid voltage control. Because of the random dynamical characteristics of grid reactive loads and bus voltages, such stochastic control methods are particularly useful in accurately predicting future voltage levels and in minimizing associated cost functions. Although DRL is capable of quickly inferring future voltage levels given specific voltage control actions, it is prone to high variance when the learning rate or discount factors are set for rapid convergence in the presence of bus noise. Evolutionary learning is also capable of minimizing cost functions and can be leveraged for localized grid control, but it does not infer future voltage levels given specific control inputs and instead simply selects those control actions that result in the best voltage control. For this reason, evolutionary learning is better suited than DRL for voltage control in noisy grid environments. To illustrate this, using a cyber adversary to inject random noise, we compare the use of evolutionary learning and DRL in autonomous voltage control (AVC) under noisy control conditions and show that it is possible to achieve a high mean voltage control using a genetic algorithm (GA). We show that the GA can additionally provide superior AVC to DRL with comparable computational efficiency. We illustrate that the superior noise immunity properties of evolutionary learning make it a good choice for implementing AVC in noisy environments or in the presence of random cyber-attacks.
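A toy genetic-algorithm sketch of the evolutionary-learning approach discussed above; the cost function is an invented stand-in for the grid response (bus voltages perturbed by injected noise) and is not the model or controller used in the study.

    # Minimal genetic-algorithm sketch for voltage control. The cost function is a
    # toy stand-in: deviation of hypothetical bus voltages from 1.0 p.u. with noise.
    import numpy as np

    rng = np.random.default_rng(1)
    n_buses, pop_size, n_gen = 5, 40, 60

    def cost(actions, noise_scale=0.02):
        # hypothetical mapping from control actions to bus voltages, plus adversarial noise
        voltages = 1.0 + 0.1 * np.tanh(actions) + rng.normal(0, noise_scale, size=actions.shape)
        return np.mean((voltages - 1.0) ** 2, axis=-1)

    pop = rng.uniform(-1, 1, size=(pop_size, n_buses))
    for _ in range(n_gen):
        fitness = cost(pop)
        parents = pop[np.argsort(fitness)[: pop_size // 2]]             # selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0, 0.05, size=children.shape)  # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmin(cost(pop))]
    print("best control actions:", np.round(best, 3))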
Uncertainty quantification (UQ) plays a critical role in verifying and validating forward integrated computational materials engineering (ICME) models. Among numerous ICME models, the crystal plasticity finite element method (CPFEM) is a powerful tool that enables one to assess microstructure-sensitive behaviors and thus, bridge material structure to performance. Nevertheless, given its nature of constitutive model form and the randomness of microstructures, CPFEM is exposed to both aleatory uncertainty (microstructural variability), as well as epistemic uncertainty (parametric and model-form error). Therefore, the observations are often corrupted by the microstructure-induced uncertainty, as well as the ICME approximation and numerical errors. In this work, we highlight several ongoing research topics in UQ, optimization, and machine learning applications for CPFEM to efficiently solve forward and inverse problems. The first aspect of this work addresses the UQ of constitutive models for epistemic uncertainty, including both phenomenological and dislocation-density-based constitutive models, where the quantities of interest (QoIs) are related to the initial yield behaviors. We apply a stochastic collocation (SC) method to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning), and dislocation-density-based constitutive models, for three different types of crystal structures, namely face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close packing (hcp) magnesium (Mg). The second aspect of this work addresses the aleatory and epistemic uncertainty with multiple mesh resolutions and multiple constitutive models by the multi-index Monte Carlo method, where the QoI is also related to homogenized materials properties. We present a unified approach that accounts for various fidelity parameters, such as mesh resolutions, integration time-steps, and constitutive models simultaneously. We illustrate how multilevel sampling methods, such as multilevel Monte Carlo (MLMC) and multi-index Monte Carlo (MIMC), can be applied to assess the impact of variations in the microstructure of polycrystalline materials on the predictions of macroscopic mechanical properties. The third aspect of this work addresses the crystallographic texture study of a single void in a cube. Using a parametric reduced-order model (also known as parametric proper orthogonal decomposition) with a global orthonormal basis as a model reduction technique, we demonstrate that the localized dynamic stress and strain fields can be predicted as a spatiotemporal problem.
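For context, the multilevel Monte Carlo estimator referenced above rests on the standard telescoping identity over a hierarchy of fidelities (generic notation; the quantity of interest $P_\ell$ at level $\ell$ is illustrative):
\[ \mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}], \qquad \widehat{P}_{\mathrm{MLMC}} \;=\; \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell} \bigl( P_\ell^{(i)} - P_{\ell-1}^{(i)} \bigr), \quad P_{-1} \equiv 0, \]
so that most samples are taken on coarse, inexpensive levels and only a few corrections $P_\ell - P_{\ell-1}$ are computed with the expensive fine-level model; multi-index Monte Carlo generalizes the single level index $\ell$ to a vector of fidelity parameters (e.g., mesh resolution, time step, constitutive model).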
Distribution systems may experience fast voltage swings in a matter of seconds from distributed energy resources, such as Wind Turbine Generators (WTG) and Photovoltaic (PV) inverters, due to their dependency on variable and intermittent wind speed and solar irradiance. This work proposes a WTG reactive power controller for fast voltage regulation. The controller is tested on a simulation model of a real distribution system. Real wind speed, solar irradiation, and load consumption data are used. The controller is based on a Reinforcement Learning Deep Deterministic Policy Gradient (DDPG) model that determines optimum control actions to avoid significant voltage deviations across the system. The controller has access to voltage measurements at all system buses. Results show that the proposed WTG reactive power controller significantly reduces system-wide voltage deviations across a large number of generation scenarios in order to comply with standardized voltage tolerances.
Albany is a parallel C++ finite element library for solving forward and inverse problems involving partial differential equations (PDEs). In this paper we introduce PyAlbany, a newly developed Python interface to the Albany library. PyAlbany can be used to effectively drive Albany enabling fast and easy analysis and post-processing of applications based on PDEs that are pre-implemented in Albany. PyAlbany relies on the library PyBind11 to bind Python with C++ Albany code. Here we detail the implementation of PyAlbany and showcase its capabilities through a number of examples targeting a heat-diffusion problem. In particular we consider the following: (1) the generation of samples for a Monte Carlo application, (2) a scalability study, (3) a study of parameters on the performance of a linear solver, and finally (4) a tool for performing eigenvalue decompositions of matrix-free operators for a Bayesian inference application.
Several studies have proven how ducted fuel injection (DFI) reduces soot emissions for compression-ignition engines. Nevertheless, no comprehensive study has investigated how DFI performs over a load range in combination with low-net-carbon fuels. In this study, optical-engine experiments were performed with four different fuels (conventional diesel and three low-net-carbon fuels) at low and moderate load, to measure emissions levels and performance. The 1.7-liter single-cylinder optical engine was equipped with a high-speed camera to capture natural luminosity images of the combustion event. Conventional diesel and DFI combustion were investigated at four different dilution levels (to simulate exhaust-gas recirculation effects), from 14 to 21 mol% oxygen in the intake. At a given dilution level, with commercial diesel fuel, DFI reduced soot by 82% at medium load and 75% at low load without increasing NOx. The results further show how DFI with dilution reduces soot and NOx without compromising engine performance or other emission types, especially when combined with low-net-carbon fuels. DFI with the oxygenated low-net-carbon blend HEA67 simultaneously reduced soot and NOx by as much as 93% and 82%, respectively, relative to conventional diesel combustion with commercial diesel fuel. These soot and NOx reductions occurred while lifecycle CO2 was reduced by at least 70% when using low-net-carbon fuels instead of conventional diesel. All emissions changes were compared with future emissions regulations for different vehicle sectors to investigate how DFI can be used to facilitate achievement of the regulations. Finally, the results show how the DFI cases fall below several future emissions regulation levels, reducing the need for aftertreatment systems and potentially lowering the cost of ownership.
Here, we introduce a mathematically rigorous formulation for a nonlocal interface problem with jumps and propose an asymptotically compatible finite element discretization for the weak form of the interface problem. After proving the well-posedness of the weak form, we demonstrate that solutions to the nonlocal interface problem converge to the corresponding local counterpart when the nonlocal data are appropriately prescribed. Several numerical tests in one and two dimensions show the applicability of our technique, its numerical convergence to exact nonlocal solutions, its convergence to the local limit when the horizons vanish, and its robustness with respect to the patch test.
Here we present a new method for coupled linear elasticity problems whose finite element discretization may lead to spatially non-coincident discretized interfaces. Our approach combines the classical Dirichlet–Neumann coupling formulation with a new set of discretized interface conditions obtained through Taylor series expansions. We show that these conditions ensure linear consistency of the coupled finite element solution. We then formulate an iterative solution method for the coupled discrete system and apply the new coupling approach to two representative settings for which we also provide several numerical illustrations. The first setting is a mesh-tying problem in which both coupled structures have the same Lamé parameters whereas the second setting is an interface problem for which the Lamé parameters in the two coupled structures are different.
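One schematic way such Taylor-series-based interface conditions can be written (not necessarily the exact form used in the paper): for a node $x$ on the discretized interface of subdomain 2 and its closest point $\hat{x}$ on the non-coincident discretized interface of subdomain 1, the transferred Dirichlet datum is
\[ u_2(x) \;=\; u_1(\hat{x}) \;+\; \nabla u_1(\hat{x})\,(x - \hat{x}) \;+\; \mathcal{O}\!\left( \lVert x - \hat{x} \rVert^2 \right), \]
so that truncating after the gradient term reproduces linear displacement fields exactly, which is the linear-consistency property noted above.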
Many applications require minimizing the sum of smooth and nonsmooth functions. For example, basis pursuit denoising problems in data science require minimizing a measure of data misfit plus an $\ell^1$-regularizer. Similar problems arise in the optimal control of partial differential equations (PDEs) when sparsity of the control is desired. Here, we develop a novel trust-region method to minimize the sum of a smooth nonconvex function and a nonsmooth convex function. Our method is unique in that it permits and systematically controls the use of inexact objective function and derivative evaluations. When using a quadratic Taylor model for the trust-region subproblem, our algorithm is an inexact, matrix-free proximal Newton-type method that permits indefinite Hessians. We prove global convergence of our method in Hilbert space and demonstrate its efficacy on three examples from data science and PDE-constrained optimization.
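The class of problems targeted above has the generic composite form, illustrated here with the $\ell^1$ case (standard notation; the trust-region algorithm itself is not reproduced):
\[ \min_{x} \; f(x) + \varphi(x), \qquad \varphi(x) = \lambda \lVert x \rVert_1, \]
with $f$ smooth and possibly nonconvex and $\varphi$ convex and nonsmooth. The proximal operator of the $\ell^1$ term, which appears in proximal Newton-type subproblem solves, is the componentwise soft-thresholding
\[ \bigl( \operatorname{prox}_{t\varphi}(v) \bigr)_i \;=\; \operatorname{sign}(v_i)\, \max\bigl( \lvert v_i \rvert - t\lambda,\; 0 \bigr). \]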
Satellite imagery can detect temporary cloud trails or ship tracks formed from aerosols emitted from large ships traversing our oceans, a phenomenon that global climate models cannot directly reproduce. Ship tracks are observable examples of marine cloud brightening, a potential solar climate intervention that shows promise in helping combat climate change. In this paper, we demonstrate a simulation-based approach to learning the behavior of ship tracks based upon a novel stochastic emulation mechanism. Our method uses wind fields to determine the movement of aerosol-cloud tracks and uses a stochastic partial differential equation (SPDE) to model their persistence behavior. This SPDE incorporates both a drift term and a diffusion term, which describe the movement of aerosol particles via wind and their diffusivity through the atmosphere, respectively. We first present our proposed approach with examples using simulated wind fields and ship paths. We then successfully demonstrate our tool by applying approximate Bayesian computation with sequential Monte Carlo for data assimilation.
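A schematic of the drift-diffusion SPDE described above (symbols illustrative): letting $c(\mathbf{x},t)$ denote the aerosol-track field, $\mathbf{u}$ the wind field, $D$ a diffusivity, and $\dot{W}$ space-time noise,
\[ \partial_t c \;+\; \mathbf{u}(\mathbf{x},t) \cdot \nabla c \;=\; \nabla \cdot \bigl( D\,\nabla c \bigr) \;+\; \sigma\,\dot{W}(\mathbf{x},t), \]
where the advection term carries the track with the wind and the diffusion and noise terms govern its spreading and persistence.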
Interfacial segregation and chemical short-range ordering influence the behavior of grain boundaries in complex concentrated alloys. In this study, we use atomistic modeling of a NbMoTaW refractory complex concentrated alloy to provide insight into the interplay between these two phenomena. Hybrid Monte Carlo and molecular dynamics simulations are performed on columnar grain models to identify equilibrium grain boundary structures. Our results reveal extended near-boundary segregation zones that are much larger than traditional segregation regions, which also exhibit chemical patterning that bridges the interfacial and grain interior regions. Furthermore, structural transitions pertaining to an A2-to-B2 transformation are observed within these extended segregation zones. Both grain size and temperature are found to significantly alter the widths of these regions. An analysis of chemical short-range order indicates that not all pairwise elemental interactions are affected by the presence of a grain boundary equally, as only a subset of elemental clustering types are more likely to reside near certain boundaries. The results emphasize the increased chemical complexity that is associated with near-boundary segregation zones and demonstrate the unique nature of interfacial segregation in complex concentrated alloys.
Li-metal batteries (LMBs) employing conversion cathode materials (e.g., FeF3) are a promising way to prepare inexpensive, environmentally friendly batteries with high energy density. Pseudo-solid-state ionogel separators harness the energy density and safety advantages of solid-state LMBs, while alleviating key drawbacks (e.g., poor ionic conductivity and high interfacial resistance). In this work, a pseudo-solid-state conversion battery (Li-FeF3) is presented that achieves stable, high rate (1.0 mA cm–2) cycling at room temperature. The batteries described herein contain gel-infiltrated FeF3 cathodes prepared by exchanging the ionic liquid in a polymer ionogel with a localized high-concentration electrolyte (LHCE). The LHCE gel merges the benefits of a flexible separator (e.g., adaptation to conversion-related volume changes) with the excellent chemical stability and high ionic conductivity (~2 mS cm–1 at 25 °C) of an LHCE. The latter property is in contrast to previous solid-state iron fluoride batteries, where poor ionic conductivities necessitated elevated temperatures to realize practical power levels. Importantly, the stable, room-temperature Li-FeF3 cycling performance obtained with the LHCE gel at high current densities paves the way for exploring a range of architectures including flexible, three-dimensional, and custom shape batteries.
Automatic differentiation (AD) is a well-known technique for evaluating analytic derivatives of calculations implemented on a computer, with numerous software tools available for incorporating AD technology into complex applications. However, a growing challenge for AD is the efficient differentiation of parallel computations implemented on emerging manycore computing architectures such as multicore CPUs, GPUs, and accelerators as these devices become more pervasive. In this work, we explore forward mode, operator overloading-based differentiation of C++ codes on these architectures using the widely available Sacado AD software package. In particular, we leverage Kokkos, a C++ tool providing APIs for implementing parallel computations that is portable to a wide variety of emerging architectures. We describe the challenges that arise when differentiating code for these architectures using Kokkos, and two approaches for overcoming them that ensure optimal memory access patterns as well as expose additional dimensions of fine-grained parallelism in the derivative calculation. We describe the results of several computational experiments that demonstrate the performance of the approach on a few contemporary CPU and GPU architectures. We then conclude with applications of these techniques to the simulation of discretized systems of partial differential equations.
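As a language-agnostic illustration of the operator-overloading forward-mode idea discussed above (Sacado implements this in C++ with expression templates and Kokkos handles the parallel dispatch; the toy dual-number class below is a pedagogical stand-in, not Sacado's API):

    # Minimal illustration of operator-overloading forward-mode AD using dual numbers.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der      # value and directional derivative

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)  # product rule

        __rmul__ = __mul__

    def f(x):
        return 3.0 * x * x + 2.0 * x + 1.0    # any code written against these overloads

    x = Dual(2.0, 1.0)                         # seed derivative dx/dx = 1
    y = f(x)
    print(y.val, y.der)                        # 17.0 and f'(2) = 14.0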
Wind turbine applications that leverage nacelle-mounted Doppler lidar are hampered by several sources of uncertainty in the lidar measurement, affecting both bias and random errors. Two problems encountered especially for nacelle-mounted lidar are solid interference, due to intersection of the line of sight with solid objects behind, within, or in front of the measurement volume, and spectral noise, due primarily to limited photon capture. These two uncertainties, especially that due to solid interference, can be reduced with high-fidelity retrieval techniques (i.e., including both quality assurance/quality control and subsequent parameter estimation). Our work compares three such techniques, including conventional thresholding, advanced filtering, and a novel application of supervised machine learning with ensemble neural networks, based on their ability to reduce uncertainty introduced by the two observed nonideal spectral features while keeping data availability high. The approach leverages data from a field experiment involving a continuous-wave (CW) SpinnerLidar from the Technical University of Denmark (DTU) that provided scans of a wide range of flows both unwaked and waked by a field turbine. Independent measurements from an adjacent meteorological tower within the sampling volume permit experimental validation of the instantaneous velocity uncertainty remaining after retrieval that stems from solid interference and strong spectral noise, a validation that has not been performed previously. All three methods perform similarly for non-interfered returns, but the advanced filtering and machine learning techniques perform better when solid interference is present, which allows them to produce overall standard deviations of error between 0.2 and 0.3 m s−1, a 1%–22% improvement versus the conventional thresholding technique, over the rotor height for the unwaked cases. Between the two improved techniques, the advanced filtering produces 3.5% higher overall data availability, while the machine learning offers a faster runtime (i.e., 1/41 s to evaluate) and is therefore more commensurate with the requirements of real-time turbine control. The retrieval techniques are described in terms of application to CW lidar, though they are also relevant to pulsed lidar. Previous work by the authors (Brown and Herges, 2020) explored a novel attempt to quantify uncertainty in the output of a high-fidelity lidar retrieval technique using simulated lidar returns; this article provides true uncertainty quantification versus independent measurement and does so for three techniques rather than one.
This report is the revised (Revision 9) Task F specification for DECOVALEX-2023. Task F is a comparison of the models and methods used in deep geologic repository performance assessment. The task proposes to develop a reference case for a mined repository in a fractured crystalline host rock (Task F1) and a reference case for a mined repository in a salt formation (Task F2). Teams may choose to participate in the comparison for either or both reference cases. For each reference case, a common set of conceptual models and parameters describing features, events, and processes that impact performance will be given, and teams will be responsible for determining how best to implement and couple the models. The comparison will be conducted in stages, beginning with a comparison of key outputs of individual process models, followed by a comparison of a single deterministic simulation of the full reference case, and moving on to uncertainty propagation and uncertainty and sensitivity analysis. This report provides background information, a summary of the proposed reference cases, and a staged plan for the analysis.
Clem, Paul G.; Nieves, Cesar A.; Yuan, Mengxue; Ogrinc, Andrew L.; Furman, Eugene; Kim, Seong H.; Lanagan, Michael T.
Ionic conduction in silicate glasses is mainly influenced by the nature, concentration, and mobility of the network-modifying (NWM) cations. The electrical conduction in soda-lime silicate (SLS) glass is dominated by the ionic migration of sodium from the anode to the cathode. An activation energy for this conduction process was calculated to be 0.82 eV, in good agreement with values previously reported. The conduction process associated with the leakage current and the relaxation peak in thermally stimulated depolarization current (TSDC) measurements for high-purity fused silica (HPFS) is attributed to conduction between nonbridging oxygen hole centers (NBOHC). It is suggested that ≡Si–OH = ≡Si–O− + H0 under thermo-electric poling, promoting hole or proton injection from the anode and giving rise to the 1.5 eV relaxation peak. No previous TSDC data have been found to corroborate this mechanism. The higher activation energy and lower current intensity for the coated HPFS might be attributed to a lower concentration of NBOHC after heat treatment (Si–OH + HO–Si = Si–O–Si + H2O). This could explain the TSDC signal around room temperature for the coated HPFS. Another possible explanation could be a redox reaction at the anode region dominating the current response.
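For context, activation energies such as the 0.82 eV and 1.5 eV values quoted above are conventionally extracted from an Arrhenius analysis of the thermally activated conduction or relaxation process (a standard relation, not specific to this work): σ(T) = σ0 exp(−Ea/kBT), so that Ea follows from the slope of ln σ plotted against 1/T (or, for TSDC, from the initial low-temperature rise of the depolarization current).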
This report provides a summary of measurement results used to compare the performance of the PHDS Fulcrum40h and Ortec Detective-X High Purity Germanium (HPGe) detector systems. Specifically, the collected measurement data were used to assess each detector system for gamma efficiency and resolution, gamma angular response and efficiency for an in-situ surface distribution, neutron efficiency, gamma pulse-pileup response, and gamma-to-neutron crosstalk.
Cemented annulus fractures are a major leakage path in a wellbore system, and their permeability plays an important role in the behavior of fluid flow through a leaky wellbore. The permeability of these fractures is affected by changing conditions, including the external stresses acting on the fracture and the fluid pressure within the fracture. Laboratory gas flow experiments were conducted in a triaxial cell to evaluate the permeability of a wellbore cement fracture under a wide range of confining stress and pore pressure conditions. For the first time, an effective stress law that considers the simultaneous effect of confining stress and pore pressure was defined for the wellbore cement fracture permeability. Here, the results showed that the effective stress coefficient (λ) for permeability increased linearly with the Terzaghi effective stress (σ − p), with an average of λ = 1 over the range of applied pressures. The relationship between the effective stress and fracture permeability was examined using two physically based models widely used for rock fractures. The results from the experimental work were incorporated into numerical simulations to estimate the impact of effective stress on the interpreted hydraulic aperture and leakage behavior through a fractured annular cement. Accounting for effective stress-dependent permeability along the wellbore length significantly increased the leakage rate at the wellhead compared with the assumption of a constant cemented annulus permeability.
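The effective stress law referenced above is commonly written in the general form σeff = σc − λp (symbols introduced here for illustration), where σc is the confining stress, p the pore pressure, and λ the effective stress coefficient; fracture permeability is then treated as a function of σeff, and λ = 1 recovers the Terzaghi definition σ − p used above.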
The development of empirical data to support realistic, science-based input to safety regulations and transportation standards is a critical need for the hazardous material (HM) transportation industry. Current regulations and standards are based on the TNT equivalency model. However, real-world experience indicates that use of the TNT equivalency model to predict composite overwrapped pressure vessel (COPV) potential energy release is unrealistically conservative. The purpose of this report is to characterize and quantify rupture events involving damaged COPVs of the type used in HM transportation regulated by the Department of Transportation (DOT). This was accomplished using a series of five tests: two COPV tests with compressed natural gas (CNG), two with hydrogen, and one with nitrogen. Measured overpressures from these tests were compared to predicted overpressures from a TNT equivalence model and blast curves. Comparison between the measurements and predictions shows that the predictions are generally conservative and that the extent of conservatism is dominated by predictions of the chemical contribution to overpressure from fuel within the COPVs.
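For reference, TNT equivalency models of this kind typically convert the stored energy of the vessel contents into an equivalent TNT mass (a common formulation given here for context; the specific model used in the report is not restated): WTNT = E / (4.184 MJ/kg), where the mechanical energy of the compressed gas is often estimated with a Brode-type expression E ≈ (p1 − p0)V/(γ − 1), with p1 the burst pressure, p0 ambient pressure, V the vessel volume, and γ the ratio of specific heats; predicted overpressure at a standoff distance R is then read from standard TNT blast curves using the scaled distance Z = R/WTNT^(1/3). For flammable contents such as CNG or hydrogen, a chemical energy term is added to the mechanical term, which is the contribution identified above as dominating the conservatism.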
National security applications require artificial neural networks (ANNs) that consume less power, are fast and dynamic online learners, are fault tolerant, and can learn from unlabeled and imbalanced data. We explore whether two fundamentally different learning algorithms, one traditional to artificial intelligence and one observed in the biological brain, can be merged. We tackle this problem from two directions. First, we start from a theoretical point of view and show that the spike-timing-dependent plasticity (STDP) learning curve observed in biological networks can be derived using the mathematical framework of backpropagation through time. Second, we show that transmission delays, as observed in biological networks, improve the ability of spiking networks to perform classification when trained using a backpropagation of error (BP) method. These results provide evidence that STDP could be compatible with a BP learning rule. Combining these learning algorithms will likely lead to networks more capable of meeting our national security missions.
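The STDP learning curve referred to above is commonly parameterized as an exponential window over the pre/post spike-time difference Δt = tpost − tpre (a standard form given here for illustration; the paper's derivation from backpropagation through time is not reproduced): Δw = A+ exp(−Δt/τ+) for Δt > 0 (potentiation) and Δw = −A− exp(Δt/τ−) for Δt < 0 (depression), where A± set the magnitudes and τ± the temporal widths of the potentiation and depression lobes.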
Kim, Anthony D.; Curwen, Christopher A.; Wu, Yu; Reno, John L.; Addamane, Sadhvikas J.; Williams, Benjamin S.
Terahertz (THz) external-cavity lasers based on quantum-cascade (QC) metasurfaces are emerging as widely tunable, single-mode sources with the potential to cover the 1–6 THz range in discrete bands with milliwatt-level output power. By operating on an ultra-short cavity with a length on the order of the wavelength, the QC vertical-external-cavity surface-emitting-laser (VECSEL) architecture enables continuous, broadband tuning while producing high-quality beam patterns and scalable power output. The methods and challenges for designing the metasurface at different frequencies are discussed. As the QC-VECSEL is scaled below 2 THz, the primary challenges are reduced gain from the QC active region, increased metasurface quality factor and its effect on tunable bandwidth, and larger power consumption due to a correspondingly scaled metasurface area. At frequencies above 4.5 THz, challenges arise from a reduced metasurface quality factor and the excess absorption that occurs from proximity to the Reststrahlen band. Results are reported for four different devices with center frequencies of 1.8 THz, 2.8 THz, 3.5 THz, and 4.5 THz. Each device demonstrated at least 200 GHz of continuous single-mode tuning, with the largest range being 650 GHz around 3.5 THz. The limitations of the tuning range are well modeled by a Fabry–Pérot cavity model that accounts for the reflection phase of the metasurface and the effect of the metasurface quality factor on laser threshold. Lastly, the effect of different output couplers on device performance is studied, demonstrating a significant trade-off between slope efficiency and tuning bandwidth.
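The Fabry–Pérot description above amounts to requiring the external-cavity round-trip phase to close on itself (a simplified statement of the model, with symbols introduced here for illustration): (4πν/c)Lc + φMS(ν) + φOC(ν) = 2πm, where Lc is the external cavity length, φMS and φOC are the reflection phases of the metasurface and output coupler, ν the lasing frequency, and m an integer; the frequency dependence of φMS, together with the effect of the metasurface quality factor on laser threshold, sets the limits of the continuous tuning range described above.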
In order to meet 2025 goals for enhanced peak power (100 kW), specific power (50 kW/L), and reduced cost (3.3 $/kW) in a motor that can operate at ≥ 20,000 rpm, improved soft magnetic materials must be developed. Better-performing soft magnetic materials will also enable rare-earth-free electric motors. In fact, replacement of permanent magnets with soft magnetic materials was highlighted in the Electrical and Electronics Technical Team (EETT) Roadmap as an R&D pathway for meeting 2025 targets. Eddy current losses in conventional soft magnetic materials, such as silicon steel, begin to significantly impact motor efficiency as rotational speed increases. Soft magnetic composites (SMCs), which combine magnetic particles with an insulating matrix to boost electrical resistivity (ρ) and decrease eddy current losses even at higher operating frequencies (or rotational speeds), are an attractive solution. Today, SMCs are being fabricated with values of ρ ranging from 10⁻³ to 10⁻¹ μΩ·m, which is significantly higher than that of 3% silicon steel (~0.05 μΩ·m). The isotropic nature of SMCs is ideally suited for motors with 3D flux paths, such as axial flux motors. Additionally, the manufacturing cost of SMCs is low, and they are highly amenable to advanced manufacturing and net-shaping into complex geometries, which further reduces manufacturing costs. There is still significant room for advancement in SMCs, and therefore for additional improvements in electrical machine performance. For example, despite the inclusion of a non-magnetic insulating material, the electrical resistivities of SMCs are still far below those of soft ferrites (10–10⁸ μΩ·m).
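The frequency and resistivity dependence motivating SMCs is commonly captured by the classical eddy-current loss expression for a lamination or particle of characteristic thickness d (a textbook approximation given for context, not taken from this report): Pe ≈ π²f²Bp²d²/(6ρ) per unit volume, where f is the excitation frequency, Bp the peak flux density, and ρ the electrical resistivity; losses therefore grow with the square of frequency (or rotational speed) and fall inversely with resistivity, which is why raising ρ with an insulating matrix and limiting particle size are both effective.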
More efficient power conversion devices are able to transmit greater electrical power across larger distances to satisfy growing global electrical needs. Critical to achieving more efficient power conversion are the soft magnetic materials used as core materials in transformers, inductors, and motors. To that end, it is well known that the use of non-equilibrium microstructures, which are, for example, nanocrystalline or consist of single-phase solid solutions, can yield the high saturation magnetic polarization and high electrical resistivity necessary for more efficient soft magnetic materials. In this work, we synthesized CoFe–P soft magnetic alloys containing nanocrystalline, single-phase solid-solution microstructures and studied the effect of a secondary intermetallic phase on the saturation magnetic polarization and electrical resistivity of the consolidated alloy. Single-phase solid-solution CoFe–P alloys were prepared by mechanically alloying metal powders, and phase decomposition was observed after subsequent consolidation via spark plasma sintering (SPS) at various temperatures. The secondary intermetallic phase was identified as the orthorhombic (CoxFe1−x)2P phase, and the magnetic properties of this intermetallic phase were found to be detrimental to the soft magnetic properties of the targeted CoFe–P alloy.
Clays are known for their small particle sizes and complex layer stacking. We show here that the limited dimension of clay particles arises from the lack of long-range order in low-dimensional systems. Because of its weak interlayer interaction, a clay mineral can be treated as two separate low-dimensional systems: a 2D system for individual phyllosilicate layers and a quasi-1D system for layer stacking. The layer stacking or ordering in an interstratified clay can be described by a 1D Ising model, while the limited extension of individual phyllosilicate layers can be related to a 2D Berezinskii–Kosterlitz–Thouless transition. This treatment allows for a systematic prediction of clay particle size distributions and layer stacking as controlled by the physical and chemical conditions for mineral growth and transformation. Clay minerals provide a useful model system for studying the transition from 1D to 3D behavior in crystal growth and for the nanoscale structural manipulation of layered materials more generally.
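In the quasi-1D stacking description referred to above, the simplest form of the model is a nearest-neighbor Ising chain (a minimal form given for illustration; the paper's specific parameterization is not restated): H = −J Σi si si+1, where si = ±1 labels the layer type (or stacking state) at position i along the stacking direction and J is the interaction between adjacent layers. Because a 1D Ising chain has no long-range order at any finite temperature, stacking coherence extends only over a finite correlation length, consistent with the limited particle dimensions discussed above.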