Thermal spray processes benefit from workpiece cooling to prevent overheating of the substrate and to retain metallurgical properties (e.g., temper). Cold-gas “plume quenching” is a plume-targeting cooling technique in which an argon curtain is directed laterally above the substrate surface to redirect high-temperature gases without impacting particle motion. However, there has been little investigation of its effect on the molten particles and the resulting coating properties. This study examined high- and medium-density tantalum and nickel coatings, fabricated by Controlled Atmosphere Plasma Spray with and without plume quenching on aluminum and titanium substrates. To compare the effect of plume quenching, the deposition efficiency was calculated from coating mass gain, and the coating density, stiffness, and adhesion were measured. The tantalum and nickel coatings were largely unaffected by plume quenching with respect to deposition efficiency, coating density, adhesion, and stiffness. These results indicate that a plume quench could be used without affecting the coating properties of high- and medium-density metals while providing the benefit of substrate cooling, which increases with higher plume quench gas flow rates.
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electric signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology; therefore, a wide range of analyses is required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated receptacle structure was studied using high-fidelity simulations. Structural dynamic simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics. Subsequent multi-physics simulations are discussed that relate the contact mechanics associated with the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized by data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in both the simulation and experimental approaches so that the relationship between the two could be established.
Extreme meteorological events, such as hurricanes and floods, cause significant infrastructure damage and, as a result, prolonged grid outages. To mitigate the negative effects of these outages and enhance the resilience of communities, microgrids consisting of solar photovoltaics (PV), energy storage (ES) technologies, and backup diesel generation are being considered. Furthermore, it is necessary to take into account how the extreme event affects the systems' performance during the outage, often referred to as black-sky conditions. In this paper, an optimization model is introduced to properly size ES and PV technologies to meet grid outages of various durations for selected critical infrastructure while considering black-sky conditions. A case study of the municipality of Villalba, Puerto Rico is presented to identify several potential microgrid configurations that increase the community's resilience. Sensitivity analyses are performed around the grid outage durations and black-sky conditions to better determine what factors should be considered when scoping potential microgrids for community resilience.
The research investigates novel techniques to enhance supply chain security via the addition of configuration management controls to protect Instrumentation and Control (I&C) systems of a Nuclear Power Plant (NPP). A secure element (SE) is integrated into a proof-of-concept testbed by means of a commercially available smart card, which provides tamper-resistant key storage and a cryptographic coprocessor. The secure element simplifies the setup and establishment of a secure communications channel between the configuration manager and verification system and the I&C system (running OpenPLC). This secure channel can be used to provide copies of commands and configuration changes of the I&C system for analysis.
High-reliability (Hi-Rel) electronics for mission-critical applications are handled with extreme care; stress testing upon full assembly can increase the likelihood of degrading these systems before their deployment. Moreover, novel material parts, such as wide bandgap semiconductor devices, tend to have more complicated fabrication processing needs, which could ultimately result in larger part variability or potential defects. Therefore, an intelligent screening and inspection technique for electronic parts, in particular gallium nitride (GaN) power transistors, is presented in this paper. We present a machine-learning-based non-intrusive technique that can enhance part-selection decisions by categorizing part samples against the population's expected electrical characteristics. This technique provides relevant information about GaN HEMT device characteristics without having to operate all of these devices in the high-current region of the transfer and output characteristics, lowering the risk of damaging the parts prematurely. The proposed non-intrusive technique injects a small-signal pulse width modulation (PWM) waveform of various frequencies, ranging from 10 kHz to 500 kHz, into the transistor terminals, and the corresponding output signals are observed and used as the training dataset. Unsupervised clustering with K-means and feature dimensionality reduction through principal component analysis (PCA) are used to correlate a population of GaN HEMT transistors to the expected mean of the devices' electrical characteristic performance.
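The PCA-plus-K-means screening step can be illustrated with a minimal sketch. The feature values below are synthetic stand-ins, not measured GaN HEMT responses; the dimensions and cluster means are assumptions chosen only to make the example runnable. PCA via SVD reduces each part's feature vector, and a small K-means loop then groups parts relative to the population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each row is a feature vector extracted from one
# part's output-signal response to PWM excitation at several frequencies.
# Real features would come from the 10 kHz-500 kHz measurements.
n_features = 8
nominal = rng.normal(0.0, 0.1, (50, n_features))   # parts near the population mean
shifted = rng.normal(1.5, 0.1, (10, n_features))   # parts with atypical response
X = np.vstack([nominal, shifted])

# PCA via SVD of the mean-centered data; keep the top two principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Minimal K-means (k=2) on the PCA scores, initialized at the extremes of the
# first principal component so the sketch is deterministic.
centers = Z[[np.argmin(Z[:, 0]), np.argmax(Z[:, 0])]]
for _ in range(20):
    labels = np.argmin(((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    centers = np.array([Z[labels == j].mean(axis=0) for j in range(2)])

# Parts in the majority cluster match the population's expected characteristics;
# the minority cluster warrants closer screening.
```

The same structure applies when the features are real measured responses; only the feature extraction changes.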
We present the single-event upset (SEU) sensitivity and single-event latchup (SEL) results from proton and heavy-ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
Here we consider the shock stand-off distance for blunt forebodies using a simplified differential-based approach with extensions for high-enthalpy dissociative chemistry effects. Following Rasmussen [4], self-similar differential equations valid for spherical and cylindrical geometries, modified to focus on the shock-curvature-induced vorticity in the immediate region of the shock, are solved to provide a calorically perfect estimate for shock stand-off distance that yields good agreement with classical theory. While useful as a limiting case, the strong-shock (high-enthalpy) calorically perfect results require modification to include the effects of dissociative thermo-chemistry. Using an ideal dissociating gas model for equilibrium behavior combined with shock Hugoniot constraints, we solve for thermodynamic modifications to the shock density jump, thereby sensitizing the simpler result to high-enthalpy effects. The resulting estimates are then compared to high-enthalpy stand-off data from the literature, recent dedicated high-speed shock tunnel measurements, and multi-temperature partitioned-implementation CFD data sets. Generally, the theoretical results derived here compare well with these data sources, suggesting that the current formulation provides an approximate but useful estimate for shock stand-off distance.
The recently developed ability to control phosphorus doping of silicon at an atomic level using scanning tunneling microscopy, a technique known as atomic precision advanced manufacturing (APAM), has allowed us to tailor electronic devices with atomic precision and thus has emerged as a way to explore new possibilities in Si electronics. In these applications, critical questions include where current flow actually occurs in or near APAM structures and whether leakage currents are present. In general, detection and mapping of current flow in APAM structures are valuable diagnostic tools for obtaining reliable devices in digital-enhanced applications. In this paper, we used nitrogen-vacancy (NV) centers in diamond for wide-field magnetic imaging (with a few-mm field of view and micron-scale resolution) of magnetic fields from surface currents flowing in an APAM test device made of a P delta-doped layer on a Si substrate, a standard APAM witness material. We integrated a diamond having a surface NV ensemble with the device (patterned in two parallel mm-sized ribbons), then mapped the magnetic field from the DC current injected in the APAM device in a home-built NV wide-field microscope. The 2D magnetic field maps were used to reconstruct the surface current densities, allowing us to obtain information on current paths, device failures such as choke points where current flow is impeded, and current leakage outside the APAM-defined P-doped regions. Analysis of the reconstructed current density map showed a projected sensitivity of ∼0.03 A m−1, corresponding to a smallest-detectable current in the 200 μm wide APAM ribbon of ∼6 μA. These results demonstrate the failure analysis capability of NV wide-field magnetometry for APAM materials, opening the possibility of investigating other cutting-edge microelectronic devices.
Vulcan is a new pulsed power system at Sandia National Laboratories based on fast Marx technology. Vulcan will serve as an intermediate scale demonstration of a fast Marx system and as a testbed for vacuum insulator testing. Vulcan uses multiple parallel fast Marxes, in a layout we call a Fast Marx Array (FMA), and a pulse forming line (PFL) to generate pulses up to 5 MV with effective pulse lengths for vacuum insulator testing that are relevant to larger facilities like Z. Vulcan consists of two parallel 25 stage Marxes with a total stored energy of up to 20 kJ. Vulcan applies up to 5 MV to a vacuum insulator stack load, thereby enabling testing of large area insulator stacks with areas on the order of 1000 cm2. The PFL design includes an oil output switch to adjust the voltage stress duration applied to the vacuum insulator. We will discuss Vulcan's design, including the FMA, Marx trigger generator, energy diverter, PFL, oil output switch, and results of initial commissioning experiments.
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and Light Water Reactor Sustainability Programs, have conducted testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 91.4 cm in depth) and more complex three-dimensional (circular cross sections of longer lengths up to 9.1 m and changes in direction) opening configurations. The primary impact of this effort is to define the physical designs through which an adversary could successfully pass, as well as the designs of potentially complex openings that an adversary would not be expected to successfully traverse. These data can then be used to support risk-informed decision making.
Here we examine models for particle curtain dispersion using drag-based formalisms and their connection to streamwise pressure difference closures. Focusing on drag models, we specifically demonstrate that scaling arguments developed in DeMauro et al. [1] using early-time drag modeling can be extended to include late-time particle curtain dispersion behavior by weighting the dynamic portion of the drag relative velocity by the inverse of the particle volume fraction to the 1/4 power. The additional parameter α introduced in this scaling is related to the model drag parameters by employing an early-time/late-time matching argument. Comparison with the scaled measurements of DeMauro et al. suggests that the proposed modification is an effective formalism. Next, the connection between drag-based models and streamwise pressure difference-based expressions is explored by formulating simple analytical models that verify an empirical upstream-downstream expression (Daniel and Wagner [2]). Though simple, these models provide physics-based approaches to describing shock-particle curtain interaction behavior.
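In symbols, the scaling described above amounts to the following sketch, with the exact functional form and coefficients given in the cited work. Writing \( u_g - u_p \) for the gas-particle relative velocity and \( \phi_p \) for the particle volume fraction (symbols assumed here for illustration), the dynamic portion of the drag relative velocity is weighted as

\[
\Delta u_{\mathrm{eff}} = \alpha \, \phi_p^{-1/4} \, (u_g - u_p),
\]

where \( \alpha \) is the additional parameter fixed by matching the early-time drag behavior to the late-time dispersion behavior.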
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
Many High Performance Computing (HPC) facilities have developed and deployed frameworks in support of continuous monitoring and operational data analytics (MODA) to help improve efficiency and throughput. Because of the complexity and scale of systems and workflows and the need for low-latency response to address dynamic circumstances, automated feedback and response have the potential to be more effective than current human-in-the-loop approaches which are laborious and error prone. Progress has been limited, however, by factors such as the lack of infrastructure and feedback hooks, and successful deployment is often site- and case-specific. In this position paper we report on the outcomes and plans from a recent Dagstuhl Seminar, seeking to carve a path for community progress in the development of autonomous feedback loops for MODA, based on the established formalism of similar (MAPE-K) loops in autonomous computing and self-adaptive systems. By defining and developing such loops for significant cases experienced across HPC sites, we seek to extract commonalities and develop conventions that will facilitate interoperability and interchangeability with system hardware, software, and applications across different sites, and will motivate vendors and others to provide telemetry interfaces and feedback hooks to enable community development and pervasive deployment of MODA autonomy loops.
We consider the intersection between nonrepeating random FM (RFM) waveforms and practical forms of optimal mismatched filtering (MMF). Specifically, the spectrally-shaped inverse filter (SIF) is a well-known approximation to the least-squares MMF (LS-MMF) that provides significant computational savings. Given that nonrepeating waveforms likewise require unique nonrepeating MMFs, this efficient form is an attractive option. Moreover, both RFM waveforms and the SIF rely on spectrum shaping, which establishes a relationship between the goodness of a particular waveform and the mismatch loss (MML) the corresponding filter can achieve. Both simulated and open-air experimental results are shown to demonstrate performance.
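One common SIF construction can be sketched in a few lines. This is an illustrative stand-in rather than the paper's exact formulation: the random-FM waveform model, the Gaussian shaping window, and the regularization constant are all assumptions. The filter is formed in the frequency domain as a shaping window divided by the waveform spectrum, regularized so near-zero spectral values do not blow up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random-FM-like waveform: constant modulus, random phase walk.
N = 256
phase = np.cumsum(rng.uniform(-0.5, 0.5, N))
s = np.exp(1j * 2 * np.pi * phase)

# Zero-pad to the filter length and work in the frequency domain.
K = 4 * N
S = np.fft.fft(s, K)

# Spectrally-shaped inverse filter (a sketch): an assumed Gaussian shaping
# window W over the inverse of the waveform spectrum; eps regularizes
# division by near-zero spectral values (and controls mismatch loss).
f = np.fft.fftfreq(K)
W = np.exp(-0.5 * (f / 0.15) ** 2)
eps = 1e-3 * np.max(np.abs(S)) ** 2
H = W * np.conj(S) / (np.abs(S) ** 2 + eps)

# Filter response to the waveform (ideally the inverse transform of W, with
# low sidelobes) versus the plain matched-filter response.
y_sif = np.fft.ifft(H * S)
y_mf = np.fft.ifft(np.conj(S) * S) / N
```

Because the shaped spectrum W·|S|²/(|S|²+eps) is real and nonnegative, the SIF response peaks at zero lag by construction; the choice of W and eps trades sidelobe level against mismatch loss.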
Filamentous fungi can synthesize a variety of nanoparticles (NPs) through a process referred to as mycosynthesis, which requires little energy input, does not require the use of harsh chemicals, occurs at near-neutral pH, and does not produce toxic byproducts. While NP synthesis involves reactions between metal ions and exudates produced by the fungi, the chemical and biochemical parameters underlying this process remain poorly understood. Here, the roles of fungal species and precursor salt in the mycosynthesis of zinc oxide (ZnO) NPs are investigated. These data demonstrate that all five fungal species tested are able to produce ZnO structures that can be morphologically classified into i) well-defined NPs, ii) coalesced/dissolving NPs, and iii) micron-sized square plates. Further, species-dependent preferences for these morphologies are observed, suggesting potential differences in the profile or concentration of the biochemical constituents in their individual exudates. These data also demonstrate that mycosynthesis of ZnO NPs is independent of the anion species, with nitrate, sulfate, and chloride showing no effect on NP production. Finally, these results enhance the understanding of factors controlling the mycosynthesis of ceramic NPs, supporting future studies that can enable control over the physical and chemical properties of NPs formed through this “green” synthesis method.
As the electric grid becomes increasingly cyber-physical, it is important to characterize its inherent cyber-physical interdependencies and explore how that characterization can be leveraged to improve grid operation. It is crucial to investigate what data features are transferred at the system boundaries, how disturbances cascade between the systems, and how planning and/or mitigation measures can leverage that information to increase grid resilience. In this paper, we explore several numerical analysis and graph decomposition techniques that may be suitable for modeling these cyber-physical system interdependencies and for understanding their significance. An augmented WSCC 9-bus cyber-physical system model is used as a small use case to assess these techniques and their ability to characterize different events within the cyber-physical system. These initial results are then analyzed to formulate a high-level approach for characterizing cyber-physical interdependencies.
In this work, we introduce several methods for determining the horizon profile at a PV site and compare their results, use cases, and limitations. The methods in this paper include horizon detection from time-series irradiance or performance data, modeling from GIS topography data, manual theodolite measurements, and camera-based horizon detection. We compare various combinations of these methods using data from four Regional Test Center sites in the US and three World Bank sites in Nepal. The results show many differences between these methods, and we recommend the most practical solutions for various use cases.
The widespread adoption of residential solar PV requires distribution system studies to ensure that the addition of solar PV at a customer location does not violate the system constraints, a limit that can be referred to as the locational hosting capacity (HC). These model-based analyses are prone to error due to their dependence on the accuracy of the system information. Model-free approaches to estimating the solar PV hosting capacity for a customer can be a good alternative because their accuracy does not depend on detailed system information. In this paper, an Adaptive Boosting (AdaBoost) algorithm is deployed that utilizes the statistical properties (mean, minimum, maximum, and standard deviation) of the customer's historical data (real power, reactive power, voltage) as inputs to estimate the voltage-constrained PV HC for the customer. A baseline comparison approach is also built that utilizes just the maximum voltage of the customer to predict PV HC. The results show that the ensemble-based AdaBoost algorithm outperformed the baseline approach. The developed methods are also compared with and validated against existing state-of-the-art model-free PV HC estimation methods.
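The regressor inputs described above can be sketched as follows. The AMI readings here are synthetic stand-ins (the interval length, units, and value ranges are assumptions), but the feature construction mirrors the text: the mean, minimum, maximum, and standard deviation of each customer's P, Q, and V history, with the customer's maximum voltage alone serving as the baseline feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AMI history for one customer: 15-minute readings over a week
# (672 samples) of real power P (kW), reactive power Q (kvar), voltage V (pu).
n = 672
ami = {
    "P": rng.uniform(0.2, 5.0, n),
    "Q": rng.uniform(-0.5, 1.5, n),
    "V": rng.normal(1.0, 0.01, n),
}

def feature_vector(ami):
    """Statistical features used as regressor inputs: mean, min, max, and
    standard deviation of each measured quantity (12 features per customer)."""
    feats = []
    for key in ("P", "Q", "V"):
        x = ami[key]
        feats += [x.mean(), x.min(), x.max(), x.std()]
    return np.array(feats)

x = feature_vector(ami)

# The baseline described in the text uses only the customer's maximum voltage.
baseline_feature = ami["V"].max()
```

A feature matrix stacking one such vector per customer, paired with known HC values, is what an ensemble regressor like AdaBoost would be trained on.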
A large-scale numerical computation of five wind farms was performed as a part of the American WAKE experimeNt (AWAKEN). This high-fidelity computation used the ExaWind/AMR-Wind LES solver to simulate a 100 km × 100 km domain containing 541 turbines under unstable atmospheric conditions matching previous measurements. The turbines were represented by Joukowski and OpenFAST coupled actuator disk models. Results of this qualitative comparison illustrate the interactions of wind farms with large-scale ABL structures in the flow, as well as the extent of downstream wake penetration in the flow and blockage effects around wind farms.
Sausan, Sarah; Judawisastra, Luthfan H.; Su, Jiann-Cherng; Horne, Roland
This paper presents the ongoing development of a wireline tool designed to detect and quantify inflows from feed zones in geothermal wells based on measurement of chloride. The tool aims to characterize stimulation events in Enhanced Geothermal Systems (EGS) wells at Utah FORGE (Frontier Observatory for Research in Geothermal Energy) and other EGS sites. Successful development of the chloride tool would greatly improve production monitoring of the fractures and enable proactive prescription of additional stimulations over the life of the field, thus helping to improve EGS commercial feasibility. The recent development of the chloride tool involves an ion-specific electrode (ISE) probe and a reference electrode, assembled through a labor-intensive process and designed to withstand downhole conditions for field deployment. Through laboratory experiments and numerical simulations, the tool demonstrated efficacy in identifying changes in chloride concentration, indicating its utility in feed zone detection. However, the impact of impedance on voltage measurements and discrepancies between laboratory and simulation results presented opportunities for further refinement. Notably, simulation results consistently underestimated the actual chloride concentration by 30-40%, suggesting the need for compensatory calibration. Comparisons between different simulation software packages indicated that ANSYS was more accurate in replicating key features observed in laboratory experiments. Moreover, a machine learning (ML) approach was used to improve feed zone location detection and inflow rate measurement, utilizing Random Forest and Light Gradient Boosting Machine (LGBM) models, which delivered high performance scores. Thus, the chloride tool's recent development and integration with machine learning approaches offer promising advances in feed zone identification and quantification.
Kolmogorov's theory of turbulence assumes that the small-scale turbulent structures in the energy cascade are universal and are determined by the energy dissipation rate and the kinematic viscosity alone. However, thermal fluctuations, absent from the continuum description, terminate the energy cascade near the Kolmogorov length scale. Here, we propose a simple superposition model to account for the effects of thermal fluctuations on small-scale turbulence statistics. For compressible Taylor-Green vortex flow, we demonstrate that the superposition model in conjunction with data from direct numerical simulation of the Navier-Stokes equations yields spectra and structure functions that agree with the corresponding quantities computed from the direct simulation Monte Carlo method of molecular gas dynamics, verifying the importance of thermal fluctuations in the dissipation range.
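In outline, the superposition model augments the continuum (Navier-Stokes) spectrum with a thermal-fluctuation contribution. The following is a schematic form only; the symbols and the equipartition prefactor \( c \) are assumptions here, with the exact expression given in the paper:

\[
E_{\mathrm{model}}(k) = E_{\mathrm{NS}}(k) + E_{\mathrm{th}}(k),
\qquad
E_{\mathrm{th}}(k) \propto \frac{k_B T}{\rho}\, k^{2},
\]

so the thermal term, negligible at large scales, overtakes the rapidly decaying turbulent spectrum near the Kolmogorov scale, consistent with the dissipation-range behavior seen in the molecular (DSMC) computations.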
Quantum computing testbeds exhibit high-fidelity quantum control over small collections of qubits, enabling performance of precise, repeatable operations followed by measurements. Currently, these noisy intermediate-scale devices can support a sufficient number of sequential operations prior to decoherence such that near-term algorithms can be performed with approximate accuracy (such as chemical accuracy for quantum chemistry problems). While the results of these algorithms are imperfect, these imperfections can help bootstrap quantum computer testbed development. Demonstrations of these algorithms over the past few years, coupled with the idea that imperfect algorithm performance can be caused by several dominant noise sources in the quantum processor (which can be measured and calibrated during algorithm execution or in post-processing), have led to the use of noise mitigation to improve typical computational results. Conversely, benchmark algorithms coupled with noise mitigation can help diagnose the nature of the noise, whether systematic or purely random. Here, we outline the use of coherent noise mitigation techniques as a characterization tool in trapped-ion testbeds. We perform model fitting of the noisy data to determine the noise source based on realistic physics-focused noise models and demonstrate that systematic noise amplification coupled with error mitigation schemes provides useful data for noise model deduction. Further, in order to connect lower-level noise model details with the application-specific performance of near-term algorithms, we experimentally construct the loss landscape of a variational algorithm under various injected noise sources coupled with error mitigation techniques. This type of connection enables application-aware hardware co-design, in which the most important noise sources in specific applications, like quantum chemistry, become foci of improvement in subsequent hardware generations.
Austenitic stainless steels have been extensively tested in hydrogen environments; however, limited information exists on the effects of hydrogen on the fatigue life of high-strength grades of austenitic stainless steels. Moreover, fatigue life testing of finished product forms (such as tubing and welds) is challenging. A novel test method for evaluating the influence of internal hydrogen on the fatigue of orbital tube welds was previously reported, in which a cross hole in a tubing specimen is used to establish a stress concentration analogous to circumferentially notched bar fatigue specimens for constant-load, axial fatigue testing. In that study (Kagay et al., ASME PVP2020-8576), annealed 316L tubing with a cross hole displayed fatigue performance similar to that of more conventional materials test specimens. A similar cross-hole tubing geometry is adopted here to evaluate the fatigue crack initiation and fatigue life of XM-19 austenitic stainless steel with a high concentration of internal hydrogen. XM-19 is a nitrogen-strengthened Fe-Cr-Ni-Mn austenitic stainless steel that offers higher strength than conventional 3XX series stainless steels. A uniform hydrogen concentration in the test specimen is achieved by thermal precharging (exposure to high-pressure hydrogen at elevated temperature for two weeks) prior to testing in air to simulate the equilibrium hydrogen concentration near a stress concentration in gaseous hydrogen service. Specimens are also instrumented for direct current potential difference measurements to identify crack initiation. After accounting for the strengthening associated with thermal precharging, the fatigue crack initiation and fatigue life of XM-19 tubing were virtually unchanged by internal hydrogen.
Two-dimensional (2D) layered oxides have recently attracted wide attention owing to the strong coupling among charge, spin, lattice, and strain, which allows great flexibility and opportunities in structure design as well as multifunctionality exploration. In parallel, plasmonic hybrid nanostructures exhibit exotic localized surface plasmon resonance (LSPR), providing a broad range of applications in nanophotonic devices and sensors. A hybrid material platform combining unique multifunctional 2D layered oxides and plasmonic nanostructures takes optical tuning to a new level. In this work, a novel self-assembled Bi2MoO6 (BMO) 2D layered oxide incorporating plasmonic Au nanoinclusions has been demonstrated via a one-step pulsed laser deposition (PLD) technique. Comprehensive microstructural characterizations, including scanning transmission electron microscopy (STEM), differential phase contrast (DPC) imaging, and STEM tomography, have demonstrated the high epitaxial quality and particle-in-matrix morphology of the BMO-Au nanocomposite film. DPC-STEM imaging clarifies the magnetic domain structures of the BMO matrix. Three different BMO structures, including layered supercell (LSC) and superlattice variants, have been revealed, which is attributed to the variable strain states throughout the BMO-Au film. Owing to the combination of plasmonic Au and the layered structure of BMO, the nanocomposite film exhibits a typical LSPR in the visible wavelength region and strong anisotropy in its optical and ferromagnetic properties. This study opens a new avenue for developing novel 2D layered complex oxides incorporating plasmonic metal or semiconductor phases, showing great potential for applications in multifunctional nanoelectronic devices.
High penetrations of residential solar PV can cause voltage issues on low-voltage (LV) secondary networks. Distribution utility planners often utilize model-based power flow solvers to address these voltage issues and accommodate more PV installations without disrupting the customers already connected to the system. These model-based analyses are computationally expensive and often prone to error. In this paper, two novel deep learning-based model-free algorithms are proposed that can predict the change in voltage for PV installations without any inherent network information about the system. These algorithms use only the real power (P), reactive power (Q), and voltage (V) data from Advanced Metering Infrastructure (AMI) to calculate the change in voltage for an additional PV installation at any customer location in the LV secondary network. Both algorithms are tested on three datasets from two feeders and compared to conventional model-based methods and existing model-free methods. The proposed methods are also applied to estimate the locational PV hosting capacity for both feeders and show better accuracy than an existing model-free method. Results show that data filtering or pre-processing can improve model performance if the testing data point exists in the training dataset used for that model.
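The model-free idea can be sketched schematically. This is not the paper's deep learning architectures: ordinary least squares is swapped in here as a transparent stand-in, and the data, coefficients, and units are invented purely for illustration. The structure is the same, though: fit a map from AMI quantities (P, Q, V) to the observed voltage change, then predict the change in voltage for a prospective PV addition at a customer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: for each observed customer/time sample, AMI
# features (P, Q, V) and the measured voltage change dV after a PV addition.
# dV is generated from an assumed linear ground truth to make the sketch run.
n = 500
X = np.column_stack([
    rng.uniform(0.0, 5.0, n),      # real power P (kW)
    rng.uniform(-1.0, 1.0, n),     # reactive power Q (kvar)
    rng.normal(1.0, 0.01, n),      # voltage V (pu)
])
true_w = np.array([0.004, -0.002, 0.1])
dV = X @ true_w + rng.normal(0.0, 1e-4, n)

# Ordinary least squares as a stand-in for the deep models in the paper:
# fit weights (plus a bias term) mapping AMI features to the voltage change.
Xb = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(Xb, dV, rcond=None)

# Predict the voltage change for a new customer operating point.
x_new = np.array([2.5, 0.1, 1.0, 1.0])
dV_pred = x_new @ w
```

Replacing the least-squares fit with a trained neural network changes only the regressor; the model-free character (no feeder model, only AMI data) is unchanged.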