The Integrated Tiger Series (ITS) generates a database containing energy deposition data. These data, even when stored in an Exodus file, are not typically in a form suitable for finite element analysis within Sierra Mechanics. The its2sierra tool maps data from the ITS database onto the Sierra database.
NasGen provides a path for migration of structural models from NASTRAN bulk data format (BDF) into both an Exodus mesh file and an ASCII input file for Sierra Structural Dynamics (Salinas) and Solid Mechanics (Presto). Many tools at Sandia National Labs (SNL) use the Exodus format. NasGen was written specifically for Salinas and Presto but should be usable with a number of these packages.
This work presents a 3D quantum mechanics based model to address the physics at band structure crossing/anti-crossing points in full band Monte Carlo (FBMC) simulations. The model solves the Krieger and Iafrate (KI) equations in real time using pre-computed coefficients at k-points spatially sampled within the first Brillouin zone. Solving the KI equations in real time makes this model applicable for all electric fields, which enables its use in FBMC device simulations. In this work, a two-level refinement scheme is used to aggressively sample regions in proximity to band crossings for accurate solutions to the KI equations and coarsely sample everywhere else to limit the number of k-points used. The presented sampling method is demonstrated on the band structure of silicon but is effective for the band structure of any semiconductor material. Next, the adaptation of the fully quantum KI model into the semi-classical FBMC method is discussed. Finally, FBMC simulations of hole transport in 4H silicon carbide with and without the KI model are performed. Results along different crystallographic directions for a wide range of electric fields are compared to previously published simulation and experimental values.
Efficient prediction of sampling-intensive thermodynamic properties is needed to evaluate material performance and permit high-throughput materials modeling for a diverse array of technology applications. To alleviate the prohibitive computational expense of high-throughput configurational sampling with density functional theory (DFT), surrogate modeling strategies like cluster expansion offer many orders of magnitude greater efficiency but can be difficult to construct in systems with high compositional complexity. We therefore employ minimal-complexity graph neural network models that accurately predict formation energies of DFT-relaxed structures from an ideal (unrelaxed) crystallographic representation, even extrapolating to structures outside the training distribution. This enables the large-scale sampling necessary for various thermodynamic property predictions that may otherwise be intractable and can be achieved with small training data sets. Two exemplars, optimizing the thermodynamic stability of low-density high-entropy alloys and modulating the plateau pressure of hydrogen in metal alloys, demonstrate the power of this approach, which can be extended to a variety of materials discovery and modeling problems.
We present a high-level architecture for how artificial intelligences might advance and accumulate scientific and technological knowledge, inspired by emerging perspectives on how human intelligences advance and accumulate such knowledge. Agents advance knowledge by exercising a technoscientific method—an interacting combination of scientific and engineering methods. The technoscientific method maximizes a quantity we call “useful learning” via more-creative implausible utility (including the “aha!” moments of discovery), as well as via less-creative plausible utility. Society accumulates the knowledge advanced by agents so that other agents can incorporate it and build on it to make further advances. The proposed architecture is challenging but potentially complete: its execution might in principle enable artificial intelligences to advance and accumulate an equivalent of the full range of human scientific and technological knowledge.
Both shock and shockless compression experiments were performed on laser powder bed fusion (LPBF) Ti-5Al-5V-5Mo-3Cr (Ti-5553) to peak compressive stresses near 15 GPa. Experiments were performed on the as-built material, containing a purely β (body centered cubic) microstructure, and two differing heat treatments resulting in a dual phase α (hexagonal close packed) and β microstructure. The Hugoniot, Hugoniot elastic limit (HEL), and spallation strength were measured and compared to wrought Ti-6Al-4V (Ti-64). The results indicate the LPBF Ti-5553 Hugoniot response is similar between heat treatments and to Ti-64. The HEL stress observed in the LPBF Ti-5553 was considerably higher than Ti-64, with the as-built, fully β alloy exhibiting the largest values. The spallation strength of the LPBF Ti-5553 was also similar to Ti-64. Clear evidence of initial porosity serving as initiation sites for spallation damage was observed when comparing computed tomography measurements before and after loading. Post-mortem scanning electron microscopy images of the recovered spallation samples showed no evidence of retained phase changes near the spall plane. The spall plane was found to have kinks aligned with the loading direction near areas with large concentrations of twin-like, crystallographic defects in the as-built condition. For the heat-treated samples, the concentrations of twin-like, crystallographic defects were absent, and no preference for failure at the interface between the α and β phases was observed.
In lithium-metal batteries, grains of lithium can become electrically isolated from the anode, lowering battery performance. Experiments reveal that rest periods after battery discharge might help to solve this problem.
Replacement of conventional petroleum fuels with renewable fuels reduces net emissions of carbon and greenhouse gases, and affords opportunities for increased domestic energy security. Here, we present alkyl dialkoxyalkanoates (or DAOAs) as a family of synthetic diesel and marine fuel candidates that feature ester and ether functionality. These compounds employ pyruvic acid and fusel alcohols as precursors, which are widely available as metabolic intermediates at high titer and yield. DAOA synthesis proceeds in high yield using a simple, mild chemical transformation performed under air that employs bioderived and/or easily recovered reagents and solvent. The scalability of the synthetic protocol was proven in continuous flow with in situ azeotropic water removal, yielding 375 g of isolated product. Chemical stability of DAOAs against aqueous 0.01 M H2SO4 and accelerated oxidative conditions is demonstrated. The isolated DAOAs were shown to meet or exceed widely accepted technical criteria for sustainable diesel fuels. In particular, butyl 2,2-dibutoxypropanoate (DAOA-2) has indicated cetane number 64, yield soot index 256 YSI per kg, lower heating value 30.9 MJ kg−1 and cloud point < −60 °C and compares favorably to corresponding values for renewable diesel, biodiesel and petroleum diesel.
Pyrimidine has two in-plane CH(δ+)/N̈(δ–)/CH(δ+) binding sites that are complementary to the (δ–/2δ+/δ–) quadrupole moment of CO2. For this study, we recorded broadband microwave spectra over the 7.5–17.5 GHz range for pyrimidine-(CO2)n with n = 1 and 2 formed in a supersonic expansion. Based on fits of the rotational transitions, including nuclear hyperfine splitting due to the two ¹⁴N nuclei, we have assigned 313 hyperfine components across 105 rotational transitions for the n = 1 complex and 208 hyperfine components across 105 rotational transitions for the n = 2 complex. The pyrimidine-CO2 complex is planar, with CO2 occupying one of the quadrupolar binding sites, forming a structure in which the CO2 is stabilized in the plane by interactions with the C–H hydrogens adjacent to the nitrogen atom. This structure is closely analogous to that of the pyridine-CO2 complex studied previously (Doran, J. L. J. Mol. Struct. 2012, 1019, 191–195). The fit to the n = 2 cluster gives rotational constants consistent with a planar cluster of C2v symmetry in which the second CO2 molecule binds in the second quadrupolar binding pocket on the opposite side of the ring. The calculated total binding energy in pyrimidine-CO2 is –13.7 kJ mol⁻¹, including corrections for basis set superposition error and zero-point energy, at the CCSD(T)/6-311++G(3df,2p) level, while that in pyrimidine-(CO2)2 is almost exactly double that value, indicating little interaction between the two CO2 molecules in the two binding sites. The enthalpy, entropy, and free energy of binding are also calculated at 300 K within the harmonic oscillator/rigid-rotor model. This model is shown to lack quantitative accuracy when applied to the formation of weakly bound complexes.
Accurately modeling large biomolecules such as DNA from first principles is fundamentally challenging due to the steep computational scaling of ab initio quantum chemistry methods. This limitation becomes even more prominent when modeling biomolecules in solution due to the need to include large numbers of solvent molecules. We present a machine-learned electron density model based on a Euclidean neural network framework that includes a built-in understanding of equivariance to model explicitly solvated double-stranded DNA. By training the machine learning model using molecular fragments that sample the key DNA and solvent interactions, we show that the model predicts electron densities of arbitrary systems of solvated DNA accurately, resolves polarization effects that are neglected by classical force fields, and captures the physics of the DNA-solvent interaction at the ab initio level.
As ferroelectric hafnium zirconium oxide (HZO) becomes more widely utilized in ferroelectric microelectronics, the integration impacts of intentional and nonintentional dielectric interfaces and their effects upon ferroelectric film wake-up (WU) and circuit parameters become important to understand. In this work, the effect of adding a linear dielectric, aluminum oxide (Al2O3), below a ferroelectric Hf0.58Zr0.42O2 film in a capacitor structure with niobium nitride (NbN) electrodes for FeRAM applications was measured. Depolarization fields resulting from the linear dielectric are observed to induce a reduction of the remanent polarization of the ferroelectric. Addition of the aluminum oxide also impacts the WU of the HZO with respect to the cycling voltage applied. Intricately linked to the design of a FeRAM 1C/1T cell, the metal-ferroelectric-insulator-metal (MFIM) devices are observed to significantly shift charge related to the read states based on aluminum oxide thickness and WU cycling voltage. A 33% reduction in the separation of read states is measured, which complicates how a memory cell is designed and illustrates the importance of clean interfaces in devices.
Chromium self-diffusion through stainless steel (SS) matrix and along grain boundaries is an important mechanism controlling SS structural materials corrosion. Cr diffusion in austenitic SS was simulated using canonical ab initio molecular dynamics with realistic models of type-316 SS bulk, with and without Cr vacancies, and a low-energy Σ3 twin boundary typically observed at active corrosion sites. Cr self-diffusion coefficients at 750 and 850 °C calculated using Einstein's diffusion equation are 4.2 × 10−6 and 8.1 × 10−6 Å2 ps−1 in pristine bulk, 3.8 × 10−3 and 5.5 × 10−3 Å2 ps−1 in bulk including Cr vacancies, and 9.5 × 10−2 and 1.0 × 10−1 Å2 ps−1 at a Σ3[1 1 1]60° twin boundary.
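For context, the diffusion coefficients quoted above follow from Einstein's relation, which in three dimensions gives D = MSD(t)/(6t). The sketch below is a generic illustration of that analysis (not the authors' actual workflow), assuming unwrapped Cr coordinates from the trajectory in a NumPy array.

```python
import numpy as np

def einstein_diffusion_coefficient(positions, dt_ps):
    """Estimate D (in A^2/ps) from unwrapped atomic positions.

    positions: array of shape (n_frames, n_atoms, 3), in Angstroms.
    dt_ps: time between saved frames, in picoseconds.
    Uses the 3D Einstein relation MSD(t) = 6*D*t.
    """
    disp = positions - positions[0]              # displacement from t = 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)   # average over atoms
    t = np.arange(len(msd)) * dt_ps
    start = len(t) // 4                          # skip the early ballistic regime
    slope = np.polyfit(t[start:], msd[start:], 1)[0]
    return slope / 6.0
```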
Ignition of solid materials by very high heat fluxes (>200 kW/m²) is differentiated from more common lower-flux ignition because the required total energy input can be lower and the process is much faster. Prior work has characterized ignition thresholds via thermal properties of the solids, flux, and fluence. The historical data, however, do not provide similar focus on the initiation of pyrolysis. The initiation of pyrolysis is of key relevance because it represents an absolute threshold below which the probability of ignition is zero. It is also a metric of potentially higher reliability for assessing material response, because surface material properties such as absorptivity, conductivity, and density tend to change upon initial pyrolysis due to charring or other transformations. Recent data from concentrated solar flux for a variety of materials and exposures are analyzed here to explore the nature of trends and thresholds for the onset of pyrolysis at high heat flux. This work evaluates initiation threshold data and provides a theoretical technique for further model development. The technique appears to be functionally appropriate for evaluating trends to aid in predicting material response to high-flux exposures.
Detecting changepoints in functional data has become an important problem as interest in monitoring climate phenomena has increased, where the data are functional in nature. The observed data often contain both amplitude (y-axis) and phase (x-axis) variability. If not accounted for properly, true changepoints may go undetected, and the estimated underlying mean change functions will be incorrect. In this article, an elastic functional changepoint method is developed which properly accounts for these types of variability. The method can detect amplitude and phase changepoints which current methods in the literature do not, as they focus solely on the amplitude changepoint. The method can easily be implemented using the functions directly or can be computed via functional principal component analysis to ease the computational burden. We apply the method and its nonelastic competitors to both simulated and observed data to show its efficiency in handling data with phase variation and both amplitude and phase changepoints. We use the method to evaluate potential changes in stratospheric temperature due to the eruption of Mt. Pinatubo in the Philippines in June 1991. Using an epidemic changepoint model, we find evidence of an increase in stratospheric temperature during a period that contains the immediate aftermath of Mt. Pinatubo, with most detected changepoints occurring in the tropics, as expected.
The degree to which a realistic inflow turbulent boundary layer (TBL) influences transient and mean large-scale pool fire quantities of interest (QoIs) is numerically investigated. High-fidelity, low-Mach large-eddy simulations that activate low-dissipation, unstructured numerics are conducted using an unsteady flamelet combustion modeling approach with multiphysics coupling to soot and participating media radiation transport. Three inlet profile configurations are exercised for a large-scale, high-aspect-ratio rectangular pool oriented perpendicular to the flow direction: a time-varying TBL inflow profile obtained from a periodic precursor simulation, the time-mean of the transient TBL, and a steady power-law inflow profile that replicates the mean TBL crosswind velocity of 10.0 m/s at a vertical height of 10 m. Results include both qualitative transient flame evolution and quantitative flame shape with ground-level temperature and convective/radiative heat flux profiles. While transient fire events driven by burst-sweep TBL coupling, such as blow-off and reattachment, are vastly different in the TBL case (contributing to increased root mean square QoI fluctuation prediction and disparate flame lengths), mean surface QoI magnitudes are similar. Quadrant analysis demonstrates that the TBL configuration modifies burst-sweep phenomena at windward pool locations, while leeward recovery is found. Positive fluctuations of convective heat flux correlate with fast-moving fluid directed away from the pool surface due to intermittent combustion events.
Cast Monel alloys are used in many industrial applications that require a combination of good mechanical properties and excellent resistance to corrosion. Despite relative widespread use, there has been limited prior research investigating the fundamental composition–structure–property relationships. In this work, microstructural characterization, thermal analysis, electron probe microanalysis, tensile testing, and Varestraint testing were used to assess the effects of variations in nominal composition on the solidification path, microstructure, mechanical properties, and solidification cracking susceptibility of cast Monel alloys. It was found that Si segregation caused the formation of silicides at the end of solidification in grades containing at least 3 wt pct Si. While increases to Si content led to significant improvements in strengthening due to the precipitation of β1-Ni3Si, the silicide eutectics acted as crack nucleation sites during tensile loading which severely reduced ductility. The solidification cracking susceptibility of low-Si Monel alloys was found to be relatively low. However, increases to Si concentration and the onset of associated eutectic reactions increased the solidification temperature range and drastically reduced cracking resistance. Increases in the Cu and Mn concentrations were found to reduce the solubility limit of Si in austenite which promoted additional eutectic formation and exacerbated the reductions in ductility and/or weldability.
Perovskite solar cells (PSCs) are emerging photovoltaic (PV) technologies capable of matching the power conversion efficiencies (PCEs) of current PV technologies in the market at lower manufacturing costs, making perovskite solar modules (PSMs) cost competitive if they can be manufactured at scale and perform with minimal degradation. The PSCs with the highest PCEs to date are lead halide perovskites. Lead presents potential environmental and human health risks if PSMs are to be commercialized, as the lead in PSMs is more soluble in water compared to other PV technologies. Therefore, prior to commercialization of PSMs, it is important to highlight, identify, and establish the potential environmental and human health risks of PSMs as well as develop methods for assessing those risks. Here, we identify and discuss a variety of international standards, U.S. regulations, and permits applicable to PSM deployment that relate to the potential environmental and human health risks associated with PSMs. The potential risks of lead and other hazardous material exposures to humans and the environment are outlined, spanning water quality, air quality, human health, wildlife, land use, and soil contamination, followed by examples of how developers of other PV technologies have navigated human health and environmental risks previously. Potential experimentation, methodology, and research efforts are proposed to elucidate and characterize potential lead leaching risks and concerns pertaining to fires, in-field module damage, and sampling and leach testing of PSMs at end of life. Lastly, lower technology readiness level solutions to mitigate lead leaching, currently being explored for PSMs, are discussed. PSMs have the potential to become a cost-competitive PV technology for the solar industry, and taking steps toward understanding, identifying, and creating solutions to mitigate potential environmental and human health risks will aid in improving their commercial viability.
The rate of electric vehicle (EV) adoption, powered by the Li-ion battery, has grown exponentially, driven largely by technological advancements, consumer demand, and global initiatives to reduce carbon emissions. As a result, it is imperative to understand the state of stability (SoS) of the cells inside an EV battery pack. That understanding will enable warning of, or protection against, catastrophic failures that can lead to serious injury or even loss of life. The present work explores rapid electrochemical impedance spectroscopy (EIS) coupled with gas sensing technology as diagnostics to monitor cells and packs for failure markers. These failure markers can then be used for onboard assessment of SoS. Experimental results explore key changes in single cells and packs undergoing thermal or electrical abuse. Rapid EIS showed the longest warning times, followed by volatile organic compound (VOC) sensors and then H2 sensors. While rapid EIS gives the longest warning time, with the failure marker often appearing before the cell vents, the reliability of identifying impedance changes in single cells within a pack decreases as pack complexity increases. This provides empirical evidence for the significant role that cell packaging and battery engineering intricacies play in monitoring the SoS.
A reduction in wake effects in large wind farms through wake-aware control has considerable potential to improve farm efficiency. This work examines the success of several emerging, empirically derived control methods that modify wind turbine wakes (i.e., the pulse method, helix method, and related methods) based on forcing at Strouhal numbers on the order of 10⁻¹. Drawing on previous work in the literature for jet and bluff-body flows, the analyses leverage the normal-mode representation of wake instabilities to characterize the large-scale wake meandering observed in actuated wakes. Idealized large-eddy simulations (LES) using an actuator-line representation of the turbine blades indicate that the n = 0 and n = 1 modes, which correspond to the pulse and helix forcing strategies, respectively, have faster initial growth rates than higher-order modes, suggesting these lower-order modes are more appropriate for wake control. Exciting these lower-order modes with periodic pitching of the blades produces increased modal growth, higher entrainment into the wake, and faster wake recovery. Modal energy gain and the entrainment rate both increase with streamwise distance from the rotor until the intermediate wake. This suggests that the wake meandering dynamics, which share close ties with the relatively well-characterized meandering dynamics in jet and bluff-body flows, are an essential component of the success of wind turbine wake control methods. A spatial linear stability analysis is also performed on the wake flows and yields insights on the modal evolution. In the context of the normal-mode representation of wake instabilities, these findings represent the first examination in the literature of the characteristics of wake meandering stemming from intentional Strouhal-timed wake actuation, and they help guide ongoing work to understand the fluid-dynamic origins of the success of the pulse, helix, and related methods.
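As a hedged illustration of the forcing strategies named above, the sketch below builds blade-pitch commands for mode-n actuation at a prescribed Strouhal number: n = 0 pitches all blades together (pulse-style), while n = 1 phases each blade by its azimuth (helix-style). The function and parameter names are hypothetical, and real pulse/helix implementations (e.g., via multiblade coordinate transforms in the rotating frame) differ in detail.

```python
import numpy as np

def pitch_signals(t, n_mode, amp_deg, St, U_inf, D, n_blades=3, psi=None):
    """Illustrative blade-pitch commands (deg) for Strouhal-timed forcing.

    n_mode = 0 gives collective 'pulse'-style forcing; n_mode = 1 gives
    'helix'-style forcing with a per-blade azimuthal phase offset.
    The forcing frequency follows from the Strouhal number: f = St * U_inf / D.
    psi: blade azimuth angles in radians (defaults to equally spaced).
    """
    f = St * U_inf / D                           # forcing frequency, Hz
    if psi is None:
        psi = 2 * np.pi * np.arange(n_blades) / n_blades
    # mode-n forcing: blade at azimuth psi_b receives phase n * psi_b
    return np.array([amp_deg * np.sin(2 * np.pi * f * t + n_mode * p)
                     for p in psi])

# example: helix-style (n = 1) forcing at St = 0.3 for a 10 m/s, 126 m rotor
t = np.linspace(0.0, 120.0, 2000)
signals = pitch_signals(t, n_mode=1, amp_deg=2.0, St=0.3, U_inf=10.0, D=126.0)
```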
Yraguen, Boni F.; Steinberg, Adam M.; Nilsen, Christopher W.; Biles, Drummond E.; Mueller, Charles J.
Ducted fuel injection (DFI) is a strategy to improve fuel/charge-gas mixing in direct-injection compression-ignition engines. DFI involves injecting fuel along the axis of a small tube in the combustion chamber, which promotes the formation of locally leaner mixtures in the autoignition zone relative to conventional diesel combustion. Previous work has demonstrated that DFI is effective at curtailing engine-out soot emissions across a wide range of operating conditions. This study extends previous investigations, presenting engine-out emissions and efficiency trends between ducted two-orifice and ducted four-orifice injector tip configurations. For each configuration, parameters investigated include injection pressure, injection duration, intake manifold pressure, intake manifold temperature, start of combustion timing, and intake-oxygen mole fraction. For both configurations and across all parameters, DFI reduced engine-out soot emissions compared to conventional diesel combustion, with little effect on other emissions and engine efficiency. Emissions trends for both configurations were qualitatively the same across the parameters investigated. The four-duct configuration had higher thermal efficiency and indicated-specific engine-out nitrogen oxide emissions but lower indicated-specific engine-out hydrocarbon and carbon monoxide emissions than the two-duct assembly. Both configurations achieved indicated-specific engine-out emissions for both soot and nitrogen oxides that comply with current on- and off-road heavy-duty regulations in the United States without exhaust-gas aftertreatment at an intake-oxygen mole fraction of 12%. High-speed in-cylinder imaging of natural soot luminosity shows that some conditions include a second soot-production phase late in the cycle. The probability of these late-cycle events is sensitive to both the number of ducted sprays and the operating conditions.
Recent findings suggest that ions are strongly correlated in atmospheric pressure plasmas if the ionization fraction is sufficiently high (≳10⁻⁵). A consequence is that ionization causes disorder-induced heating (DIH), which triggers a significant rise in ion temperature on a picosecond timescale. This is followed by a rise in the neutral gas temperature on a longer timescale of up to nanoseconds due to ion-neutral temperature relaxation. The sequence of DIH and ion-neutral temperature relaxation suggests a new mechanism for ultrafast neutral gas heating. Previous work considered only the case of an instantaneous ionization pulse, whereas the ionization pulse extends over nanoseconds in many experiments. Here, molecular dynamics simulations are used to analyze the evolution of ion and neutral gas temperatures for a gradual ionization over several nanoseconds. The results are compared with published experimental results from a nanosecond pulsed discharge, showing good agreement with a measurement of fast neutral gas heating.
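A minimal two-temperature sketch conveys the ion-neutral relaxation stage described above, assuming a single energy relaxation rate and an energy balance weighted by the ionization fraction. This is an illustrative reduced model with made-up parameter values, not the molecular dynamics treatment used in the work.

```python
import numpy as np
from scipy.integrate import solve_ivp

def relax(t, T, nu, x_ion):
    """Two-temperature ion-neutral relaxation toward equilibrium.

    T = [T_ion, T_gas]; nu is the ion-neutral energy relaxation rate
    (per ns) and x_ion the ionization fraction, which weights the
    energy balance (neutrals vastly outnumber ions).
    """
    Ti, Tn = T
    dTi = -nu * (Ti - Tn)
    dTn = nu * x_ion / (1.0 - x_ion) * (Ti - Tn)
    return [dTi, dTn]

# ions start hot after disorder-induced heating; neutrals near ambient
sol = solve_ivp(relax, (0.0, 5.0), [3000.0, 300.0],
                args=(2.0, 1e-2), dense_output=True)  # t in ns
```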
We can improve network pruning by leveraging the loss-topography extraction techniques used by projective integral updates for variational inference. Low-variance Hessians facilitate more aggressive pruning by providing better loss approximations when a parameter is removed.
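As one concrete (if simplified) reading of this idea, a diagonal Hessian estimate enables the classic second-order saliency ranking for pruning: the loss increase from removing parameter i is approximately ½ H_ii θ_i², so lower-variance Hessian estimates make the ranking more trustworthy. The sketch below is a generic illustration, not the projective-integral-updates implementation.

```python
import numpy as np

def second_order_saliency(theta, hess_diag):
    """Estimate the loss increase from zeroing each parameter.

    A second-order Taylor expansion around the trained weights gives
    delta_L_i ~= 0.5 * H_ii * theta_i**2 when parameter i is removed
    (assuming the gradient term is negligible at a trained optimum).
    """
    return 0.5 * hess_diag * theta ** 2

def prune_mask(theta, hess_diag, keep_fraction=0.5):
    """Keep the parameters whose removal would hurt the loss most."""
    saliency = second_order_saliency(theta, hess_diag)
    cutoff = np.quantile(saliency, 1.0 - keep_fraction)
    return saliency >= cutoff  # True = keep, False = prune
```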
As machine learning models for radioisotope quantification become more powerful, the need for high-quality synthetic training data grows as well. For problem spaces that involve estimating the relative isotopic proportions of various sources in gamma spectra, it is necessary to generate training data that accurately represents the variance of proportions encountered. In this report, we aim to provide guidance on how to target a desired variance of proportions that are randomly generated when using the PyRIID Seed Mixer, which samples from a Dirichlet distribution. We provide a method for properly parameterizing the Dirichlet distribution in order to maintain a constant variance across an arbitrary number of dimensions, where each dimension represents a distinct source template being mixed. We demonstrate that our method successfully parameterizes the Dirichlet distribution to target a specific variance of proportions, provided that several conditions are met. This allows us to follow a principled technique for controlling how random mixture proportions are generated, which are then used downstream in the synthesis process to produce the final, noisy gamma spectra.
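For a symmetric Dirichlet over K source templates with concentration α, each proportion has mean 1/K and variance (K − 1)/(K²(Kα + 1)), which can be inverted to hit a target variance. The sketch below illustrates that inversion with NumPy; the symmetric-distribution assumption and function name are ours, and the actual PyRIID API may differ.

```python
import numpy as np

def symmetric_alpha_for_variance(k, target_var):
    """Concentration alpha of a symmetric Dirichlet over k sources
    such that each mixing proportion has the requested variance.

    For Dirichlet(alpha, ..., alpha): E[X_i] = 1/k and
    Var[X_i] = (k - 1) / (k**2 * (k*alpha + 1)).
    A positive solution requires target_var < (k - 1) / k**2.
    """
    max_var = (k - 1) / k**2
    if not 0 < target_var < max_var:
        raise ValueError(f"target_var must lie in (0, {max_var:.4g})")
    return ((k - 1) / (k**2 * target_var) - 1) / k

# example: proportions for 5 source templates with variance 0.01
alpha = symmetric_alpha_for_variance(5, 0.01)
proportions = np.random.default_rng(0).dirichlet([alpha] * 5, size=1000)
print(alpha, proportions.var(axis=0))  # empirical variances near 0.01
```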
A mini-split cooling system will be used to maintain temperature requirements in a mobile secure transport system. While the split cooling system was designed for static residential or commercial applications, it was selected for this transportation application due to a unique set of security requirements. However, the system's ability to maintain reliability and survive long-term shock and vibration is a significant concern. The mitigation strategy is to select vibration isolation mounts and perform lifetime shock and vibration testing to demonstrate survivability. The goal of this study is to generate a finite element model of the system and perform modal analysis to inform the selection of vibration mounts that minimize the amount of vibrational energy transferred to the split cooling system. The scope of this report is limited to the condensing unit only, and geometric variation of the assembly is not considered.
The U.S. Department of Energy (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and Office of Electricity (OE) commissioned the National Renewable Energy Laboratory (NREL) to develop a method and tool to enable electric utilities to understand and manage the risk of cybersecurity events that can lead to physical effects like blackouts. This tool, called Cyber100 Compass, uses cybersecurity data elicited from cybersecurity experts and incorporates that data into a tool designed to be usable by cybersecurity non-experts who understand the system itself. The tool estimates dollar-valued risks for a current or postulated future electric power digital control configuration, in order to enable utility risk planners to prioritize among proposed cybersecurity risk mitigation options. With the development of the Cyber100 Compass tool for quantification of future cyber-physical security risks, NREL has taken an initial bold step toward enabling, and indeed encouraging, electric utilities to address the potential for cybersecurity incidents to produce detrimental physical effects related to electric power delivery. As part of the Cyber100 Compass development process, DOE funded NREL to seek out an independent technical review of the risk methodology embodied in the tool. NREL requested this review from Sandia National Laboratories and made available to Sandia a late draft of the project report, as well as NREL personnel to provide clarification and respond to questions. This paper provides the results of that independent review activity.
In this report we discuss training a deep learning seismic signal detection model on 3-component stations from the International Monitoring System (IMS) using the PhaseNet architecture. Using 14 years of associated signals from the International Data Centre's (IDC) Late Event Bulletin (LEB), we auto-curated training data consisting of signal windows containing associated arrivals, and noise windows that contain no LEB-associated signals. We trained several models using different waveform window durations (30 seconds and 100 seconds), with and without bandpass filtering. We evaluated the effectiveness of our models using associated signals from the Unconstrained Global Event Bulletin (UGEB) and found that several of our models outperformed the signal detections from the IDC's Selected Event List 3 (SEL3) arrival table. The SEL3 bulletin evaluated on the UGEB dataset with 100-second waveform windows registered a precision and recall of 0.15 and 0.48, respectively, versus 0.19 and 0.59 for our filtered-data model. For the 30-second waveform window dataset, the SEL3 bulletin achieved a precision and recall of 0.31 and 0.47, respectively, versus 0.32 and 0.60 for our filtered-data model. Finally, our models detected signals from all source-to-receiver distances, suggesting it is feasible to use a single PhaseNet model for the IMS network.
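For readers reproducing such metrics, the sketch below shows one common way to score picks against a bulletin: a prediction counts as a true positive if it falls within a tolerance window of an as-yet-unmatched bulletin arrival. The tolerance value and function names are illustrative assumptions, not the report's exact scoring code.

```python
def precision_recall(pred_times, true_times, tol_s=2.0):
    """Match predicted picks to bulletin arrivals within +/- tol_s seconds.

    Each bulletin arrival may be matched by at most one prediction.
    Returns (precision, recall) over the evaluation set.
    """
    true_times = sorted(true_times)
    matched = set()
    tp = 0
    for p in sorted(pred_times):
        for i, t in enumerate(true_times):
            if i not in matched and abs(p - t) <= tol_s:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(pred_times) if pred_times else 0.0
    recall = tp / len(true_times) if true_times else 0.0
    return precision, recall
```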
Binary classification using machine learning is needed to address engineering problems such as identifying passing/failing parts based on measured features from aging hardware. In these classifications, providing the uncertainty of each prediction is essential to support engineering decision making. One popular classifier is the support vector machine (SVM). There are many variations, with the simplest being a linear division between two classes with a hyperplane. Kernel methods can be implemented to separate classes that a hyperplane cannot divide linearly.
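A minimal scikit-learn sketch of the progression just described: from a linear hyperplane to a kernel SVM, with per-prediction uncertainty via Platt-scaled probabilities. The toy data and settings are illustrative only, not the report's actual features or model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# toy stand-in for measured features from pass/fail hardware
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)  # nonlinear boundary

# an RBF kernel separates classes a hyperplane cannot;
# probability=True enables Platt scaling for per-part uncertainty
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True, random_state=0))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))  # class probabilities, not just labels
```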
The On-Line Waste Library is a website that contains information regarding United States Department of Energy-managed high-level waste, spent nuclear fuel, and other wastes that are likely candidates for deep geologic disposal, with links to supporting documents for the data. This report provides supporting information for the data for which an already published source was not available.
While modeling a generic pulse transformer, we became interested in the possibility of electric sparks between winding layers in a solid encapsulant. We significantly modified a previously developed ALEGRA MHD model of a generic spark in Lexan. The cumulative modifications are significant enough to report here. Possibly the most significant modification was a change in how the simulated spark is initiated from a thin initial channel: from imposing an initial hot temperature to imposing a conductivity floor. The reasons and comparisons of results are included. The second significant change was to replace a fixed current rise rate with an external circuit model. We built a model specifically mimicking the distributed inductance and stray capacitance between the coil turns closest to the modeled spark. Excursions from nominal values examine the sensitivity of the resulting behaviors to extreme capacitance and inductance values.
This report documents the results of a long-term (5.79 year) exposure of 4-point bend corrosion test samples in the inlet and outlet vents of four spent nuclear fuel dry storage systems at the Maine Yankee Independent Spent Fuel Storage Installation. The goal of the test was to evaluate the corrosiveness of salt aerosols in a realistic near-marine environment, providing a data set for improved understanding of stress corrosion cracking of spent nuclear fuel dry storage canisters. Examination of the samples after extraction showed minor corrosion was present, mostly on rough-ground surfaces. However, dye penetrant testing showed that no SCC cracks were present. Dust collected on coupons co-located with the corrosion specimens was analyzed by scanning electron microscopy and leached to determine the soluble salts present. The dust was mostly organic material (pollen and stellate trichomes), with lesser detrital mineral grains. Salts present were a mix of sea-salts and continental salts, with chloride dominating the anions, but significant amounts of nitrate were also present. Both corrosion samples and dust samples showed evidence of wetting, indicating entry of water into the vents. The results of this field test suggest that the environment at Maine Yankee is not highly aggressive, although extrapolation from the periodically wetted vent samples to the hot, dry, canister surface may be difficult. No stress corrosion cracks were observed, but minor corrosion was present despite high nitrate concentrations in the salts. These observations may help address the ongoing question of the importance of nitrate in suppressing corrosion and SCC.
Commercial nuclear power plants typically use nuclear fuel that is enriched to less than five weight percent in the isotope 235U. However, recently several vendors have proposed new nuclear power plant designs that would use fuel with 235U enrichments between five weight percent and 19.75 weight percent. Nuclear fuel with this level of 235U enrichment is known as “high assay low-enriched uranium.” Once it has been irradiated in a nuclear reactor and becomes used (or spent) nuclear fuel, it will be stored, transported, and disposed of. However, irradiated high assay low-enriched uranium differs from typical irradiated nuclear fuel in several ways, and these differences may have economic effects on its storage, transport, and disposal, compared to typical irradiated nuclear fuel. This report describes those differences and qualitatively discusses their potential economic effects on storage, transport, and disposal.
Derivative computation is a key component of optimization, sensitivity analysis, uncertainty quantification, and the solving of nonlinear problems. Automatic differentiation (AD) is a powerful technique for evaluating such derivatives and, in recent years, has been integrated into programming environments such as JAX, PyTorch, and TensorFlow to support the derivative computations needed for training machine learning models, facilitating widespread use of these technologies. The C++ language has become the de facto standard for scientific computing due to numerous factors, yet language complexity has made the widespread adoption of AD technologies for C++ difficult, hampering the incorporation of powerful differentiable programming approaches into C++ scientific simulations. This is exacerbated by the increasing emergence of architectures, such as GPUs, with limited memory capabilities and requiring massive thread-level concurrency. C++ AD tools must effectively use these environments to bring novel scientific simulations to next-generation DOE experimental and observational facilities. In this project, we investigated source transformation-based automatic differentiation using the LLVM compiler infrastructure to automatically generate portable and efficient gradient computations of Kokkos-based code. We have demonstrated that our proposed strategy is feasible by using a prototype LLVM-based source transformation tool to generate gradients of simple functions made of sequences of simple Kokkos parallel regions. Speedups of up to 500x compared to Sacado were observed on an NVIDIA V100 GPU.
Sandia National Laboratories (SNL) performed a high-altitude nuclear electromagnetic pulse (HEMP) critical generation station component vulnerability test campaign with a focus on high-frequency, conducted early-time (E1) HEMP for the Department of Energy (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER). This report provides vulnerability test results to investigate component response and/or damage thresholds to reasonable HEMP threat levels that will help to inform site vulnerability assessments, mitigation planning, and modeling calibrations. This work details testing of North American Electric (NAE) magnetic motor starters to determine the effects of conducted HEMP environments. Motor starters are the control elements that provide power to motors throughout a generating plant; a starter going offline would cause loss of power to critical pumps and compressors, which could lead to component damage or unplanned plant outages. Additionally, failed starters would be unable to support plant startup. Six industrial motor starters were tested: two 2 horsepower (HP) starters with breaker disconnects and typical protection equipment, two 20 HP starters with breaker disconnects, and two 20 HP starters with fused disconnects. Each starter was placed in a circuit with a generator and inductive motor matching the starter rating. The conducted EMP insult was injected on the power cables passing through the motor starter, with separate tests for the generator and motor sides of the starter.
Accurately locating sources of seismic and infrasonic energy is integral to global monitoring of earthquakes and explosions. Infrasound arrival times can be used to calculate the origins of events that generate acoustic energy. Picking the arrival times of emergent infrasound signals, however, can be difficult and prone to uncertainty. Reverse time migration (RTM) is a waveform-based location method that does not rely on picked arrival times. Here we use RTM to locate a known chemical explosion that generated acoustic and acoustic-to-seismic signals on 26 and 108 receivers, respectively. All location predictions are less than 24 km from the known location, with time errors of less than three minutes. We find strong overall agreement between our results and those of existing RTM and arrival-time-based methods. Our initial results suggest that RTM is a promising method of event location using acoustic arrivals recorded on both infrasound and seismic instrumentation.
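The essence of the approach can be conveyed by a simplified delay-and-stack back-projection, a stripped-down cousin of full RTM: grid-search candidate sources and sum envelope energy at the predicted travel times, so the true source location stacks coherently. The celerity value, grid, and variable names below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def delay_and_stack(envelopes, rec_xy, grid_xy, t, celerity=0.34):
    """Back-project acoustic envelopes onto candidate source locations.

    envelopes: (n_receivers, n_samples) amplitude envelopes.
    rec_xy, grid_xy: receiver and candidate-source coordinates (km).
    t: sample times (s); celerity in km/s (~0.34 for tropospheric infrasound).
    Returns stack power indexed by (grid point, candidate origin time).
    """
    stack = np.zeros((len(grid_xy), len(t)))
    for g, xy in enumerate(grid_xy):
        for env, rxy in zip(envelopes, rec_xy):
            delay = np.linalg.norm(rxy - xy) / celerity
            # shift each trace back by its predicted travel time
            stack[g] += np.interp(t + delay, t, env, left=0.0, right=0.0)
    return stack  # argmax gives the estimated location and origin time
```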
A dry etching process to transfer the pattern of a photonic integrated circuit design for high-speed laser communications is described. The laser stack under consideration is a 3.2-µm-thick InGaAs/InAlAs/InAlGaAs epitaxial structure grown by molecular beam epitaxy. The etching was performed using Cl2-based inductively-coupled-plasma and reactive-ion-etching (ICP-RIE) reactors. Four different recipes are presented in two similar ICP-RIE reactors, with special attention paid to the etched features formed with various hard mask compositions, in-situ passivations, and process temperatures. The results indicate that it is possible to produce high-aspect-ratio features with sub-micron separation on this multilayer structure. Additionally, the results of the etching highlight the tradeoffs involved with the corresponding recipes.
The final quality of any AI/ML system is directly related to the quality of the input data used to train the system. In this case, we are trying to build a reliable image classifier that can correctly identify electrical components in x-ray images. The classification confidence is directly related to the quality of the labels in the training data, which are used in developing the AI/ML classifier. Incorrect or incomplete labels can substantially hinder the performance of the system during the training process, as it tries to compensate for variations that should not exist. Image labels are entered by subject matter experts and in general can be assumed to be correct. However, this is not a guarantee, so developing ways to measure label quality and help identify or reject bad labels is important, especially as the database continues to grow. Given the current size of the database, a full manual review of each component is not feasible. This report highlights the current state of the “RECON” x-ray image database and summarizes several recent developments intended to help ensure high-quality labeling both now and in the future. Questions that we hope to answer with this development include: 1) Are there any components with incorrect labels? 2) Can we suggest labels for components that are marked “Unknown”? 3) What kind of overall confidence do we have in the quality of the existing labels? 4) What systems or procedures can we put in place to maximize label quality?
Quantifying the radioactive sources present in gamma spectra is an ever-present and growing national security mission and a time-consuming process for human analysts. While machine learning models exist that are trained to estimate radioisotope proportions in gamma spectra, few address the eventual need to provide explanatory outputs beyond the estimation task. In this work, we develop two machine learning models for NaI detector measurements: one to perform the estimation task, and the other to characterize the first model's ability to provide reasonable estimates. To ensure the first model exhibits a behavior that can be characterized by the second model, the first model is trained using a custom, semi-supervised loss function which constrains proportion estimates to be explainable in terms of a spectral reconstruction. The second, auxiliary model is an out-of-distribution detection function (a type of meta-model) that leverages the proportion estimates of the first model to identify when a spectrum is sufficiently unique from the training domain and thus is out-of-scope for the model. In demonstrating the efficacy of this approach, we encourage the use of meta-models to better explain ML outputs used in radiation detection and increase trust.
Makaju, Rebika; Kassar, Hafsa; Daloglu, Sabahattin M.; Huynh, Anna; Laroche, Dominique; Levchenko, Alex; Addamane, Sadhvikas J.
Coulomb drag experiments have been an essential tool to study strongly interacting low-dimensional systems. Historically, this effect has been explained in terms of momentum transfer between electrons in the active and the passive layer. We report Coulomb drag measurements between laterally coupled GaAs/AlGaAs quantum wires in the multiple one-dimensional (1D) sub-band regime that break Onsager's reciprocity upon both layer and current direction reversal, in contrast to prior 1D Coulomb drag results. The drag signal shows nonlinear current-voltage (I-V) characteristics, which are well characterized by a third-order polynomial fit. These findings are qualitatively consistent with a rectified drag signal induced by charge fluctuations. However, the nonmonotonic temperature dependence of this drag signal suggests that strong electron-electron interactions, expected within the Tomonaga-Luttinger liquid framework, remain important and standard interaction models are insufficient to capture the qualitative nature of rectified 1D Coulomb drag.
Yang, Ji; Wang, Lu; Wan, Jiawei; El Gabaly, Farid; Fernandes Cauduro, Andre L.; Chen, Jeng-Lung; Hsu, Liang-Ching; Lee, Daewon; Zhao, Xiao; Zheng, Haimei; Salmeron, Miquel; Dong, Zhun; Lin, Hongfei; Somorjai, Gabor A.; Prendergast, David; Jiang, De-En; Singh, Seema; Su, Ji
Developing atomically synergistic bifunctional catalysts relies on the creation of colocalized active atoms to facilitate distinct elementary steps in catalytic cycles. Herein, we show that the atomically synergistic binuclear-site catalyst (ABC) consisting of Znδ+-O-Cr6+ on zeolite SSZ-13 displays unique catalytic properties for iso-stoichiometric co-conversion of ethane and CO2. Ethylene selectivity and utilization of converted CO2 can reach 100% and 99.0%, respectively, at 500 °C and an ethane conversion of 9.6%. In-situ/ex-situ spectroscopic studies and DFT calculations reveal atomic synergies between acidic Zn and redox Cr sites. Znδ+ (0 < δ < 2) sites facilitate β-C-H bond cleavage in ethane and the formation of Zn-Hδ- hydride; the enhanced basicity thereby promotes CO2 adsorption/activation and prevents ethane C-C bond scission. The redox Cr site accelerates CO2 dissociation by replenishing lattice oxygen and facilitates H2O formation/desorption. This study presents the advantages of the ABC concept, paving the way for the rational design of novel advanced catalysts.
In the framework of SFERA-III WP10 Task 3, ENEA organized the 3D-shape round-robin (RR); its purpose is to compare the main geometrical parameters of 3D-shape measurements of parabolic-trough (PT) reflective panels evaluated with the instruments adopted by each participant: ENEA, DLR, F-ISE, NREL, and SANDIA. The last two institutions are outside the EU but benefited from the Transnational Access program to visit several European laboratories, including the ENEA Casaccia research center, where they carried out some measurements with a portable experimental set-up. The RR is based on the inter-laboratory circulation of three inner and three outer PT panels. The start of the RR was delayed by the COVID pandemic, and the circulation of the specimen set and the associated measurements took more than one year. At the time of drafting this deliverable, at the end of the SFERA-III project, NREL had not yet completed the analysis of its measurements, making available only the deviations of the slopes. Therefore, only preliminary results are reported here. The full comparison will be published as soon as possible, possibly in the open-access venue Open Research Europe.
The high-pressure compaction of three-dimensional granular packings is simulated using a bonded particle model (BPM) to capture linear elastic deformation. In the model, grains are represented by a collection of point particles connected by bonds. A simple multibody interaction is introduced to control Poisson's ratio, and the arrangement of particles on the surface of a grain is varied to model both high- and low-friction grains. At low pressures, the growth in packing fraction and coordination number follows the expected behavior near jamming and exhibits friction dependence. As the pressure increases, deviations from the low-pressure power-law scaling emerge after the packing fraction grows by approximately 0.1, and results from simulations with different friction coefficients converge. These results are compared to predictions from traditional discrete element method simulations, which, depending on the definition of packing fraction and coordination number, may differ by only a factor of two. As grains deform under compaction, the average volumetric strain and asphericity, a measure of the change in the shape of grains, are found to grow as power laws and depend heavily on the Poisson's ratio of the constituent solid. Larger Poisson's ratios are associated with less volumetric strain and more asphericity, and the apparent power-law exponent of the asphericity may vary. The elastic properties of the packed grains are also calculated as a function of packing fraction. In particular, we find the Poisson's ratio near jamming is 1/2 but decreases to around 1/4 before rising again as systems densify.
Granular matter takes many paths to pack in natural and industrial processes. The path influences the packing microstructure, particularly for frictional grains. We perform discrete element modeling simulations of different paths to construct packings of frictional spheres. Specifically, we explore four stress-controlled protocols implementing packing expansions and compressions in various combinations thereof. We characterize the eventual packed states through their dependence of the packing fraction and coordination number on packing pressure, identifying non-monotonicities with pressure that correlate with the fraction of frictional contacts. These stress-controlled, bulk-like particle simulations access very low-pressure packings, namely, the marginally stable limit, and demonstrate the strong protocol dependence of frictional granular matter.
Characterization techniques for powder feedstocks used in additive manufacturing (AM) have long been relied upon to describe the inputs to an AM workflow. However, functional gaps remain between tests that measure intrinsic and extrinsic properties and the feedstock's direct performance within AM equipment. Furthermore, the common practice of reusing powder through multiple build cycles introduces effects and changes to feedstock performance that are otherwise difficult to measure quantitatively. Here, standardization and the development of new test methods have not kept pace with the rapid evolution of the AM industry and its reliance on highly coupled process-structure-property-performance relationships.
This paper presents the conceptual design of a tension leg platform (TLP) for the ARCUS “towerless” vertical-axis wind turbine (VAWT). VAWTs are ideal for floating offshore sites and have several advantages over horizontal-axis wind turbines (HAWTs), including reduced top mass, lower center of gravity, increased energy capture, and in turn lower cost. The towerless ARCUS VAWT drives these advantages further through increased structural efficiency and by enabling more optimized TLP designs with simplified installation procedures. For hull sizing, we have studied three turbine sizes with corresponding power ratings of 5.1 MW, 10.4 MW, and 22.3 MW. The largest turbine was identified as having the greatest potential to reduce the levelized cost of energy (LCOE) and is the reference size used for the further detailed design process. The conceptual design of the VAWT TLP has been awarded an ABS Approval in Principle Certificate. This paper contains brief analysis results and design findings for a TLP designed to house a VAWT, including the following topics:
• Applicable Design Codes
• Metocean Conditions
• ARCUS Turbine Loads
• Design Load Cases and Requirements
  - Pre-service TLP Stability
  - In-place TLP Global Performance
• Platform Configurations, Hull Structure Scantling Design, Weight and CG Estimation, and General Arrangement Drawings
• Hull Ballast Plan for both Pre-service and In-place Conditions
• Pre-service Quayside Integration, Transportation, and Wet Tow Stability Analysis
• Global Performance Analysis for Motions and Tendon Tensions
• Summary of Cost Components and System Levelized Cost of Energy
3D integration of multiple microelectronic devices improves size, weight, and power while increasing the number of interconnections between components. One integration method involves the use of metal bump bonds to connect devices and components on a common interposer platform. Significant variations in the coefficient of thermal expansion in such systems lead to stresses that can cause thermomechanical and electrical failures. More advanced characterization and failure analysis techniques are necessary to assess the bond quality between components. Frequency domain thermoreflectance (FDTR) is a nondestructive, noncontact testing method used to determine thermal properties in a sample by fitting the phase lag between an applied heat flux and the surface temperature response. The typical use of FDTR data involves fitting for thermal properties in geometries with a high degree of symmetry. In this work, finite element method simulations are performed using high performance computing codes to facilitate the modeling of samples with arbitrary geometric complexity. A gradient-based optimization technique is also presented to determine unknown thermal properties in a discretized domain. Using experimental FDTR data from a GaN-diamond sample, thermal conductivity is then determined in an unknown layer to provide a spatial map of bond quality at various points in the sample.
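As a simplified stand-in for the workflow described above (an FEM forward model inside a gradient-based optimization loop), the sketch below fits an unknown layer conductivity to measured phase-lag data with a generic least-squares routine. The forward model is left as a user-supplied placeholder, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_conductivity(freqs, measured_phase, forward_model, k0=100.0):
    """Fit an unknown layer conductivity k (W/m-K) to FDTR phase data.

    forward_model(freqs, k) -> predicted phase lag (radians) between the
    applied heat flux and the surface temperature response; in the paper
    this role is played by an FEM simulation over a discretized domain,
    with gradients supplied to the optimizer.
    """
    def residual(k):
        return forward_model(freqs, k[0]) - measured_phase

    result = least_squares(residual, x0=[k0], bounds=(1e-2, 3e3))
    return result.x[0]  # best-fit conductivity at this measurement point
```

Repeating such a fit at many measurement points is what yields the spatial map of bond quality described above.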
Ince, Fatih F.; Frost, Mega; Shima, Darryl; Addamane, Sadhvikas J.; Canedy, Chadwick L.; Bewley, William W.; Tomasulo, Stephanie; Kim, Chul S.; Vurgaftman, Igor; Meyer, Jerry R.; Balakrishnan, Ganesh
The epitaxial development and characterization of metamorphic “GaSb-on-silicon” buffers as substrates for antimonide devices is presented. The approach involves the growth of a spontaneously and fully relaxed GaSb metamorphic buffer in a primary epitaxial reactor, and use of the resulting “GaSb-on-silicon” wafer to grow subsequent layers in a secondary epitaxial reactor. The buffer growth involves four steps—silicon substrate preparation for oxide removal, nucleation of AlSb on silicon, growth of the GaSb buffer, and finally capping of the buffer to prevent oxidation. This approach on miscut silicon substrates leads to a buffer with negligible antiphase domain density. The growth of this buffer is based on inducing interfacial misfit dislocations between an AlSb nucleation layer and the underlying silicon substrate, which results in a fully relaxed GaSb buffer. A 1 μm-thick GaSb buffer layer grown on silicon has ~9.2 × 10⁷ dislocations/cm². The complete lack of strain in the epitaxial structure allows subsequent growths to be accurately lattice matched, thus making the approach ideal for use as a substrate. Here we characterize the GaSb-on-silicon wafer using high-resolution x-ray diffraction and transmission electron microscopy. The concept's feasibility is demonstrated by growing interband cascade light emitting devices on the GaSb-on-silicon wafer. The performance of the resulting LEDs on silicon approaches that of counterparts grown lattice matched on GaSb.
Because of the high-risk nature of emergencies and illegal activities at sea, it is critical that algorithms designed to detect anomalies from maritime traffic data be robust. However, there exist no publicly available maritime traffic data sets with real-world, expert-labeled anomalies. As a result, most anomaly detection algorithms for maritime traffic are validated without ground truth. We introduce the HawaiiCoast_GT data set, the first publicly available automatic identification system (AIS) data set with a large corresponding set of true anomalous incidents. This data set—cleaned and curated from raw Bureau of Ocean Energy Management (BOEM) and National Oceanic and Atmospheric Administration (NOAA) AIS data—covers Hawaii's coastal waters for four years (2017–2020) and contains 88,749,176 AIS points for a total of 2622 unique vessels. This includes 208 labeled tracks corresponding to 154 rigorously documented real-world incidents.
We demonstrate an order of magnitude reduction in the sensitivity to optical crosstalk for neighboring trapped-ion qubits during simultaneous single-qubit gates driven with individual addressing beams. Gates are implemented via two-photon Raman transitions, where crosstalk is mitigated by offsetting the drive frequencies for each qubit to avoid first-order crosstalk effects from inter-beam two-photon resonance. The technique is simple to implement, and we find that phase-dependent crosstalk due to optical interference is reduced on the most impacted neighbor from a maximal fractional rotation error of 0.185(4) without crosstalk mitigation to ≤0.006 with the mitigation strategy. Furthermore, we characterize first-order crosstalk in the two-qubit gate and avoid the resulting rotation errors for the arbitrary-axis Mølmer-Sørensen gate via a phase-agnostic composite gate. Finally, we demonstrate holistic system performance by constructing a composite CNOT gate using the improved single-qubit gates and phase-agnostic two-qubit gate. This work is done on the Quantum Scientific Computing Open User Testbed; however, our methods are widely applicable for individual addressing Raman gates and impose no significant overhead, enabling immediate improvement for quantum processors that incorporate this technique.
Alkali metals are among the most desirable negative electrodes for long duration energy storage due to their extremely high capacities. Currently, only high-temperature (>250 °C) batteries have successfully used alkali electrodes in commercial applications, due to limitations imposed by solid electrolytes, such as low conductivity at moderate temperatures and susceptibility to dendrites. Toward enabling the next generation of grid-scale, long duration batteries, we aim to develop molten sodium (Na) systems that operate with commercially attractive performance metrics including high current density (>100 mA cm⁻²), low temperature (<200 °C), and long discharge times (>12 h). In this work, we focus on the performance of NaSICON solid electrolytes in sodium symmetric cells at 110 °C. Specifically, we use a tin (Sn) coating on NaSICON to reduce interfacial resistance by a factor of 10, enabling molten Na symmetric cell operation with “discharge” durations up to 23 h at 100 mA cm⁻² and 110 °C. Unidirectional galvanostatic testing shows a 70% overpotential reduction, and electrochemical impedance spectroscopy (EIS) highlights the reduction in interfacial resistance due to the Sn coating. Detailed scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS) show that Sn-coated NaSICON enables current densities of up to 500 mA cm⁻² at 110 °C by suppressing dendrite formation at the plating interface (Mode I). This analysis also provides a mechanistic understanding of dendrite formation at current densities up to 1000 mA cm⁻², highlighting the importance of effective coatings that will enable advanced battery technologies for long-term energy storage.
Here, we show that a laser at threshold can be utilized to generate the class of coherent and transform-limited waveforms (vt − z)^m e^{i(kz − ωt)} at optical frequencies. We derive these properties analytically and demonstrate them in semiclassical time-domain laser simulations. We then utilize these waveforms to expand other waveforms with high modulation frequencies and demonstrate theoretically the feasibility of complex-frequency coherent absorption at optical frequencies, with efficient energy transduction and cavity loading. This approach has potential applications in quantum computing, photonic circuits, and biomedicine.
As global temperatures continue to rise, climate mitigation strategies such as stratospheric aerosol injections (SAI) are increasingly discussed, but the downstream effects of these strategies are not well understood. As such, there is interest in developing statistical methods to quantify the evolution of climate variable relationships during the time period surrounding an SAI. Feature importance applied to echo state network (ESN) models has been proposed as a way to understand the effects of SAI using a data-driven model. This approach depends on the ESN fitting the data well; if it does not, the feature importance may assign importance to features that are not representative of the underlying relationships. Typically, time series prediction models such as ESNs are assessed using out-of-sample performance metrics that divide the time series into separate training and testing sets. However, this model assessment approach is geared towards forecasting applications rather than scenarios such as the motivating SAI example, where the objective is to use a data-driven model to capture variable relationships. In this paper, we demonstrate a novel use of climate model replicates to investigate the applicability of the commonly used repeated hold-out model assessment approach for the SAI application. Simulations of an SAI are generated using a simplified climate model, and different initialization conditions are used to provide independent training and testing sets containing the same SAI event. The climate model replicates enable out-of-sample measures of model performance, which are compared to the single time series hold-out validation approach. For our case study, we find that the repeated hold-out performance is comparable to, though more conservative than, the replicate out-of-sample performance when the training set contains enough time after the aerosol injection.
Optimization of the radiation pattern from a Bremsstrahlung target for a given application is possible by controlling the electron beam that impacts the high-atomic-number target. In this work, the electron beam is generated by a 13 MV vacuum diode that terminates a coaxial magnetically insulated transmission line (MITL) on the HERMES-III machine at Sandia National Labs. Work by Sanford introduced a geometry for vacuum diodes that can control the flow within bounds. The "indented anode", as coined by Sanford, can straighten out the electron beam in a high-current diode that would otherwise be prone to beam pinching. A straighter beam will produce a more forwardly directed radiation pattern, while a pinching electron beam will yield a focal point or hot spot on axis and a more diffuse radiation pattern. Either one may be desirable depending on the application. This work serves as a first attempt to optimize the radiation pattern in the former sense of collimating the radiation pattern given a limited parameter space. The optimization is attempted first using electromagnetic particle-in-cell simulations in the EMPIRE code suite. The setup of the EMPIRE models is discussed along with some basic theory behind the models used in the simulations, such as anode heating and secondary ions. Theoretical work performed by Allen Garner and his students at Purdue, concerning the impact of collisions in these vacuum diodes, is also included. The EMPIRE simulations consider both an aggressive and a conservative design; the aggressive design is inherently riskier, while the conservative design, though still a risk, is more likely to perform as expected. The ultimate goal of this work was to validate the EMPIRE code results with experimental data. While the experiment that tested the diode designs proposed by the simulation results fell outside the fiscal boundaries of this project (its results are therefore not included in this report), the hardware for that experiment was designed and drafted within those boundaries and is included here. A separate experiment performed in this project tested a key feature of the diode, the hemispherical cathode. Those results are also documented here and show that the cathode tip is an important factor in controlling the diode flow. A short series of simulations on this diode was also performed after the experiment to gain a better understanding of the effect of ions on the flow pattern and faceplate dose profile.
We investigate the interplay between the quantum Hall (QH) effect and superconductivity in InAs surface quantum well (SQW)/NbTiN heterostructures using a quantum point contact (QPC). We use the QPC to control the proximity of the edge states to the superconductor. By measuring the upstream and downstream resistances of the device, we investigate the efficiency of Andreev conversion at the InAs/NbTiN interface. Our experimental data are analyzed using the Landauer-Büttiker formalism, generalized to allow for Andreev reflection processes. We show that by varying the voltage of the QPC, VQPC, the average Andreev reflection, A, at the QH-SC interface can be tuned from 50% to ∼10%. The evolution of A with VQPC extracted from the measurements exhibits plateaus separated by regions in which A varies continuously with VQPC. The presence of plateaus suggests that for some ranges of VQPC the QPC may be almost completely pinching off some of the edge modes from the QH-SC interface. Our work demonstrates an experimental setup with which to control and advance the understanding of the complex interplay between superconductivity and the QH effect in two-dimensional gas systems.
Thermochemical air separation to produce high-purity N2 was demonstrated in a vertical tube reactor via a two-step reduction–oxidation cycle with an A-site substituted perovskite Ba0.15Sr0.85FeO3–δ (BSF1585). BSF1585 particles were synthesized and characterized in terms of their chemical, morphological, and thermophysical properties. A thermodynamic cycle model and sensitivity analysis using computational heat and mass transfer models of the reactor were used to select the system operating parameters for a concentrating solar thermal-driven process. Thermal reduction up to 800 °C in air and temperature-swing air separation from 800 °C to minimum temperatures between 400 and 600 °C were performed in the reactor containing a 35 g packed bed of BSF1585. The reactor was characterized for dispersion, and air separation was characterized via mass spectrometry. Gas measurements indicated that the reactor produced N2 with O2 impurity concentrations as low as 0.02 % for > 30 min of operation. A parametric study of air flow rates suggested that differences in observed and thermodynamically predicted O2 impurities were due to imperfect gas transport in the bed. Temperature swing reduction/oxidation cycling experiments between 800 and 400 °C in air were conducted with no statistically significant degradation in N2 purity over 50 cycles.
Efficient solution of the Vlasov equation, which can be up to six-dimensional, is key to the simulation of many difficult problems in plasma physics. The discontinuous Petrov-Galerkin (DPG) finite element methodology provides a framework for the development of stable (in the sense of Ladyzhenskaya–Babuška–Brezzi conditions) finite element formulations, with built-in mechanisms for adaptivity. While DPG has been studied extensively in the context of steady-state problems and to a lesser extent with space-time discretizations of transient problems, relatively little attention has been paid to time-marching approaches. In the present work, we study a first application of time-marching DPG to the Vlasov equation, using backward Euler for a Vlasov-Poisson discretization. We demonstrate adaptive mesh refinement for two problems: the two-stream instability problem and a cold diode problem. We believe the present work is novel both in its application of unstructured adaptive mesh refinement (as opposed to block-structured adaptivity, which has been studied previously) in the context of Vlasov-Poisson, and in its application of DPG to the Vlasov-Poisson system. We also discuss extensive additions to the Camellia library in support of the present formulation as well as extensions to higher dimensions, Maxwell equations, and space-time formulations.
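For reference, the system being discretized is the standard Vlasov–Poisson system; in one spatial and one velocity dimension (a sketch of the standard equations, whose normalization conventions may differ from the paper's):

\frac{\partial f}{\partial t} + v\,\frac{\partial f}{\partial x} + \frac{qE}{m}\,\frac{\partial f}{\partial v} = 0, \qquad
-\frac{\partial^2 \phi}{\partial x^2} = \frac{\rho}{\varepsilon_0}, \quad E = -\frac{\partial \phi}{\partial x}, \quad \rho(x,t) = q\int f(x,v,t)\,dv,

where f(x,v,t) is the distribution function of a species with charge q and mass m. With backward Euler time-marching, each step amounts to the implicit equation

f^{n+1} + \Delta t\left(v\,\frac{\partial f^{n+1}}{\partial x} + \frac{qE^{n+1}}{m}\,\frac{\partial f^{n+1}}{\partial v}\right) = f^{n},

which can then be treated as a steady boundary value problem by the DPG machinery at each time step.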
Calcite (CaCO3) is one of the most common minerals in geologic and engineered systems. It is often in contact with aqueous solutions, causing chemically assisted fracture that is critical to understanding the stability of subsurface systems and manmade structures. Calcite fracture was evaluated with reactive molecular dynamics simulations, including the impacts of crack tip geometry (notch), the presence of water, and surface hydroxyl groups. Chemo-mechanical weakening was assessed by comparing the loads at which fracture began to propagate. Our analyses show that a notch lowers the load at which crack growth begins more than either water or surface hydroxyls do. Additionally, the breaking of two adjacent Ca-O bonds is the kinetic limitation for crack initiation, since transiently broken bonds can re-form without resulting in crack growth. In aqueous environments, fresh (not hydroxylated) calcite surfaces exhibited water strengthening. Manual addition of H+ and/or OH- species on the (104) calcite surface resulted in chemo-mechanical weakening of calcite by 9%. Achieving full hydroxylation of the calcite surface was thermodynamically and kinetically limited, with only 0.01–0.17 OH/nm² surface hydroxylation observed on the (104) surface at the end of the simulations. The limited reactivity of pure water with the calcite surface restricts the chemo-mechanical effects and suggests that reactions between physisorbed water and localized structural defects may dominate the chemo-mechanical process in studies where water weakening has been reported.
Color centers in diamond are one of the most promising tools for quantum information science. Of particular interest is the use of single-crystal diamond membranes of nanoscale thickness as hosts for color centers. Indeed, such structures guarantee better integration with a variety of other quantum materials and devices, which can aid the development of diamond-based quantum technologies, from nanophotonics to quantum sensing. A common approach for membrane production is the so-called “smart-cut” process, in which membranes are exfoliated from a diamond substrate after the creation of a thin sub-surface amorphous carbon layer by He+ implantation. Due to the high ion fluence required, this process can be time-consuming. In this work, we demonstrated the production of thin diamond membranes by neon implantation of diamond substrates. With the targets of obtaining membranes of ~200 nm thickness and finding the critical damage threshold, we implanted different diamonds with 300 keV Ne+ ions at different fluences. We characterized the structural properties of the implanted diamonds and the resulting membranes through SEM, Raman spectroscopy, and photoluminescence spectroscopy. We also found that a SRIM model based on a two-layer diamond/sp²-carbon target better describes the ion implantation, allowing us to estimate the diamond critical damage threshold for Ne+ implantation. Compared to He+ smart-cut, the use of a heavier ion like Ne+ results in a ten-fold decrease in the ion fluence required to obtain diamond membranes and allows shallower smart-cuts, i.e., thinner membranes, at the same ion energy.
We propose a method to extract the upper laser level’s (ULL’s) excess electronic temperature from analysis of the maximum light output power (Pmax) and current dynamic range ΔJd = Jmax − Jth of terahertz quantum cascade lasers (THz QCLs). We validated this method, both through simulation and experiment, by applying it to THz QCLs supporting a clean three-level system. Detailed knowledge of electronic excess temperatures is of utmost importance for achieving high temperature performance of THz QCLs. Our method is simple and easily implemented, so the excess electron temperature can be extracted without intensive experimental effort. This knowledge should pave the way toward improvement of the temperature performance of THz QCLs beyond the state-of-the-art.
We offer a comprehensive analysis of the factors that could enable terahertz quantum cascade lasers (THz QCLs) to achieve room temperature performance, examining and integrating the latest findings from recent studies in the field. Beyond synthesizing existing knowledge, we draw new conclusions from prior works, demonstrating that the key to enhancing THz QCL temperature performance involves not only optimizing interface quality but also strategically managing doping density, its spatial distribution, and its profile. This conclusion is based on our results from different structures, including two experimentally demonstrated devices: the split-well resonant-phonon and the two-well injector direct-phonon schemes for THz QCLs, which allow efficient isolation of the laser levels from excited and continuum states. In these schemes, the doping profile has a setback that lessens the overlap of the doped region with the active laser states. Our work serves as a resource for researchers seeking a deeper understanding of the evolving landscape of THz technology, and we propose a strategy for future endeavors that should pave the way toward temperatures beyond the current Tmax records for THz QCLs.
Intermediate verification languages like Why3 and Boogie have made it much easier to build program verifiers, transforming the process into a logic compilation problem rather than a proof automation one. Why3 in particular implements a rich logic for program specification with polymorphism, algebraic data types, recursive functions and predicates, and inductive predicates; it translates this logic to over a dozen solvers and proof assistants. Accordingly, it serves as a backend for many tools, including Frama-C, EasyCrypt, and GNATProve for Ada SPARK. But how can we be sure that these tools are correct? The alternate foundational approach, taken by tools like VST and CakeML, provides strong guarantees by implementing the entire toolchain in a proof assistant, but these tools are harder to build and cannot directly take advantage of SMT solver automation. As a first step toward enabling automated tools with similar foundational guarantees, we give a formal semantics in Coq for the logic fragment of Why3. We show that our semantics are useful by giving a correct-by-construction natural deduction proof system for this logic, using this proof system to verify parts of Why3's standard library, and proving the soundness of two Why3 transformations used to convert terms and formulas into the simpler logics supported by the backend solvers.
Understanding pure H2 and H2/CH4 adsorption and diffusion in earth materials is one vital step toward successful and safe H2 storage in depleted gas reservoirs. Despite recent research efforts, such understanding is far from complete. In this work we first use nuclear magnetic resonance (NMR) experiments to study the NMR response of H2 injected into Duvernay shale and Berea sandstone samples, representing materials in confining and storage zones. Then we use molecular simulations to investigate H2/CH4 competitive adsorption and diffusion in kerogen, a common component of shale. Our results indicate that in shale there are two H2 populations, i.e., free H2 and adsorbed H2, that yield very distinct NMR responses. In sandstone, however, only free gas is present, yielding an H2 NMR response similar to that of bulk H2. About 10 % of injected H2 can be lost to adsorption/desorption hysteresis in shale, whereas no H2 loss (no hysteresis) is observed in sandstone. Our molecular simulation results support the NMR finding that there are two H2 populations in nanoporous materials (kerogen). The simulations also indicate that CH4 outcompetes H2 in adsorption onto kerogen, due to stronger CH4-kerogen interactions than H2-kerogen interactions. Nevertheless, in a depleted gas reservoir with low CH4 gas pressure, about 30 % of residual CH4 can be desorbed upon H2 injection. The simulations also predict that H2 diffusion in porous kerogen is about one order of magnitude faster than that of CH4 and CO2. This work provides an understanding of H2/CH4 behaviors in depleted gas reservoirs upon H2 injection and predictions of H2 loss and CH4 desorption in H2 storage.
Measurements of the oxidation rates of various forms of carbon (soot, graphite, coal char) have often shown an unexplained attenuation with increasing temperatures in the vicinity of 2000 K, even when accounting for diffusional transport limitations and gas-phase chemical effects (e.g., CO2 dissociation). With the development of oxy-fuel combustion approaches for pulverized coal utilization with carbon capture, high particle temperatures are readily achieved in sufficiently oxygen-enriched environments. In this work, a new semi-global intrinsic kinetics model for high-temperature carbon oxidation is created by starting with a previously developed 5-step mechanism that was shown to reproduce all major known trends in carbon oxidation except the high-temperature kinetic falloff, and incorporating a recently discovered surface oxide decomposition step. The predictions of this new model are benchmarked by deploying the kinetic model in a steady-state reacting particle code (SKIPPY) and comparing the simulated results against a carefully measured set of pulverized coal char combustion temperature measurements over a wide range of oxygen concentrations in N2 and CO2 environments. The results show that the inclusion of the spontaneous surface oxide decomposition reaction step significantly improves predictions at high particle temperatures. Furthermore, the simulations reveal that O atoms released from the oxide decomposition step enhance the radical pool in the near-surface region and within the particle interior itself. Incorporation of literature rates for O and OH reactions with the carbon surface results in a reduction in the predicted radical pool concentrations and a very minor enhancement of the overall carbon oxidation rate.
In low inertia grids, significant frequency deviations can occur as a result of changes in power (load, generation, etc.). These deviations may activate various protection schemes designed to safeguard the system, potentially leading to blackouts. Therefore, assessing the frequency stability of the power system is crucial. The Frequency Security Index (FSI) serves as a metric for evaluating system stability. However, computing the FSI for a specific load change necessitates actual load changes on the system, which is often impractical. This paper introduces a method for calculating the FSI without requiring load changes for all values. A mathematical expression for the FSI is derived, which uses the values of microgrid parameters (such as inertia and damping constant) to compute the FSI for any load change. The parameters that most significantly affect the FSI are then identified. The paper then introduces a Moving Horizon Estimation (MHE)-based parameter estimation approach, which leverages small perturbations from an energy storage system to estimate the most influential parameters for the FSI. The results show that the FSI calculation with the estimated parameters is more accurate than with center-of-inertia (COI) averaged parameters, enabling more effective state-of-health monitoring of the microgrid.
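To make the parameter dependence concrete, consider a minimal aggregated swing-equation sketch (not the paper's exact FSI definition): for a step load change ΔP on a grid with inertia constant H, damping D, and nominal frequency f_0, the frequency deviation obeys

\frac{2H}{f_0}\,\frac{d\,\Delta f}{dt} = -\Delta P - D\,\Delta f
\quad\Rightarrow\quad
\Delta f(t) = -\frac{\Delta P}{D}\left(1 - e^{-\frac{D f_0}{2H}\,t}\right),

so both the quasi-steady deviation ΔP/D and the recovery rate D f_0/2H are governed by H and D, consistent with inertia and damping being the influential parameters targeted by the MHE estimator.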
We present optoelectronic characterization of n-type silicon pixels with a suite of plasmonic designs intended to generate and detect electron-hole pairs from incident 1550 nm photons.
Folsom, Matthew; Sewell, Steven; Cumming, William; Zimmerman, Jade; Sabin, Andy; Downs, Christine; Hinz, Nick; Winn, Carmen; Schwering, Paul C.
Blind geothermal systems are believed to be common in the Basin and Range province and represent an underutilized source of renewable green energy. Their discovery has historically been by chance, but more systematic strategies for exploration of these resources are being developed. One characteristic of blind systems is that they are often overlain by near-surface zones of low resistivity caused by alteration of the overlying sediments to swelling clays. These zones can be imaged by resistivity-based geophysical techniques to facilitate their discovery and characterization. Here we present a side-by-side comparison of resistivity models produced from helicopter transient electromagnetic (HTEM) and ground-based broadband magnetotelluric (MT) surveys over a previously discovered blind geothermal system with measured shallow temperatures of ~100°C in East Hawthorne, NV. The HTEM and MT data were collected as part of the BRIDGE project, an initiative for improving methodologies for discovering blind geothermal systems. HTEM data were collected and modelled along profiles, and the results suggest the method can resolve the resistivity structure to depths of 300–500 m. A 61-station MT survey was collected on an irregular grid with ~800 m station spacing and modelled in 3D on a rotated mesh aligned with the HTEM flight directions. Resistivity models are compared with results from potential fields datasets, shallow temperature surveys, and available temperature gradient data in the area of interest. We find that the superior resolution of HTEM can reveal near-surface details often missed by MT. However, MT is sensitive to depths of several kilometers and can resolve 3D structures, and is thus better suited for single-prospect characterization. We conclude that HTEM is a more practical subregional prospecting tool than MT, because it is highly scalable and can rapidly discover shallow zones of low resistivity that may indicate the presence of a blind geothermal system. Other factors such as land access and ground disturbance considerations may also be decisive in choosing the best method for a particular prospect. Resistivity methods in general cannot fully characterize the structural setting of a geothermal system, so we used potential fields and other datasets to guide the creation of a diagrammatic structural model at East Hawthorne.
Battery systems are typically equipped with state of charge (SoC) estimation algorithms. Sensor measurements used to estimate SoC are susceptible to false data injection attacks (FDIAs) that aim to disturb state estimation and, consequently, damage the system. In this paper, SoC estimation methods are re-purposed to detect FDIAs targeting the current and voltage sensors of a battery stack using a combination of an improved input noise aware unscented Kalman filter (INAUKF) and a cumulative sum detector. The root mean squared error of the states estimated by the INAUKF was at least 85% lower than that of the traditional unscented Kalman filter for all noise levels tested. The proposed method detected FDIAs in the current and voltage sensors of a series-connected battery stack in 99.55% of the simulations.
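A minimal sketch of the detection stage (variable names are illustrative; the INAUKF itself is not reproduced here), assuming the filter already supplies normalized innovation residuals for a monitored sensor:

import numpy as np

def cusum_detect(residuals, drift=0.5, threshold=5.0):
    """Two-sided CUSUM over normalized innovation residuals.

    residuals: 1-D array of (measurement - prediction) / std values
    drift:     allowance subtracted each step to ignore normal noise
    threshold: alarm level; returns index of first alarm or None
    """
    g_pos, g_neg = 0.0, 0.0
    for k, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - drift)   # accumulate positive shifts
        g_neg = max(0.0, g_neg - r - drift)   # accumulate negative shifts
        if g_pos > threshold or g_neg > threshold:
            return k  # possible false data injection at sample k
    return None

# Example: clean noise followed by an injected +1.5-sigma sensor bias
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 100)])
print(cusum_detect(res))  # alarms shortly after sample 200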
Uncertainty quantification (UQ) plays a vital role in addressing the challenges and limitations encountered in full-waveform inversion (FWI). Most UQ methods require parameter sampling, which requires many forward and adjoint solves. This often results in very high computational overhead compared to traditional FWI, which hinders the practicality of UQ for FWI. In this work, we develop an efficient UQ-FWI framework based on an unsupervised variational autoencoder (VAE) to assess the uncertainty of single- and multi-parameter FWI. The inversion operator is modeled using an encoder-decoder network. The inputs to the network are seismic shot gathers, and the outputs are samples (a distribution) of model parameters. We then use these samples to estimate the mean and standard deviation of each parameter population, which provide insight into the uncertainty in the inversion process. To speed up the UQ process, we carry out the reconstruction using an unsupervised learning approach. Moreover, we physics-constrain the network by injecting the FWI gradients during the backpropagation process, leading to better reconstruction. The computational cost of the proposed approach is comparable to that of traditional autoencoder full-waveform inversion (AE-FWI), making it an attractive tool for gaining further insight into the quality of the inversion. We apply this idea to synthetic data to show its potential in assessing uncertainty in multi-parameter FWI.
Additive manufacturing has ushered in a new paradigm of bottom-up materials-by-design of spatially non-uniform materials. Functionally graded materials have locally tailored compositions to provide optimized global properties and performance. In this letter, we propose an opportunity for the application of graded magnetic materials as lens elements for charged particle optics. A Hiperco50/Hymu80 (FeCo-2 V/Fe-80Ni-5Mo) graded magnetic alloy with spatially varying magnetic properties was successfully additively manufactured via Laser Directed Energy Deposition. The compositional gradient is then used in computational simulations to demonstrate how a tailored material can enhance the magnetic performance of a critical, image-forming component of a transmission electron microscope.
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models for predicting neural responses in brain areas along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary primate dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd and compared our results with the Non-Negative Matrix Factorization (NNMF) model, which successfully models many tuning properties of MSTd neurons. To better understand the computational properties of the NNMF model that give rise to optic flow tuning resembling that of MSTd neurons, we created additional CNN model variants that implement key NNMF constraints – non-negative weights and sparse coding of optic flow. While the CNNs and NNMF models both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with non-negative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
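For readers unfamiliar with the NNMF baseline, a minimal sketch of fitting a non-negative factorization to a matrix of motion-energy responses (the data here are random placeholders; the actual model is trained on optic flow fields from simulated self-motion):

import numpy as np
from sklearn.decomposition import NMF

# Rows: optic flow samples; columns: motion-energy features (placeholders).
# NMF requires non-negative data, mirroring the non-negativity constraint.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(1000, 144)))

model = NMF(n_components=32, init="nndsvda", max_iter=500, random_state=0)
codes = model.fit_transform(X)  # non-negative activations per sample
basis = model.components_       # non-negative basis "flow templates"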
Traditional electronics assemblies are typically packaged using physically or chemically blown potted foams to reduce the effects of shock and vibration. These potting materials have several drawbacks, including manufacturing reliability issues, lack of internal preload control, and poor serviceability. A modular foam encapsulation approach combined with additively manufactured (AM) silicone lattice compression structures can address these issues for packaged electronics. These preloaded silicone lattice structures, known as foam replacement structures (FRSs), are an integral part of the encapsulation approach and must be properly characterized to model the assembly stresses and dynamics. In this study, dynamic test data are used to validate finite element models of an electronics assembly with modular encapsulation and a direct ink write (DIW) AM silicone FRS. A variety of DIW compression architectures are characterized, and their nominal stress-strain behavior is represented with hyperfoam constitutive model parameterizations. Modeling is conducted with Sierra finite element software, specifically with a handoff from assembly preloading and uniaxial compression in Sierra/Solid Mechanics to linear modal and vibration analysis in Sierra/Structural Dynamics. This work demonstrates the application of this advanced modeling workflow, and results show good agreement with test data for both static and dynamic quantities of interest, including preload, modal, and vibration response.
In many applications, only inexact gradients and inexact Hessian-vector products are available. It is therefore essential to consider algorithms that can handle such inexact quantities with guaranteed convergence to a solution. An inexact, adaptive, and provably convergent semismooth Newton method is considered to solve constrained optimization problems. In particular, dynamic optimization problems, which are known to be highly expensive, are the focus. A memory-efficient semismooth Newton algorithm is introduced for these problems; the source of both efficiency and inexactness is randomized matrix sketching. Applications to optimization problems constrained by partial differential equations are also considered.
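For context, one standard way to formalize the allowable inexactness (a generic inexact Newton condition, not necessarily the paper's specific criterion) is to require the step d_k at iterate x_k to satisfy only

\|H_k d_k + g_k\| \le \eta_k \|g_k\|, \qquad 0 < \eta_k < 1,

where g_k and H_k are the (possibly sketched) gradient and generalized Hessian; convergence is retained provided the forcing terms \eta_k are controlled appropriately.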
The importance of user-accessible multiple-input/multiple-output (MIMO) control methods has been highlighted in recent years. Several user-created control laws have been integrated into Rattlesnake, an open-source MIMO vibration controller developed at Sandia National Laboratories. Much of the effort to date has focused on stationary random vibration control. However, there are many field environments that are not well captured by stationary random vibration testing, for example shock, sine, or arbitrary waveform environments. This work details a time waveform replication technique that uses frequency domain deconvolution, including a theoretical overview and implementation details. Example usage is demonstrated using a simple structural dynamics system and complicated control waveforms at multiple degrees of freedom.
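A minimal single-input/single-output sketch of the frequency-domain deconvolution idea (illustrative only; Rattlesnake's implementation handles the full MIMO problem with matrix inversion and practical safeguards):

import numpy as np

def deconvolve_drive(target, impulse_response, eps=1e-6):
    """Compute a drive signal whose system response approximates `target`.

    target:           desired response time history (1-D array)
    impulse_response: measured system impulse response at the same rate
    eps:              Tikhonov-style regularization that avoids dividing
                      by near-zero values of the frequency response
    """
    n = len(target) + len(impulse_response) - 1  # linear-convolution length
    H = np.fft.rfft(impulse_response, n)         # system FRF
    X = np.fft.rfft(target, n)                   # target spectrum
    D = np.conj(H) * X / (np.abs(H) ** 2 + eps)  # regularized inverse filter
    return np.fft.irfft(D, n)[: len(target)]

# Usage sketch: drive = deconvolve_drive(desired_waveform, h_measured)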
National Security Presidential Memorandum-20 defines three tier levels for launch approval of space nuclear systems. The two main factors determining the tier level are the total quantity and type of radioactive sources and the probability of any member of the public receiving doses above certain thresholds. The total quantity of radioactive sources is compared with International Atomic Energy Agency transportation regulations. The dose probability is determined by the product of three terms: 1) the probability of a launch accident occurring; 2) the probability of a release of radioactive material given an accident; and 3) the probability of exceeding the dose threshold to any member of the public given a release. This paper provides a methodology for evaluating these values and applies it to an example mission as a demonstration. For the example mission, a preliminary determination of Tier III was reached.
Single-axis solar trackers are typically simulated under the assumption that all modules on a given section of torque tube are at a single orientation. In reality, various mechanical effects can cause twisting along the torque tube length, creating variation in module orientation along the row. Simulation of the impact of this twisting on photovoltaic system performance reveals that the resulting performance loss is significant at twists as small as fractions of a degree per module. The magnitude of the loss depends strongly on the design of the photovoltaic module but does not vary significantly across climates. Additionally, simple tweaks to tracker control settings were found to substantially reduce the loss for certain types of twist.
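A sketch of how such a twist study can be set up with the open-source pvlib package (requires a recent pvlib version; the site, date, module count, and twist rate are illustrative assumptions, not the paper's configuration):

import numpy as np
import pandas as pd
import pvlib

# Clear-sky day at an arbitrary mid-latitude site (illustrative only)
times = pd.date_range("2021-06-01 06:00", "2021-06-01 19:00",
                      freq="15min", tz="Etc/GMT+7")
site = pvlib.location.Location(35.0, -106.5)
solpos = site.get_solarposition(times)
cs = site.get_clearsky(times)

# Ideal rotation for an untwisted torque tube (horizontal N-S axis)
trk = pvlib.tracking.singleaxis(solpos["apparent_zenith"], solpos["azimuth"])
theta = trk["tracker_theta"].fillna(0.0)

def poa(rotation):
    """Plane-of-array irradiance for a module at the given rotation angle."""
    ori = pvlib.tracking.calc_surface_orientation(rotation)
    return pvlib.irradiance.get_total_irradiance(
        ori["surface_tilt"], ori["surface_azimuth"],
        solpos["apparent_zenith"], solpos["azimuth"],
        cs["dni"], cs["ghi"], cs["dhi"])["poa_global"]

# Hypothetical twist: 0.25 degrees of extra rotation per module position
base = poa(theta).sum()
for n_modules in (4, 8, 12):
    twisted = poa(theta + 0.25 * n_modules).sum()
    print(n_modules, round(100 * (twisted - base) / base, 3))  # % change

Note that this captures only the insolation change from the angle-of-incidence mismatch; the electrical mismatch between modules at differing orientations, which the abstract identifies as strongly module-design-dependent, requires a string-level electrical model.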
Although fire events inside nuclear power plants (NPPs) are infrequent, when they occur, they can affect the safe operation of the plant if there is not sufficient protection addressing the risk. As mitigation for fire events, NPPs have comprehensive fire protection systems intended to reduce the likelihood of a fire event and the associated consequences. An electrical arcing fault involving components made of aluminum is one such hazard that could lead to a significant consequence. Because the original evaluation of high-energy arcing faults (HEAF) was performed on components made of copper, there is interest in understanding the effects of aluminum in these incidents. The Nuclear Regulatory Commission (NRC) has led a series of HEAF experiments at a facility near Philadelphia, PA, in conjunction with the National Institute of Standards and Technology (NIST), European and Japanese partners, and Sandia National Laboratories (SNL). To capture a range of different HEAF events, Sandia has provided high-speed visible and IR videography from multiple angles during this series of experiments. One of the data products provided by Sandia is the combination and synchronization of infrared and visible data from the multiple cameras used in the tests. This multispectral fusion of information (visible, MWIR, and LWIR) allows the customer to visualize the tests and understand when different events happen over the 2- to 4-second duration of a test. The presentation will dissect three experiments and describe the events occurring during each. The presentation will compare the behavior of equipment containing aluminum components versus equipment containing copper or steel. Finally, data from a switchgear experiment will be presented to complement the bus duct data.
To decarbonize the energy sector, there are international efforts to displace carbon-based fuels with renewable alternatives, such as hydrogen. Storage and transportation of gaseous hydrogen are key components of large-scale deployment of carbon-neutral energy technologies, especially storage at scale and transportation over long distances. Due to the high cost of deploying large-scale infrastructure, the existing pipeline network is a potential means of transporting blended natural gas-hydrogen fuels in the near term and carbon-free hydrogen in the future. Much of the existing infrastructure in North America was deployed prior to 1970, when greater variability existed in steel processing and joining techniques, often leading to microstructural inhomogeneities and hard spots, which are local regions of elevated hardness relative to the pipe or weld. Hard spots, particularly in older pipes and welds, are a known threat to structural integrity in the presence of hydrogen. High-strength materials are susceptible to hydrogen-assisted fracture, but the susceptibility of hard spots in otherwise low-strength materials (such as vintage pipelines) has not been systematically examined. Assessment of fracture performance of pipeline steels in gaseous hydrogen is a necessary step toward establishing an approach for structural integrity assessment of pipeline infrastructure for hydrogen service. This approach must include a comprehensive understanding of microstructural anomalies (such as hard spots), especially in vintage materials. In this study, the fracture resistance of pipeline steels is measured in gaseous hydrogen with a focus on high-strength materials and hardness limits established in common practice and in current pipeline codes (such as ASME B31.12). Elastic-plastic fracture toughness measurements were compared for several steel grades to identify the relationship between hardness and fracture resistance in gaseous hydrogen.
This article aims to discover unknown variables in a system through data analysis. The main idea is to use the time of data collection as a surrogate variable and to identify the unknown variables by modeling gradual and sudden changes in the data. We use Gaussian process modeling and a sparse representation of the sudden changes to efficiently estimate the large number of parameters in the proposed statistical model. The method is tested on a realistic dataset generated using a one-dimensional implementation of a Magnetized Liner Inertial Fusion (MagLIF) simulation model, and encouraging results are obtained.
As deep learning networks increase in size and performance, so do associated computational costs, approaching prohibitive levels. Dendrites offer powerful nonlinear "on-the-wire" computational capabilities, increasing the expressivity of the point neuron while preserving many of the advantages of spiking neural networks (SNNs). We seek to demonstrate the potential of dendritic computations by combining them with the low-power event-driven computation of SNNs for deep learning applications. To this end, we have developed a library that adds dendritic computation to SNNs within the PyTorch framework, enabling complex deep learning networks that still retain the low power advantages of SNNs. Our library leverages a dendrite CMOS hardware model to inform the software model, which enables nonlinear computation integrated with snnTorch at scale. By leveraging dendrites in a deep learning framework, we examine the capabilities of dendrites via coincidence detection and comparison in a machine learning task with an SNN. Finally, we discuss potential deep learning applications in the context of current state-of-the-art deep learning methods and energy-efficient neuromorphic hardware.
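As a hedged illustration of the coincidence-detection idea (the paper's dendrite library is not reproduced here; this generic two-branch sketch uses only the public snnTorch API):

import torch
import snntorch as snn

class DendriticCoincidence(torch.nn.Module):
    """Two dendritic branches with nonlinear activation feeding one LIF soma.

    Illustrative only: each branch sums its synapses and applies a nonlinear
    'on-the-wire' function; the soma integrates the branch outputs.
    """
    def __init__(self, n_in):
        super().__init__()
        self.branch_a = torch.nn.Linear(n_in, 1)
        self.branch_b = torch.nn.Linear(n_in, 1)
        self.soma = snn.Leaky(beta=0.9)

    def forward(self, x_a, x_b, mem):
        # Superlinear branch nonlinearity: weak inputs are suppressed,
        # while coincident strong inputs on both branches drive the soma.
        d_a = torch.relu(self.branch_a(x_a)) ** 2
        d_b = torch.relu(self.branch_b(x_b)) ** 2
        return self.soma(d_a + d_b, mem)

neuron = DendriticCoincidence(n_in=10)
mem = neuron.soma.init_leaky()
spk, mem = neuron(torch.rand(1, 10), torch.rand(1, 10), mem)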
Closed-loop geothermal systems (CLGSs) rely on circulation of a heat transfer fluid in a closed-loop design without penetrating the reservoir to extract subsurface heat and bring it to the surface. We developed and applied numerical models to study U-shaped and coaxial CLGSs in hot-dry-rock over a more comprehensive parameter space than has been studied before, including water and supercritical CO2 (sCO2) as working fluids. An economic analysis of each realization was performed to evaluate the levelized cost of heat (LCOH) for direct heating application and levelized cost of electricity (LCOE) for electrical power generation. The results of the parameter study, composed of 2.5 million simulations, combined with a plant and economic model comprise the backbone of a publicly accessible web application that can be used to query, analyze, and plot outlet states, thermal and mechanical power output, and LCOH/LCOE, thereby facilitating feasibility studies led by potential developers, geothermal scientists, or the general public (https://gdr.openei.org/submissions/1473). Our results indicate competitive LCOH can be achieved; however, competitive LCOE cannot be achieved without significant reductions in drilling costs. We also present a site-based case study for multi-lateral systems and discuss how our comprehensive single-lateral analyses can be applied to approximate multi-lateral CLGSs. Looking beyond hot-dry-rock, we detail CLGS studies in permeable wet rock, albeit for a more limited parameter space, indicating that reservoir permeability of greater than 250 mD is necessary to significantly improve CLGS power production, and that reservoir temperatures greater than 200 °C, achieved by going to greater depths (∼3–4 km), may significantly enhance power production.
The primary goal of any laboratory test is to expose the unit-under-test to conservative, realistic representations of a field environment. Satisfying this objective is not always straightforward due to laboratory equipment constraints. For vibration and shock tests performed on shakers, over-testing and unrealistic failures can result because the control is a base acceleration and mechanical shakers have nearly infinite impedance. Force limiting and response limiting are relatively standard practices to reduce over-test risks in random-vibration testing. Shaker controller software generally has response limiting as a built-in capability, and it is done without much user intervention since vibration control is a closed-loop process. Limiting in shaker shocks is done for the same reasons, but because the duration of a shock is only a few milliseconds, limiting is a pre-planned, user-in-the-loop process. Shaker shock response limiting has been used for at least 30 years at Sandia National Laboratories, but it seems to be little known or used in industry. The objective of this paper is to re-introduce response limiting for shaker shocks to the aerospace community. The process is demonstrated on the BARBECUE testbed.
Anelastic strain recovery, the process of measuring the time-dependent recovered strain after a core is cut at depth, was utilized to measure the in-situ stresses at the FORGE (Frontier Observatory for Research in Geothermal Energy) site in Milford, Utah. Core was collected from a region of well 16B at approximately 4860–4870 ft and instrumented with strain gages within 10 hours of being cut. The relaxation of the cores was measured for approximately one month, and analysis of the results showed that the principal stresses are slightly off-vertical and nearly equal in magnitude.
A Marx generator module from the decommissioned RITS pulsed power machine at Sandia National Labs was modified to operate in an existing setup at Texas Tech University, which will ultimately be used as a testbed for laser-triggered gas switching. The existing experimental setup at Texas Tech University consists of a large Marx tank, an oil-filled coaxial pulse forming line, an adjustable peaking gap, and a load section, along with various diagnostics. The setup was previously operated at a lower voltage than the new experiment, so electrostatic modeling was done to ensure viability and drive needed modifications. The oil tank will house the modified RITS Marx. This Marx contains half as many stages as the original RITS module and has an expected output of 1 MV. A trigger Marx generator consisting of 8 stages has been fabricated to trigger the RITS Marx. Charging and triggering of both Marx generators will be controlled through a fiber optic network. The output from the modified RITS Marx will be used to charge the oil-filled coaxial line acting as a low impedance pulse forming line (PFL). Once charged, the self-breaking peaking gap will close, allowing the compressed pulse to be released into the load section. For testing of the Marx module and PFL, a matched 10 Ω water load was fabricated; the output pulsewidth is 55 ns. Diagnostics include two capacitive voltage probes on either side of the peaking gap, a quarter-turn Rogowski coil for load current measurement, and a Pearson coil for calibration purposes.
Before residential photovoltaic (PV) systems are interconnected with the grid, various planning and impact studies are conducted on detailed models of the system to ensure safety and reliability are maintained. However, these model-based analyses can be time-consuming and error-prone, representing a potential bottleneck as the pace of PV installations accelerates. Data-driven tools and analyses provide an alternate pathway to supplement or replace their model-based counterparts. In this article, a data-driven algorithm is presented for assessing the thermal limitations of PV interconnections. Using input data from residential smart meters, and without any grid models or topology information, the algorithm can determine the nameplate capacity of the service transformer supplying those customers. The algorithm was tested on multiple datasets and predicted service transformer capacity with >98% accuracy, regardless of existing PV installations. This algorithm has various applications from model-free thermal impact analysis for hosting capacity studies to error detection and calibration of existing grid models.
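A heavily simplified sketch of the underlying intuition (the function, thresholds, and sizes below are hypothetical and do not reproduce the published algorithm's features or decision rule): aggregate the smart-meter demand behind a candidate transformer and map its peak coincident load to a standard nameplate size.

import numpy as np

# Standard single-phase service transformer nameplate sizes (kVA)
STANDARD_KVA = np.array([15.0, 25.0, 37.5, 50.0, 75.0, 100.0, 167.0])

def estimate_capacity(meter_kw, power_factor=0.95, max_loading=1.2):
    """Guess a transformer nameplate from per-customer smart-meter data.

    meter_kw:    2-D array (time steps x customers) of real power readings
    max_loading: assumed peak loading ratio relative to nameplate;
                 service transformers routinely run near or above
                 nameplate at peak, so this is a tunable assumption
    """
    coincident_kva = meter_kw.sum(axis=1).max() / power_factor
    nameplate_est = coincident_kva / max_loading
    idx = np.searchsorted(STANDARD_KVA, nameplate_est)
    return STANDARD_KVA[min(idx, len(STANDARD_KVA) - 1)]

# Example: 8 customers, 15-minute readings over one day (placeholder data)
rng = np.random.default_rng(1)
load = rng.uniform(0.2, 4.0, size=(96, 8))  # kW
print(estimate_capacity(load), "kVA")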
Disposal of commercial spent nuclear fuel in a geologic repository is studied. In situ heater experiments in underground research laboratories provide a realistic representation of subsurface behavior under disposal conditions. This study describes process model development and modeling analysis for a full-scale heater experiment in Opalinus Clay host rock. Results of thermal-hydrology simulations, solving coupled nonisothermal multiphase flow, are presented and compared with experimental data. The modeling results closely match the experimental data.
Deep neural networks (DNNs) achieve state-of-the-art performance in video anomaly detection. However, the usage of DNNs is limited in practice due to their computational overhead, generally requiring significant resources and specialized hardware. Further, despite recent progress, current evaluation criteria of video anomaly detection algorithms are flawed, preventing meaningful comparisons among algorithms. In response to these challenges, we propose (1) a compression-based technique referred to as Spatio-Temporal N-Gram Prediction by Partial Matching (STNG PPM) and (2) simple modifications to current evaluation criteria for improved interpretation and broader applicability across algorithms. STNG PPM does not require specialized hardware, has few parameters to tune, and is competitive with DNNs on multiple benchmark data sets in video anomaly detection.
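A minimal sketch of compression-based anomaly scoring (using zlib's deflate as a stand-in, since prediction by partial matching has no Python standard-library implementation; STNG PPM's spatio-temporal n-gram modeling is not reproduced here):

import zlib

def anomaly_score(train_bytes: bytes, test_bytes: bytes) -> float:
    """Conditional compression length of test data given training data.

    The worse the test data compresses in the context of the training
    data, the less it resembles 'normal' and the higher the score.
    """
    baseline = len(zlib.compress(train_bytes, 9))
    combined = len(zlib.compress(train_bytes + test_bytes, 9))
    return (combined - baseline) / max(len(test_bytes), 1)

normal = b"walk walk walk walk " * 200
print(anomaly_score(normal, b"walk walk walk walk "))  # low: seen before
print(anomaly_score(normal, b"run jump run jump!! "))  # higher: novel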
In this work, we evaluate the usefulness of nonsmooth basis functions for representing the periodic response of a nonlinear system subject to contact/impact behavior. As with sine and cosine basis functions for classical Fourier series, which have C∞ smoothness, nonsmooth counterparts with C0 smoothness are defined to develop a nonsmooth functional representation of the solution. Some properties of these basis functions are outlined, such as periodicity, derivatives, and orthogonality, which are useful for functional series applied via the Galerkin method. Least-squares fits of the classical Fourier series and nonsmooth basis functions are presented and compared using goodness-of-fit metrics for time histories from vibro-impact systems with varying contact stiffnesses. This formulation has the potential to significantly reduce the computational cost of harmonic balance solvers for nonsmooth dynamical systems. Rather than requiring many harmonics to capture a system response using classical, smooth Fourier terms, the frequency domain discretization could be captured by a combination of a finite Fourier series supplemented with nonsmooth basis functions to improve convergence of the solution for contact-impact problems.
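One classical C0 pair of this kind (a sketch in the style of nonsmooth time transformations; the paper's exact basis definitions may differ) is the unit triangle wave and its piecewise-constant derivative:

\tau(t) = \frac{2}{\pi}\,\arcsin\!\left(\sin\frac{\pi t}{2}\right), \qquad
e(t) = \dot{\tau}(t) = \operatorname{sgn}\!\left(\cos\frac{\pi t}{2}\right),

both of period 4, with e^2 = 1 almost everywhere. The corners of \tau at t = 2k + 1 mirror the nonsmoothness of impact events, so a short series built from \{\tau, e\} can capture responses that a smooth Fourier basis would need many harmonics to resolve.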
Gallium nitride (GaN)-based nanoscale vacuum electron devices, which offer advantages of both traditional vacuum tube operation and modern solid-state technology, are attractive for radiation-hard applications due to the inherent radiation hardness of vacuum electron devices and the high radiation tolerance of GaN. Here, we investigate the radiation hardness of top-down fabricated n-GaN nanoscale vacuum electron diodes (NVEDs) irradiated with 2.5-MeV protons (p) at various doses. We observe a slight decrease in forward current and a slight increase in reverse leakage current as a function of cumulative proton fluence due to a dopant compensation effect. The NVEDs overall show excellent radiation hardness, with no major change in electrical characteristics up to a cumulative fluence of 5 × 10¹⁴ p/cm², which, to our knowledge, is significantly higher than for existing state-of-the-art radiation-hardened devices. The results show promise for a new class of GaN-based nanoscale vacuum electron devices for use in harsh radiation environments and space applications.
In this paper, we develop a nested chi-squared likelihood ratio test for selecting among shrinkage-regularized covariance estimators for background modeling in hyperspectral imagery. Critical to many target and anomaly detection algorithms is the modeling and estimation of the underlying background signal present in the data. This is especially important in hyperspectral imagery, wherein the signals of interest often represent only a small fraction of the observed variance, for example when targets of interest are subpixel. This background is often modeled by a local or global multivariate Gaussian distribution, which necessitates estimating a covariance matrix. Maximum likelihood estimation of this matrix often overfits the available data, particularly in high dimensional settings such as hyperspectral imagery, yielding subpar detection results. Instead, shrinkage estimators are often used to regularize the estimate. Shrinkage estimators linearly combine the overfit covariance with an underfit shrinkage target, thereby producing a well-fit estimator. These estimators introduce a shrinkage parameter, which controls the relative weighting between the covariance and shrinkage target. There have been many proposed methods for setting this parameter, but comparing these methods and shrinkage values is often performed with a cross-validation procedure, which can be computationally expensive and highly sample inefficient. Drawing from Bayesian regression methods, we compute the degrees of freedom of a covariance estimate using eigenvalue thresholding and employ a nested chi-squared likelihood ratio test for comparing estimators. This likelihood ratio test requires no cross-validation procedure and enables direct comparison of different shrinkage estimates, which is computationally efficient.
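A minimal sketch of the estimator family being compared (linear shrinkage toward a scaled-identity target) and the skeleton of a nested chi-squared comparison; the degrees-of-freedom value below is a placeholder, whereas the paper computes effective degrees of freedom via eigenvalue thresholding:

import numpy as np
from scipy.stats import chi2

def shrinkage_cov(X, alpha):
    """Linear shrinkage: (1 - alpha) * sample covariance + alpha * target."""
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target

def gaussian_loglik(X, cov):
    """Gaussian log-likelihood of rows of X under N(mean(X), cov)."""
    Xc = X - X.mean(axis=0)
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum("ij,jk,ik->i", Xc, np.linalg.inv(cov), Xc)
    return -0.5 * (len(X) * (logdet + cov.shape[0] * np.log(2 * np.pi))
                   + maha.sum())

# Nested chi-squared comparison of two candidate shrinkage levels: the
# less-shrunk model plays the role of the 'larger' (more flexible) model.
X = np.random.default_rng(0).normal(size=(200, 20))
ll_small = gaussian_loglik(X, shrinkage_cov(X, 0.5))
ll_large = gaussian_loglik(X, shrinkage_cov(X, 0.1))
d_df = 30  # placeholder: difference in effective degrees of freedom
p_value = chi2.sf(2 * (ll_large - ll_small), d_df)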
Over the past few years, advancements in closed-loop geothermal systems (CLGS), also called advanced geothermal systems (AGS), have sparked a renewed interest in these types of designs. CLGS have certain advantages over traditional and enhanced geothermal systems (EGS), including not requiring in-situ reservoir permeability, conservation of the circulating fluid, and allowing for different fluids, including working fluids directly driving a turbine at the surface. CLGS may be attractive in environments where water resources are limited, rock contaminants must be avoided, and stimulation treatments are not available (e.g., due to regulatory or technical reasons). Despite these advantages, CLGS have some challenges, including limited surface area for heat transfer and requiring long wellbores and laterals to obtain multi-MW output in conduction-only reservoirs. To date, CLGS have been investigated primarily in conduction-only systems. In this paper, we explore the impact of both forced and natural convection on the levels of heat extraction with a CLGS deployed in a hot wet rock reservoir. We bound the potential benefits of convection by investigating liquid reservoirs over a range of natural and forced convective coefficients. Additionally, we investigate the effects of permeability, porosity, and geothermal temperature gradient in the reservoir on CLGS outputs. Reservoir simulations indicate that reservoir permeabilities of at least ~100 mD are required for natural convection to increase the heat output with respect to a conduction-only scenario. The impact increases with increasing reservoir temperature. When subject to a forced convection flow field, Darcy velocities of at least 10⁻⁷ m/s are required to obtain an increase in heat output.
Multiple scattering is a common phenomenon in acoustic media that arises from the interaction of the acoustic field with a network of scatterers. This mechanism is dominant in problems such as the design and simulation of acoustic metamaterial structures, which are often used to achieve acoustic control for sound isolation and remote sensing. In this study, we present a physics-informed neural network (PINN) capable of simulating the propagation of acoustic waves in an infinite domain in the presence of multiple rigid scatterers. This approach integrates a deep neural network architecture with the mathematical description of the physical problem in order to obtain predictions of the acoustic field that are consistent with both governing equations and boundary conditions. The predictions from the PINN are compared with those from a commercial finite element software model in order to assess the performance of the method.
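A compact sketch of the PINN ingredients for a 2D Helmholtz problem (single network and random interior collocation points; the rigid-scatterer boundary and radiation conditions of the multi-scatterer setup are indicated only as comments):

import torch

class Net(torch.nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1))
    def forward(self, xy):
        return self.net(xy)

def helmholtz_residual(model, xy, k=2.0):
    """PDE residual u_xx + u_yy + k^2 u at collocation points xy."""
    xy = xy.requires_grad_(True)
    u = model(xy)
    grads = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(grads[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grads[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return u_xx + u_yy + k**2 * u.squeeze()

model = Net()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xy_dom = torch.rand(1024, 2) * 4 - 2  # interior collocation points
for step in range(100):
    opt.zero_grad()
    loss_pde = helmholtz_residual(model, xy_dom).pow(2).mean()
    # Rigid-scatterer (Neumann) and radiation conditions would add
    # analogous penalty terms over boundary collocation points.
    loss_pde.backward()
    opt.step()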
We demonstrate evanescently coupled waveguide-integrated silicon photonic avalanche photodiodes designed for single photon detection for quantum applications. Simulation results, high responsivity, and record-low dark currents for evanescently coupled devices are presented.
Resonant plate shock testing techniques have been used for mechanical shock testing at Sandia for several decades. A mechanical shock qualification test is often done by performing three separate uniaxial tests on a resonant plate to simulate one shock event. Multi-axis mechanical shock tests, in which shock specifications are simultaneously met in different directions during a single shock test event performed in the lab, are not always repeatable and depend greatly on the fixture used during testing. This chapter provides insights into various designs of a concept fixture, comprising both a resonant plate and an angle bracket, used for multi-axis shock testing from a modeling and simulation point of view based on the results of finite element modal analysis. Initial model validation and testing show substantial excitation of the system under test as the fundamental modes drive the response in all three directions. The response also shows that higher order modes influence the system, the axial and transverse responses are highly coupled, and tunability is difficult to achieve. By varying the material properties, changing thicknesses, adding masses, and moving the location of the fixture on the resonant plate, the response can be changed significantly. The goal of this work is to identify the parameters that have the greatest influence on the response of the system when using the angle bracket fixture for a mechanical shock test, with the intent of making the system tunable.
The Rotor Aerodynamics, Aeroelastics, and Wake (RAAW) project's main objective was collecting data for validation of aerodynamic and aeroelastic codes for large, flexible rotors. These data come from scanning lidars of the inflow and wake, a met tower, a profiling lidar, blade deflection from photogrammetry, turbine SCADA data (including root bending loads), and hub-mounted SpinnerLidar inflow measurements. The goal of the present work is to analyze various methods to align the SpinnerLidar inflow data in time and space with individual blade loading. These methods provide a way of analyzing turbine response while estimating the flowfield at each blade, and offer a path toward improving understanding of turbine response using field data in real time, not just from simulations. The hub-mounted SpinnerLidar measures the inflow in the rotor frame, meaning that the locations of the blades relative to the measurement pattern do not change. The present work outlines methods for correlating the SpinnerLidar inflow measurements with root bending loads in the rotor frame of reference, accounting for changes in both wind speed and rotor speed from the measurement location one diameter upstream to each blade.
We investigate the kinetics and report the time-resolved concentrations of key chemical species in the oxidation of tetrahydrofuran (THF) at 7500 torr and 450-675 K. Experiments are carried out using high-pressure multiplexed photoionization mass spectrometry (MPIMS) combined with tunable vacuum ultraviolet radiation from the Berkeley Lab Advanced Light Source. Intermediates and products are quantified using reference photoionization (PI) cross sections, when available, and constrained by a global carbon balance tracking approach at all experimental temperatures simultaneously for the species without reference cross sections. From carbon balancing, we determine time-resolved concentrations for the ROO˙ and ˙OOQOOH radical intermediates, butanedial, and the combined concentration of ketohydroperoxide (KHP) and unsaturated hydroperoxide (UHP) products stemming from the ˙QOOH + O2 reaction. Furthermore, we quantify a product that we tentatively assign as fumaraldehyde, which arises from UHP decomposition via H2O or ˙OH + H loss. The experimentally derived species concentrations are compared with model predictions using the most recent literature THF oxidation mechanism of Fenard et al. (Combust. Flame, 2018, 191, 252-269). Our results indicate that the literature mechanism significantly overestimates THF consumption and the UHP + KHP concentration at our conditions. The model predictions are sensitive to the rate coefficient for the ROO˙ isomerization to ˙QOOH, which is the gateway for radical chain propagating and branching pathways. Comparisons with our recent results for cyclopentane (Demireva et al., Combust. Flame, 2023, 257, 112506) provide insights into the effect of the ether group on reactivity and highlight the need to determine accurate rate coefficients for ROO˙ isomerization and subsequent reactions.
The MACCS code was created by Sandia National Laboratories for the U.S. Nuclear Regulatory Commission and has been used for emergency planning, level 3 probabilistic risk assessments, consequence analyses, and other scientific and regulatory research for decades. Specializing in modeling the transport of nuclear material into the environment, MACCS accounts for atmospheric transport and dispersion, wet and dry deposition, probabilistic treatment of meteorology, exposure pathways, varying protective actions for the emergency, intermediate, and long-term phases, dosimetry, health effects (including but not limited to population dose, acute radiation injury, and increased cancer risk), and economic impacts. Routine updates and recent enhancements to the MACCS code, such as the inclusion of a higher-fidelity atmospheric transport and dispersion model, the addition of a new economic impact model, and the application of nearfield modeling, have continuously increased the code's capabilities in consequence analysis. Additionally, investigations of MACCS capabilities for advanced reactor applications have shown that MACCS can provide realistic and informative risk assessments for the new generation of reactor designs. Even so, areas of improvement as well as gaps have been identified that, if resolved, could increase the usefulness of MACCS in any application regarding a release of nuclear material into the environment.
In general, multiple-input/multiple-output (MIMO) vibration testing utilizes a response-controlled test methodology where specifications are in the form of response quantities at various locations distributed on the device under test (DUT). There are some advantages to this approach, namely that the DUT response can be measured in some field environment and used directly as MIMO specifications for subsequent MIMO vibration tests on similar DUTs. However, in some cases it may be advantageous to control the MIMO vibration test at the inputs rather than the responses. One such case is free-flight environments, where the DUT is unconstrained and all loads come from aerodynamic pressures. In this case, the force-controlled test method is much more robust to system changes, such as unit-to-unit variability, than a response-controlled test method. This could make force-controlled MIMO test specifications more generalizable and easier to derive. This is akin to transfer path analysis, where pseudo-forces are applicable in special circumstances. This paper explores the force-controlled test concept and demonstrates it with a numerical example, comparing its performance under various conditions with that of the traditional response-controlled test method.
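To make the control concept concrete, the following minimal sketch computes least-squares drive spectra that realize a force specification, assuming a measured frequency response function (FRF) matrix from shaker drives to force channels. All dimensions and data here are placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions: 4 shaker drives, 6 control force channels.
rng = np.random.default_rng(0)
n_freq, n_ctrl, n_drive = 128, 6, 4

# H[k] maps drive voltages to control forces at frequency line k
# (random placeholders standing in for a measured FRF matrix).
H = (rng.standard_normal((n_freq, n_ctrl, n_drive))
     + 1j * rng.standard_normal((n_freq, n_ctrl, n_drive)))

# Force-controlled specification: desired force spectrum at each channel.
F_spec = np.ones((n_freq, n_ctrl), dtype=complex)

# Least-squares drives per frequency line via the Moore-Penrose pseudo-inverse.
drives = np.stack([np.linalg.pinv(H[k]) @ F_spec[k] for k in range(n_freq)])

# Predicted achieved forces; the residual quantifies specification error.
F_pred = np.einsum('kij,kj->ki', H, drives)
err = np.linalg.norm(F_pred - F_spec, axis=1) / np.linalg.norm(F_spec, axis=1)
print('worst-case relative specification error:', err.max())
```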
Spatial navigation involves the formation of coherent representations of a map-like space while simultaneously tracking current location in a primarily unsupervised manner. Despite a plethora of neurophysiological experiments revealing spatially tuned neurons across the mammalian neocortex and subcortical structures, it remains unclear how such representations are acquired in the absence of explicit allocentric targets. Drawing upon the concept of predictive learning, we utilize a biologically plausible learning rule that combines sensory-driven observations with internally driven expectations and learns in a contrastive manner to better predict sensory information. The local and online nature of this approach is ideal for deployment to neuromorphic hardware for edge applications. We implement this learning rule in a network with the feedforward and feedback pathways known to be necessary for spatial navigation. After training, we find that the receptive fields of the modeled units resemble experimental findings, with allocentric and egocentric representations in the expected order along processing streams. These findings illustrate how a local and self-supervised learning method for predicting sensory information can extract latent structure from the environment.
The siting of nuclear waste is a process that requires consideration of public concerns. This report demonstrates the significant potential for natural language processing techniques to gain insights into public narratives around “nuclear waste.” Specifically, the report highlights that the general discourse regarding “nuclear waste” within the news media has fluctuated in prevalence compared to “nuclear” topics broadly over recent years, with commonly mentioned entities reflecting a limited variety of geographies and stakeholders. General sentiments within the “nuclear waste” articles appear to use neutral language, suggesting that a scientific or “facts-only” framing of “waste”-related issues dominates coverage; however, the exact nuances should be further evaluated. The implications of a number of these insights about how nuclear waste is framed in traditional media (e.g., regarding emerging technologies, historical events, and specific organizations) are discussed. This report lays the groundwork for larger, more systematic research using, for example, transformer-based techniques and covariance analysis to better understand relationships among “nuclear waste” and other nuclear topics, sentiments of specific entities, and patterns across space and time (including in a particular region). By identifying priorities and knowledge needs, these data-driven methods can complement and inform engagement strategies that promote dialogue and mutual learning regarding nuclear waste.
The impact of more extreme climate conditions under global warming on soil organic carbon (SOC) dynamics remains unquantified. Here we estimate the response of SOC to climate extreme shifts under 1.5 °C warming by combining a space-for-time substitution approach and global SOC measurements (0–30 cm soil). Most extremes (22 out of 33 assessed extreme types) exacerbate SOC loss under warming globally, but their effects vary among ecosystems. Only decreasing duration of cold spells exerts consistent positive effects, and increasing extreme wet days exerts negative effects in all ecosystems. Temperate grasslands and croplands negatively respond to most extremes, while positive responses are dominant in temperate and boreal forests and deserts. In tundra, 21 extremes show neutral effects, but 11 extremes show negative effects with stronger magnitude than in other ecosystems. Our results reveal distinct, biome-specific effects of climate extremes on SOC dynamics, promoting more reliable SOC projection under climate change.
Multifidelity emulators have found wide-ranging applications in both forward and inverse problems within the computational sciences. Thanks to recent advancements in neural architectures, they provide significant flexibility for integrating information from multiple models while retaining substantial efficiency advantages over single-fidelity methods. In this context, existing neural multifidelity emulators operate by separately resolving the linear and nonlinear correlation between equally parameterized high- and low-fidelity approximants. However, many complex model ensembles in science and engineering applications exhibit only a limited degree of linear correlation between models. In such cases, the effectiveness of these approaches is impeded, i.e., larger datasets are needed to obtain satisfactory predictions. In this work, we present a general strategy that seeks to maximize the linear correlation between two models through input encoding. We showcase the effectiveness of our approach through six numerical test problems, and we show the ability of the proposed multifidelity emulator to accurately recover the high-fidelity model response under an increasing number of quasi-random samples. In our experiments, we show that input encoding in many cases produces emulators with significantly simpler nonlinear correlations. Finally, we demonstrate how the input encoding can be leveraged to facilitate the fusion of information between low- and high-fidelity models with dissimilar parametrization, i.e., situations in which the number of inputs differs between the low- and high-fidelity models.
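A toy version of the idea, assuming an affine input encoding and a synthetic one-dimensional model pair (all choices here are illustrative placeholders rather than the paper's method), optimizes the encoding parameters to maximize the linear correlation between the low- and high-fidelity outputs:

```python
import numpy as np
from scipy.optimize import minimize

# Toy model pair (placeholders): the low-fidelity model is a distorted version
# of the high-fidelity one, so an affine input encoding can re-align them.
f_hi = lambda x: np.sin(3.0 * x) + 0.5 * x
f_lo = lambda x: np.sin(3.0 * (0.8 * x + 0.3))

x = np.linspace(0.0, 2.0, 200)
y_hi = f_hi(x)

def neg_abs_corr(theta):
    a, b = theta
    y_lo = f_lo(a * x + b)   # low-fidelity model evaluated at encoded inputs
    return -abs(np.corrcoef(y_lo, y_hi)[0, 1])

res = minimize(neg_abs_corr, x0=[1.0, 0.0], method='Nelder-Mead')
print('encoding (a, b):', res.x, ' |corr|:', -res.fun)
# A simple map from encoded-LF output to HF output can then be fit with far
# fewer HF samples than a single-fidelity surrogate would need.
```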
For an energy system to be truly equitable, it should provide affordable and reliable energy services to disadvantaged and underserved populations. Disadvantaged communities often face a combination of economic, social, health, and environmental burdens and may be geographically isolated (e.g., rural communities), which systematically limits their opportunity to fully participate in aspects of economic, social, and civic life.
Operation and control of a galvanically isolated three-phase AC-AC converter for solid-state transformer applications is described. The converter regulates bidirectional power transfer by phase shifting voltages applied on either side of a high-frequency transformer. The circuit structure and control system are symmetrical around the transformer. Each side operates independently, enabling conversion between AC systems with differing voltage magnitude, phase angle, and frequency. This is achieved in a single conversion stage with low component count and high efficiency. The modulation strategy is discussed in detail and expressions describing the relationship between phase shift and power transfer are presented. Converter operation is demonstrated in a 3 kW hardware prototype.
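For orientation, the canonical dual-active-bridge phase-shift relation gives the flavor of how power transfer depends on the phase shift across a high-frequency link; this is a standard textbook result, not the paper's derived expressions, and the ratings below are assumed values rather than the prototype's:

```python
import numpy as np

# Canonical dual-active-bridge-style phase-shift power relation (illustrative):
# P = V1 * V2 * phi * (pi - |phi|) / (2 * pi^2 * f_sw * L)
def dab_power(v1, v2, phi, f_sw, L):
    """Power transferred across the high-frequency link for phase shift phi [rad]."""
    return v1 * v2 * phi * (np.pi - np.abs(phi)) / (2 * np.pi**2 * f_sw * L)

# Hypothetical operating point: 400 V links, 50 kHz, 60 uH leakage inductance.
phi = np.linspace(-np.pi / 2, np.pi / 2, 181)
p = dab_power(400.0, 400.0, phi, 50e3, 60e-6)
print('max power [kW]:', p.max() / 1e3)   # peak occurs at phi = pi/2
```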
The ability to accurately predict the structure and dynamics of pool fires using computational simulations is of great interest in a wide variety of applications, including accidental and wildland fires. However, the presence of physical processes spanning a broad range of spatial and temporal scales poses a significant challenge for simulations of such fires, particularly at conditions near the transition between laminar and turbulent flow. In this study, we examine the transition to turbulence in methane pool fires using high-resolution simulations with multi-step finite rate chemistry, where adaptive mesh refinement (AMR) is used to directly resolve small-scale flow phenomena. We perform three simulations of methane pool fires, each with increasing diameter, corresponding to increasing inlet Reynolds and Richardson numbers. As the diameter increases, the flow transitions from organized vortex roll-up via the puffing instability to much more chaotic mixing associated with finger formation along the shear layer and core collapse near the inlet. These effects combine to create additional mixing close to the inlet, thereby enhancing fuel consumption and causing more rapid acceleration of the fluid above the pool. We also make comparisons between the transition to turbulence and core collapse in the present pool fires and in inert helium plumes, which are often used as surrogates for the study of buoyant reacting flows.
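As a rough guide to the regimes involved, the inlet Reynolds and Richardson numbers both grow with pool diameter, as in the following sketch; the velocity, kinematic viscosity, and diameters are illustrative assumptions, not the study's values:

```python
import numpy as np

def pool_fire_numbers(D, U, nu=1.6e-5, g=9.81):
    """Inlet Reynolds and Richardson numbers for pool diameter D [m] and
    fuel inlet velocity U [m/s]; nu is an assumed kinematic viscosity."""
    Re = U * D / nu          # inertial vs. viscous effects
    Ri = g * D / U**2        # buoyancy vs. inlet momentum
    return Re, Ri

# Three hypothetical diameters at a fixed inlet velocity.
for D in (0.1, 0.3, 1.0):
    Re, Ri = pool_fire_numbers(D, U=0.1)
    print(f'D = {D:4.1f} m  ->  Re = {Re:9.0f},  Ri = {Ri:8.0f}')
```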
Rotational testbeds are ubiquitous in the test and evaluation of inertial sensors and systems. However, the use of rotational testbeds is typically restricted to static states employed over long integration windows to allow for data aggregation. These methods ignore the transitions between states, and data aggregation masks potentially useful signals. In this paper, we discuss the development of modular equations for describing the inertial inputs to a test sensor on any rotational testbed. With our equations implemented in software, specific force and angular rates can be computed from idealized table motion or from measured encoder data. Results are presented using both simulated and measured data; the measured data were acquired using a three-axis rate table and a MEMS IMU. The experimental results validate our model equations and demonstrate the benefits of modeling sensor inputs at the sensor rates to compensate for testbed errors.
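A minimal sketch of the kind of kinematics such equations encode, assuming a rigid mount, a single lever arm, and an ideal table (the paper's modular formulation is more general):

```python
import numpy as np

def sensor_inputs(omega, omega_dot, r, g_nav, C_tn):
    """Ideal inertial inputs at a sensor mounted with lever arm r on a rate table.

    omega, omega_dot : table angular rate / acceleration (3-vectors, table frame)
    r                : sensor position relative to the rotation center [m]
    g_nav            : gravity in the navigation frame [m/s^2]
    C_tn             : rotation matrix from navigation to table frame
    """
    # Angular rate sensed by the gyros is the table rate (rigid mount).
    w = omega
    # Acceleration of the sensor point: Euler + centripetal terms.
    a = np.cross(omega_dot, r) + np.cross(omega, np.cross(omega, r))
    # Specific force = kinematic acceleration minus gravity, in the table frame.
    f = a - C_tn @ g_nav
    return f, w

# Example: 90 deg/s constant rate about z, 10 cm lever arm along x.
omega = np.array([0.0, 0.0, np.deg2rad(90.0)])
f, w = sensor_inputs(omega, np.zeros(3), np.array([0.1, 0.0, 0.0]),
                     np.array([0.0, 0.0, -9.80665]), np.eye(3))
print(f, w)   # centripetal pull along -x, gravity reaction along +z
```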
A single Synthetic Aperture Radar (SAR) image is a two-dimensional projection of a three-dimensional scene, with very limited ability to estimate surface topography. However, multiple SAR images collected from suitably different geometries may be compared using multilateration calculations to estimate characteristics of the missing dimension. The ability to employ effective multilateration algorithms is highly dependent on the geometry of the data collections and can be cast as a least-squares exercise. A measure of Dilution of Precision (DOP) can be used to compare the relative merits of various collection geometries.
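The least-squares framing admits a compact DOP computation: form a geometry matrix from the linearized observation directions and take the square root of the trace of the resulting solution covariance. A sketch with hypothetical geometries:

```python
import numpy as np

def dop(geometry):
    """Dilution of precision for a least-squares multilateration solve.

    geometry : (m, n) matrix whose rows are linearized observation directions
               (e.g., unit vectors from the scene point toward each collection).
    """
    G = np.asarray(geometry, dtype=float)
    cov = np.linalg.inv(G.T @ G)   # solution covariance up to measurement noise
    return np.sqrt(np.trace(cov))

# Two hypothetical collection sets: well-separated vs. nearly collinear looks.
good = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.577, 0.577, 0.577]])
poor = np.array([[1, 0.00, 0.00], [1, 0.02, 0.00], [1, 0.00, 0.02], [1, 0.02, 0.02]])
print('good DOP:', dop(good))   # small: geometry constrains all axes
print('poor DOP:', dop(poor))   # large: nearly collinear looks dilute precision
```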
Network interface controllers (NICs) with general-purpose compute capabilities ('SmartNICs') present an opportunity for reducing host application overheads by offloading non-critical tasks to the NIC. In addition to moving computation, offloading requires that associated data also be transferred to the NIC. To meet this need, we introduce a high-performance, general-purpose data movement service that facilitates the offloading of tasks to SmartNICs: the SmartNIC Data Movement Service (SDMS). SDMS provides near-line-rate transfer bandwidths between the host and NIC. Moreover, SDMS's In-transit Data Placement (IDP) feature can reduce (or even eliminate) the cost of serializing data on the NIC by performing the necessary data formatting during the transfer. To illustrate these capabilities, we provide an in-depth case study using SDMS to offload data management operations related to Apache Arrow, a popular data format standard. For single-column tables, SDMS can achieve more than 87% of baseline throughput for data buffers that are 128 KiB or larger (and more than 95% of baseline throughput for buffers that are 1 MiB or larger) while also nearly eliminating the host and SmartNIC overhead associated with Arrow operations.
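For context on the serialization cost that IDP targets, the following small sketch (not SDMS code; it uses the standard pyarrow library, and the buffer size is an assumption) times host-side serialization of a single-column Arrow table via the Arrow IPC stream format:

```python
import time
import pyarrow as pa

# Build a single-column Arrow table, loosely mirroring the single-column
# benchmark described above (the size is an illustrative assumption).
n = 1 << 20   # ~8 MiB of int64 values
table = pa.table({'col': pa.array(range(n), type=pa.int64())})

# Serialize via the Arrow IPC stream format and time it.
t0 = time.perf_counter()
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
buf = sink.getvalue()
dt = time.perf_counter() - t0
print(f'{buf.size / 2**20:.1f} MiB serialized in {dt * 1e3:.2f} ms')
```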
The development of multi-axis force sensing capabilities in elastomeric materials has enabled new types of human motion measurement with many potential applications. In this work, we present a new soft insole that enables mobile measurement of ground reaction forces (GRFs) outside of a laboratory setting. This insole is based on hybrid shear and normal force detecting (SAND) tactile elements (taxels) consisting of optical sensors optimized for shear sensing and piezoresistive pressure sensors dedicated to normal force measurement. We develop polynomial regression and deep neural network (DNN) GRF prediction models and compare their performance to ground-truth force plate data during two walking experiments. Utilizing a 4-layer DNN, we demonstrate accurate prediction of the anterior-posterior (AP), medial-lateral (ML), and vertical components of the GRF with normalized mean absolute errors (NMAE) of less than 5.1%, 4.1%, and 4.5%, respectively. We also demonstrate the durability of the hybrid SAND insole construction through more than 20,000 cycles of use.
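A minimal stand-in for such a DNN regression pipeline, assuming synthetic taxel data and hidden-layer sizes (the abstract specifies only a 4-layer DNN, so everything else here is an assumption):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: 16 taxel channels -> 3 GRF components (AP, ML, vertical).
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 16))                    # insole taxel readings
W = rng.standard_normal((16, 3))
y = np.tanh(X @ W) + 0.01 * rng.standard_normal((5000, 3))

# Four hidden layers, mirroring the 4-layer DNN (layer sizes are assumptions).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64, 32, 16), max_iter=500, random_state=0),
)
model.fit(X[:4000], y[:4000])

# Normalized mean absolute error per GRF axis, analogous to the NMAE metric.
pred = model.predict(X[4000:])
nmae = np.mean(np.abs(pred - y[4000:]), axis=0) / (y[4000:].max(0) - y[4000:].min(0))
print('NMAE (AP, ML, vertical):', nmae)
```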
We demonstrate high-efficiency emission at wavelengths longer than 540 nm from InGaN quantum wells regrown on periodic arrays of GaN nanostructures and explore their incorporation into nanophotonic resonators for semiconductor laser development.
We use complete polarization tomography of photon pairs generated in semiconductor metasurfaces via spontaneous parametric down-conversion to show how bound states in the continuum resonances affect the polarization state of the emitted photons.
We present a materials study of AlGaInP grown on GaAs leveraging deep-level optical spectroscopy and time-resolved photoluminescence. Our materials may serve as the basis for wide-bandgap analogs of silicon photomultipliers optimized for short-wavelength sensing.
Visualization of flow structures within post-detonation fireballs has been performed for benchmark validation of numerical simulations. Custom-pressed PETN explosives with a 12-mm-diameter hemispherical form factor were used to produce a spherically symmetric post-detonation flow with low soot yield. Hydroxyl-radical planar laser-induced fluorescence (OH-PLIF) was employed to visualize the structure from approximately 10 μs to 35 μs after shock breakout from the explosive pellet. Fireball simulations were performed using the HyBurn Computational Fluid Dynamics (CFD) package. Experimental OH-PLIF results were compared to synthetic OH-PLIF from post-processing of the CFD simulations. From this comparison, CFD is shown to replicate much of the flow structure observed in the experiments, while revealing potential differences in turbulent length scales and OH kinetics. The results provide a significant advancement in experimental resolution of these harsh turbulent combustion environments and validate physical models thereof.
Dendrites enable neurons to perform nonlinear operations. Existing silicon dendrite circuits sufficiently model passive and active characteristics but do not exploit shunting inhibition as an active mechanism. We present a dendrite circuit implemented on a reconfigurable analog platform that uses active inhibitory conductance signals to modulate the circuit's membrane potential. We explore the potential use of this circuit for direction selectivity by emulating recent observations demonstrating a role for shunting inhibition in a directionally selective Drosophila (fruit fly) neuron.
With the number of neuromorphic tools and frameworks growing, we recognize a need to increase interoperability within our field. As an illustration of this, we explore linking two independently constructed tools. Specifically, we detail the construction of an execution backend based on STACS (Simulation Tool for Asynchronous Cortical Streams) for the Fugu spiking neural algorithms framework. STACS extends the computational scope of Fugu, enabling fast simulation of large-scale neural networks. Combining these two tools is shown to be mutually beneficial, ultimately enabling more functionality than either tool on its own. We discuss design considerations, including recognizing the advantages of straightforward standards. Further, we provide benchmark results showing drastic improvements in execution time.
Tabulated chemistry models are widely used to simulate large-scale turbulent fires in applications including energy generation and fire safety. Tabulation via piecewise Cartesian interpolation suffers from the curse of dimensionality, leading to a prohibitive exponential growth in parameters and memory usage as more dimensions are considered. Artificial neural networks (ANNs) have attracted attention for constructing surrogates for chemistry models due to their ability to perform high-dimensional approximation. However, due to well-known pathologies in which training realizes suboptimal local minima, in practice they do not converge and provide unreliable accuracy. Partition of unity networks (POUnets) are a recently introduced family of ANNs that preserve notions of convergence while performing high-dimensional approximation, discovering a mesh-free partition of space that may be used to perform optimal polynomial approximation. We assess their performance with respect to accuracy and model complexity in reconstructing unstructured flamelet data representative of nonadiabatic pool fire models. Our results show that POUnets can provide the desirable accuracy of classical spline-based interpolants with the low memory footprint of traditional ANNs while converging faster to significantly lower errors than ANNs. For example, we observe POUnets obtaining target accuracies in two dimensions with 40 to 50 times less memory and roughly double the compression in three dimensions. We also address the practical matter of efficiently training accurate POUnets by studying convergence over key hyperparameters, the impact of partition/basis formulation, and the sensitivity to initialization.
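To fix ideas, a bare-bones POUnet evaluation in one dimension: softmax-normalized partition functions multiply local polynomials, and with the partitions held fixed the polynomial coefficients follow from a linear least-squares solve. All sizes and the target curve below are illustrative assumptions:

```python
import numpy as np

def pou_eval(x, centers, width, coeffs):
    """y(x) = sum_i phi_i(x) * p_i(x): softmax partitions times local polynomials."""
    logits = -((x[:, None] - centers[None, :]) / width) ** 2
    phi = np.exp(logits - logits.max(1, keepdims=True))
    phi /= phi.sum(1, keepdims=True)                    # partition of unity
    P = np.stack([np.ones_like(x), x, x**2], axis=1)    # local quadratic basis
    return np.einsum('ni,ij,nj->n', phi, coeffs, P)

# Partition parameters would normally be trained; here they are fixed, so the
# optimal polynomial coefficients reduce to a linear least-squares solve.
x = np.linspace(0, 1, 400)
y = np.exp(-5 * x) * np.sin(12 * x)                     # stand-in for tabulated data
centers, width = np.linspace(0, 1, 6), 0.15
logits = -((x[:, None] - centers[None, :]) / width) ** 2
phi = np.exp(logits - logits.max(1, keepdims=True))
phi /= phi.sum(1, keepdims=True)
P = np.stack([np.ones_like(x), x, x**2], axis=1)
A = (phi[:, :, None] * P[:, None, :]).reshape(len(x), -1)   # design matrix
coeffs = np.linalg.lstsq(A, y, rcond=None)[0].reshape(6, 3)
print('max abs error:', np.abs(pou_eval(x, centers, width, coeffs) - y).max())
```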
Sandia National Laboratories (SNL) has completed a comparative evaluation of three design assessment approaches for a 2-liter (2L) capacity containment vessel (CV) of a novel plutonium air transport (PAT) package designed to survive the hypothetical accident condition (HAC) test sequence defined in Title 10 of the United States (US) Code of Federal Regulations (CFR) Part 71.74(a), which includes a 129 meter per second (m/s) impact of the package into an essentially unyielding target. CVs for hazardous materials transportation packages certified in the US are typically designed per the requirements defined in the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (B&PVC) Section III Division 3 Subsection WB “Class TC Transportation Containments.” For accident conditions, the level D service limits and analysis approaches specified in paragraph WB-3224 are applicable. Data derived from finite element analyses of the 129 m/s impact of the 2L-PAT package were utilized to assess the adequacy of the CV design. Three different CV assessment approaches were investigated and compared: the first based on stress intensity limits defined in subparagraph WB-3224.2 for plastic analyses (the stress-based approach); the second based on strain limits defined in subparagraph WB-3224.3, subarticle WB-3700, and Section III Nonmandatory Appendix FF for the alternate strain-based acceptance criteria approach (the strain-based approach); and the third based on failure strain limits derived from a ductile fracture model with dependencies on the stress and strain state of the material and their histories (the Xue-Wierzbicki (X-W) failure-integral-based approach). This paper gives a brief overview of the 2L-PAT package design, describes the finite element model used to determine stresses and strains in the CV generated by the 129 m/s impact HAC, summarizes the three assessment approaches investigated, discusses the analyses that were performed and their results, and provides a comparison between the outcomes of the three assessment approaches.
A new Adaptive Mesh Refinement (AMR) keyword was added to the CTH hydrocode developed at Sandia National Laboratories (SNL). The new indicator keyword, "ratecycle", allows the user to specify the minimum number of computational cycles before an AMR block is allowed to be unrefined. This option is designed to allow the analyst to control how quickly a block is unrefined, to avoid introducing anomalous waves into the solution due to information propagating across mesh resolution changes. For example, in reactive flow simulations it is often desirable to accurately capture the expansion region behind the reaction front. The effect of this new option was examined using the XHVRB model for XTX8003 to model the propagation of the detonation wave in explosives in small channels, and also for a simpler explosive model driving a steel case. The effect of this new option on computational cost was also examined.
Characterizing the shielding effectiveness (SE) of enclosures is important in aerospace, military, and consumer applications. Direct SE measurement of an enclosure or chassis may be considered an exact characterization, but there are several sources of possible variability in such measurements, e.g., mechanical tolerances, the absence of components during test that exist in a final assembly, movement of components and cables, and perturbations due to probes and associated cabling. In [1], internal stirrers were investigated as a way to sample the variation of SE of small enclosures when populated with random metallic objects. Here, we explore this idea as a way to quantify the variability and sensitivity of an SE measurement, not only indicating the uncertainty of the SE measurement but also delineating frequency ranges where either deterministic or statistical simulations should be applied.
We demonstrate an InAs-based terahertz (THz) metasurface emitter that can generate and focus THz pulses using a binary-phase Fresnel zone plate concept. The metalens emitter successfully generates a focused THz beam without additional THz optics.
The diesel-piloted dual-fuel compression ignition combustion strategy is well suited to accelerate the decarbonization of transportation by adopting hydrogen as a renewable energy carrier into the existing internal combustion engine with minimal engine modifications. Despite the simplicity of the engine modifications, many questions remain unanswered regarding the optimal pilot injection strategy for reliable ignition with minimum pilot fuel consumption. The present study uses a single-cylinder heavy-duty optical engine to explore the phenomenology and underlying mechanisms governing pilot fuel ignition and the subsequent combustion of a premixed hydrogen-air charge. The engine is operated in a dual-fuel mode with hydrogen premixed into the engine intake charge and a direct pilot injection of n-heptane as a diesel pilot fuel surrogate. Optical diagnostics used to visualize in-cylinder combustion phenomena include high-speed IR imaging of the pilot fuel spray evolution as well as high-speed HCHO* and OH* chemiluminescence as indicators of low-temperature and high-temperature heat release, respectively. Three pilot injection strategies are compared to explore the effects of pilot fuel mass, injection pressure, and injection duration on the probability and repeatability of successful ignition. The thermodynamic and imaging data analysis, supported by zero-dimensional chemical kinetics simulations, reveals a complex interplay between the physical and chemical processes governing pilot fuel ignition in a hydrogen-containing charge. Hydrogen strongly inhibits the ignition of pilot fuel mixtures and therefore requires a longer injection duration to create zones with sufficiently high pilot fuel concentration for successful ignition. Results show that ignition tends to rely on stochastic pockets with high pilot fuel concentration, which results in poor repeatability of combustion and frequent misfiring. This work improves the understanding of how the unique chemical properties of hydrogen pose a challenge for maximizing hydrogen's energy share in hydrogen dual-fuel engines and highlights a potential mitigation pathway.
Hail poses a significant threat to photovoltaic (PV) systems due to the potential for both cell and glass cracking. This work experimentally investigates hail-related failures in Glass/Backsheet and Glass/Glass PV modules with varying ice ball diameters and velocities. Post-impact electroluminescence (EL) imaging revealed the damage extent and location, while high-speed Digital Image Correlation (DIC) measured the out-of-plane module displacements. The findings indicate that impacts of 20 J or less result in negligible damage to the modules tested. The thinner glass in Glass/Glass modules cracked at lower impact energies (~25 J) than Glass/Backsheet modules (~40 J). Furthermore, both module types showed cell and glass cracking at lower energies when impacted at the module's edges compared to central impacts. At the time of presentation, we will use DIC to determine whether out-of-plane displacements are responsible for the impact location discrepancy and to provide more insight into the mechanical response of hail-impacted modules. This study provides essential insights into the correlation between impact energy, impact location, displacements, and resulting damage. The findings may inform critical decisions regarding module type, site selection, and module design, contributing to more reliable PV systems.
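For reference, the impact energies quoted above follow directly from the ice ball kinetic energy. A quick sketch with an assumed ice density shows representative diameter/velocity combinations spanning the reported thresholds (these pairings are illustrative, not the test matrix):

```python
import numpy as np

# Kinetic energy of an ice ball at impact: E = 0.5 * m * v^2, with mass from
# the ball diameter and an assumed density of solid ice.
rho_ice = 917.0   # kg/m^3

def impact_energy(d_mm, v_ms):
    m = rho_ice * (np.pi / 6.0) * (d_mm / 1000.0) ** 3   # sphere mass [kg]
    return 0.5 * m * v_ms**2

# A ~45 mm ball near 30 m/s lands in the ~20 J regime reported as benign.
for d, v in ((35, 27), (45, 30), (55, 33)):
    print(f'{d} mm at {v} m/s -> {impact_energy(d, v):5.1f} J')
```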
In this work, the frequency response of a simplified shaft-bearing assembly is studied using numerical continuation. Roller-bearing clearances give rise to contact behavior in the system, and past research has focused on the nonlinear normal modes of the system and its response to shock-type loads. A harmonic balance method (HBM) solver is applied instead of a time integration solver, and numerical continuation is used to map out the system’s solution branches in response to a harmonic excitation. Stability analysis is used to understand the bifurcation behavior and possibly identify numerical or system-inherent anomalies seen in past research. Continuation is also performed with respect to the forcing magnitude, resulting in what are known as S-curves, in an effort to detect isolated solution branches in the system response.
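As a simplified analog of the HBM-plus-continuation workflow, the sketch below solves a single-harmonic balance residual for a Duffing-type oscillator and marches it in frequency with naive natural-parameter continuation. The shaft-bearing contact model, solver details, and parameters in the paper differ; arc-length continuation would be needed to traverse the turning points on the S-curves.

```python
import numpy as np
from scipy.optimize import fsolve

# Single-harmonic HBM residual for a Duffing-type oscillator (a stand-in for
# the bearing-clearance nonlinearity; parameters are illustrative).
k, c, alpha, F = 1.0, 0.05, 0.5, 0.2

def residual(ab, Om):
    a, b = ab                       # x(t) = a*cos(Om*t) + b*sin(Om*t)
    r2 = a * a + b * b
    return [(k - Om**2) * a + c * Om * b + 0.75 * alpha * a * r2 - F,
            (k - Om**2) * b - c * Om * a + 0.75 * alpha * b * r2]

# March in frequency, warm-starting each solve from the previous solution.
amps = []
ab = np.array([F / k, 0.0])
for Om in np.linspace(0.2, 2.0, 400):
    ab = fsolve(residual, ab, args=(Om,))
    amps.append(np.hypot(*ab))
print('peak response amplitude:', max(amps))
```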
The Rydberg dipole blockade has emerged as the standard mechanism to induce entanglement between neutral-atom qubits. In these protocols, laser fields that couple qubit states to Rydberg states are modulated to implement entangling gates. Here we present an alternative protocol to implement entangling gates via Rydberg dressing and a microwave-field-driven spin-flip blockade [Y.-Y. Jau et al., Nat. Phys. 12, 71 (2016), doi:10.1038/nphys3487]. We consider the specific example of qubits encoded in the clock states of cesium. An auxiliary hyperfine state is optically dressed so that it acquires partial Rydberg character. It thus acts as a proxy Rydberg state, with a nonlinear light shift that plays the role of blockade strength. A microwave-frequency field coupling a qubit state to this dressed auxiliary state can be modulated to implement entangling gates. Logic gate protocols designed for the optical regime can be imported to this microwave regime, for which experimental control methods are more robust. We show that, unlike the strong-dipole-blockade regime usually employed in Rydberg experiments, going to a moderate-spin-flip-blockade regime results in faster gates and smaller Rydberg decay. We study various regimes of operation that can yield high-fidelity two-qubit entangling gates and characterize their analytical behavior. In addition to the inherent robustness of microwave control, we can design these gates to be more robust to laser amplitude and frequency noise at the cost of a small increase in Rydberg decay.
Fault location, isolation, and service restoration of a self-healing, self-assembling microgrid operating off-grid from distributed inverter-based resources (IBRs) can be a unique challenge because of fault current limitations and uncertainties regarding which sources are operational at any given time. The situation can become even more challenging if data sharing between the various microgrid controllers, relays, and sources is not available. This paper presents an innovative robust partitioning approach, which is used as part of a larger self-assembling microgrid concept utilizing local measurements only. This robust partitioning approach splits a microgrid into sub-microgrids to isolate the fault to just one of the sub-microgrids, allowing the others to continue normal operation. A case study is implemented in the IEEE 123-bus distribution test system in Simulink to show the effectiveness of this approach. The results indicate that including the robust partitions leads to less loss of load and shorter overall restoration times.
Interim dry storage of spent nuclear fuel involves storing the fuel in welded stainless-steel canisters. Under certain conditions, the canisters could be subjected to environments that may promote stress corrosion cracking, leading to a risk of breach and release of aerosol-sized particulate from the interior of the canister to the external environment through the crack. Research is currently under way at several laboratories to better understand the formation and propagation of stress corrosion cracks; however, little work has been done to quantitatively assess the potential aerosol release. The purpose of the present work is to introduce a reliable, generic numerical model for predicting aerosol transport, deposition, and plugging in leak paths similar to stress corrosion cracks. The model is dynamic (the leak path geometry changes due to plugging from particle deposition) and relies on the numerical solution of the one-dimensional aerosol transport equation using finite differences. The model's capabilities were also incorporated into a Graphical User Interface (GUI) developed to enhance user accessibility. Model validation efforts presented in this paper compare the model's predictions with recent experimental data from Sandia National Laboratories (SNL) and with results available in the literature. We expect this model to improve the accuracy of consequence assessments and reduce the uncertainty of radiological consequence estimations in the remote event of a through-wall breach in dry cask storage systems.
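The numerical core of such a model can be illustrated with an upwind finite-difference discretization of one-dimensional advection with first-order deposition. The geometry, velocity, and deposition rate below are illustrative assumptions, and the actual model additionally evolves the leak path geometry as deposited particles plug it:

```python
import numpy as np

# Upwind finite-difference sketch: 1D aerosol advection with first-order
# deposition along a crack-like leak path (parameters are illustrative).
L, nx = 0.02, 200                   # 20 mm leak path, 200 cells
u, lam = 0.5, 150.0                 # carrier velocity [m/s], deposition rate [1/s]
dx = L / nx
dt = 0.5 * dx / u                   # CFL-stable time step
C = np.zeros(nx)
C_in = 1.0                          # normalized inlet concentration

for _ in range(2000):
    C_up = np.concatenate(([C_in], C[:-1]))       # upwind (inlet) boundary
    C = C - u * dt / dx * (C - C_up) - lam * dt * C

# Steady state approaches C_in * exp(-lam * x / u); penetration = outlet/inlet.
print('numerical penetration:', C[-1], ' analytic:', np.exp(-lam * L / u))
```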
Underground caverns in salt formations are promising geologic features for storing hydrogen (H2) because of salt's extremely low permeability and self-healing behavior. Successful salt-cavern H2 storage schemes must maximize the efficiency of cyclic injection-production while minimizing H2 loss through adjacent damaged salt. The salt cavern storage community, however, does not yet fully understand the geomechanical behavior of salt rocks driven by quick operational cycles of H2 injection-production, which may significantly impact cost-effective storage-recovery performance. Our field-scale generic model captures the impact of combined drag and back stressing on salt creep behavior corresponding to cycles of compression and extension, which may lead to substantial loss of cavern volume over time and diminish cavern performance for H2 storage. Our preliminary findings indicate that it is essential to develop a new salt constitutive model, based on geomechanical tests of site-specific salt rock, to probe the cyclic behaviors of salt both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and fatigue.
This paper provides a summary of planning work for experiments that will be necessary to address the long-term model validation needs required to meet offshore wind energy deployment goals. Conceptual experiments are identified and laid out in a validation hierarchy for both wind turbine and wind plant applications. Instrumentation needs that will be required for the offshore validation experiments to be impactful are then listed. The document concludes with a nominal vision for how these experiments can be accomplished.
Accurate understanding of the behavior of commercial-off-the-shelf electrical devices is important in many applications. This paper discusses methods for the principled statistical analysis of electrical device data. We present several recent successful efforts and describe two current areas of research that we anticipate will produce widely applicable methods. Because much electrical device data is naturally treated as functional, and because such data introduces some complications in analysis, we focus on methods for functional data analysis.
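A minimal example of the functional viewpoint: treating each device trace as a curve sampled on a common grid, a principal-components decomposition of the centered curves yields dominant modes of variation and per-device scores. The data here are synthetic placeholders, not device measurements:

```python
import numpy as np

# Functional PCA sketch: each row of `curves` is one device trace on a common
# time grid; the SVD of the centered data gives functional principal modes.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
n = 60
curves = (np.sin(2 * np.pi * t)[None, :]
          + rng.standard_normal((n, 1)) * np.cos(2 * np.pi * t)[None, :]
          + 0.05 * rng.standard_normal((n, len(t))))

mean = curves.mean(0)
U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
scores = U[:, :2] * s[:2]            # per-device scores on the first two modes
var_explained = s**2 / (s**2).sum()
print('variance explained by mode 1:', var_explained[0])
```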