Before residential photovoltaic (PV) systems are interconnected with the grid, various planning and impact studies are conducted on detailed models of the system to ensure safety and reliability are maintained. However, these model-based analyses can be time-consuming and error-prone, representing a potential bottleneck as the pace of PV installations accelerates. Data-driven tools and analyses provide an alternate pathway to supplement or replace their model-based counterparts. In this article, a data-driven algorithm is presented for assessing the thermal limitations of PV interconnections. Using input data from residential smart meters, and without any grid models or topology information, the algorithm can determine the nameplate capacity of the service transformer supplying those customers. The algorithm was tested on multiple datasets and predicted service transformer capacity with >98% accuracy, regardless of existing PV installations. This algorithm has various applications from model-free thermal impact analysis for hosting capacity studies to error detection and calibration of existing grid models.
Comparison of pure sinusoidal vibration to random vibration, or to combinations of the two, is an important and useful subject for dynamic testing. The objective of this chapter is to succinctly document the technical background for converting a sine-sweep test specification into an equivalent random vibration test specification. The information can also be used in reverse, i.e., to compare a random vibration specification with a sine-sweep, although that is less common in practice. Because of the inherent assumptions involved in such conversions, it is always preferable to test to the original specification and to perform this conversion only when other options are impractical. This chapter outlines the theoretical premise and relevant equations. An example of implementation with hypothetical but realistic data is provided that captures the conversion of a sinusoid to an equivalent acceleration spectral density (ASD). The example also demonstrates how to relate the sine-sweep rate to the duration of the random vibration test. A significant portion of this chapter is devoted to the statistical distribution of peaks in a narrow-band random signal and its consequences for the damage imparted to a structure. Numerical simulations were carried out to capture the effect of various combinations of narrow-band random and pure sinusoidal signals superimposed on each other. The consequences are captured to provide guidance on accuracy and conservatism.
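A minimal sketch of one common peak-equivalence conversion (Miles' equation with an assumed amplification Q and a 3-sigma criterion) illustrates the kind of calculation the chapter formalizes; the function name, default Q, and peak factor below are illustrative assumptions, not the chapter's exact procedure.

```python
import numpy as np

def sine_to_equivalent_asd(freq_hz, sine_amp_g, Q=10.0, peak_factor=3.0):
    """Convert a sine amplitude (g, peak) at a given frequency into an
    equivalent random ASD level (g^2/Hz).

    Assumes a lightly damped single-degree-of-freedom resonator with
    amplification Q and equates the sine peak response (Q * A) to
    `peak_factor` times the Miles'-equation RMS response.  This is an
    illustrative sketch, not the chapter's exact procedure.
    """
    sine_peak_response = Q * sine_amp_g                  # resonant response to a sine dwell
    # Miles' equation: sigma = sqrt((pi/2) * f_n * Q * W)  ->  solve for W
    asd = (sine_peak_response / peak_factor) ** 2 / ((np.pi / 2.0) * freq_hz * Q)
    return asd

# Example: 5 g sine at 100 Hz with Q = 10 maps to roughly 0.18 g^2/Hz
print(sine_to_equivalent_asd(100.0, 5.0))
```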
Contact mechanics, or the modeling of the impenetrability of solid objects, is fundamental to computational solid mechanics (CSM) applications yet is oftentimes the most challenging in terms of computational efficiency and performance. These challenges arise from the irregularity and highly dynamic nature of contact simulation, particularly with algorithms designed for distributed memory architectures. First among these challenges is the inherent load imbalance when distributing contact load across compute nodes. This imbalance is highly problem dependent and relates to the surface area of contact manifolds and the volume around them, rather than the distribution of the mesh over compute nodes, meaning the application load can vary drastically over different phases. The dynamic nature of contact problems motivates the use of distributed asynchronous many-task (AMT) frameworks to efficiently handle irregular workloads. In this paper, we present our work on distBVH, a distributed contact solution using the DARMA/vt library for asynchronous tasking that is also capable of running on-node Kokkos-based kernels. We explore how distBVH addresses the various challenges of CSM contact problems. We evaluate the use of many of DARMA/vt’s dynamic load balancers and demonstrate how our load balancing approach can provide significant performance improvements on various computational solid mechanics benchmarks. Additionally, we show how our approach can take advantage of DARMA/vt for tasking and efficient on-node kernels using Kokkos to scale over hundreds of processing elements.
More than 90% of utility-scale photovoltaic (PV) power plants in the US use single-axis trackers (SATs) due to their potential for substantially higher power production over fixed-array systems. However, they are subject to software misconfigurations and mechanical failures, leading to suboptimal tracking accuracy. If failures are left undetected, the overall power yield of the PV power plant is reduced significantly. Robust detection and diagnosis of SAT faults is needed to minimize downtime and ensure continuous and efficient operation. This work presents analytic tools based on machine learning to detect deviations in SAT tracking performance and classify SAT faults.
Anelastic strain recovery, the process of measuring the time-dependent recovered strain after a core is cut at depth, was utilized to measure the in-situ stresses at depth at the FORGE (Frontier Observatory for Research in Geothermal Energy) site in Milford, Utah. Core was collected from a region of well 16B at approximately 4860-4870 ft and was instrumented with strain gages within 10 hours of being cut. The relaxation of the cores was measured for approximately one month, and the results were analyzed, showing that the principal stresses were slightly off vertical and that their magnitudes were close to equal.
In low-inertia grids, significant frequency deviations can occur as a result of changes in power (load, generation, etc.). These deviations may activate various protection schemes designed to safeguard the system, potentially leading to blackouts. Therefore, assessing the frequency stability of the power system is crucial. The Frequency Security Index (FSI) serves as a metric for evaluating system stability. However, computing the FSI for a specific load change necessitates actual load changes on the system, which is often impractical. This paper introduces a method for calculating the FSI without requiring actual load changes for every value of interest. A mathematical expression for the FSI is derived, which uses the values of microgrid parameters (such as inertia and damping constant) to compute the FSI for any load change. Subsequently, the parameters that most significantly affect the FSI are identified. The paper then introduces a Moving Horizon Estimation (MHE)-based parameter estimation approach, which leverages small perturbations from an energy storage system to estimate the most influential parameters for the FSI. The results show that the FSI calculated with the estimated parameters is more accurate than with center-of-inertia (COI) averaged parameters, enabling more effective state-of-health monitoring of the microgrid.
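As a rough illustration of how inertia and damping map a load change to a frequency excursion, the sketch below integrates a first-order aggregate swing model; the parameter values, the omission of governor action, and the nadir-based check are assumptions for illustration, not the paper's FSI expression.

```python
import numpy as np

def frequency_deviation(t, dP_load, H=4.0, D=1.5, f0=60.0):
    """Frequency deviation (Hz) of an aggregate first-order swing model,
        2*H * d(df)/dt = -dP_load - D*df,
    following a step load increase dP_load (per unit).  H (s) and D (pu)
    are illustrative placeholder values, not estimates from the paper."""
    df_pu = -(dP_load / D) * (1.0 - np.exp(-D * t / (2.0 * H)))
    return f0 * df_pu

t = np.linspace(0.0, 20.0, 201)
df = frequency_deviation(t, dP_load=0.1)          # 10% load step
print(f"frequency nadir deviation: {df.min():.3f} Hz")
# A security index could then compare this nadir against protection
# thresholds (e.g., under-frequency load-shedding settings).
```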
Additive manufacturing has ushered in a new paradigm of bottom-up materials-by-design of spatially non-uniform materials. Functionally graded materials have locally tailored compositions that provide optimized global properties and performance. In this letter, we propose an opportunity for the application of graded magnetic materials as lens elements for charged particle optics. A Hiperco50/Hymu80 (FeCo-2V/Fe-80Ni-5Mo) graded magnetic alloy with spatially varying magnetic properties was successfully additively manufactured via Laser Directed Energy Deposition. The compositional gradient is then used in computational simulations to demonstrate how a tailored material can enhance the magnetic performance of a critical, image-forming component of a transmission electron microscope.
Physical experiments are often expensive and time-consuming. Test engineers must certify the compatibility of aircraft and their weapon systems before they can be deployed in the field, but the required testing is time-consuming, expensive, and resource-limited. Adopting Bayesian adaptive designs is a promising way to borrow from the successes seen in the clinical trials domain. The use of predictive probability (PP) to stop testing early and make faster decisions is particularly appealing given the aforementioned constraints. Given the high-consequence nature of tests performed in the national security space, a strong understanding of new methods is required before they are deployed. Although PP has been thoroughly studied for binary data, there is less work with continuous data, where many reliability studies are interested in certifying the specification limits of components. A simulation study evaluating the robustness of this approach indicates that early stopping based on PP is reasonably robust to minor assumption violations, especially when only a few interim analyses are conducted. The simulation study also compares PP to conditional power, showing its relative strengths and weaknesses. A post-hoc analysis exploring whether release requirements of a weapon system from an aircraft are within specification with desired reliability resulted in stopping the experiment early and saving 33% of the experimental runs.
We present a materials study of AlGaInP grown on GaAs leveraging deep-level optical spectroscopy and time resolved photoluminescence. Our materials may serve as the basis for wide-bandgap analogs of silicon photomultipliers optimized for short wavelength sensing.
In this paper, we develop a nested chi-squared likelihood ratio test for selecting among shrinkage-regularized covariance estimators for background modeling in hyperspectral imagery. Critical to many target and anomaly detection algorithms is the modeling and estimation of the underlying background signal present in the data. This is especially important in hyperspectral imagery, wherein the signals of interest often represent only a small fraction of the observed variance, for example when targets of interest are subpixel. This background is often modeled by a local or global multivariate Gaussian distribution, which necessitates estimating a covariance matrix. Maximum likelihood estimation of this matrix often overfits the available data, particularly in high dimensional settings such as hyperspectral imagery, yielding subpar detection results. Instead, shrinkage estimators are often used to regularize the estimate. Shrinkage estimators linearly combine the overfit covariance with an underfit shrinkage target, thereby producing a well-fit estimator. These estimators introduce a shrinkage parameter, which controls the relative weighting between the covariance and shrinkage target. There have been many proposed methods for setting this parameter, but comparing these methods and shrinkage values is often performed with a cross-validation procedure, which can be computationally expensive and highly sample inefficient. Drawing from Bayesian regression methods, we compute the degrees of freedom of a covariance estimate using eigenvalue thresholding and employ a nested chi-squared likelihood ratio test for comparing estimators. This likelihood ratio test requires no cross-validation procedure and enables direct comparison of different shrinkage estimates, which is computationally efficient.
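The linear shrinkage form and the chi-squared likelihood ratio comparison described above can be sketched as follows; the default shrinkage target and the caller-supplied degrees-of-freedom difference are simplifications (the paper derives the latter from eigenvalue thresholding), so this is an illustrative outline rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

def shrinkage_covariance(X, alpha, target=None):
    """Linear shrinkage estimator: (1 - alpha) * S + alpha * T, where S is the
    sample covariance and T the shrinkage target (scaled identity by default)."""
    S = np.cov(X, rowvar=False)
    if target is None:
        target = np.eye(S.shape[0]) * np.trace(S) / S.shape[0]
    return (1.0 - alpha) * S + alpha * target

def gaussian_loglik(X, Sigma):
    """Gaussian log-likelihood of mean-centered data under covariance Sigma."""
    Xc = X - X.mean(axis=0)
    mvn = stats.multivariate_normal(mean=np.zeros(X.shape[1]),
                                    cov=Sigma, allow_singular=True)
    return np.sum(mvn.logpdf(Xc))

def nested_lrt(X, Sigma_restricted, Sigma_full, df_diff):
    """Chi-squared likelihood ratio test between two nested covariance
    estimates; df_diff is the difference in effective degrees of freedom
    (supplied by the caller in this sketch)."""
    lam = 2.0 * (gaussian_loglik(X, Sigma_full) - gaussian_loglik(X, Sigma_restricted))
    p_value = stats.chi2.sf(lam, df_diff)
    return lam, p_value
```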
Highlights
- Novel protocol for extracting knowledge from previously performed Finite Element corrosion simulations using machine learning.
- Obtain accurate predictions for corrosion current 5 orders of magnitude faster than Finite Element simulations.
- Accurate machine learning based model capable of performing an effective and efficient search over the multi-dimensional input space to identify areas/zones where corrosion is more (or less) noticeable.
The use of surrogate models in computational mechanics is an area of high interest due to the potential for significant savings in computational cost. However, assessment and presentation of evidence for surrogate model credibility has yet to reach a standard form. The present study utilizes a deep neural network as a surrogate for a computational fluid dynamics simulation in order to predict the coefficients of lift and drag on a NACA 0012 airfoil for various Reynolds numbers and angles of attack. Using best practices, the credibility of the underlying simulation predictions and of the surrogate model predictions are analyzed. Conclusions are drawn which should better inform future uses of surrogate models in the context of their credibility.
Full-scale testing of pipes is costly and requires significant infrastructure investments. Subscale testing offers the potential to substantially reduce experimental costs and provides testing flexibility when transferrable test conditions and specimens can be established. To this end, a subscale pipe testing platform was developed to pressure cycle 60 mm diameter pipes (Nominal Pipe Size 2) to failure with gaseous hydrogen. Engineered defects were machined into the inner surface or outer surface to represent pre-existing flaws. The pipes were pressure cycled to failure with gaseous hydrogen at pressures to match operating stresses in large diameter pipes (e.g., stresses comparable to similar fractions of the specified minimum yield stress in transmission pipelines). Additionally, the pipe specimens were instrumented to identify crack initiation, such that crack growth could be compared to fracture mechanics predictions. Predictions leverage an extensive body of materials testing in gaseous hydrogen (e.g., ASME B31.12 Code Case 220) and the recently developed probabilistic fracture mechanics framework for hydrogen (Hydrogen Extremely Low Probability of Rupture, HELPR). In this work, we evaluate the failure response of these subscale pipe specimens and assess the conservatism of fracture mechanics-based design strategies (e.g., API 579/ASME FFS). This paper describes the subscale hydrogen testing capability, compares experimental outcomes to predictions from the probabilistic hydrogen fracture framework (HELPR), and discusses the complement to full-scale testing.
Tabulated chemistry models are widely used to simulate large-scale turbulent fires in applications including energy generation and fire safety. Tabulation via piecewise Cartesian interpolation suffers from the curse-of-dimensionality, leading to a prohibitive exponential growth in parameters and memory usage as more dimensions are considered. Artificial neural networks (ANNs) have attracted attention for constructing surrogates for chemistry models due to their ability to perform high-dimensional approximation. However, due to well-known pathologies regarding the realization of suboptimal local minima during training, in practice they do not converge and provide unreliable accuracy. Partition of unity networks (POUnets) are a recently introduced family of ANNs which preserve notions of convergence while performing high-dimensional approximation, discovering a mesh-free partition of space which may be used to perform optimal polynomial approximation. We assess their performance with respect to accuracy and model complexity in reconstructing unstructured flamelet data representative of nonadiabatic pool fire models. Our results show that POUnets can provide the desirable accuracy of classical spline-based interpolants with the low memory footprint of traditional ANNs while converging faster to significantly lower errors than ANNs. For example, we observe POUnets obtaining target accuracies in two dimensions with 40 to 50 times less memory and roughly double the compression in three dimensions. We also address the practical matter of efficiently training accurate POUnets by studying convergence over key hyperparameters, the impact of partition/basis formulation, and the sensitivity to initialization.
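The partition-of-unity structure, softmax partition functions blending local polynomial approximants, can be sketched in a few lines; the single linear partition layer and the quadratic basis below are illustrative simplifications of the POUnet architecture, not the configuration used in the paper.

```python
import numpy as np

def pou_net_forward(x, W, b, coeffs):
    """Evaluate a partition-of-unity surrogate at points x of shape (n, d).

    W, b   : parameters of a linear layer whose softmax output gives the
             n_part partition functions (they sum to one at every x).
    coeffs : (n_part, n_basis) polynomial coefficients, one set per partition.
    The single linear layer and quadratic basis are illustrative choices.
    """
    logits = x @ W + b                                   # (n, n_part)
    phi = np.exp(logits - logits.max(axis=1, keepdims=True))
    phi /= phi.sum(axis=1, keepdims=True)                # softmax partitions
    # quadratic polynomial basis [1, x_i, x_i*x_j] in d dimensions
    basis = [np.ones(len(x))]
    basis += [x[:, i] for i in range(x.shape[1])]
    basis += [x[:, i] * x[:, j] for i in range(x.shape[1])
              for j in range(i, x.shape[1])]
    P = np.stack(basis, axis=1)                          # (n, n_basis)
    local_polys = P @ coeffs.T                           # (n, n_part)
    return np.sum(phi * local_polys, axis=1)             # blended prediction
```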
Multiple scattering is a common phenomenon in acoustic media that arises from the interaction of the acoustic field with a network of scatterers. This mechanism is dominant in problems such as the design and simulation of acoustic metamaterial structures often used to achieve acoustic control for sound isolation, and remote sensing. In this study, we present a physics-informed neural network (PINN) capable of simulating the propagation of acoustic waves in an infinite domain in the presence of multiple rigid scatterers. This approach integrates a deep neural network architecture with the mathematical description of the physical problem in order to obtain predictions of the acoustic field that are consistent with both governing equations and boundary conditions. The predictions from the PINN are compared with those from a commercial finite element software model in order to assess the performance of the method.
Traditional electronics assemblies are typically packaged using physically or chemically blown potted foams to reduce the effects of shock and vibration. These potting materials have several drawbacks including manufacturing reliability, lack of internal preload control, and poor serviceability. A modular foam encapsulation approach combined with additively manufactured (AM) silicone lattice compression structures can address these issues for packaged electronics. These preloaded silicone lattice structures, known as foam replacement structures (FRSs), are an integral part of the encapsulation approach and must be properly characterized to model the assembly stresses and dynamics. In this study, dynamic test data is used to validate finite element models of an electronics assembly with modular encapsulation and a direct ink write (DIW) AM silicone FRS. A variety of DIW compression architectures are characterized, and their nominal stress-strain behavior is represented with hyperfoam constitutive model parameterizations. Modeling is conducted with Sierra finite element software, specifically with a handoff from assembly preloading and uniaxial compression in Sierra/Solid Mechanics to linear modal and vibration analysis in Sierra/Structural Dynamics. This work demonstrates the application of this advanced modeling workflow, and results show good agreement with test data for both static and dynamic quantities of interest, including preload, modal, and vibration response.
Natural gas pipelines could be an important pathway to transport gaseous hydrogen (GH2) as a cleaner alternative to fossil fuels. However, a comprehensive understanding of hydrogen-assisted fatigue and fracture resistance in pipeline steels is needed, including an assessment of the diverse microstructures present in natural gas infrastructure. In this study, we focus on modern steel pipe and consider both welded and seamless pipe. In-situ fatigue crack growth (FCG) and fracture tests were conducted on compact tension samples extracted from the base metal, seam weld, and heat-affected zone of an X70 pipe steel in high-purity GH2 (210 bar pressure). Additionally, a seamless X65 pipeline steel of comparable strength was evaluated to assess the distinct microstructure of seamless pipe. The different microstructures had comparable FCG rates in GH2, with crack growth rates up to 30 times faster in hydrogen than in air. In contrast, the fracture resistance in GH2 depended on the characteristics of the microstructure, varying in the range of approximately 80 to 110 MPa√m.
Neuromorphic computing platforms hold the promise to dramatically reduce power requirements for calculations that are computationally intensive. One such application space is scientific machine learning (SciML). Techniques in this space use neural networks to approximate solutions of scientific problems. For instance, the popular physics-informed neural network (PINN) approximates the solution to a partial differential equation by using a trained feed-forward neural network, and injecting the knowledge of the physics through the loss function. Recent efforts have demonstrated how to convert a trained PINN to a spiking network architecture. In this work, we discuss our approach to quantization and implementation required to migrate these spiking PINNs to Intel's Loihi 2 neuromorphic hardware. We explore the effect of quantization on the model accuracy, as well as the energy and throughput characteristics of the implementation. It is our intent that this serve as a starting point for additional SciML implementations on neuromorphic hardware.
Proceedings of ISMA 2024 International Conference on Noise and Vibration Engineering and USD 2024 International Conference on Uncertainty in Structural Dynamics
In general, multiple-input/multiple-output (MIMO) vibration testing utilizes a response-controlled test methodology where specifications are in the form of response quantities at various locations distributed on the device under test (DUT). There are some advantages to this approach, namely that DUT response could be measured in some field environment and directly used as MIMO specifications for subsequent MIMO vibration tests on similar DUTs. However, in some cases it may be advantageous to control the MIMO vibration test at the inputs rather than the responses. One such case is free-flight environments, where the DUT is unconstrained, and all loads come from aerodynamic pressures. In this case, the force-controlled test method is much more robust to system changes such as unit-to-unit variability as compared to a response-controlled test method. This could make force-controlled MIMO test specifications more generalizable and easier to derive. This is exactly akin to transfer path analysis, where pseudo-forces are applicable in special circumstances. This paper will explore the force-controlled test concept and demonstrate it with a numerical example, comparing performance under various conditions vs. the traditional response-controlled test method.
The lack of large, relevant, and labeled datasets for synthetic aperture radar (SAR) automatic target recognition (ATR) poses a challenge for deep neural network approaches. In the case of SAR ATR, transfer learning offers promise: models are pre-trained on synthetic SAR, alternatively collected SAR, or non-SAR source data and then fine-tuned on a smaller target SAR dataset. The premise is that the neural network can learn fundamental features from the more abundant source domain, resulting in accurate and robust models when fine-tuned on a smaller target domain. One open question with this transfer learning strategy is how to choose source datasets that will improve accuracy on a target SAR dataset when the model is fine-tuned. Here, we apply a set of model and dataset transferability analysis techniques to investigate the efficacy of transfer learning for SAR ATR. In particular, we examine Optimal Transport Dataset Distance (OTDD), Log Maximum Evidence (LogME), Log Expected Empirical Prediction (LEEP), Gaussian Bhattacharyya Coefficient (GBC), and H-score. These methods consider properties such as task relatedness, statistical analysis of learned embedding properties, and distribution distances between the source and target domains. We apply these transferability metrics to ResNet18 models trained on a set of non-SAR as well as SAR datasets. Overall, we present an investigation into quantitatively analyzing transferability for SAR ATR.
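Of the metrics listed, LEEP has a particularly compact form; the sketch below follows the published definition (an empirical joint of target labels and source-model predictions, then an expected empirical prediction per sample), with variable names and clipping constants chosen here for illustration.

```python
import numpy as np

def leep_score(source_probs, target_labels, n_target_classes):
    """Log Expected Empirical Prediction (LEEP) transferability score.

    source_probs  : (n, n_source_classes) softmax outputs of the pre-trained
                    source model evaluated on the target dataset.
    target_labels : (n,) integer labels of the target dataset.
    Higher scores suggest better expected transfer.  Sketch follows the
    published LEEP definition; variable names are ours.
    """
    n, n_src = source_probs.shape
    # empirical joint distribution P(y, z) over target label y and source label z
    joint = np.zeros((n_target_classes, n_src))
    for probs, y in zip(source_probs, target_labels):
        joint[y] += probs
    joint /= n
    p_z = joint.sum(axis=0, keepdims=True)            # marginal over source labels
    p_y_given_z = joint / np.clip(p_z, 1e-12, None)   # conditional P(y | z)
    # expected empirical prediction of each sample's true target label
    eep = (source_probs @ p_y_given_z.T)[np.arange(n), target_labels]
    return np.mean(np.log(np.clip(eep, 1e-12, None)))
```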
The siting of nuclear waste is a process that requires consideration of concerns of the public. This report demonstrates the significant potential for natural language processing techniques to gain insights into public narratives around “nuclear waste.” Specifically, the report highlights that the general discourse regarding “nuclear waste” within the news media has fluctuated in prevalence compared to “nuclear” topics broadly over recent years, with commonly mentioned entities reflecting a limited variety of geographies and stakeholders. General sentiments within the “nuclear waste” articles appear to use neutral language, suggesting that a scientific or “facts-only” framing of “waste”-related issues dominates coverage; however, the exact nuances should be further evaluated. The implications of a number of these insights about how nuclear waste is framed in traditional media (e.g., regarding emerging technologies, historical events, and specific organizations) are discussed. This report lays the groundwork for larger, more systematic research using, for example, transformer-based techniques and covariance analysis to better understand relationships among “nuclear waste” and other nuclear topics, sentiments of specific entities, and patterns across space and time (including in a particular region). By identifying priorities and knowledge needs, these data-driven methods can complement and inform engagement strategies that promote dialogue and mutual learning regarding nuclear waste.
Heat waves are increasing in severity, duration, and frequency. The Multi-Scenario Extreme Weather Simulator (MEWS) models this using historical data, climate model outputs, and heat wave multipliers. In this study, MEWS is applied to planning of a community resilience hub in Hau’ula, Hawaii. The hub will have a normal operations mode and a resilience operations mode, both of which were modeled using EnergyPlus. The resilience operations mode includes cutting off air conditioning for many spaces to decrease power requirements during emergencies. Results were simulated for 300 future weather files generated by MEWS for 2020, 2040, 2060, and 2080, using shared socioeconomic pathways 2–4.5, 3–7.0 and 5–8.5. The resilience operations mode results show a two- to six-fold increase in hours of exceedance above 32.2 °C relative to present conditions, depending on climate scenario and future year. The resulting decrease in thermal resilience enables an average decrease in energy use intensity of 26%, with little sensitivity to climate change. The decreased thermal resilience predicted in the future is undesirable but was not severe enough to require a more energy-intensive resilience mode. Instead, planning is needed to assure vulnerable individuals are given prioritized access to air-conditioned parts of the hub if worst-case heat waves occur.
Numerical simulations were performed in 3D Cartesian coordinates to examine the post-detonation processes produced by the detonation of a 12 mm-diameter hemispherical PETN explosive charge in air. The simulations captured air dissociation by the Mach 20+ shock, chemical equilibration, and afterburning using finite-rate chemical kinetics with a skeletal chemical reaction mechanism. The Becker-Kistiakowsky-Wilson real-gas equation of state was used for the gas phase. A simplified programmed burn model was used to seamlessly couple the detonation propagation through the explosive charge to the post-detonation reaction processes inside the fireball. Four charge sizes were considered, with diameters of 12 mm, 38 mm, 120 mm, and 1200 mm. The computed blast, shock structures, and chemical composition within the fireball agree with the literature. The evolution of the flow at early times is shown to be gas-dynamic driven and nearly self-similar when time and space are scaled. The flow fields were azimuthally averaged and a mixing layer analysis was performed. The results show differences in temperature and chemical composition with increasing charge size, implying a transition from a chemical-kinetics-limited to a mixing-limited regime.
Network Operation Centers (NOCs) and Security Operation Centers (SOCs) play a critical role in addressing a wide range of threats in critical infrastructure systems such as the electric grid. However, when considering the electric grid and related industrial control systems (ICSs), visibility into the information technology (IT), operational technology (OT), and underlying physical process systems is often disconnected and standalone. As the electric grid becomes increasingly cyber-physical and faces dynamic, cyber-physical threats, it is vital that cyber-physical situational awareness (CPSA) across the interconnected system is achieved. In this paper, we review existing NOC and SOC capabilities and visualizations, motivate the need for CPSA, and define design principles with example visualizations for a next-generation grid cyber-physical integrated SOC (CP-ISOC).
Characterizing shielding effectiveness (SE) of enclosures is important in aerospace, military, and consumer applications. Direct SE measurement of an enclosure or chassis may be considered an exact characterization, but there are several sources of possible variability in such measurements, e.g., mechanical tolerances, the absence of components during test that exist in a final assembly, movement of components and cables, and perturbations due to probes and associated cabling. In [1], internal stirrers were investigated as a way to sample the variation of SE of small enclosures when populated with random metallic objects. Here, we explore this idea as a way to quantify variability and sensitivity of an SE measurement, not only indicating the uncertainty of the SE measurement, but also delineating frequency ranges where either deterministic or statistical simulations should be applied.
Diesel generators (gensets) are often the lowest-cost electric generation for reliable supply in remote microgrids. The development of converter-dominated, diesel-backed microgrids requires accurate dynamic modeling to ensure power quality and system stability. Dynamic response derived using original genset system models often does not match that observed in field experiments. This paper presents the experimental system identification of a frequency dynamics model for a 400 kVA diesel genset. The genset is perturbed via active power load changes, and a linearized dynamics model is fit based on power and frequency measurements using moving horizon estimation (MHE). The method is first simulated using a detailed genset model developed in MATLAB/Simulink. The simulation model is then validated against the frequency response obtained from a real 400 kVA genset system at the Power System Integration (PSI) Lab at the University of Alaska Fairbanks (UAF). The simulation and experimental results had model errors of 3.17% and 11.65%, respectively. The resulting genset model can then be used in microgrid frequency dynamics studies, such as for the integration of renewable energy sources.
Different data pipelines and statistical methods are applied to photovoltaic (PV) performance datasets to quantify the performance loss rate (PLR). Since the real values of PLR are unknown, a variety of unvalidated values are reported. As such, the PV industry commonly assumes PLR based on statistically extracted ranges from the literature. However, the accuracy and uncertainty of PLR depend on several parameters including seasonality, local climatic conditions, and the response of a particular PV technology. In addition, the specific data pipeline and statistical method used affect the accuracy and uncertainty. To provide insights, a framework of (≈200 million) synthetic simulations of PV performance datasets using data from different climates is developed. Time series with known PLR and data quality are synthesized, and large parametric studies are conducted to examine the accuracy and uncertainty of different statistical approaches over the contiguous US, with an emphasis on the publicly available and “standardized” library, RdTools. In the results, it is confirmed that PLRs from RdTools are unbiased on average, but the accuracy and uncertainty of individual PLR estimates vary with climate zone, data quality, PV technology, and choice of analysis workflow. Best practices and improvement recommendations based on the findings of this study are provided.
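A toy version of the year-on-year workflow conveys the idea behind the synthetic study: synthesize a normalized performance series with a known PLR, seasonality, and noise, then recover the rate from annual ratios. This simplified stand-in omits the filtering, aggregation, and uncertainty steps of the RdTools workflow; all values below are illustrative.

```python
import numpy as np
import pandas as pd

# Synthesize a daily normalized-performance series with a known PLR,
# seasonality, and noise, then recover the rate with a simple
# year-on-year estimator (a simplified stand-in for an RdTools-style analysis).
rng = np.random.default_rng(0)
days = pd.date_range("2015-01-01", periods=8 * 365, freq="D")
true_plr = -0.005                                     # -0.5 %/year (assumed)
years = np.arange(len(days)) / 365.0
perf = (1.0 + true_plr * years                        # linear degradation
        + 0.03 * np.sin(2 * np.pi * years)            # seasonality
        + 0.01 * rng.standard_normal(len(days)))      # measurement noise
series = pd.Series(perf, index=days)

# year-on-year: ratio of each day to the same day one year earlier
yoy = series / series.shift(365) - 1.0
est_plr = np.nanmedian(yoy)
print(f"estimated PLR: {100 * est_plr:.2f} %/yr (true {100 * true_plr:.2f} %/yr)")
```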
Over the past few years, advancements in closed-loop geothermal systems (CLGS), also called advanced geothermal systems (AGS), have sparked a renewed interest in these types of designs. CLGS have certain advantages over traditional and enhanced geothermal systems (EGS), including not requiring in-situ reservoir permeability, conservation of the circulating fluid, and allowing for different fluids, including working fluids directly driving a turbine at the surface. CLGS may be attractive in environments where water resources are limited, rock contaminants must be avoided, and stimulation treatments are not available (e.g., due to regulatory or technical reasons). Despite these advantages, CLGS have some challenges, including limited surface area for heat transfer and requiring long wellbores and laterals to obtain multi-MW output in conduction-only reservoirs. CLGS have been investigated in conduction-only systems. In this paper, we explore the impact of both forced and natural convection on the levels of heat extraction with a CLGS deployed in a hot wet rock reservoir. We bound potential benefits of convection by investigating liquid reservoirs over a range of natural and forced convective coefficients. Additionally, we investigate the effects of permeability, porosity, and geothermal temperature gradient in the reservoir on CLGS outputs. Reservoir simulations indicate that reservoir permeabilities of at least ~100 mD are required for natural convection to increase the heat output with respect to a conduction-only scenario. The impact increases with increasing reservoir temperature. When subject to a forced convection flow field, Darcy velocities of at least 10⁻⁷ m/s are required to obtain an increase in heat output.
Resonant plate shock testing techniques have been used for mechanical shock testing at Sandia for several decades. A mechanical shock qualification test is often done by performing three separate uniaxial tests on a resonant plate to simulate one shock event. Multi-axis mechanical shock tests, in which shock specifications are simultaneously met in different directions during a single shock test event performed in the lab, are not always repeatable and depend greatly on the fixture used during testing. This chapter provides insights into various designs of a concept fixture that combines a resonant plate and an angle bracket for multi-axis shock testing, from a modeling and simulation point of view based on the results of finite element modal analysis. Initial model validation and testing show substantial excitation of the system under test, as the fundamental modes drive the response in all three directions. The response also shows that higher-order modes influence the system, the axial and transverse responses are highly coupled, and tunability is difficult to achieve. By varying the material properties, changing thicknesses, adding masses, and moving the location of the fixture on the resonant plate, the response can be changed significantly. The goal of this work is to identify the parameters that have the greatest influence on the response of the system when using the angle bracket fixture for a mechanical shock test, with the intent of enabling tunability of the system.
Intermolecular Coulombic decay (ICD) in liquid water is a relatively novel type of nonlocal electronic decay mechanism, competing with the traditional mechanism of proton transfer between neighboring water molecules. Key features of ICD are its ultrafast non-radiative decay and the ultralong range over which excess energy is transferred from the excited atom/molecule to its neighbors. Since detecting unambiguous ICD signatures in bulk liquid water is technically challenging, small water clusters have often been utilized to gain insights into ICD and other ionization processes in aqueous environments. Here, we present results from quantum mechanical calculations of the electronic structures of neutral to multiply-ionized water monomer, dimer, trimer, and tetramer. Core-level electrons of water are also considered here, since recent studies demonstrated that the emission site and energy of the electrons released during a resonant-Auger-ICD cascade can be controlled by coupling ICD to resonant core excitation. Previous studies of ICD and of the electronic structures of neutral and ionized small water clusters and liquid water are briefly discussed.
With the number of neuromorphic tools and frameworks growing, we recognize a need to increase interoperability within our field. As an illustration of this, we explore linking two independently constructed tools. Specifically, we detail the construction of an execution backend based on STACS (Simulation Tool for Asynchronous Cortical Streams) for the Fugu spiking neural algorithms framework. STACS extends the computational scope of Fugu, enabling fast simulation of large-scale neural networks. Combining these two tools is shown to be mutually beneficial, ultimately enabling more functionality than either tool on its own. We discuss design considerations, including recognizing the advantages of straightforward standards. Further, we provide benchmark results showing drastic improvements in execution time.
This article aims at discovering the unknown variables in the system through data analysis. The main idea is to use the time of data collection as a surrogate variable and try to identify the unknown variables by modeling gradual and sudden changes in the data. We use Gaussian process modeling and a sparse representation of the sudden changes to efficiently estimate the large number of parameters in the proposed statistical model. The method is tested on a realistic dataset generated using a one-dimensional implementation of a Magnetized Liner Inertial Fusion (MagLIF) simulation model, and encouraging results are obtained.
Plenoptic background-oriented schlieren is a diagnostic technique that enables the measurement of three-dimensional refractive gradients by a combination of background-oriented schlieren and a plenoptic light field camera. This plenoptic camera is a modification of a traditional camera via the insertion of an array of microlenses between the imaging lens and digital sensor. This allows the collection of both spatial and angular information on the incoming light rays and therefore provides three-dimensional information about the imaged scene. Background-oriented schlieren requires a relatively simple experimental configuration, including only a camera viewing a patterned background through the density field of interest. By using a plenoptic camera to capture background-oriented schlieren images, the optical distortion created by density gradients in three dimensions can be measured. This chapter is intended to review critical developments in plenoptic background-oriented schlieren imaging and provide an outlook for future applications of this measurement technique.
Geogenic gases often reside in intergranular pore space, fluid inclusions, and within mineral grains. In particular, helium-4 (4He) is generated by alpha decay of uranium and thorium in rocks. The emitted 4He nuclei can be trapped in the rock matrix or in fluid inclusions. Recent work has shown that releases of helium above atmospheric concentrations, detectable in the field, occur during plastic deformation of crustal rocks. However, it is unclear how rock type and deformation modality affect the cumulative gas released. This work seeks to address how different deformation modalities observed in several rock types affect the release of helium. Axial compression tests with granite, rhyolite, tuff, dolostone, and sandstone, under vacuum conditions, were conducted to measure the transient release of helium from each sample during crushing. It was found that, when crushed at forces up to 97,500 N, each rock type released helium at a rate quantifiable using a helium mass spectrometer leak detector. For plutonic rock like granite, the helium flow rate spikes with the application of force as the samples elastically deform until fracture, then decays slowly until grain comminution begins to occur. Neither the rhyolite nor the tuff experiences such large spikes in helium flow rate, with the rhyolites fracturing at much lower force and the tuffs compacting instead of fracturing due to their high porosity. Both rhyolite and tuff instead experience a lesser but steady helium release as they are crushed. The cumulative helium release for the volcanic tuffs varies by as much as two orders of magnitude but is fairly consistent for the denser rhyolite and granite tested. The results indicate that there is a large degassing of helium as rocks are elastically and inelastically deformed prior to fracturing. For more porous and less brittle rocks, the cumulative release will depend more on the degree of deformation applied. These results are compared with known U/Th radioisotope contents in the rocks to determine whether the trapped helium was produced in the rock or arrived via secondary migration of 4He.
This chapter will show the results of a study where component-based transfer path analysis was used to translate vibration environments between versions of the round-robin structure. This was done to evaluate a hybrid approach where the responses were measured experimentally, but the frequency response functions were derived analytically. This work will describe the test setup, force estimation process, response prediction (on the new system), and show comparisons between the predicted and measured responses. Observations will also be made on the applicability of this hybrid approach in more complex systems.
To decarbonize the energy sector, there are international efforts to displace carbon-based fuels with renewable alternatives, such as hydrogen. Storage and transportation of gaseous hydrogen are key components of large-scale deployment of carbon-neutral energy technologies, especially storage at scale and transportation over long distances. Due to the high cost of deploying large-scale infrastructure, the existing pipeline network is a potential means of transporting blended natural gas-hydrogen fuels in the near term and carbon-free hydrogen in the future. Much of the existing infrastructure in North America was deployed prior to 1970 when greater variability existed in steel processing and joining techniques often leading to microstructural inhomogeneities and hard spots, which are local regions of elevated hardness relative to the pipe or weld. Hard spots, particularly in older pipes and welds, are a known threat to structural integrity in the presence of hydrogen. High-strength materials are susceptible to hydrogen-assisted fracture, but the susceptibility of hard spots in otherwise low-strength materials (such as vintage pipelines) has not been systematically examined. Assessment of fracture performance of pipeline steels in gaseous hydrogen is a necessary step to establish an approach for structural integrity assessment of pipeline infrastructure for hydrogen service. This approach must include comprehensive understanding of microstructural anomalies (such as hard spots), especially in vintage materials. In this study, fracture resistance of pipeline steels is measured in gaseous hydrogen with a focus on high strength materials and hardness limits established in common practice and in current pipeline codes (such as ASME B31.12). Elastic-plastic fracture toughness measurements were compared for several steel grades to identify the relationship between hardness and fracture resistance in gaseous hydrogen.
Hargis, Joshua W.; Egeln, Anthony; Houim, Ryan; Guildenbecher, Daniel R.
Visualization of flow structures within post-detonation fireballs has been performed for benchmark validation of numerical simulations. Custom pressed PETN explosives with a 12-mm diameter hemispherical form factor were used to produce a spherically symmetric post-detonation flow with low soot yield. Hydroxyl-radical planar laser-induced fluorescence (OH-PLIF) was employed to visualize the flow structure from approximately 10 μs to 35 μs after shock breakout from the explosive pellet. Fireball simulations were performed using the HyBurn computational fluid dynamics (CFD) package. Experimental OH-PLIF results were compared to synthetic OH-PLIF obtained by post-processing of the CFD simulations. From the comparison of experimental and synthetic OH-PLIF images, CFD is shown to replicate much of the flow structure observed in the experiments, while revealing potential differences in turbulent length scales and OH kinetics. The results provide a significant advancement in experimental resolution of these harsh turbulent combustion environments and validate physical models thereof.
Multifidelity (MF) uncertainty quantification (UQ) seeks to leverage and fuse information from a collection of models to achieve greater statistical accuracy with respect to a single-fidelity counterpart, while maintaining an efficient use of computational resources. Despite many recent advancements in MF UQ, several challenges remain and these often limit its practical impact in certain application areas. In this manuscript, we focus on the challenges introduced by nondeterministic models to sampling MF UQ estimators. Nondeterministic models produce different responses for the same inputs, which means their outputs are effectively noisy. MF UQ is complicated by this noise since many state-of-the-art approaches rely on statistics, e.g., the correlation among models, to optimally fuse information and allocate computational resources. We demonstrate how the statistics of the quantities of interest, which impact the design, effectiveness, and use of existing MF UQ techniques, change as functions of the noise. With this in hand, we extend the unifying approximate control variate framework to account for nondeterminism, providing for the first time a rigorous means of comparing the effect of nondeterminism on different multifidelity estimators and analyzing their performance with respect to one another. Numerical examples are presented throughout the manuscript to illustrate and discuss the consequences of the presented theoretical results.
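A two-model control variate sketch illustrates how low-fidelity nondeterminism degrades the statistics that multifidelity estimators rely on: as the noise grows, the estimated correlation, and hence the control-variate weight, shrinks toward zero. The model forms, sample counts, and noise levels are illustrative assumptions, not the manuscript's examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(x):                     # "high-fidelity" model (illustrative)
    return np.sin(x) + 0.1 * x**2

def f_lo(x, noise_std=0.0):      # "low-fidelity" model, optionally nondeterministic
    return np.sin(x) + noise_std * rng.standard_normal(np.shape(x))

N, M = 100, 10_000               # few HF samples, many LF samples
x_hf = rng.uniform(-1, 1, N)     # shared inputs for the paired evaluations
x_lf = rng.uniform(-1, 1, M)

for noise in (0.0, 0.2, 1.0):
    y_hf = f_hi(x_hf)
    y_lf_paired = f_lo(x_hf, noise)
    y_lf_many = f_lo(x_lf, noise)
    # control-variate weight estimated from the paired samples
    alpha = np.cov(y_hf, y_lf_paired)[0, 1] / np.var(y_lf_paired, ddof=1)
    est = y_hf.mean() + alpha * (y_lf_many.mean() - y_lf_paired.mean())
    print(f"LF noise {noise:>4}: CV estimate {est:+.4f}, weight {alpha:+.3f}")
```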
Fault location, isolation, and service restoration of a self-healing, self-assembling microgrid operating off-grid from distributed inverter-based resources (IBRs) can be a unique challenge because of fault current limitations and uncertainties regarding which sources are operational at any given time. The situation can become even more challenging if data sharing between the various microgrid controllers, relays, and sources is not available. This paper presents an innovative robust partitioning approach, which is used as part of a larger self-assembling microgrid concept utilizing local measurements only. This robust partitioning approach splits a microgrid into sub-microgrids to isolate the fault to just one of the sub-microgrids, allowing the others to continue normal operation. A case study is implemented in the IEEE 123-bus distribution test system in Simulink to show the effectiveness of this approach. The results indicate that including the robust partitions leads to less loss of load and shorter overall restoration times.
Battery systems are typically equipped with state of charge (SoC) estimation algorithms. Sensor measurements used to estimate SoC are susceptible to false data injection attacks (FDIAs) that aim to disturb state estimation and, consequently, damage the system. In this paper, SoC estimation methods are re-purposed to detect FDIAs targeting the current and voltage sensors of a battery stack using a combination of an improved input noise aware unscented Kalman filter (INAUKF) and a cumulative sum detector. The root mean squared error of the states estimated by the INAUKF was at least 85% lower than the traditional unscented Kalman filter for all noise levels tested. The proposed method was able to detect FDIA in the current and voltage sensors of a series-connected battery stack in 99.55% of the simulations.
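The detection stage can be illustrated with a generic two-sided CUSUM applied to filter residuals; the drift and threshold values, and the additive-bias attack below, are illustrative placeholders rather than the tuned detector of the paper.

```python
import numpy as np

def cusum_detect(residuals, drift=0.01, threshold=0.5):
    """Two-sided cumulative-sum (CUSUM) change detector applied to a
    sequence of filter residuals (e.g., measured-minus-predicted voltage).
    `drift` and `threshold` are illustrative tuning values; returns the
    first flagged sample index, or None if no change is detected."""
    g_pos, g_neg = 0.0, 0.0
    for k, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - drift)
        g_neg = max(0.0, g_neg - r - drift)
        if g_pos > threshold or g_neg > threshold:
            return k
    return None

# Example: clean residuals, then a small injected bias mimicking an FDIA
rng = np.random.default_rng(2)
res = 0.01 * rng.standard_normal(500)
res[300:] += 0.05                      # attack begins at sample 300
print(cusum_detect(res))               # flags shortly after sample 300
```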
Analytic relations that describe crack growth are vital for modeling experiments and building a theoretical understanding of fracture. Upon constructing an idealized model system for the crack and applying the principles of statistical thermodynamics, it is possible to formulate the rate of thermally activated crack growth as a function of load, but the result is analytically intractable. Here, an asymptotically correct theory is used to obtain analytic approximations of the crack growth rate from the fundamental theoretical formulation. These crack growth rate relations are compared to those that exist in the literature and are validated with respect to Monte Carlo calculations and experiments. The success of this approach is encouraging for future modeling endeavors that might consider more complicated fracture mechanisms, such as inhomogeneity or a reactive environment.
Polymeric materials are commonplace in the natural gas infrastructure as distribution pipes, coatings, seals, and gaskets. Under the auspices of the U.S. Department of Energy HyBlend program, one means of reducing greenhouse gas emissions is to replace natural gas, either partially or completely, with hydrogen. This approach makes it imperative to conduct near-term and long-term materials compatibility research in these relevant environments. Insights into the effects of hydrogen and hydrogen gas blends on polymer integrity can be gained through both ex-situ and in-situ analytical methods. The work presented here highlights a study of the behavior of pipeline polyethylene (PE) materials, including HDPE (Dow 2490 and GDB50) and MDPE (Ineos and legacy DuPont Aldyl A), when exposed to hydrogen, by means of in-situ X-ray scattering and ex-situ Raman spectroscopy techniques. These methods complement each other in analyzing polymer microstructure. The data collected revealed that the aforementioned polymers did not show significant changes in crystallinity or morphology under the exposure conditions tested. These studies help establish techniques to study real-time effects of hydrogen gas on polymer structure and chemistry, which is directly related to pipeline mechanical strength and longevity of service.
Dendrites enable neurons to perform nonlinear operations. Existing silicon dendrite circuits sufficiently model passive and active characteristics, but do not exploit shunting inhibition as an active mechanism. We present a dendrite circuit implemented on a reconfigurable analog platform that uses active inhibitory conductance signals to modulate the circuit's membrane potential. We explore the potential use of this circuit for direction selectivity by emulating recent observations demonstrating a role for shunting inhibition in a directionally-selective Drosophila (Fruit Fly) neuron.
This paper presents a low-power staggered-tuning LNA with a variable threshold limiter designed for impulse-radio ultra-wideband (IR-UWB). This amplifier has a high gain of 36.5 dB with a 3 dB S21 bandwidth of 6.8-9.4 GHz while consuming 4.28 mW from a 0.8 V supply. The input return loss is better than 8 dB across the band and better than 10 dB above 6.9 GHz. The output return loss is better than 18 dB across the operating bandwidth. At 9 GHz, the minimum measured noise figure is 4.85 dB. The OP1dB compression point measured at 7.8 GHz is -5.5 dBm. We also present an adjustable threshold limiter, providing additional protection over a typical diode limiter circuit. This amplifier is fabricated in a 45 nm PD-SOI process. To the authors' knowledge, this LNA demonstrates the highest linear figure of merit (FoM) in C- and X-band work.
Accurate measurement of frequency response functions (FRFs) is essential for system identification, model updating, and structural health monitoring. However, sensor noise and leakage cause variance and systematic errors in estimated FRFs. Low-noise sensors, windowing techniques, and intelligent experiment design can mitigate these effects but are often limited by practical considerations. This chapter is a guide to the implementation of local modeling methods for FRF estimation, which have been extensively researched but are seldom used in practice. Theoretical background is presented, and a procedure for automatically selecting a parameterization and model order is proposed. Computational improvements are discussed that make local modeling feasible for systems with many input and output channels. The methods discussed herein are validated on a simulation example and two experimental examples: a multi-input, multi-output system with three inputs and 84 outputs and a nonlinear beam assembly. They are shown to significantly outperform the traditional H1 and HSVD estimators.
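For reference, the traditional H1 estimator that the local-modeling methods are benchmarked against can be written in a few lines using Welch-averaged spectra; the single-degree-of-freedom example system and its parameters below are illustrative.

```python
import numpy as np
from scipy import signal

def h1_frf(x, y, fs, nperseg=1024):
    """Classical H1 frequency response function estimate between an input
    record x and an output record y: H1 = Sxy / Sxx, computed with Welch
    averaging.  This is the baseline estimator, not the local-modeling
    method described in the chapter."""
    f, Sxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxx = signal.welch(x, fs=fs, nperseg=nperseg)
    return f, Sxy / Sxx

# Example: SDOF system (fn ~ 100 Hz, 2% damping) driven by white noise
fs, n = 2048, 2**16
rng = np.random.default_rng(3)
x = rng.standard_normal(n)
wn, zeta = 2 * np.pi * 100, 0.02
b, a = signal.bilinear([wn**2], [1, 2 * zeta * wn, wn**2], fs=fs)
y = signal.lfilter(b, a, x) + 0.01 * rng.standard_normal(n)   # add sensor noise
f, H1 = h1_frf(x, y, fs)
```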
Our present electric power grid maximizes the spinning inertia of fossil fuel generators (inherent energy storage) to meet stability and performance requirements. Our goal is to begin investigating the replacement of this spinning inertia with energy storage systems (ESS), including the information flow required by renewable energy sources (RES), subject to certain criteria. General criteria metrics include energy storage, information flow, estimation, communication links, and centralized versus decentralized architectures. Our focus is on evaluating the Fisher Information Equivalency (FIE) metric as a multi-criteria trade-off cost function for the minimization of ESS options and information flow. This paper begins with a formal conceptual definition of an infinite bus. A simple example of a One Machine Infinite Bus (OMIB) system with a Unified Power Flow Controller (UPFC) is then used to demonstrate the FIE-based approach to minimizing the ESS. A second, more detailed example with several spinning machines is included, with representative power electronics and ESS for RES attached to the electric power grid. A simple trade study begins to highlight the requirements to support large penetration of RES. Note that large-scale, high penetration of RES will require large investments in ESS, which we want to minimize.
Multiple-input/multiple-output (MIMO) vibration control often relies on a least-squares solution utilizing a matrix pseudo-inverse. While this is simple and effective for many cases, it lacks flexibility in assigning preference to specific control channels or degrees of freedom (DOFs). For example, the user may have some DOFs where accuracy is very important and other DOFs where accuracy is less important. This chapter shows a method for assigning weighting to control channels in the MIMO vibration control process. These weights can be constant or frequency-dependent functions depending on the application. An algorithm is presented for automatically selecting DOF weights based on a frequency-dependent data quality metric to ensure the control solution is only using the best, linear data. An example problem is presented to demonstrate the effectiveness of the weighted solution.
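A per-frequency-line weighted least-squares drive computation illustrates the weighting idea; the weight values and the simple diagonal weighting below are illustrative, and the automatic data-quality-based weight selection described in the chapter is not reproduced here.

```python
import numpy as np

def weighted_input_estimate(H, x_target, w):
    """Weighted least-squares drive estimate for one frequency line of a
    MIMO vibration control problem.

    H        : (n_resp, n_input) complex FRF matrix
    x_target : (n_resp,) target response spectrum at this line
    w        : (n_resp,) non-negative weights expressing per-DOF preference
               (these could be frequency-dependent data-quality metrics)

    Solves min_f || diag(sqrt(w)) (H f - x_target) ||_2, the weighted
    generalization of the usual pseudo-inverse solution.
    """
    sw = np.sqrt(w)
    f, *_ = np.linalg.lstsq(sw[:, None] * H, sw * x_target, rcond=None)
    return f

# Example: 3 response DOFs, 2 drives; emphasize accuracy at the first DOF
rng = np.random.default_rng(4)
H = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
x_target = np.array([1.0, 0.5, 0.2], dtype=complex)
print(weighted_input_estimate(H, x_target, w=np.array([10.0, 1.0, 1.0])))
```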
The Rydberg dipole blockade has emerged as the standard mechanism to induce entanglement between neutral-atom qubits. In these protocols, laser fields that couple qubit states to Rydberg states are modulated to implement entangling gates. Here we present an alternative protocol to implement entangling gates via Rydberg dressing and a microwave-field-driven spin-flip blockade [Y.-Y. Jau, Nat. Phys. 12, 71 (2016), doi:10.1038/nphys3487]. We consider the specific example of qubits encoded in the clock states of cesium. An auxiliary hyperfine state is optically dressed so that it acquires partial Rydberg character. It thus acts as a proxy Rydberg state, with a nonlinear light shift that plays the role of blockade strength. A microwave-frequency field coupling a qubit state to this dressed auxiliary state can be modulated to implement entangling gates. Logic gate protocols designed for the optical regime can be imported to this microwave regime, for which experimental control methods are more robust. We show that unlike the strong dipole-blockade regime usually employed in Rydberg experiments, going to a moderate spin-flip-blockade regime results in faster gates and smaller Rydberg decay. We study various regimes of operation that can yield high-fidelity two-qubit entangling gates and characterize their analytical behavior. In addition to the inherent robustness of microwave control, we can design these gates to be more robust to laser amplitude and frequency noise at the cost of a small increase in Rydberg decay.
A Marx generator module from the decommissioned RITS pulsed power machine at Sandia National Labs was modified to operate in an existing setup at Texas Tech University. This will ultimately be used as a testbed for laser-triggered gas switching. The existing experimental setup at Texas Tech University consists of a large Marx tank, an oil-filled coaxial pulse forming line, an adjustable peaking gap, and a load section, along with various diagnostics. The setup was previously operated at a lower voltage than the new experiment, so electrostatic modeling was done to ensure viability and drive needed modifications. The oil tank will house the modified RITS Marx. This Marx contains half as many stages as the original RITS module and has an expected output of 1 MV. A trigger Marx generator consisting of 8 stages has been fabricated to trigger the RITS Marx. Charging and triggering of both Marx generators will be controlled through a fiber optic network. The output from the modified RITS Marx will be used to charge the oil-filled coaxial line acting as a low-impedance pulse forming line (PFL). Once charged, the self-breaking peaking gap will close, allowing the compressed pulse to be released into the load section. For testing of the Marx module and PFL, a matched 10 Ω water load was fabricated. The output pulse width is 55 ns. Diagnostics include two capacitive voltage probes on either side of the peaking gap, a quarter-turn Rogowski coil for load current measurement, and a Pearson coil for calibration purposes.
The explosive BTF (benzotrifuroxan) is an interesting molecule for sub-millimeter studies of initiation and detonation. It contains no hydrogen, and thus produces no water in the detonation products, resulting in a correspondingly high temperature in the reaction zone. The material has an impact sensitivity that is comparable to or less than that of PETN (pentaerythritol tetranitrate) and slightly greater than that of RDX, HMX, and CL-20. Physical vapor deposition (PVD) can be used to grow high-density films of pure explosives with precise control over geometry, and we apply this technique to BTF to study detonation and initiation behavior as a function of sample thickness. The geometrical effects on detonation and corner turning behavior are studied with the critical detonation thickness experiment and the micromushroom test, respectively. Initiation behavior is studied with the high-throughput initiation experiment. Vapor-deposited films of BTF show detonation failure, corner turning, and initiation consistent with a heterogeneous explosive. Scaling of failure thickness to failure diameter shows that BTF has a very small failure diameter.
Multi-axis testing has become a popular test method because it provides a more realistic simulation of a field environment when compared to traditional vibration testing. However, field data may not be available to derive the multi-axis environment. This means that methods are needed to generate “virtual field data” that can be used in place of measured field data. Transfer path analysis (TPA) has been suggested as a method to do this since it can be used to estimate the excitation forces on a legacy system and then apply these forces to a new system to generate virtual field data. This chapter will provide a review of using TPA methods to do this. It will include a brief background on TPA, discuss the benefits of using TPA to compute virtual field data, and delve into the areas for future work that could make TPA more useful in this application.
We consider the problem of decentralized control of reactive power provided by distributed energy resources for voltage support in the distribution grid. We assume that the reactance matrix of the grid is unknown and potentially time-varying. We present a decentralized adaptive controller in which the reactive power at each inverter is set using a potentially heterogeneous droop curve, and we analyze the stability and the steady-state error of the resulting system. The effectiveness of the controller is validated in simulations using modified versions of the IEEE 13-bus and 8500-node test systems.
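For readers unfamiliar with droop-based reactive power control, the following minimal sketch shows one plausible (assumed) form of a per-inverter droop update with heterogeneous gains and VAR limits; it is not the adaptive controller analyzed in the paper, whose droop curves are adjusted online.

```python
# Illustrative sketch (assumed form, not the paper's controller): a per-inverter
# voltage-droop update for reactive power, with heterogeneous gains and limits.
import numpy as np

def droop_step(v_meas, q_prev, gain, q_min, q_max, v_ref=1.0, alpha=0.5):
    """One decentralized update: each inverter reacts only to its own voltage.

    v_meas : per-inverter voltage magnitudes (p.u.)
    q_prev : previous reactive power setpoints
    gain   : per-inverter droop slopes (heterogeneity is allowed)
    alpha  : low-pass factor that damps the update for stability
    """
    q_droop = -gain * (v_meas - v_ref)                 # droop curve
    q_new = (1 - alpha) * q_prev + alpha * q_droop     # filtered update
    return np.clip(q_new, q_min, q_max)                # inverter VAR limits

# Example: three inverters with different droop slopes.
v = np.array([1.03, 0.98, 1.01])
q = droop_step(v, q_prev=np.zeros(3), gain=np.array([2.0, 5.0, 3.0]),
               q_min=-0.3, q_max=0.3)
print(q)
```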
Measurements of the oxidation rates of various forms of carbon (soot, graphite, coal char) have often shown an unexplained attenuation with increasing temperatures in the vicinity of 2000 K, even when accounting for diffusional transport limitations and gas-phase chemical effects (e.g. CO2 dissociation). With the development of oxy-fuel combustion approaches for pulverized coal utilization with carbon capture, high particle temperatures are readily achieved in sufficiently oxygen-enriched environments. In this work, a new semi-global intrinsic kinetics model for high temperature carbon oxidation is created by starting with a previously developed 5-step mechanism that was shown to reproduce all major known trends in carbon oxidation, except for its high temperature kinetic falloff, and incorporating a recently discovered surface oxide decomposition step. The predictions of this new model are benchmarked by deploying the kinetic model in a steady-state reacting particle code (SKIPPY) and comparing the simulated results against a carefully measured set of pulverized coal char combustion temperature measurements over a wide range of oxygen concentrations in N2 and CO2 environments. The results show that the inclusion of the spontaneous surface oxide decomposition reaction step significantly improves predictions at high particle temperatures. Furthermore, the simulations reveal that O atoms released from the oxide decomposition step enhance the radical pool in the near-surface region and within the particle interior itself. Incorporation of literature rates for O and OH reactions with the carbon surface results in a reduction in the predicted radical pool concentrations and a very minor enhancement of the overall carbon oxidation rate.
Deep neural networks (DNNs) achieve state-of-the-art performance in video anomaly detection. However, the usage of DNNs is limited in practice due to their computational overhead, generally requiring significant resources and specialized hardware. Further, despite recent progress, current evaluation criteria of video anomaly detection algorithms are flawed, preventing meaningful comparisons among algorithms. In response to these challenges, we propose (1) a compression-based technique referred to as Spatio-Temporal N-Gram Prediction by Partial Matching (STNG PPM) and (2) simple modifications to current evaluation criteria for improved interpretation and broader applicability across algorithms. STNG PPM does not require specialized hardware, has few parameters to tune, and is competitive with DNNs on multiple benchmark data sets in video anomaly detection.
Deep neural networks for automatic target recognition (ATR) have been shown to be highly successful for a large variety of Synthetic Aperture Radar (SAR) benchmark datasets. However, the black-box nature of neural network approaches raises concerns about how models come to their decisions, especially in high-stakes scenarios. Accordingly, a variety of techniques are being pursued that seek to offer understanding of machine learning algorithms. In this paper, we first provide an overview of explainability and interpretability techniques, introducing their concepts and the insights they produce. Next, we summarize several methods for computing specific approaches to explainability and interpretability, as well as for analyzing their outputs. Finally, we demonstrate the application of several attribution map methods and apply both attribution analysis metrics and localization interpretability analysis to six neural network models trained on the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset to illustrate the insights these methods offer for analyzing SAR ATR performance.
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models at predicting neural responses in brain areas along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary primate dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd and we compared our results with the Non-Negative Matrix Factorization (NNMF) model, which successfully models many tuning properties of MSTd neurons. To better understand the role of computational properties in the NNMF model that give rise to optic flow tuning that resembles that of MSTd neurons, we created additional CNN model variants that implement key NNMF constraints – non-negative weights and sparse coding of optic flow. While the CNNs and NNMF models both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with nonnegative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
Accurate understanding of the behavior of commercial-off-the-shelf electrical devices is important in many applications. This paper discusses methods for the principled statistical analysis of electrical device data. We present several recent successful efforts and describe two current areas of research that we anticipate will produce widely applicable methods. Because much electrical device data is naturally treated as functional, and because such data introduces some complications in analysis, we focus on methods for functional data analysis.
Disposal of commercial spent nuclear fuel in a geologic repository is studied. In situ heater experiments in underground research laboratories provide a realistic representation of subsurface behavior under disposal conditions. This study describes process model development and modeling analysis for a full-scale heater experiment in Opalinus Clay host rock. The results of a thermal-hydrology simulation solving coupled nonisothermal multiphase flow, together with a comparison against experimental data, are presented. The modeling results closely match the experimental data.
Emerging hydrogen technologies span a diverse range of operating environments. High-pressure storage for mobility applications has become commonplace up to about 1,000 bar, whereas transmission of gaseous hydrogen can occur at hydrogen partial pressures of a few bar when blended into natural gas. In the former case, cascade storage is utilized to manage hydrogen-assisted fatigue, and the Boiler and Pressure Vessel Code, Section VIII, Division 3 includes fatigue design curves for fracture mechanics design of hydrogen vessels at pressures of 1,030 bar (using a Paris Law formulation). Recent research on hydrogen-assisted fatigue crack growth has shown that a diverse range of ferritic steels exhibit similar fatigue crack growth behavior in gaseous hydrogen environments, including low-carbon steels (e.g., pipeline steels) as well as quenched and tempered Cr-Mo and Ni-Cr-Mo pressure vessel steels with tensile strength less than 915 MPa. However, measured fatigue crack growth is sensitive to hydrogen partial pressure, and fatigue crack growth can be accelerated in hydrogen at pressures as low as 1 bar. The effect of hydrogen partial pressure from 1 to 1,000 bar can be quantified through a simple semi-empirical correction factor to the fatigue crack growth design curves. This paper documents the technical basis for the pressure-sensitive fatigue crack growth rules for gaseous hydrogen service in ASME B31.12 Code Case 220 and for revision of ASME VIII-3 Code Case 2938-1, including the range of applicability of these fatigue design curves in terms of environmental, materials, and mechanics variables.
A single Synthetic Aperture Radar (SAR) image is a 2-dimensional projection of a 3-dimensional scene, with very limited ability to estimate surface topography. However, multiple SAR images collected from suitably different geometries may be compared using multilateration calculations to estimate characteristics of the missing dimension. The effectiveness of such multilateration algorithms is highly dependent on the geometry of the data collections, and the estimation can be cast as a least-squares exercise. A measure of Dilution of Precision (DOP) can be used to compare the relative merits of various collection geometries.
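A standard way to quantify this geometric sensitivity, consistent with the least-squares framing above, is a DOP-style figure of merit computed from the design matrix of unit look directions; the sketch below is a generic illustration, not the paper's specific formulation.

```python
# Hedged sketch of a dilution-of-precision (DOP) style figure of merit: unit
# line-of-sight vectors form the rows of a design matrix G, and
# DOP = sqrt(trace((G^T G)^{-1})) summarizes how the collection geometry
# amplifies measurement error in a least-squares position estimate.
import numpy as np

def dop(unit_los):
    """unit_los: (n_collections, 3) array of unit look vectors to the scene."""
    G = np.asarray(unit_los, dtype=float)
    cov_shape = np.linalg.inv(G.T @ G)   # error-amplification matrix
    return np.sqrt(np.trace(cov_shape))

# Well-spread geometries give lower DOP than nearly collinear ones.
spread = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.577, 0.577, 0.577]])
collinear = np.array([[1, 0, 0], [0.99, 0.1, 0.05], [0.98, 0.15, 0.1]])
collinear = collinear / np.linalg.norm(collinear, axis=1, keepdims=True)
print(dop(spread), dop(collinear))
```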
The importance of user-accessible multiple-input/multiple-output (MIMO) control methods has been highlighted in recent years. Several user-created control laws have been integrated into Rattlesnake, an open-source MIMO vibration controller developed at Sandia National Laboratories. Much of the effort to date has focused on stationary random vibration control. However, there are many field environments which are not well captured by stationary random vibration testing, for example shock, sine, or arbitrary waveform environments. This work details a time waveform replication technique that uses frequency domain deconvolution, including a theoretical overview and implementation details. Example usage is demonstrated using a simple structural dynamics system and complicated control waveforms at multiple degrees of freedom.
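A minimal sketch of the core frequency-domain deconvolution step follows, assuming a known FRF matrix and a Tikhonov-regularized pseudo-inverse at each frequency line; the regularization choice and interfaces are assumptions for illustration, not necessarily Rattlesnake's implementation.

```python
# Sketch (assumptions: linear time-invariant system, known FRF matrix H(f)) of
# time waveform replication by frequency-domain deconvolution: transform the
# target responses, apply a regularized pseudo-inverse of H at each frequency
# line, and inverse-transform to get the drive waveforms.
import numpy as np

def deconvolve_drives(target, H, reg=1e-6):
    """
    target : (n_samples, n_control) desired time histories
    H      : (n_freq, n_control, n_drive) FRF matrix at the rfft frequency lines
    returns (n_samples, n_drive) drive time histories
    """
    n_samples = target.shape[0]
    T = np.fft.rfft(target, axis=0)                 # (n_freq, n_control)
    drives = np.empty((H.shape[0], H.shape[2]), dtype=complex)
    for k in range(H.shape[0]):                     # per frequency line
        Hk = H[k]
        # Tikhonov-regularized pseudo-inverse to avoid blow-up near FRF zeros
        A = Hk.conj().T @ Hk + reg * np.eye(Hk.shape[1])
        drives[k] = np.linalg.solve(A, Hk.conj().T @ T[k])
    return np.fft.irfft(drives, n=n_samples, axis=0)

# Smoke test with random data: 2 control DOFs, 2 drives, 256 samples.
rng = np.random.default_rng(5)
n_s, n_c, n_d = 256, 2, 2
H = (rng.standard_normal((n_s // 2 + 1, n_c, n_d))
     + 1j * rng.standard_normal((n_s // 2 + 1, n_c, n_d)))
target = rng.standard_normal((n_s, n_c))
print(deconvolve_drives(target, H).shape)           # (256, 2)
```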
A method for battery state of charge (SoC) estimation that compensates for input noise using an adaptive square-root unscented Kalman filter (ASRUKF) is presented in this paper. In contrast to traditional state estimation approaches that consider deterministic system inputs, this method can improve the accuracy of the battery state estimator by considering that the measurements of the control input variable of the filter, the cell currents, are subject to noise. In addition, this paper presents two estimators for the input and output noise covariance. The proposed method consists of initialization, state correction, sigma point calculation, state prediction, and covariance estimation steps and is demonstrated using simulations. We simulate two battery cycling protocols for three series-connected batteries whose SoC is estimated by the proposed method. The results show that the improved ASRUKF closely tracks the states and achieves a 20.63% reduction in SoC estimation error when compared to a benchmark that does not consider input noise.
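For context, the sigma-point construction that any square-root unscented filter builds on is sketched below from a Cholesky factor of the state covariance; this is the generic textbook step, not the paper's adaptive noise-covariance estimators.

```python
# Generic sketch of the sigma-point step underlying square-root unscented
# filtering: sigma points are generated directly from a Cholesky (square-root)
# factor of the state covariance.
import numpy as np

def sigma_points(x, S, alpha=1e-3, beta=2.0, kappa=0.0):
    """x: (n,) state mean; S: (n, n) lower-triangular square root of covariance."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    scale = np.sqrt(n + lam)
    pts = np.empty((2 * n + 1, n))
    pts[0] = x
    for i in range(n):
        pts[1 + i] = x + scale * S[:, i]
        pts[1 + n + i] = x - scale * S[:, i]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

# Example: 2-state battery model (SoC and one polarization voltage), assumed values.
x = np.array([0.8, 0.0])
S = np.linalg.cholesky(np.diag([1e-4, 1e-3]))
pts, wm, wc = sigma_points(x, S)
print(pts.shape, wm.sum())   # (5, 2); mean weights sum to 1
```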
We demonstrate evanescently coupled waveguide integrated silicon photonic avalanche photodiodes designed for single photon detection for quantum applications. Simulation, high responsivity, and record low dark currents for evanescently coupled devices are presented.
There has always been a desire to port high-fidelity reactive flow models from one code to another. For example, the AWE reactive burn model known as CREST has been or is being implemented in several of the U.S. Department of Energy hydrocodes. Those involved with reactive burn model implementation recognize the challenges immediately, e.g., Eulerian versus Lagrangian frameworks, the form of the equation of state, the closure relations, etc. In this work, we report the development of the CREST reactive burn model in CTH, a multidimensional, multi-material hydrocode developed by Sandia National Laboratories, following an earlier implementation shown at the last International Detonation Symposium. Results include code-to-code comparisons between CTH and the AWE hydrocode PERUSE, focusing on the simulated particle velocity histories during a shock-to-detonation transition, and corresponding to previous gas gun impact experiments as well as new model verification studies. Lessons learned are provided, including discussions of the numerical accuracy, in addition to the role of artificial viscosity and artificial viscous work. Finally, simulation results are shown to compare the Snowplough versus P-Alpha porosity model options.
In this work, we evaluate the usefulness of nonsmooth basis functions for representing the periodic response of a nonlinear system subject to contact/impact behavior. As with sine and cosine basis functions for classical Fourier series, which have C∞ smoothness, nonsmooth counterparts with C0 smoothness are defined to develop a nonsmooth functional representation of the solution. Some properties of these basis functions are outlined, such as periodicity, derivatives, and orthogonality, which are useful for functional series applied via the Galerkin method. Least-squares fits of the classical Fourier series and nonsmooth basis functions are presented and compared using goodness-of-fit metrics for time histories from vibro-impact systems with varying contact stiffnesses. This formulation has the potential to significantly reduce the computational cost of harmonic balance solvers for nonsmooth dynamical systems. Rather than requiring many harmonics to capture a system response using classical, smooth Fourier terms, the frequency domain discretization could be captured by a combination of a finite Fourier series supplemented with nonsmooth basis functions to improve convergence of the solution for contact-impact problems.
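The sketch below illustrates the idea with an assumed C0 basis choice (a unit triangle wave) supplementing a truncated Fourier series in a least-squares fit to a response containing a genuine kink; the chapter's basis functions and test signals may differ.

```python
# Illustrative sketch (assumed basis choice): supplement a truncated Fourier
# series with a C0 triangle-wave basis function and compare least-squares fits
# to a vibro-impact-like response with a nonsmooth component.
import numpy as np

def triangle(t, period=2 * np.pi):
    """C0 (nonsmooth) periodic basis: a unit triangle wave."""
    return (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * t / period))

def design_matrix(t, n_harm, include_triangle):
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    if include_triangle:
        cols.append(triangle(t))
    return np.column_stack(cols)

# Synthetic response with a genuinely nonsmooth part, period 2*pi.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
y = np.sin(t) + 0.4 * triangle(t)

for use_tri in (False, True):
    A = design_matrix(t, n_harm=3, include_triangle=use_tri)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    print("triangle basis:", use_tri, " RMS residual:", np.sqrt(np.mean(resid**2)))
```

Including the nonsmooth basis function captures the kink exactly with only a few terms, whereas the smooth Fourier terms alone leave a residual that shrinks slowly as harmonics are added, which is the convergence benefit argued for above.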
Structural materials used in combustion or power generation systems need to have both environmental and temperature resistance to ensure long-term performance. As the energy sector transitions to hydrogen, there is a need to ensure compatibility of highly-alloyed austenitic steels and nickel-based alloys with hydrogen over a range of temperatures. Hydrogen embrittlement of these alloy systems is often considered most detrimental near ambient temperatures and low temperatures, although there is some evidence in the literature that hydrogen can affect creep behavior at elevated temperature. In the intermediate temperature range (e.g., 100-400°C), it is uncertain whether hydrogen degradation of mechanical properties will be of concern. In this study, three alloys (304L, IN625, Hastelloy X) commonly used in power generation systems were thermally precharged with hydrogen and subsequently tensile tested to failure in air at temperatures ranging from 20°C to 200°C. At 20°C, the hydrogen-precharged condition for all materials exhibited loss in ductility, with relative reduction of area ranging between 32% and 57%. The three alloys exhibited different trends with temperature but, in general, the relative reduction of area improved with increasing temperature, tending towards noncharged behavior. Tests were performed at a nominal strain rate of 2 × 10⁻³ s⁻¹ in order to minimize loss of hydrogen during elevated temperature testing. Hydrogen contents from the grip sections were measured both before and after testing and remained within 10% of the starting content for 100°C tests and within 8-23% for 200°C tests.
The development of multi-axis force sensing capabilities in elastomeric materials has enabled new types of human motion measurement with many potential applications. In this work, we present a new soft insole that enables mobile measurement of ground reaction forces (GRFs) outside of a laboratory setting. This insole is based on hybrid shear and normal force detecting (SAND) tactile elements (taxels) consisting of optical sensors optimized for shear sensing and piezoresistive pressure sensors dedicated to normal force measurement. We develop polynomial regression and deep neural network (DNN) GRF prediction models and compare their performance to ground-truth force plate data during two walking experiments. Utilizing a 4-layer DNN, we demonstrate accurate prediction of the anterior-posterior (AP), medial-lateral (ML), and vertical components of the GRF with normalized mean absolute errors (NMAE) of <5.1%, 4.1%, and 4.5%, respectively. We also demonstrate the durability of the hybrid SAND insole construction through more than 20,000 cycles of use.
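As a hedged illustration of the simpler of the two model classes mentioned above, the sketch below fits a quadratic polynomial-regression map from synthetic taxel signals to three GRF components by ordinary least squares; the signal counts, features, and data are invented for illustration and are not the paper's.

```python
# Toy polynomial-regression GRF predictor (illustrative only): quadratic feature
# expansion of taxel readings followed by an ordinary least-squares fit.
import numpy as np

def poly_features(X, degree=2):
    """Quadratic expansion: bias, linear terms, squares, and cross terms."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    for i in range(d):
        for j in range(i, d):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

rng = np.random.default_rng(1)
taxels = rng.uniform(0, 1, size=(500, 6))                        # 6 hypothetical taxel signals
true_map = rng.standard_normal((6, 3))
grf = taxels @ true_map + 0.01 * rng.standard_normal((500, 3))   # AP, ML, vertical

Phi = poly_features(taxels)
W, *_ = np.linalg.lstsq(Phi, grf, rcond=None)                    # (n_features, 3) weights
pred = Phi @ W
nmae = np.mean(np.abs(pred - grf), axis=0) / (grf.max(axis=0) - grf.min(axis=0))
print("NMAE per GRF component:", nmae)
```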
Underground caverns in salt formations are promising geologic features to store hydrogen (H2) because of salt's extremely low permeability and self-healing behavior. Successful salt-cavern H2 storage schemes must maximize the efficiency of cyclic injection-production while minimizing H2 loss through adjacent damaged salt. The salt cavern storage community, however, has not fully understood the geomechanical behavior of salt rocks driven by quick operational cycles of H2 injection-production, which may significantly impact cost-effective storage-recovery performance. Our field-scale generic model captures the impact of combined drag and back stressing on the salt creep behavior corresponding to cycles of compression and extension, which may lead to substantial loss of cavern volume over time and diminish the cavern performance for H2 storage. Our preliminary findings indicate that it is essential to develop a new salt constitutive model based on geomechanical tests of site-specific salt rock to probe the cyclic behavior of salt both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and fatigue.
This paper provides a summary of planning work for experiments that will be necessary to address the long-term model validation needs required to meet offshore wind energy deployment goals. Conceptual experiments are identified and laid out in a validation hierarchy for both wind turbine and wind plant applications. Instrumentation needs that will be required for the offshore validation experiments to be impactful are then listed. The document concludes with a nominal vision for how these experiments can be accomplished.
Different data pipelines and statistical methods are applied to photovoltaic (PV) performance datasets to quantify the performance loss rate (PLR). Since the real values of PLR are unknown, a variety of unvalidated values are reported. As such, the PV industry commonly assumes PLR based on statistically extracted ranges from the literature. However, the accuracy and uncertainty of PLR depend on several parameters including seasonality, local climatic conditions, and the response of a particular PV technology. In addition, the specific data pipeline and statistical method used affect the accuracy and uncertainty. To provide insights, a framework of (≈200 million) synthetic simulations of PV performance datasets using data from different climates is developed. Time series with known PLR and data quality are synthesized, and large parametric studies are conducted to examine the accuracy and uncertainty of different statistical approaches over the contiguous US, with an emphasis on the publicly available and “standardized” library, RdTools. In the results, it is confirmed that PLRs from RdTools are unbiased on average, but the accuracy and uncertainty of individual PLR estimates vary with climate zone, data quality, PV technology, and choice of analysis workflow. Best practices and improvement recommendations based on the findings of this study are provided.
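For orientation, the toy example below shows what a PLR estimate is in its simplest form, a linear trend fit to a normalized performance index expressed in percent per year; it is deliberately naive and is not the RdTools workflow evaluated in the study.

```python
# Toy PLR illustration (not the RdTools workflow): fit a linear trend to a
# normalized monthly performance index and express the slope as %/yr.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(120)                          # ten years of monthly data
true_plr = -0.006                                # -0.6 %/yr assumed for the example
index = 1.0 + true_plr * months / 12.0
index += 0.02 * np.sin(2 * np.pi * months / 12)  # seasonality
index += 0.01 * rng.standard_normal(months.size) # measurement noise

slope, intercept = np.polyfit(months / 12.0, index, 1)
plr_pct_per_year = 100 * slope / intercept
print(f"estimated PLR: {plr_pct_per_year:.2f} %/yr (true {100*true_plr:.2f})")
```

Seasonality and noise are exactly the features that make naive trend fits biased or uncertain in practice, which is why the study evaluates full data pipelines rather than a single regression.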
This paper builds upon previous research in developing a SiC Drift Step Recovery Diode (DSRD) model in Silvaco Victory Device. For this research, the DSRD is based on an N-type substrate for improved manufacturability. The model described in this paper was developed by characterizing DSRD devices under DC and transient conditions. The details of the pulsed power testbed developed for the transient characterization are outlined in this paper. The goal of this model is to allow the rapid development of future pulsed power systems and to enable further device structure optimization.
While recent research has greatly improved our ability to test and model nonlinear dynamic systems, it is rare that these studies quantify the effect that the nonlinearity would have on failure of the structure of interest. While several very notable exceptions certainly exist, such as the work of Hollkamp et al. on the failure of geometrically nonlinear skin panels for high-speed vehicles (see, e.g., Gordon and Hollkamp, Reduced-order models for acoustic response prediction, Technical Report AFRL-RB-WP-TR-2011-3040, Air Force Research Laboratory, Dayton, 2011), other studies have given little consideration to failure. This work studies the effect of common nonlinearities on the failure (and failure margins) of components that undergo durability testing in dynamic environments. This context differs from many engineering applications because one usually assumes that any nonlinearities have been fully exercised during the test.
Control volume analysis models physics via the exchange of generalized fluxes between subdomains. We introduce a scientific machine learning framework adopting a partition of unity architecture to identify physically-relevant control volumes, with generalized fluxes between subdomains encoded via Whitney forms. The approach provides a differentiable parameterization of geometry which may be trained in an end-to-end fashion to extract reduced models from full field data while exactly preserving physics. The architecture admits a data-driven finite element exterior calculus allowing discovery of mixed finite element spaces with closed-form quadrature rules. An equivalence between Whitney forms and graph networks reveals that the geometric problem of control volume learning is equivalent to an unsupervised graph discovery problem. The framework is developed for manifolds in arbitrary dimension, with examples provided for H(div) problems in R^2 establishing convergence and structure preservation properties. Finally, we consider a lithium-ion battery problem where we discover a reduced finite element space encoding transport pathways from high-fidelity microstructure-resolved simulations. The approach reduces a 5.89M-element finite element simulation to 136 elements while reproducing pressure to under 0.1% error and preserving conservation.
Numerous types of pulsed power driven inertial confinement fusion (ICF) and high energy density (HED) systems rely on implosion stability to achieve desired temperatures, pressures, and densities. Sandia National Laboratories Pulsed Power Sciences Center’s main ICF platform, Magnetized Liner Inertial Fusion (MagLIF), suffers from implosion instabilities which limit attainable fuel conditions and can compromise fuel confinement. This Truman Fellowship research primarily focused on computationally exploring (a) methods for improving our understanding of hydrodynamic and magnetohydrodynamic instabilities that form during cylindrical liner implosions, (b) methods for mitigating implosion instabilities, particularly those that degrade performance of MagLIF targets, and (c) novel MagLIF target designs intended to improve target performance primarily via enhanced implosion stability. Several multi-dimensional computational tools were used, including the magnetohydrodynamics code ALEGRA, the radiation-magnetohydrodynamics code HYDRA, and the magnetohydrodynamics code KRAKEN. This research succeeded in executing and analyzing simulations of automagnetizing liner implosions, shockless MagLIF implosions, dynamic screw pinch driven cylindrical liner implosions, and cylindrically convergent HED instability studies. The methods and tools explored and developed in this Truman Fellowship research have been published in several peer-reviewed journal articles and will serve as useful contributions to the fields of pulsed power science and engineering, particularly pertaining to pulsed power ICF and HED science.
For the cylindrically symmetric targets that are normally fielded on the Z machine, two dimensional axisymmetric MHD simulations provide the backbone of our target design capability. These simulations capture the essential operation of the target and allow for a wide range of physics to be addressed at a substantially lower computational cost than 3D simulations. This approach, however, makes some approximations that may impact its ability to accurately provide insight into target operation. As an example, in 2D simulations, targets are able to stagnate directly to the axis in a way that is not entirely physical, leading to uncertainty in the impact of the dynamical instabilities that are an important source of degradation for ICF concepts. In this report, we have performed a series of 3D calculations in order to assess the importance of this higher fidelity treatment on MagLIF target performance.
A technique using the photon kerma cross section for a material in combination with the number fraction from a photon energy spectrum has been developed to determine the estimated subzone dimension needed to provide an energy deposition profile in radiation transport calculations. The technique was verified using the ITS code for monoenergetic photon sources and a selection of photon spectra. A Python script was written to use the CEPXS cross-section file with a Rapture calculated transmission spectrum to provide the dimensional estimates in a rapid fashion. The script is available for SNL users through the corporate gitlab server.
The nonlinear viscoelastic Spectacular model is calibrated to the thermo-mechanical behavior of 828/D230/Alox with an alox volume fraction of 20%. Legacy experimental data from Sandia's polymer properties database (PPD) is used to calibrate the model. Based on known densities of the epoxy 828/D230 and the alox filler, the alox volume fractions listed on the PPD were likely reported incorrectly. The alox volume fractions are recalculated here. Using the recalculated alox volume fractions, the PPD contains experimental data for 828/D230/Alox with alox volume fractions of 16%, 24%, and 33%, so the thermo-mechanical behavior at 20% alox volume fraction is estimated by interpolating between the bounding cases of 16% and 24%. Because the Spectacular model can be fairly challenging to calibrate, the calibration procedure is described in detail. Several of the calibration steps involve inverse parameter identification, where an experiment is simulated and parameters are iteratively updated until the model response matches the experimental data. As the PPD does not fully describe all experimental procedures, the experimental simulations use assumed thermal and mechanical loading rates that are typical for the viscoelastic characterization of epoxies. Spectacular uses four independent relaxation functions related to volumetric (ƒ1), shear (ƒ2), thermal strain (ƒ3), and thermal relaxations (ƒ4). The previous SPEC model form, also known as the universal_polymer model, uses two independent relaxation functions related to volumetric and thermal relaxation (ƒν = ƒ1 = ƒ3 = ƒ4) and shear relaxation (ƒs = ƒ2). The two constitutive choices are briefly evaluated here, where it is found that the four-relaxation-function approach of Spectacular was better suited for fitting the coefficient of thermal expansion during both heating and cooling.
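The inverse parameter identification pattern described above can be illustrated with a small, self-contained example: fitting a two-term Prony-series relaxation function to synthetic relaxation data by nonlinear least squares. The functional form, parameter values, and data are assumptions for illustration and are unrelated to the Spectacular calibration itself.

```python
# Hedged sketch of inverse parameter identification: iteratively update
# parameters of an assumed two-term Prony-series relaxation function until the
# model matches (synthetic) stress-relaxation data.
import numpy as np
from scipy.optimize import least_squares

def prony(t, g_inf, g1, tau1, g2, tau2):
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

t = np.logspace(-2, 4, 60)                        # time in seconds
data = prony(t, 0.3, 0.4, 1.0, 0.3, 500.0)        # "experimental" relaxation curve
data = data * (1 + 0.01 * np.random.default_rng(3).standard_normal(t.size))

def residual(p):
    return prony(t, *p) - data

fit = least_squares(residual, x0=[0.5, 0.3, 10.0, 0.2, 100.0],
                    bounds=(1e-6, [1.0, 1.0, 1e5, 1.0, 1e6]))
print("identified parameters:", fit.x)
```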
The Storage Sizing and Placement Simulation (SSIM) application allows a user to define the possible sizes and locations of energy storage elements on an existing grid model defined in OpenDSS. Given these possibilities, the software will automatically search through them and attempt to determine which configurations result in the best overall grid performance. This quick-start guide will go through, in detail, the creation of an SSIM model based on a modified version of the IEEE 34 bus test feeder system. There are two primary parts of this document. The first is a complete list of instructions with little-to-no explanation of the meanings of the actions requested. The second is a detailed description of each input and action stating the intent and effect of each. There are links between the two sections.
Optimization is a key tool for scientific and engineering applications; however, in the presence of models affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at several design locations; i.e., its overall computational cost is proportional to the cost of performing a forward uncertainty analysis at each design location. An OUU workflow has two main components: an inner-loop strategy for the computation of statistics of the quantity of interest, and an outer-loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner-loop statistics. In this work, we propose to alleviate the cost of the inner-loop uncertainty analysis by leveraging the multilevel Monte Carlo (MLMC) method, which is able to allocate resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several numerical test cases to showcase the features and performance of our approach with respect to its single-fidelity Monte Carlo counterpart.
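The inner-loop allocation referred to above follows the standard MLMC result: minimizing total cost subject to a target estimator variance eps^2 gives per-level sample counts proportional to sqrt(V_l / C_l). The sketch below implements that textbook formula; the Dakota implementation handles more general estimators and constraints.

```python
# Textbook MLMC sample allocation: given per-level variances V_l and costs C_l,
# choose sample counts N_l that minimize total cost subject to a target
# estimator variance eps^2, i.e. N_l = eps^-2 * sqrt(V_l/C_l) * sum_m sqrt(V_m*C_m).
import numpy as np

def mlmc_allocation(V, C, eps):
    V, C = np.asarray(V, float), np.asarray(C, float)
    total = np.sum(np.sqrt(V * C))
    N = np.ceil(total * np.sqrt(V / C) / eps**2).astype(int)
    return np.maximum(N, 2)   # keep at least a couple of samples per level

# Example: three levels with variance decaying and cost growing with fidelity.
V = [1.0e-2, 2.0e-3, 4.0e-4]
C = [1.0, 8.0, 64.0]
print(mlmc_allocation(V, C, eps=1e-2))
```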
This report describes work originally performed in FY19 that assembled a workflow enabling formal verification of high-consequence digital controllers. The approach builds on an engineering analysis strategy using multiple abstraction levels (Model-Based Design) and performs exhaustive formal analysis of appropriate levels – here, state machines and C code – to assure always/never properties of digital logic that cannot be verified by testing alone. The operation of the workflow is illustrated using example models and code, including expected failures of verification when properties are violated.
The International Database of Reference Gamma-Ray Spectra of Various Nuclear Matter is designed to hold curated gamma spectral data and is hosted by the International Atomic Energy Agency on its public-facing web site. The database used to hold the spectral data was designed by Sandia National Labs under the auspices of the State Department's Support Program. This document describes the tables and entity relationships that make up the database.
Long-term stable sealing elements are a basic component in the safety concept for a possible repository for heat-emitting radioactive waste in rock salt. The sealing elements will be part of the closure concept for drifts and shafts. They will be made from a well-defined crushed salt employing a specific manufacturing process. The use of crushed salt as a geotechnical barrier, as required by the German Site Selection Act from 2017 /STA 17/, represents a paradigm change in the safety function of crushed salt, since this material was formerly only considered as stabilizing backfill for the host rock. The demonstration of the long-term stability and impermeability of crushed salt is crucial for its use as a geotechnical barrier. The KOMPASS-II project is a follow-up of the KOMPASS-I project and continues the work with a focus on improving the understanding of the thermal-hydraulic-mechanical (THM) coupled processes in crushed salt compaction, with the objective to enhance the scientific competence for using crushed salt for the long-term isolation of high-level nuclear waste within rock salt repositories. The project strives for an adequate characterization of the compaction process and the essential influencing parameters, as well as a robust and reliable long-term prognosis using validated constitutive models. For this purpose, experimental studies on long-term compaction tests are combined with microstructural investigations and numerical modeling. The long-term compaction tests in this project focused on the effect of mean stress, deviatoric stress, and temperature on the compaction behavior of crushed salt. A laboratory benchmark was performed identifying a variability in compaction behavior. Microstructural investigations were executed with the objective to characterize the influence of the pre-compaction procedure, humidity content, and grain size/grain size distribution on the overall compaction process of crushed salt with respect to the deformation mechanisms. The created database was used for benchmark calculations aiming for improvement and optimization of a large number of constitutive models available for crushed salt. The models were calibrated, and the improvement process was made visible by applying the virtual demonstrator.
Infrasound, low frequency sound less than 20 Hz, is generated by both natural and anthropogenic sources. Infrasound sensors measure pressure fluctuations only in the vertical plane and are single channel. However, the most robust infrasound signal detection methods rely on stations with multiple sensors (arrays), despite the fact that these are sparse. Automated methods developed for seismic data, such as short-term average to long-term average ratio (STA/LTA), often have a high false alarm rate when applied to infrasound data. Leveraging single channel infrasound stations has the potential to decrease signal detection limits, though this cannot be done without a reliable detection method. Therefore, this report presents initial results using (1) a convolutional neural network (CNN) to detect infrasound signals and (2) unsupervised learning to gain insight into source type.
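For reference, the STA/LTA baseline mentioned above can be written in a few lines; the window lengths and threshold below are illustrative assumptions rather than recommended settings.

```python
# Reference implementation of the classical STA/LTA detector (the baseline, not
# the CNN): ratio of short-term to long-term moving averages of signal power,
# with a detection declared when the ratio crosses a threshold.
import numpy as np

def sta_lta(x, fs, sta_win=1.0, lta_win=30.0):
    """Return the STA/LTA ratio of a single-channel trace x sampled at fs (Hz)."""
    power = x.astype(float) ** 2
    n_sta = max(1, int(sta_win * fs))
    n_lta = max(1, int(lta_win * fs))
    sta = np.convolve(power, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(power, np.ones(n_lta) / n_lta, mode="same")
    return sta / np.maximum(lta, 1e-20)

# Example: noise with a weak injected transient; flag samples above a threshold.
fs = 20.0
rng = np.random.default_rng(4)
trace = rng.standard_normal(int(120 * fs))
trace[1200:1260] += 4.0 * np.hanning(60)          # injected "infrasound" arrival
ratio = sta_lta(trace, fs)
print("max STA/LTA:", ratio.max(), "triggered:", bool((ratio > 3.0).any()))
```

The detector's sensitivity to the window lengths and threshold is one source of the high false alarm rates noted above, which motivates the data-driven alternatives investigated in this report.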
This report summarizes the collaboration between Sandia National Laboratories (SNL) and the Nuclear Regulatory Commission (NRC) to improve the state of knowledge on chloride induced stress corrosion cracking (CISCC). The foundation of this work relied on using SNL’s CISCC computer code to assess the current state of knowledge for probabilistically modeling CISCC on stainless steel canisters. This work is presented as three tasks. The first task is exploring and independently comparing crack growth rate (CGR) models typically used in CISCC modeling by the research community. The second task is implementing two of the more conservative CGR models from the first task into SNL’s full CISCC code to understand the impact of the different CGR models on a full probabilistic analysis while studying uncertainty from three key input parameters. The combined work of the first two tasks showed that properly measuring salt deposition rates is impactful to reducing uncertainty when modeling CISCC. The work in Task 2 also showed how probabilistic CGR models can be more appropriate at capturing aleatory uncertainty when modeling SCC. Lastly, appropriate and realistic input parameters relevant for CISCC modeling were documented in the last task as a product of the simulations considered in the first two tasks.
Accurately locating seismoacoustic sources with geophysical observations helps to monitor natural and anthropogenic phenomena. Sparsely deployed infrasound arrays can readily locate large sources thousands of kilometers away, but small events typically produce signals observable at only local to regional distances. At such distances, accurate location efforts rely on observations across smaller regional or temporary deployments, which often consist of single-channel infrasound sensors that cannot record direction of arrival. Event locations can also be aided by the inclusion of ground-coupled airwaves (GCA). This study demonstrates how we can robustly locate a catalog of seismoacoustic events using infrasound, GCA, and seismic arrival times at local to near-regional distances. We employ a probabilistic location framework using simplified forward models. Our results indicate that both single-channel infrasound and GCA arrival times can provide accurate estimates of event location in the absence of array-based observations, even when using simple models. However, one must carefully choose model uncertainty bounds to avoid underestimation of confidence intervals.
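A stripped-down sketch of the arrival-time ingredients is given below: a 2-D grid search over candidate locations and origin times using constant propagation speeds for seismic and acoustic phases. The full study uses a probabilistic framework with calibrated forward models and uncertainty bounds; this example only illustrates the misfit being minimized, and all values are invented.

```python
# Simplified event location from seismic and infrasound arrival times at single
# stations: grid search over candidate sources minimizing RMS arrival-time misfit.
import numpy as np

def locate(stations, arrivals, speeds, grid, origin_times):
    """Return (misfit, location, origin_time) minimizing RMS arrival-time misfit.

    stations     : (n, 2) station coordinates (km)
    arrivals     : (n,) observed arrival times (s)
    speeds       : (n,) propagation speed per observation (km/s),
                   e.g. ~3.4 for seismic, ~0.34 for infrasound/GCA
    grid         : (m, 2) candidate source locations (km)
    origin_times : (k,) candidate origin times (s)
    """
    dist = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)  # (m, n)
    best = (np.inf, None, None)
    for t0 in origin_times:
        pred = t0 + dist / speeds[None, :]                   # predicted arrivals
        rms = np.sqrt(np.mean((pred - arrivals[None, :]) ** 2, axis=1))
        i = int(np.argmin(rms))
        if rms[i] < best[0]:
            best = (rms[i], grid[i], t0)
    return best

# Example: true source at (10, 5) km, origin time 0 s, two seismic + one acoustic pick.
stations = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 25.0]])
speeds = np.array([3.4, 3.4, 0.34])
true_src = np.array([10.0, 5.0])
arrivals = np.linalg.norm(stations - true_src, axis=1) / speeds
xs, ys = np.meshgrid(np.linspace(0, 30, 61), np.linspace(0, 30, 61))
grid = np.column_stack([xs.ravel(), ys.ravel()])
print(locate(stations, arrivals, speeds, grid, origin_times=np.array([0.0])))
```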
This work was conducted in support of the American Made Geothermal Prize. The following data summary report presents the testing conducted at Sandia National Labs to validate the performance of the Ultra-High Temperature Seismic Tool for Geothermal Wells. The goal of the testing was to measure the sensitivity of the device to seismic vibrations and the reliability of the instrument at elevated temperatures. To this end, two tests were conducted: 1) Ambient Temperature Seismic Testing, which measured the response of the tool to a sweep of frequencies from 1 to 1000 Hz, and 2) Elevated Temperature Survivability Testing, which measured the voltage response of the device at 225°C over a month-long testing window. The details of the testing methodology and a summary of the tests are presented herein.