Operation and control of a galvanically isolated three-phase AC-AC converter for solid state transformer applications is described. The converter regulates bidirectional power transfer by phase shifting voltages applied on either side of a high-frequency transformer. The circuit structure and control system are symmetrical around the transformer. Each side operates independently, enabling conversion between AC systems with differing voltage magnitude, phase angle, and frequency. This is achieved in a single conversion stage with low component count and high efficiency. The modulation strategy is discussed in detail and expressions describing the relationship between phase shift and power transfer are presented. Converter operation is demonstrated in a 3 kW hardware prototype.
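For reference, the widely used single-phase dual-active-bridge expression relating phase shift to transferred power is sketched below; treating it as representative of this three-phase converter is an assumption, and n, L, and f_sw (transformer turns ratio, series inductance, and switching frequency) are generic symbols rather than values from the prototype.

```latex
P(\phi) = \frac{n\,V_1 V_2}{2\pi^2 f_{sw} L}\,\phi\,\bigl(\pi - |\phi|\bigr),
\qquad -\tfrac{\pi}{2} \le \phi \le \tfrac{\pi}{2}
```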
This study introduces the Progressive Improved Neural Operator (p-INO) framework, aimed at advancing machine-learning-based reduced-order models within geomechanics for underground resource optimization and carbon sequestration applications. The p-INO method transcends traditional transfer learning limitations through progressive learning, enhancing the capability to transfer knowledge from many sources. Through numerical experiments, the performance of p-INO is benchmarked against standard Improved Neural Operators (INO) in scenarios varying by data availability (different numbers of training samples). The research utilizes simulation data reflecting scenarios such as single-phase flow, two-phase flow, and two-phase flow with mechanics, inspired by the Illinois Basin Decatur Project. Results reveal that p-INO significantly surpasses conventional INO models in accuracy, particularly in data-constrained environments. Moreover, adding more a priori information (more trained models used by p-INO) can further enhance performance. These experiments demonstrate p-INO's robustness in leveraging sparse datasets for precise predictions across complex subsurface physics scenarios. The findings underscore the potential of p-INO to revolutionize predictive modeling in geomechanics, presenting a substantial improvement in computational efficiency and accuracy for large-scale subsurface simulations.
Folsom, Matthew; Sewell, Steven; Cumming, William; Zimmerman, Jade; Sabin, Andy; Downs, Christine; Hinz, Nick; Winn, Carmen; Schwering, Paul C.
Blind geothermal systems are believed to be common in the Basin and Range province and represent an underutilized source of renewable green energy. Their discovery has historically been by chance, but more methodical strategies for exploration of these resources are being developed. One characteristic of blind systems is that they are often overlain by near-surface zones of low resistivity caused by alteration of the overlying sediments to swelling clays. These zones can be imaged by resistivity-based geophysical techniques to facilitate their discovery and characterization. Here we present a side-by-side comparison of resistivity models produced from helicopter transient electromagnetic (HTEM) and ground-based broadband magnetotelluric (MT) surveys over a previously discovered blind geothermal system with measured shallow temperatures of ~100°C in East Hawthorne, NV. The HTEM and MT data were collected as part of the BRIDGE project, an initiative for improving methodologies for discovering blind geothermal systems. HTEM data were collected and modelled along profiles, and the results suggest the method can resolve the resistivity structure to depths of 300-500 m. A 61-station MT survey was collected on an irregular grid with ~800 m station spacing and modelled in 3D on a rotated mesh aligned with HTEM flight directions. Resistivity models are compared with results from potential fields datasets, shallow temperature surveys, and available temperature gradient data in the area of interest. We find that the superior resolution of the HTEM can reveal near-surface details often missed by MT. However, MT is sensitive to depths of several kilometers, can resolve 3D structures, and is thus better suited for single-prospect characterization. We conclude that HTEM is a more practical subregional prospecting tool than MT, because it is highly scalable and can rapidly discover shallow zones of low resistivity that may indicate the presence of a blind geothermal system. Other factors such as land access and ground disturbance considerations may also be decisive in choosing the best method for a particular prospect. Resistivity methods in general cannot fully characterize the structural setting of a geothermal system, and so we used potential fields and other datasets to guide the creation of a diagrammatic structural model at East Hawthorne.
Fault location, isolation, and service restoration in a self-healing, self-assembling microgrid operating off-grid from distributed inverter-based resources (IBRs) can pose a unique challenge because of fault current limitations and uncertainties regarding which sources are operational at any given time. The situation can become even more challenging if data sharing between the various microgrid controllers, relays, and sources is not available. This paper presents an innovative robust partitioning approach, which is used as part of a larger self-assembling microgrid concept utilizing local measurements only. This robust partitioning approach splits a microgrid into sub-microgrids to isolate the fault to just one of the sub-microgrids, allowing the others to continue normal operation. A case study is implemented in the IEEE 123-bus distribution test system in Simulink to show the effectiveness of this approach. The results indicate that including the robust partitions leads to less loss of load and shorter overall restoration times.
Control volume analysis models physics via the exchange of generalized fluxes between subdomains. We introduce a scientific machine learning framework adopting a partition of unity architecture to identify physically-relevant control volumes, with generalized fluxes between subdomains encoded via Whitney forms. The approach provides a differentiable parameterization of geometry which may be trained in an end-to-end fashion to extract reduced models from full field data while exactly preserving physics. The architecture admits a data-driven finite element exterior calculus allowing discovery of mixed finite element spaces with closed form quadrature rules. An equivalence between Whitney forms and graph networks reveals that the geometric problem of control volume learning is equivalent to an unsupervised graph discovery problem. The framework is developed for manifolds in arbitrary dimension, with examples provided for H(div) problems in R^2 establishing convergence and structure preservation properties. Finally, we consider a lithium-ion battery problem where we discover a reduced finite element space encoding transport pathways from high-fidelity microstructure resolved simulations. The approach reduces the 5.89M finite element simulation to 136 elements while reproducing pressure to under 0.1% error and preserving conservation.
For turbulent reacting flow systems, identification of low-dimensional representations of the thermo-chemical state space is vitally important, primarily to significantly reduce the computational cost of device-scale simulations. Principal component analysis (PCA), and its variants, are a widely employed class of methods. Recently, an alternative technique that focuses on higher-order statistical interactions, co-kurtosis PCA (CoK-PCA), has been shown to effectively provide a low-dimensional representation by capturing the stiff chemical dynamics associated with spatiotemporally localized reaction zones. While its effectiveness has only been demonstrated based on a priori analyses with linear reconstruction, in this work, we employ nonlinear techniques to reconstruct the full thermo-chemical state and evaluate the efficacy of CoK-PCA compared to PCA. Specifically, we combine a CoK-PCA-/PCA-based dimensionality reduction (encoding) with an artificial neural network (ANN) based reconstruction (decoding) and examine, a priori, the reconstruction errors of the thermo-chemical state. In addition, we evaluate the errors in species production rates and heat release rates, which are nonlinear functions of the reconstructed state, as a measure of the overall accuracy of the dimensionality reduction technique. We employ four datasets to assess CoK-PCA/PCA coupled with ANN-based reconstruction: zero-dimensional (homogeneous) reactor for autoignition of an ethylene/air mixture that has conventional single-stage ignition kinetics, a dimethyl ether (DME)/air mixture which has two-stage (low and high temperature) ignition kinetics, a one-dimensional freely propagating premixed ethylene/air laminar flame, and a two-dimensional dataset representing turbulent autoignition of ethanol in a homogeneous charge compression ignition (HCCI) engine. Results from the analyses demonstrate the robustness of the CoK-PCA based low-dimensional manifold with ANN reconstruction in accurately capturing the data, specifically from the reaction zones.
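As a minimal sketch of the encode/decode workflow described above, the snippet below pairs a PCA encoding with an MLP decoder and reports a relative reconstruction error; the data, dimensions, and network size are illustrative placeholders (a CoK-PCA encoder and actual thermo-chemical states would replace them), not the datasets or models used in this work.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))          # samples x state variables (T + species), placeholder data
X = StandardScaler().fit_transform(X)    # center/scale each state variable

pca = PCA(n_components=3).fit(X)         # linear encoder (CoK-PCA would be substituted here)
Z = pca.transform(X)                     # low-dimensional manifold coordinates

# Nonlinear ANN decoder: map reduced coordinates back to the full thermo-chemical state
decoder = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
decoder.fit(Z, X)
X_rec = decoder.predict(Z)

print("relative reconstruction error:", np.linalg.norm(X - X_rec) / np.linalg.norm(X))
```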
The explosive BTF (benzotrifuroxan) is an interesting molecule for sub-millimeter studies of initiation and detonation. It has no hydrogen, thus no water in the detonation products and a correspondingly high temperature in the reaction zone. The material has impact sensitivity comparable to or less than that of PETN (pentaerythritol tetranitrate) and slightly greater than that of RDX, HMX, and CL-20. Physical vapor deposition (PVD) can be used to grow high-density films of pure explosives with precise control over geometry, and we apply this technique to BTF to study detonation and initiation behavior as a function of sample thickness. The geometrical effects on detonation and corner turning behavior are studied with the critical detonation thickness experiment and the micromushroom test, respectively. Initiation behavior is studied with the high-throughput initiation experiment. Vapor-deposited films of BTF show detonation failure, corner turning, and initiation consistent with a heterogeneous explosive. Scaling of failure thickness to failure diameter shows that BTF has a very small failure diameter.
The siting of nuclear waste is a process that requires consideration of concerns of the public. This report demonstrates the significant potential for natural language processing techniques to gain insights into public narratives around “nuclear waste.” Specifically, the report highlights that the general discourse regarding “nuclear waste” within the news media has fluctuated in prevalence compared to “nuclear” topics broadly over recent years, with commonly mentioned entities reflecting a limited variety of geographies and stakeholders. General sentiments within the “nuclear waste” articles appear to use neutral language, suggesting that a scientific or “facts-only” framing of “waste”-related issues dominates coverage; however, the exact nuances should be further evaluated. The implications of a number of these insights about how nuclear waste is framed in traditional media (e.g., regarding emerging technologies, historical events, and specific organizations) are discussed. This report lays the groundwork for larger, more systematic research using, for example, transformer-based techniques and covariance analysis to better understand relationships among “nuclear waste” and other nuclear topics, sentiments of specific entities, and patterns across space and time (including in a particular region). By identifying priorities and knowledge needs, these data-driven methods can complement and inform engagement strategies that promote dialogue and mutual learning regarding nuclear waste.
As deep learning networks increase in size and performance, so do associated computational costs, approaching prohibitive levels. Dendrites offer powerful nonlinear "on-the-wire" computational capabilities, increasing the expressivity of the point neuron while preserving many of the advantages of spiking neural networks (SNNs). We seek to demonstrate the potential of dendritic computations by combining them with the low-power event-driven computation of SNNs for deep learning applications. To this end, we have developed a library that adds dendritic computation to SNNs within the PyTorch framework, enabling complex deep learning networks that still retain the low power advantages of SNNs. Our library leverages a dendrite CMOS hardware model to inform the software model, which enables nonlinear computation integrated with snnTorch at scale. By leveraging dendrites in a deep learning framework, we examine the capabilities of dendrites via coincidence detection and comparison in a machine learning task with an SNN. Finally, we discuss potential deep learning applications in the context of current state-of-the-art deep learning methods and energy-efficient neuromorphic hardware.
Single-axis solar trackers are typically simulated under the assumption that all modules on a given section of torque tube are at a single orientation. In reality, various mechanical effects can cause twisting along the torque tube length, creating variation in module orientation along the row. Simulation of the impact of this on photovoltaic system performance reveals that the performance loss resulting from torque tube twisting is significant at twists as small as fractions of a degree per module. The magnitude of the loss depends strongly on the design of the photovoltaic module, but does not vary significantly across climates. Additionally, simple tracker control setting tweaks were found to substantially reduce the loss for certain types of twist.
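A hedged sketch of the kind of calculation involved is shown below: an ideal single-axis tracker rotation computed with pvlib, with a small per-module twist offset added along the torque tube. The site, date, twist rate, and module count are illustrative assumptions, not values from the study.

```python
import numpy as np
import pandas as pd
import pvlib

times = pd.date_range("2023-06-21 06:00", "2023-06-21 20:00", freq="15min", tz="US/Mountain")
solpos = pvlib.solarposition.get_solarposition(times, latitude=35.05, longitude=-106.54)

track = pvlib.tracking.singleaxis(
    solpos["apparent_zenith"], solpos["azimuth"],
    axis_tilt=0, axis_azimuth=180, max_angle=60, backtrack=True, gcr=0.35,
)

twist_per_module = 0.25                  # degrees of twist per module position (assumed)
module_positions = np.arange(14)         # modules along one side of the torque tube (assumed)

# Each module sees the ideal rotation plus an accumulating twist offset
per_module_theta = track["tracker_theta"].values[:, None] + twist_per_module * module_positions
print(per_module_theta.shape)            # (time steps, modules)
```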
The diesel-piloted dual-fuel compression ignition combustion strategy is well-suited to accelerate the decarbonization of transportation by adopting hydrogen as a renewable energy carrier into the existing internal combustion engine with minimal engine modifications. Despite the simplicity of engine modification, many questions remain unanswered regarding the optimal pilot injection strategy for reliable ignition with minimum pilot fuel consumption. The present study uses a single-cylinder heavy-duty optical engine to explore the phenomenology and underlying mechanisms governing the pilot fuel ignition and the subsequent combustion of a premixed hydrogen-air charge. The engine is operated in a dual-fuel mode with hydrogen premixed into the engine intake charge with a direct pilot injection of n-heptane as a diesel pilot fuel surrogate. Optical diagnostics used to visualize in-cylinder combustion phenomena include high-speed IR imaging of the pilot fuel spray evolution as well as high-speed HCHO* and OH* chemiluminescence as indicators of low-temperature and high-temperature heat release, respectively. Three pilot injection strategies are compared to explore the effects of pilot fuel mass, injection pressure, and injection duration on the probability and repeatability of successful ignition. The thermodynamic and imaging data analysis supported by zero-dimensional chemical kinetics simulations revealed a complex interplay between the physical and chemical processes governing the pilot fuel ignition process in a hydrogen-containing charge. Hydrogen strongly inhibits the ignition of pilot fuel mixtures, and therefore longer injection durations are required to create zones with sufficiently high pilot fuel concentration for successful ignition. Results show that ignition typically relies on stochastic pockets with high pilot fuel concentration, which results in poor repeatability of combustion and frequent misfiring. This work improves the understanding of how the unique chemical properties of hydrogen pose a challenge for maximizing hydrogen's energy share in hydrogen dual-fuel engines and highlights a potential mitigation pathway.
This article aims at discovering the unknown variables in the system through data analysis. The main idea is to use the time of data collection as a surrogate variable and try to identify the unknown variables by modeling gradual and sudden changes in the data. We use Gaussian process modeling and a sparse representation of the sudden changes to efficiently estimate the large number of parameters in the proposed statistical model. The method is tested on a realistic dataset generated using a one-dimensional implementation of a Magnetized Liner Inertial Fusion (MagLIF) simulation model, and encouraging results are obtained.
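A minimal sketch of the gradual-change component is given below: a Gaussian process regression with collection time as the surrogate input, fit to synthetic data containing a smooth trend and one sudden shift. The kernel, data, and omission of the sparse change-point terms are all simplifying assumptions relative to the full statistical model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)[:, None]            # data-collection times (surrogate variable)
y = np.sin(t).ravel() + 0.5 * (t.ravel() > 6.0)     # gradual trend plus one sudden change
y += 0.05 * rng.normal(size=y.size)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

mean, std = gp.predict(t, return_std=True)
print("maximum posterior standard deviation:", std.max())
```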
Full-scale testing of pipes is costly and requires significant infrastructure investments. Subscale testing offers the potential to substantially reduce experimental costs and provides testing flexibility when transferrable test conditions and specimens can be established. To this end, a subscale pipe testing platform was developed to pressure cycle 60 mm diameter pipes (Nominal Pipe Size 2) to failure with gaseous hydrogen. Engineered defects were machined into the inner surface or outer surface to represent pre-existing flaws. The pipes were pressure cycled to failure with gaseous hydrogen at pressures selected to match operating stresses in large diameter pipes (e.g., stresses at comparable fractions of the specified minimum yield strength in transmission pipelines). Additionally, the pipe specimens were instrumented to identify crack initiation, such that crack growth could be compared to fracture mechanics predictions. Predictions leverage an extensive body of materials testing in gaseous hydrogen (e.g., ASME B31.12 Code Case 220) and the recently developed probabilistic fracture mechanics framework for hydrogen (Hydrogen Extremely Low Probability of Rupture, HELPR). In this work, we evaluate the failure response of these subscale pipe specimens and assess the conservatism of fracture mechanics-based design strategies (e.g., API 579/ASME FFS). This paper describes the subscale hydrogen testing capability, compares experimental outcomes to predictions from the probabilistic hydrogen fracture framework (HELPR), and discusses the complement to full-scale testing.
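As a rough illustration of the fracture-mechanics predictions referenced above, the sketch below integrates a generic Paris law, da/dN = C (ΔK)^m, for a surface flaw under cyclic hoop stress. All coefficients, the geometry factor, and the crack sizes are placeholder assumptions, not HELPR inputs or values from the subscale tests.

```python
import numpy as np

C, m = 1e-10, 3.0          # placeholder Paris-law coefficients (m/cycle, MPa*sqrt(m) units)
Y = 1.12                   # geometry factor for a shallow surface flaw (assumed)
d_sigma = 100.0            # cyclic hoop stress range, MPa (assumed)
a, a_crit = 1e-3, 5e-3     # initial and critical crack depths, m (assumed)

cycles = 0
while a < a_crit:
    dK = Y * d_sigma * np.sqrt(np.pi * a)   # stress-intensity-factor range
    a += C * dK**m                          # crack extension for this cycle
    cycles += 1

print("predicted cycles to reach the critical depth:", cycles)
```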
Natural gas pipelines could be an important pathway to transport gaseous hydrogen (GH2) as a cleaner alternative to fossil fuels. However, a comprehensive understanding of hydrogen-assisted fatigue and fracture resistance in pipeline steels is needed, including an assessment of the diverse microstructures present in natural gas infrastructure. In this study, we focus on modern steel pipe and consider both welded pipe and seamless pipe. In-situ fatigue crack growth (FCG) and fracture tests were conducted on compact tension samples extracted from the base metal, seam-weld, and heat affected zone of an X70 pipe steel in high-purity GH2 (210 bar pressure). Additionally, a seamless X65 pipeline microstructure (with comparable strength) was evaluated to assess the different microstructure of seamless pipe. The different microstructures had comparable FCG rates in GH2, with crack growth rates up to 30 times faster in hydrogen compared to air. In contrast, the fracture resistance in GH2 depended on the characteristics of the microstructure, varying in the range of approximately 80 to 110 MPa√m.
We demonstrate an InAs-based terahertz (THz) metasurface emitter that can generate and focus THz pulses using a binary-phase Fresnel zone plate concept. The metalens emitter successfully generates a focused THz beam without additional THz optics.
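As a brief illustration of the binary-phase Fresnel zone plate concept, the sketch below evaluates the standard zone-boundary relation r_n^2 = n*lam*f + (n*lam/2)^2; the design frequency and focal length are illustrative assumptions, not the parameters of the demonstrated metasurface.

```python
import numpy as np

c = 3.0e8                  # speed of light, m/s
f_design = 1.0e12          # design frequency, Hz (assumed, ~1 THz)
lam = c / f_design         # wavelength, m
focal_length = 10e-3       # focal length, m (assumed)

n = np.arange(1, 9)        # zone index
r_n = np.sqrt(n * lam * focal_length + (n * lam / 2.0) ** 2)
print(np.round(r_n * 1e3, 3), "mm")   # zone boundary radii
```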
We consider the problem of decentralized control of reactive power provided by distributed energy resources for voltage support in the distribution grid. We assume that the reactance matrix of the grid is unknown and potentially time-varying. We present a decentralized adaptive controller in which the reactive power at each inverter is set using a potentially heterogeneous droop curve and analyze the stability and the steady-state error of the resulting system. The effectiveness of the controller is validated in simulations using a modified version of the IEEE 13-bus and an 8500-node test system.
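A minimal sketch of a decentralized droop update of the general form analyzed above is given below; the heterogeneous slopes, limits, and voltages are illustrative assumptions, and the adaptive element of the controller is omitted.

```python
import numpy as np

v_ref = 1.0                        # per-unit voltage setpoint
k_droop = np.array([2.0, 3.0])     # heterogeneous droop slopes per inverter (assumed)
q_max = np.array([0.5, 0.4])       # reactive power limits, per unit (assumed)

def droop_step(v_meas):
    """Reactive-power command at each inverter computed from its local voltage only."""
    q = -k_droop * (v_meas - v_ref)
    return np.clip(q, -q_max, q_max)

print(droop_step(np.array([1.03, 0.97])))
```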
In many applications, one can only access inexact gradients and inexact Hessian-vector products. Thus it is essential to consider algorithms that can handle such inexact quantities with guaranteed convergence to a solution. An inexact, adaptive, and provably convergent semismooth Newton method is considered to solve constrained optimization problems. The focus is on dynamic optimization problems, which are known to be computationally expensive. A memory-efficient semismooth Newton algorithm is introduced for these problems. The source of efficiency and inexactness is randomized matrix sketching. Applications to optimization problems constrained by partial differential equations are also considered.
Concentrating solar power (CSP) plants with integrated thermal energy storage (TES) have successfully been coupled with photovoltaics (PV) + chemical battery energy storage (BES) in recent commercial-scale projects to balance system cost and diurnal power availability. Sandia National Laboratories has been tasked with designing an advanced solar energy system to power Kirtland Air Force Base (KAFB), where Sandia is co-located in Albuquerque, NM, USA. This design process requires optimization of the individual components and capacities of the hybrid system. Preliminary modeling efforts have shown that a hybrid CSP+TES/PV+BES system in Albuquerque, NM is sufficient for net-zero power generation for Sandia/KAFB for the next decade. However, the ability to meet the load in real time (and minimize energy export) requires balancing generation and storage assets. Our results also show that excess PV used to charge TES improves resilience and overall renewables-to-load for the system. Here we will present the results of a parametric study varying the land use proportions of CSP and PV, and TES and BES capacities. We evaluate the effects of these variables on energy generation, real-time load satisfaction, site resilience to grid outages, and levelized cost of energy (LCOE), to determine viable hybrid solar energy designs and their cost implications.
National Security Presidential Memorandum-20 defines three tier levels for launch approval of space nuclear systems. The two main factors determining the tier level are the total quantity and type of radioactive sources and the probability of any member of the public receiving doses above certain thresholds. The total quantity of radioactive sources is compared with International Atomic Energy Agency transportation regulations. The dose probability is determined by the product of three terms: 1) the probability of a launch accident occurring; 2) the probability of a release of radioactive material given an accident; and 3) the probability of exceeding the dose threshold to any member of the public given a release. This paper provides a methodology for evaluating these values and applies this methodology to an example mission as a demonstration. For the example mission, a preliminary tier determination of Tier III was concluded.
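The three-factor dose-probability product described above can be written out directly; the values below are purely illustrative placeholders, not probabilities from the example mission.

```python
p_accident = 1e-2    # probability of a launch accident (illustrative)
p_release = 1e-1     # probability of a release of radioactive material given an accident (illustrative)
p_exceed = 1e-3      # probability of exceeding the dose threshold given a release (illustrative)

p_dose = p_accident * p_release * p_exceed
print(f"probability of exceeding the dose threshold: {p_dose:.1e}")
```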
The primary goal of any laboratory test is to expose the unit-under-test to conservative, realistic representations of a field environment. Satisfying this objective is not always straightforward due to laboratory equipment constraints. For vibration and shock tests performed on shakers, over-testing and unrealistic failures can result because the control is a base acceleration and mechanical shakers have nearly infinite impedance. Force limiting and response limiting are relatively standard practices to reduce over-test risks in random-vibration testing. Shaker controller software generally has response limiting as a built-in capability, and it is done without much user intervention since vibration control is a closed-loop process. Limiting in shaker shocks is done for the same reasons, but because the duration of a shock is only a few milliseconds, limiting is a pre-planned, user-in-the-loop process. Shaker shock response limiting has been used for at least 30 years at Sandia National Laboratories, but it seems to be little known or used in industry. The objective of this paper is to re-introduce response limiting for shaker shocks to the aerospace community. The process is demonstrated on the BARBECUE testbed.
Multiple scattering is a common phenomenon in acoustic media that arises from the interaction of the acoustic field with a network of scatterers. This mechanism is dominant in problems such as the design and simulation of acoustic metamaterial structures, often used to achieve acoustic control for sound isolation and for remote sensing. In this study, we present a physics-informed neural network (PINN) capable of simulating the propagation of acoustic waves in an infinite domain in the presence of multiple rigid scatterers. This approach integrates a deep neural network architecture with the mathematical description of the physical problem in order to obtain predictions of the acoustic field that are consistent with both governing equations and boundary conditions. The predictions from the PINN are compared with those from a commercial finite element software model in order to assess the performance of the method.
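A minimal sketch of the physics-informed residual underlying this kind of PINN is shown below, assuming a 2-D Helmholtz equation (u_xx + u_yy + k^2 u = 0) as the governing equation; the network size, wavenumber, and collocation points are illustrative, and the scatterer boundary-condition losses are omitted.

```python
import torch

k = 2.0 * torch.pi                       # acoustic wavenumber (assumed)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def helmholtz_residual(xy):
    """PDE residual u_xx + u_yy + k^2 u evaluated at collocation points xy."""
    xy = xy.requires_grad_(True)
    u = net(xy)
    grad = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_x, u_y = grad[:, 0:1], grad[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xy, create_graph=True)[0][:, 0:1]
    u_yy = torch.autograd.grad(u_y.sum(), xy, create_graph=True)[0][:, 1:2]
    return u_xx + u_yy + k**2 * u

pts = torch.rand(128, 2)                 # interior collocation points
loss_pde = helmholtz_residual(pts).pow(2).mean()   # driven toward zero during training
print(float(loss_pde))
```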
Emerging hydrogen technologies span a diverse range of operating environments. High-pressure storage for mobility applications has become commonplace up to about 1,000 bar, whereas transmission of gaseous hydrogen can occur at hydrogen partial pressures of a few bar when blended into natural gas. In the former case, cascade storage is utilized to manage hydrogen-assisted fatigue, and the Boiler and Pressure Vessel Code, Section VIII, Division 3 includes fatigue design curves for fracture mechanics design of hydrogen vessels at a pressure of 1,030 bar (using a Paris Law formulation). Recent research on hydrogen-assisted fatigue crack growth has shown that a diverse range of ferritic steels show similar fatigue crack growth behavior in gaseous hydrogen environments, including low-carbon steels (e.g., pipeline steels) as well as quench and tempered Cr-Mo and Ni-Cr-Mo pressure vessel steels with tensile strength less than 915 MPa. However, measured fatigue crack growth is sensitive to hydrogen partial pressure, and fatigue crack growth can be accelerated in hydrogen at pressures as low as 1 bar. The effect of hydrogen partial pressure from 1 to 1,000 bar can be quantified through a simple semi-empirical correction factor to the fatigue crack growth design curves. This paper documents the technical basis for the pressure-sensitive fatigue crack growth rules for gaseous hydrogen service in ASME B31.12 Code Case 220 and for revision of ASME VIII-3 Code Case 2938-1, including the range of applicability of these fatigue design curves in terms of environmental, materials, and mechanics variables.
A Marx generator module from the decommissioned RITS pulsed power machine from Sandia National Labs was modified to operate in an existing setup at Texas Tech University. This will ultimately be used as a testbed for laser-triggered gas switching. The existing experimental setup at Texas Tech University consists of a large Marx tank, an oil-filled coaxial pulse forming line, an adjustable peaking gap, and a load section along with various diagnostics. The setup was previously operated at a lower voltage than the new experiment, so electrostatic modeling was done to ensure viability and drive needed modifications. The oil tank will house the modified RITS Marx. This Marx contains half as many stages as the original RITS module and has an expected output of 1 MV. A trigger Marx generator consisting of 8 stages has been fabricated to trigger the RITS Marx. Charging and triggering of both Marx generators will be controlled through a fiber optic network. The output from the modified RITS Marx will be used to charge the oil-filled coaxial line acting as a low impedance pulse forming line (PFL). Once charged, the self-breaking peaking gap will close, allowing the compressed pulse to be released into the load section. For testing of the Marx module and PFL, a matched 10 Ω water load was fabricated. The output pulse width is 55 ns. Diagnostics include two capacitive voltage probes on either side of the peaking gap, a quarter-turn Rogowski coil for load current measurement, and a Pearson coil for calibration purposes.
We demonstrate, for the first time, waveguide-integrated cascaded germanium photodetector arrays operated as photocells. We characterize several different array designs and discuss their effects on voltage and photocurrent performance parameters.
Existing natural gas (NG) pipeline infrastructure can be used to transport gaseous hydrogen (GH2) or blends of NG and hydrogen as low carbon alternatives to NG. Pipeline steels exhibit accelerated fatigue crack growth rates and reduced fracture resistance in the presence of GH2. The hydrogen-assisted fatigue crack growth (HAFCG) rates and hydrogen-assisted fracture (HAF) resistance for pipeline steels depend on the hydrogen gas pressure. This study aims to correlate and compare the HAFCG rates of pipeline steels tested in two different gaseous environments at different pressures: high-purity hydrogen (99.9999% H2) and a blend of nitrogen with 3% hydrogen gas (N2+3%H2). K-controlled FCG tests were performed using compact tension (CT) samples extracted from a vintage X52 (installed in 1962) and a modern X70 (2021) pipeline steel in the different gaseous environments. Subsequently, monotonic fracture tests were performed in the GH2 environment. The HAFCG rates increased with increasing GH2 pressure for both steels, in the ΔK range explored in this study. Nearly identical HAFCG rates were observed for the steels tested in different environments with equivalent fugacity (34.5 bar pure GH2 and 731 bar blend with 3% H2). The fracture resistance of pipeline steels was significantly reduced in the presence of GH2, even at pressures as low as 1 bar. The reduction in HAF resistance tends to saturate with increasing GH2 pressure. While the fracture resistance of the modern steel is substantially higher than that of the vintage steel in air, in high-pressure GH2 the HAF resistance is comparable. Similar HAF resistance values were obtained for the respective steels in the pure and blended GH2 environments with similar fugacity. This study confirms that the fugacity parameter can be used to correlate HAFCG and HAF behavior of different hydrogen blends. The fracture surface features of the pipeline steels tested in the different environments are compared to rationalize the observed behavior in GH2.
While recent research has greatly improved our ability to test and model nonlinear dynamic systems, it is rare that these studies quantify the effect that the nonlinearity would have on failure of the structure of interest. While several very notable exceptions certainly exist, such as the work of Hollkamp et al. on the failure of geometrically nonlinear skin panels for high speed vehicles (see, e.g., Gordon and Hollkamp, Reduced-Order Models for Acoustic Response Prediction, Technical Report AFRL-RB-WP-TR-2011-3040, Air Force Research Laboratory, Dayton, 2011), other studies have given little consideration to failure. This work studies the effect of common nonlinearities on the failure (and failure margins) of components that undergo durability testing in dynamic environments. This context differs from many engineering applications because one usually assumes that any nonlinearities have been fully exercised during the test.
Battery energy storage systems (BESSs) are crucial for modernizing the power grid but are monitored by sensors that are susceptible to anomalies like failures, faults, or cyberattacks that could affect BESS functionality. Much work has been done to detect sensor anomalies, but a research gap persists in responding to anomalies. An approach is proposed to mitigate the damage caused by additive bias anomalies by employing one of three estimators based on the anomalies present. A tuned cumulative sum (CUSUM) algorithm is used to identify anomalies, and a set of rules is proposed to select an estimator that will isolate the effect of the anomaly. The proposed approach is evaluated using two simulated studies, one in which an anomaly impacts the input and one where an anomaly impacts an output sensor.
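A minimal sketch of a two-sided tabular CUSUM detector of the kind described above is given below; the reference value, threshold, and injected bias are illustrative tuning assumptions.

```python
import numpy as np

def cusum_alarm(x, target, k=0.5, h=5.0):
    """Return the first sample index at which either one-sided CUSUM statistic exceeds h."""
    s_pos = s_neg = 0.0
    for i, xi in enumerate(x):
        s_pos = max(0.0, s_pos + (xi - target) - k)
        s_neg = max(0.0, s_neg - (xi - target) - k)
        if s_pos > h or s_neg > h:
            return i
    return None

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 200)       # nominal sensor measurements
signal[120:] += 2.0                      # additive bias anomaly injected at sample 120
print("anomaly flagged at sample:", cusum_alarm(signal, target=0.0))
```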
Diesel generators (gensets) are often the lowest-cost electric generation for reliable supply in remote microgrids. The development of converter-dominated diesel-backed microgrids requires accurate dynamic modeling to ensure power quality and system stability. Dynamic responses derived using original genset system models often do not match those observed in field experiments. This paper presents the experimental system identification of a frequency dynamics model for a 400 kVA diesel genset. The genset is perturbed via active power load changes and a linearized dynamics model is fit based on power and frequency measurements using moving horizon estimation (MHE). The method is first simulated using a detailed genset model developed in MATLAB/Simulink. The simulation model is then validated against the frequency response obtained from a real 400 kVA genset system at the Power System Integration (PSI) Lab at the University of Alaska Fairbanks (UAF). The simulation and experimental results had model errors of 3.17% and 11.65%, respectively. The resulting genset model can then be used in microgrid frequency dynamic studies, such as for the integration of renewable energy sources.
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models at predicting neural responses in brain areas along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary primate dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd and we compared our results with the Non-Negative Matrix Factorization (NNMF) model, which successfully models many tuning properties of MSTd neurons. To better understand the role of computational properties in the NNMF model that give rise to optic flow tuning that resembles that of MSTd neurons, we created additional CNN model variants that implement key NNMF constraints – non-negative weights and sparse coding of optic flow. While the CNNs and NNMF models both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with nonnegative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
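As a small sketch of the NNMF baseline discussed above, the snippet below factorizes a non-negative response matrix into non-negative basis patterns and activations; the data are random placeholders rather than actual optic-flow or MT-like inputs.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((500, 144))               # trials x non-negative motion-unit activations (placeholder)

model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)               # per-trial activations of the learned basis
H = model.components_                    # non-negative basis patterns (candidate MSTd-like units)

print("reconstruction error:", model.reconstruction_err_)
```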
Conceptual models of smectite hydration include planar (flat) clay layers that undergo stepwise expansion as successive monolayers of water molecules fill the interlayer regions. However, X-ray diffraction (XRD) studies indicate the presence of interstratified hydration states, suggesting non-uniform interlayer hydration in smectites. Additionally, recent theoretical studies have shown that clay layers can adopt bent configurations over nanometer-scale lateral dimensions with minimal effect on mechanical properties. Therefore, in this study we used molecular simulations to evaluate structural properties and water adsorption isotherms for montmorillonite models composed of bent clay layers in mixed hydration states. Results are compared with models consisting of planar clay layers with interstratified hydration states (e.g. 1W–2W). The small degree of bending in these models (up to 1.5 Å of vertical displacement over a 1.3 nm lateral dimension) had little or no effect on bond lengths and angle distributions within the clay layers. Except for models that included dry states, porosities and simulated water adsorption isotherms were nearly identical for bent or flat clay layers with the same averaged layer spacing. Similar agreement was seen with Na- and Ca-exchanged clays. While the small bent models did not retain their configurations during unconstrained molecular dynamics simulation with flexible clay layers, we show that bent structures are stable at much larger length scales by simulating a 41.6×7.1 nm2 system that included dehydrated and hydrated regions in the same interlayer.
Sandia National Laboratories (SNL) has completed a comparative evaluation of three design assessment approaches for a 2-liter (2L) capacity containment vessel (CV) of a novel plutonium air transport (PAT) package designed to survive the hypothetical accident condition (HAC) test sequence defined in Title 10 of the United States (US) Code of Federal Regulations (CFR) Part 71.74(a), which includes a 129 meter per second (m/s) impact of the package into an essentially unyielding target. CVs for hazardous materials transportation packages certified in the US are typically designed per the requirements defined in the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (B&PVC) Section III Division 3 Subsection WB “Class TC Transportation Containments.” For accident conditions, the level D service limits and analysis approaches specified in paragraph WB-3224 are applicable. Data derived from finite element analyses of the 129 m/s impact of the 2L-PAT package were utilized to assess the adequacy of the CV design. Three different CV assessment approaches were investigated and compared, one based on stress intensity limits defined in subparagraph WB-3224.2 for plastic analyses (the stress-based approach), a second based on strain limits defined in subparagraph WB-3224.3, subarticle WB-3700, and Section III Nonmandatory Appendix FF for the alternate strain-based acceptance criteria approach (the strain-based approach), and a third based on failure strain limits derived from a ductile fracture model with dependencies on the stress and strain state of the material, and their histories (the Xue-Wierzbicki (X-W) failure-integral-based approach). This paper gives a brief overview of the 2L-PAT package design, describes the finite element model used to determine stresses and strains in the CV generated by the 129 m/s impact HAC, summarizes the three assessment approaches investigated, discusses the analyses that were performed and the results of those analyses, and provides a comparison between the outcomes of the three assessment approaches.
Determining the thermal response of energetic materials at high densities can be difficult when pressure dependent reactions occur within the interior of the material. At high temperatures, reactive components such as hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), ammonium perchlorate (AP), and hydroxyl-terminated polybutadiene (HTPB) decompose and interact. The decomposition products accumulate near defects where internal pressure ultimately causes mechanical damage, with closed pores transitioning into open pores. Gases are no longer confined locally; instead, they freely migrate between open pores and ultimately escape into the surrounding headspace or vent. Recently we have developed a universal cookoff model (UCM) coupled to a micromechanics pressurization (MMP) model to address pressure-dependent reactions that occur within the interior of explosives. Parameters for the UCM/MMP model are presented for an explosive and two propellants that contain similar portions of both aluminum (Al) and a binder. The explosive contains RDX and the propellants contain AP with no RDX. One of the propellants contains small amounts of curing catalysts and a burn modifier whereas the other propellant does not. We found that the cookoff behavior of the two propellants is similar and conclude that small amounts of catalysts or burn modifiers do not appreciably influence cookoff behavior. Kinetic parameters for the UCM/MMP models were obtained from the Sandia Instrumented Thermal Ignition (SITI) experiment. Validation is done with data from other laboratories.
Dendrites enable neurons to perform nonlinear operations. Existing silicon dendrite circuits sufficiently model passive and active characteristics, but do not exploit shunting inhibition as an active mechanism. We present a dendrite circuit implemented on a reconfigurable analog platform that uses active inhibitory conductance signals to modulate the circuit's membrane potential. We explore the potential use of this circuit for direction selectivity by emulating recent observations demonstrating a role for shunting inhibition in a directionally-selective Drosophila (Fruit Fly) neuron.
Explosives exposed to conditions above the Chapman-Jouguet (CJ) state exhibit an overdriven response that is transient. Reactive flow models are often fit to the CJ conditions, and they transition to detonation based on inputs lower than or near CJ, but these models may also be used to predict explosive behavior in the overdriven regime. One scenario that can create a strongly overdriven state is a Mach stem shock interaction. These interactions can drive an already detonating or transitioning explosive to an overdriven state, and they can also cause detonation at the interaction location where the separate shocks may be insufficient to detonate the material. In this study, the reactive flow model XHVRB, utilizing a Mie-Grüneisen equation of state (EOS) for the unreacted explosive and a Sesame table for the reacted products, will be used to examine Mach stem interactions from multi-point detonation schemes in CTH. The effect of the overdriven response driven by PETN-based explosive pellets will be tracked to determine the transient detonation behavior, and the predicted states from the burn model will be compared to previously published data.
Uncertainty quantification (UQ) plays a vital role in addressing the challenges and limitations encountered in full-waveform inversion (FWI). Most UQ methods require parameter sampling, which requires many forward and adjoint solves. This often results in very high computational overhead compared to traditional FWI, which hinders the practicality of UQ for FWI. In this work, we develop an efficient UQ-FWI framework based on an unsupervised variational autoencoder (VAE) to assess the uncertainty of single- and multi-parameter FWI. The inversion operator is modeled using an encoder-decoder network. The inputs to the network are seismic shot gathers and the outputs are samples (a distribution) of model parameters. We then use these samples to estimate the mean and standard deviation of each parameter population, which provide insight into the uncertainty in the inversion process. To speed up the UQ process, we carry out the reconstruction in an unsupervised learning approach. Moreover, we physics-constrain the network by injecting the FWI gradients during the backpropagation process, leading to better reconstruction. The computational cost of the proposed approach is comparable to that of traditional autoencoder full-waveform inversion (AE-FWI), which makes it an attractive means of gaining further insight into the quality of the inversion. We apply this idea to synthetic data to show its potential in assessing uncertainty in multi-parameter FWI.
To decarbonize the energy sector, there are international efforts to displace carbon-based fuels with renewable alternatives, such as hydrogen. Storage and transportation of gaseous hydrogen are key components of large-scale deployment of carbon-neutral energy technologies, especially storage at scale and transportation over long distances. Due to the high cost of deploying large-scale infrastructure, the existing pipeline network is a potential means of transporting blended natural gas-hydrogen fuels in the near term and carbon-free hydrogen in the future. Much of the existing infrastructure in North America was deployed prior to 1970, when greater variability existed in steel processing and joining techniques, often leading to microstructural inhomogeneities and hard spots, which are local regions of elevated hardness relative to the pipe or weld. Hard spots, particularly in older pipes and welds, are a known threat to structural integrity in the presence of hydrogen. High-strength materials are susceptible to hydrogen-assisted fracture, but the susceptibility of hard spots in otherwise low-strength materials (such as vintage pipelines) has not been systematically examined. Assessment of fracture performance of pipeline steels in gaseous hydrogen is a necessary step to establish an approach for structural integrity assessment of pipeline infrastructure for hydrogen service. This approach must include a comprehensive understanding of microstructural anomalies (such as hard spots), especially in vintage materials. In this study, the fracture resistance of pipeline steels is measured in gaseous hydrogen with a focus on high strength materials and hardness limits established in common practice and in current pipeline codes (such as ASME B31.12). Elastic-plastic fracture toughness measurements were compared for several steel grades to identify the relationship between hardness and fracture resistance in gaseous hydrogen.
Geogenic gases often reside in intergranular pore space, fluid inclusions, and within mineral grains. In particular, helium-4 (4He) is generated by alpha decay of uranium and thorium in rocks. The emitted 4He nuclei can be trapped in the rock matrix or in fluid inclusions. Recent work has shown that releases of helium occur during plastic deformation of crustal rocks above atmospheric concentrations that are detectable in the field. However, it is unclear how rock type and deformation modalities affect the cumulative gas released. This work seeks to address how different deformation modalities observed in several rock types affect release of helium. Axial compression tests with granite, rhyolite, tuff, dolostone, and sandstone - under vacuum conditions - were conducted to measure the transient release of helium from each sample during crushing. It was found that, when crushed with forces up to 97,500 N, each rock type released helium at a rate quantifiable using a helium mass spectrometer leak detector. For plutonic rock like granite, the helium flow rate spikes with the application of force as the samples elastically deform until fracture, then decays slowly until grain comminution begins to occur. Neither the rhyolite nor the tuff experiences such large spikes in helium flow rate, with the rhyolites fracturing at much lower force and the tuffs compacting instead of fracturing due to their high porosity. Both rhyolite and tuff instead experience a lesser but steady helium release as they are crushed. The cumulative helium release for the volcanic tuffs varies by as much as two orders of magnitude but is fairly consistent for the denser rhyolite and granite tested. The results indicate that there is a large degassing of helium as rocks are elastically and inelastically deformed prior to fracturing. For more porous and less brittle rocks, the cumulative release will depend more on the degree of deformation applied. These results are compared with known U/Th radioisotope contents of the rocks to determine whether the trapped helium was produced in the rock or arrived by secondary migration of 4He.
Multiple-input/multiple-output (MIMO) vibration control often relies on a least-squares solution utilizing a matrix pseudo-inverse. While this is simple and effective for many cases, it lacks flexibility in assigning preference to specific control channels or degrees of freedom (DOFs). For example, the user may have some DOFs where accuracy is very important and other DOFs where accuracy is less important. This chapter shows a method for assigning weighting to control channels in the MIMO vibration control process. These weights can be constant or frequency-dependent functions depending on the application. An algorithm is presented for automatically selecting DOF weights based on a frequency-dependent data quality metric to ensure the control solution is only using the best, linear data. An example problem is presented to demonstrate the effectiveness of the weighted solution.
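A minimal sketch of a weighted least-squares control solution at a single frequency line is shown below; the FRF matrix, target spectrum, and per-DOF weights are synthetic placeholders rather than data from any test.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))    # FRF: 4 drives to 6 control DOFs
target = rng.normal(size=6) + 1j * rng.normal(size=6)         # desired responses at this frequency

w = np.array([10.0, 10.0, 1.0, 1.0, 0.1, 0.1])                # per-DOF weights (assumed preference)
W = np.diag(w)

# Weighted pseudo-inverse solution: minimize || W (H x - target) ||
x = np.linalg.pinv(W @ H) @ (W @ target)
print("residual magnitude per DOF:", np.abs(H @ x - target))
```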
Hargis, Joshua W.; Egeln, Anthony; Houim, Ryan; Guildenbecher, Daniel R.
Visualization of flow structures within post-detonation fireballs has been performed for benchmark validation of numerical simulations. Custom pressed PETN explosives with a 12-mm diameter hemispherical form factor were used to produce a spherically symmetric post-detonation flow with low soot yield. Hydroxyl-radical planar laser-induced fluorescence (OH-PLIF) was employed to visualize the flow structure from approximately 10 μs to 35 μs after shock breakout from the explosive pellet. Fireball simulations were performed using the HyBurn Computational Fluid Dynamics (CFD) package. Experimental OH-PLIF results were compared to synthetic OH-PLIF from post-processing of CFD simulations. From the comparison of experimental and synthetic OH-PLIF images, CFD is shown to replicate much of the flow structure observed in the experiments, revealing potential differences in turbulent length scales and OH kinetics. Results provide significant advancement in experimental resolution of these harsh turbulent combustion environments and validate physical models thereof.
The importance of user-accessible multiple-input/multiple-output (MIMO) control methods has been highlighted in recent years. Several user-created control laws have been integrated into Rattlesnake, an open-source MIMO vibration controller developed at Sandia National Laboratories. Much of the effort to date has focused on stationary random vibration control. However, there are many field environments which are not well captured by stationary random vibration testing, for example shock, sine, or arbitrary waveform environments. This work details a time waveform replication technique that uses frequency domain deconvolution, including a theoretical overview and implementation details. Example usage is demonstrated using a simple structural dynamics system and complicated control waveforms at multiple degrees of freedom.
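A stripped-down, single-channel sketch of frequency-domain deconvolution is given below: the target waveform spectrum is divided by a (regularized) FRF and transformed back to obtain a drive signal. The FRF, waveform, and regularization are synthetic illustrations, not the MIMO implementation in Rattlesnake.

```python
import numpy as np

fs, n = 4096, 8192
t = np.arange(n) / fs
target = np.sin(2 * np.pi * 50 * t) * np.exp(-2.0 * t)        # desired response waveform

# Synthetic single-input/single-output FRF: a lightly damped 80 Hz resonator
f = np.fft.rfftfreq(n, 1 / fs)
wn, zeta = 2 * np.pi * 80, 0.05
H = 1.0 / (wn**2 - (2 * np.pi * f) ** 2 + 2j * zeta * wn * (2 * np.pi * f))

# Deconvolve: regularized division of the target spectrum by the FRF, then inverse FFT
eps = 1e-6 * np.max(np.abs(H))
drive_spec = np.fft.rfft(target) * np.conj(H) / (np.abs(H) ** 2 + eps**2)
drive = np.fft.irfft(drive_spec, n)
print("drive RMS:", np.sqrt(np.mean(drive**2)))
```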
A new Adaptive Mesh Refinement (AMR) keyword was added to the CTH hydrocode developed at Sandia National Laboratories (SNL). The new indicator keyword, "ratec*ycle", allows the user to specify the minimum number of computational cycles before an AMR block is allowed to be un-refined. This option is designed to allow the analyst to control how quickly a block is un-refined to avoid introducing anomalous waves in their solution due to information propagating across mesh resolution changes. For example, in reactive flow simulations it is often desirable to accurately capture the expansion region behind the reaction front. The effect of this new option was examined using the XHVRB model for XTX8003 to model the propagation of the detonation wave in explosives in small channels, and also for a simpler explosive model driving a steel case. The effect on computational cost as a function of this new option was also examined.
Underground caverns in salt formations are promising geologic features to store hydrogen (H2) because of salt's extremely low permeability and self-healing behavior. Successful salt-cavern H2 storage schemes must maximize the efficiency of cyclic injection-production while minimizing H2 loss through adjacent damaged salt. The salt cavern storage community, however, has not fully understood the geomechanical behaviors of salt rocks driven by quick operation cycles of H2 injection-production, which may significantly impact the cost-effective storage-recovery performance. Our field-scale generic model captures the impact of combined drag and back stressing on the salt creep behavior corresponding to cycles of compression and extension, which may lead to substantial loss of cavern volume over time and diminish the cavern performance for H2 storage. Our preliminary findings indicate that it is essential to develop a new salt constitutive model based on geomechanical tests of site-specific salt rock to probe the cyclic behaviors of salt both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and fatigue.
Numerical simulations were performed in 3D Cartesian coordinates to examine the post-detonation processes produced by the detonation of a 12 mm-diameter hemispherical PETN explosive charge in air. The simulations captured air dissociation by the Mach 20+ shock, chemical equilibration, and afterburning using finite-rate chemical kinetics with a skeletal chemical reaction mechanism. The Becker-Kistiakowsky-Wilson real-gas equation of state is used for the gas phase. A simplified programmed burn model is used to seamlessly couple the detonation propagation through the explosive charge to the post-detonation reaction processes inside the fireball. Four charge sizes were considered, including diameters of 12 mm, 38 mm, 120 mm, and 1200 mm. The computed blast, shock structures, and chemical composition within the fireball agree with the literature. The evolution of the flow at early times is shown to be gas-dynamically driven and nearly self-similar when time and space are scaled. The flow fields were azimuthally averaged and a mixing layer analysis was performed. The results show differences in the temperature and chemical composition with increasing charge size, implying a transition from a chemical kinetic-limited to a mixing-limited regime.
Multifidelity emulators have found wide-ranging applications in both forward and inverse problems within the computational sciences. Thanks to recent advancements in neural architectures, they provide significant flexibility for integrating information from multiple models, all while retaining substantial efficiency advantages over single-fidelity methods. In this context, existing neural multifidelity emulators operate by separately resolving the linear and nonlinear correlation between equally parameterized high- and low-fidelity approximants. However, many complex model ensembles in science and engineering applications only exhibit a limited degree of linear correlation between models. In such a case, the effectiveness of these approaches is impeded, i.e., larger datasets are needed to obtain satisfactory predictions. In this work, we present a general strategy that seeks to maximize the linear correlation between two models through input encoding. We showcase the effectiveness of our approach through six numerical test problems, and we show the ability of the proposed multifidelity emulator to accurately recover the high-fidelity model response under an increasing number of quasi-random samples. In our experiments, we show that input encoding in many cases produces emulators with significantly simpler nonlinear correlations. Finally, we demonstrate how the input encoding can be leveraged to facilitate the fusion of information between low- and high-fidelity models with dissimilar parametrization, i.e., situations in which the number of inputs differs between the low- and high-fidelity models.
Multi-axis testing has become a popular test method because it provides a more realistic simulation of a field environment when compared to traditional vibration testing. However, field data may not be available to derive the multi-axis environment. This means that methods are needed to generate “virtual field data” that can be used in place of measured field data. Transfer path analysis (TPA) has been suggested as a method to do this since it can be used to estimate the excitation forces on a legacy system and then apply these forces to a new system to generate virtual field data. This chapter will provide a review of using TPA methods to do this. It will include a brief background on TPA, discuss the benefits of using TPA to compute virtual field data, and delve into the areas for future work that could make TPA more useful in this application.
Photovoltaic modules undergoing laboratory hail tests were observed using high speed video to analyze the key characteristics of impact-induced glass fracture, including crack onset time, initiation location relative to the impact site, and propagation trends. Fifteen commercially representative glass-glass thin-film modules were recorded at 300,000 frames per second during hail impacts which happened to cause glass fracture. Images were processed to identify the time between impact and first plausible glass crack appearance (average 126 μs, standard deviation 59 μs) along with the time to a confirmed crack (average 158 μs, standard deviation 77 μs), during the ice ball impacts which had a median kinetic energy of 47 J delivered by 55 mm diameter balls. Limiting factors for identifying glass crack timings were ice ball fragmentation obscuring the impact site and indistinct initial crack appearance, which were inherent to the images and not improved with processing. Computational simulations corresponding to each impact event showed that glass stresses were still localized to the impact site during times with definitively identifiable fracture, and even impacts which did not induce failure created local stress magnitudes exceeding stress levels associated with static glass fracture. These observations confirm that impact-induced glass failure is a time- and rate-dependent phenomenon. Results from this study provide baseline metrics for developing a glass fracture criterion to predict module damage during hail impact events, which in turn allows for analysis of design features that may affect damage susceptibility.
Vapor-deposited PETN films undergo significant microstructure evolution when exposed to elevated temperatures, even for short periods of time. This accelerated aging impacts initiation behavior and can lead to chemical changes as well. In this study, as-deposited and aged PETN films are characterized using scanning electron microscopy and ultra-high performance liquid chromatography and compared with changes in initiation behavior measured via a high-throughput experimental platform that uses laser-driven flyers to sequentially impact an array of small explosive samples. Accelerated aging leads to rapid coarsening of the grain structure. At longer times, little additional coarsening is evident, but the distribution of porosity continues to evolve. These changes in microstructure correspond to shifts in the initiation threshold and onset of reactions to higher flyer impact velocities.
For an energy system to be truly equitable, it should provide affordable and reliable energy services to disadvantaged and underserved populations. Disadvantaged communities often face a combination of economic, social, health, and environmental burdens and may be geographically isolated (e.g., rural communities), which systematically limits their opportunity to fully participate in aspects of economic, social, and civic life.