Battery energy storage systems (BESSs) are crucial for modernizing the power grid but are monitored by sensors that are susceptible to anomalies such as failures, faults, or cyberattacks that could affect BESS functionality. Much work has been done to detect sensor anomalies, but a research gap persists in responding to them. An approach is proposed to mitigate the damage caused by additive bias anomalies by employing one of three estimators, selected according to the anomalies present. A tuned cumulative sum (CUSUM) algorithm is used to identify anomalies, and a set of rules is proposed to select an estimator that will isolate the effect of the anomaly. The proposed approach is evaluated in two simulation studies, one in which an anomaly impacts an input sensor and one in which an anomaly impacts an output sensor.
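As an illustration of the detection step, the sketch below implements a generic two-sided CUSUM detector on sensor residuals; the slack and threshold values are placeholders, not the tuned parameters from the paper, and the estimator-selection rules are only indicated by a comment.

```python
def cusum_detect(residuals, k=0.5, h=5.0):
    """Two-sided CUSUM on sensor residuals (measurement minus estimate).

    k is the slack (allowance) parameter and h the decision threshold;
    both are illustrative values, not the paper's tuned settings.
    Returns the index of the first alarm, or None if no alarm fires.
    """
    s_hi = s_lo = 0.0
    for i, r in enumerate(residuals):
        s_hi = max(0.0, s_hi + r - k)   # accumulates a positive additive bias
        s_lo = max(0.0, s_lo - r - k)   # accumulates a negative additive bias
        if s_hi > h or s_lo > h:
            return i  # anomaly flagged; the rule set would now pick an estimator
    return None
```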
Physical experiments are often expensive and time-consuming. Test engineers must certify the compatibility of aircraft and their weapon systems before they can be deployed in the field, but the required testing is time-consuming, expensive, and resource-limited. Adopting Bayesian adaptive designs is a promising way to borrow from the successes seen in the clinical trials domain. The use of predictive probability (PP) to stop testing early and make faster decisions is particularly appealing given the aforementioned constraints. Given the high-consequence nature of the tests performed in the national security space, new methods must be well understood before they are deployed. Although PP has been thoroughly studied for binary data, there is less work with continuous data, where many reliability studies are interested in certifying the specification limits of components. A simulation study evaluating the robustness of this approach indicates that early stopping based on PP is reasonably robust to minor assumption violations, especially when only a few interim analyses are conducted. The simulation study also compares PP to conditional power, showing their relative strengths and weaknesses. A post-hoc analysis exploring whether release requirements of a weapon system from an aircraft are within specification with the desired reliability resulted in stopping the experiment early, saving 33% of the experimental runs.
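A minimal Monte Carlo sketch of predictive-probability early stopping for continuous (normal) data is shown below; the flat-prior normal model and the plug-in reliability criterion are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def predictive_probability(y_interim, n_total, spec_limit,
                           rel_target=0.9, n_sims=10_000, seed=0):
    """P(final analysis declares success | interim data), by simulation.

    Success: the plug-in estimate of P(Y < spec_limit) reaches rel_target.
    """
    rng = np.random.default_rng(seed)
    n, ybar, s = len(y_interim), np.mean(y_interim), np.std(y_interim, ddof=1)
    wins = 0
    for _ in range(n_sims):
        mu = rng.normal(ybar, s / np.sqrt(n))        # posterior draw of the mean
        y_all = np.concatenate([y_interim, rng.normal(mu, s, n_total - n)])
        rel = norm.cdf(spec_limit, loc=y_all.mean(), scale=y_all.std(ddof=1))
        wins += rel >= rel_target
    return wins / n_sims  # compare to futility/efficacy cutoffs at each interim
```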
Over the past few years, advancements in closed-loop geothermal systems (CLGS), also called advanced geothermal systems (AGS), have sparked a renewed interest in these types of designs. CLGS have certain advantages over traditional and enhanced geothermal systems (EGS), including not requiring in-situ reservoir permeability, conservation of the circulating fluid, and allowing for different fluids, including working fluids directly driving a turbine at the surface. CLGS may be attractive in environments where water resources are limited, rock contaminants must be avoided, and stimulation treatments are not available (e.g., due to regulatory or technical reasons). Despite these advantages, CLGS have some challenges, including limited surface area for heat transfer and requiring long wellbores and laterals to obtain multi-MW output in conduction-only reservoirs. To date, CLGS have been investigated primarily in conduction-only systems. In this paper, we explore the impact of both forced and natural convection on the levels of heat extraction with a CLGS deployed in a hot wet rock reservoir. We bound the potential benefits of convection by investigating liquid reservoirs over a range of natural and forced convective coefficients. Additionally, we investigate the effects of reservoir permeability, porosity, and geothermal temperature gradient on CLGS outputs. Reservoir simulations indicate that reservoir permeabilities of at least ~100 mD are required for natural convection to increase the heat output with respect to a conduction-only scenario. The impact increases with increasing reservoir temperature. When subject to a forced convection flow field, Darcy velocities of at least 10⁻⁷ m/s are required to obtain an increase in heat output.
Modal characterization of a structure is necessary to inform predictive simulation models. Unfortunately, cost and schedule limitations tend to prioritize other dynamic tests, which can lead to inadequate or nonexistent modal testing. To utilize the dynamic test data that is acquired, analysts can extract operational deflection shapes (ODS) which can then be used as a substitute for modal data in model updating and structure characterization. However, extremely high levels of excitation during vibration testing may introduce nonlinear behavior that distorts the ODS prediction. This chapter investigates the reliability of using ODS as a replacement for traditional modal testing on an academic structure designed to respond with intermittent impact. This chapter calculates ODS from responses at several input excitation levels, and the influence of nonlinear impact on the resulting operating modes is discussed.
Resonant plate shock testing techniques have been used for mechanical shock testing at Sandia for several decades. A mechanical shock qualification test is often done by performing three separate uniaxial tests on a resonant plate to simulate one shock event. Multi-axis mechanical shock activities, in which shock specifications are simultaneously met in different directions during a single shock test event performed in the lab, are not always repeatable and depend greatly on the fixture used during testing. This chapter provides insights into various designs of a concept fixture, comprising both a resonant plate and an angle bracket, used for multi-axis shock testing from a modeling and simulation point of view based on the results of finite element modal analysis. Initial model validation and testing show substantial excitation of the system under test as the fundamental modes drive the response in all three directions. The response also shows that higher-order modes influence the system, that the axial and transverse responses are highly coupled, and that tunability is difficult to achieve. By varying the material properties, changing thicknesses, adding masses, and moving the location of the fixture on the resonant plate, the response can be changed significantly. The goal of this work is to identify the parameters that have the greatest influence on the response of the system when using the angle bracket fixture for a mechanical shock test, with the intent of enabling tunability of the system.
In this work, we evaluate the usefulness of nonsmooth basis functions for representing the periodic response of a nonlinear system subject to contact/impact behavior. As with sine and cosine basis functions for classical Fourier series, which have C∞ smoothness, nonsmooth counterparts with C0 smoothness are defined to develop a nonsmooth functional representation of the solution. Some properties of these basis functions are outlined, such as periodicity, derivatives, and orthogonality, which are useful for functional series applied via the Galerkin method. Least-squares fits of the classical Fourier series and nonsmooth basis functions are presented and compared using goodness-of-fit metrics for time histories from vibro-impact systems with varying contact stiffnesses. This formulation has the potential to significantly reduce the computational cost of harmonic balance solvers for nonsmooth dynamical systems. Rather than requiring many harmonics to capture a system response using classical, smooth Fourier terms, the frequency domain discretization could be captured by a combination of a finite Fourier series supplemented with nonsmooth basis functions to improve convergence of the solution for contact-impact problems.
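To make the idea concrete, the sketch below least-squares fits a kinked periodic signal with a classical Fourier basis optionally augmented by C0 terms; the triangle wave and rectified sine used here are one plausible choice of nonsmooth basis, not necessarily the paper's exact functions.

```python
import numpy as np

def basis_matrix(t, T, n_harm, nonsmooth=True):
    """Columns: constant, smooth Fourier harmonics, optional C0 terms."""
    w = 2 * np.pi / T
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    if nonsmooth:
        saw = (t / T) % 1.0
        cols += [2 * np.abs(2 * saw - 1) - 1,   # triangle wave: C0 kinks
                 np.abs(np.sin(w * t))]          # rectified sine: C0 kinks
    return np.column_stack(cols)

# goodness of fit for a vibro-impact-like (kinked) periodic response
T = 1.0
t = np.linspace(0, T, 400, endpoint=False)
x = np.sin(2 * np.pi * t) + 0.3 * np.abs(np.sin(2 * np.pi * t))
A = basis_matrix(t, T, n_harm=3)
c, *_ = np.linalg.lstsq(A, x, rcond=None)
r2 = 1 - np.sum((x - A @ c) ** 2) / np.sum((x - x.mean()) ** 2)
```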
We demonstrate high-efficiency emission at wavelengths longer than 540 nm from InGaN quantum wells regrown on periodic arrays of GaN nanostructures and explore their incorporation into nanophotonic resonators for semiconductor laser development.
Plenoptic background-oriented schlieren is a diagnostic technique that enables the measurement of three-dimensional refractive gradients by combining background-oriented schlieren with a plenoptic light field camera. The plenoptic camera is a modification of a traditional camera via the insertion of an array of microlenses between the imaging lens and the digital sensor. This allows the collection of both spatial and angular information on the incoming light rays and therefore provides three-dimensional information about the imaged scene. Background-oriented schlieren requires a relatively simple experimental configuration, including only a camera viewing a patterned background through the density field of interest. By using a plenoptic camera to capture background-oriented schlieren images, the optical distortion created by density gradients in three dimensions can be measured. This chapter is intended to review critical developments in plenoptic background-oriented schlieren imaging and provide an outlook for future applications of this measurement technique.
In this work, the frequency response of a simplified shaft-bearing assembly is studied using numerical continuation. Roller-bearing clearances give rise to contact behavior in the system, and past research has focused on the nonlinear normal modes of the system and its response to shock-type loads. A harmonic balance method (HBM) solver is applied instead of a time integration solver, and numerical continuation is used to map out the system’s solution branches in response to a harmonic excitation. Stability analysis is used to understand the bifurcation behavior and possibly identify numerical or system-inherent anomalies seen in past research. Continuation is also performed with respect to the forcing magnitude, resulting in what are known as S-curves, in an effort to detect isolated solution branches in the system response.
Operation and control of a galvanically isolated three-phase AC-AC converter for solid state transformer applications is described. The converter regulates bidirectional power transfer by phase shifting voltages applied on either side of a high-frequency transformer. The circuit structure and control system are symmetrical around the transformer. Each side operates independently, enabling conversion between AC systems with differing voltage magnitude, phase angle, and frequency. This is achieved in a single conversion stage with low component count and high efficiency. The modulation strategy is discussed in detail and expressions describing the relationship between phase shift and power transfer are presented. Converter operation is demonstrated in a 3 kW hardware prototype.
As deep learning networks increase in size and performance, so do the associated computational costs, which are approaching prohibitive levels. Spiking neural networks (SNNs) offer low-power, event-driven computation, and dendrites offer powerful nonlinear "on-the-wire" computational capabilities, increasing the expressivity of the point neuron while preserving many of the advantages of SNNs. We seek to demonstrate the potential of dendritic computations by combining them with SNNs for deep learning applications. To this end, we have developed a library that adds dendritic computation to SNNs within the PyTorch framework, enabling complex deep learning networks that retain the low-power advantages of SNNs. Our library leverages a dendrite CMOS hardware model to inform the software model, enabling nonlinear computation integrated with snnTorch at scale. By leveraging dendrites in a deep learning framework, we examine the capabilities of dendrites via coincidence detection and comparison in a machine learning task with an SNN. Finally, we discuss potential deep learning applications in the context of current state-of-the-art deep learning methods and energy-efficient neuromorphic hardware.
Spatial navigation involves the formation of coherent representations of a map-like space while simultaneously tracking current location in a primarily unsupervised manner. Despite a plethora of neurophysiological experiments revealing spatially tuned neurons across the mammalian neocortex and subcortical structures, it remains unclear how such representations are acquired in the absence of explicit allocentric targets. Drawing upon the concept of predictive learning, we utilize a biologically plausible learning rule that combines sensory-driven observations with internally driven expectations and learns in a contrastive manner to better predict sensory information. The local and online nature of this approach is ideal for deployment to neuromorphic hardware for edge applications. We implement this learning rule in a network with the feedforward and feedback pathways known to be necessary for spatial navigation. After training, we find that the receptive fields of the modeled units resemble experimental findings, with allocentric and egocentric representations in the expected order along processing streams. These findings illustrate how a local and self-supervised learning method for predicting sensory information can extract latent structure from the environment.
Uncertainty quantification (UQ) plays a vital role in addressing the challenges and limitations encountered in full-waveform inversion (FWI). Most UQ methods require parameter sampling, which in turn requires many forward and adjoint solves. This often results in very high computational overhead compared to traditional FWI, which hinders the practicality of UQ for FWI. In this work, we develop an efficient UQ-FWI framework based on an unsupervised variational autoencoder (VAE) to assess the uncertainty of single- and multi-parameter FWI. The inversion operator is modeled using an encoder-decoder network. The inputs to the network are seismic shot gathers, and the outputs are samples (a distribution) of model parameters. We then use these samples to estimate the mean and standard deviation of each parameter population, which provide insight into the uncertainty in the inversion process. To speed up the UQ process, we carry out the reconstruction in an unsupervised learning approach. Moreover, we physics-constrain the network by injecting the FWI gradients during the backpropagation process, leading to better reconstruction. The computational cost of the proposed approach is comparable to that of traditional autoencoder full-waveform inversion (AE-FWI), which encourages its use to gain further insight into the quality of the inversion. We apply this idea to synthetic data to show its potential in assessing uncertainty in multi-parameter FWI.
Measurements of the oxidation rates of various forms of carbon (soot, graphite, coal char) have often shown an unexplained attenuation with increasing temperatures in the vicinity of 2000 K, even when accounting for diffusional transport limitations and gas-phase chemical effects (e.g., CO2 dissociation). With the development of oxy-fuel combustion approaches for pulverized coal utilization with carbon capture, high particle temperatures are readily achieved in sufficiently oxygen-enriched environments. In this work, a new semi-global intrinsic kinetics model for high-temperature carbon oxidation is created by starting with a previously developed 5-step mechanism, which was shown to reproduce all major known trends in carbon oxidation except the high-temperature kinetic falloff, and incorporating a recently discovered surface-oxide decomposition step. The predictions of this new model are benchmarked by deploying the kinetic model in a steady-state reacting particle code (SKIPPY) and comparing the simulated results against a carefully measured set of pulverized coal char combustion temperature measurements over a wide range of oxygen concentrations in N2 and CO2 environments. The results show that the inclusion of the spontaneous surface-oxide decomposition reaction step significantly improves predictions at high particle temperatures. Furthermore, the simulations reveal that O atoms released from the oxide decomposition step enhance the radical pool in the near-surface region and within the particle interior itself. Incorporation of literature rates for O and OH reactions with the carbon surface results in a reduction in the predicted radical pool concentrations and a very minor enhancement of the overall carbon oxidation rate.
Deep neural networks for automatic target recognition (ATR) have been shown to be highly successful for a large variety of Synthetic Aperture Radar (SAR) benchmark datasets. However, the black-box nature of neural network approaches raises concerns about how models come to their decisions, especially in high-stakes scenarios. Accordingly, a variety of techniques are being pursued that seek to offer understanding of machine learning algorithms. In this paper, we first provide an overview of explainability and interpretability techniques, introducing their concepts and the insights they produce. Next, we summarize several methods for computing specific approaches to explainability and interpretability, as well as for analyzing their outputs. Finally, we demonstrate the application of several attribution map methods and apply both attribution analysis metrics and localization interpretability analysis to six neural network models trained on the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset to illustrate the insights these methods offer for analyzing SAR ATR performance.
Hargis, Joshua W.; Egeln, Anthony; Houim, Ryan; Guildenbecher, Daniel R.
Visualization of flow structures within post-detonation fireballs has been performed for benchmark validation of numerical simulations. Custom pressed PETN explosives with a 12-mm diameter hemispherical form factor were used to produce a spherically symmetric post-detonation flow with low soot yield. Hydroxyl-radical planar laser-induced fluorescence (OH-PLIF) was employed to visualize the structure from approximately 10 μs to 35 μs after shock breakout from the explosive pellet. Fireball simulations were performed using the HyBurn computational fluid dynamics (CFD) package. Experimental OH-PLIF results were compared to synthetic OH-PLIF obtained by post-processing the CFD simulations. The comparison shows that CFD replicates much of the flow structure observed in the experiments while revealing potential differences in turbulent length scales and OH kinetics. These results provide a significant advancement in experimental resolution of these harsh turbulent combustion environments and validate physical models thereof.
Autonomous and semi-autonomous robot manipulation systems require fast classification and localization of objects in the world to realize online generation of motion plans and manipulation waypoints in real time. Furthermore, estimating the constraints and plausible motions of objects of interest in space is paramount for autonomous manipulation tasks. For non-grasping tasks like pushing a box or opening an unlatched door, physical properties such as the center of mass and the location of constraints like hinges or bearings must be considered. This paper presents a methodology for rapidly inferring constraints and motion plans for objects of interest to be manipulated. The approach is based on a combination of object detection, instance segmentation, localization methods, and algebraic relation of different semantically labeled objects. These methods for motion estimation are implemented on a color-depth (RGB-D) camera and a 7 degree-of-freedom serial robot arm. The algorithm's performance is evaluated across different arm poses, assessing centroid accuracy, estimation speed, and motion estimation performance. The algorithms are tested on an exemplar problem consisting of a block constrained on a dual linear rail system, i.e., constrained linear motion. Experimental results showcase the scalability of this approach to multiple classes with sublinear slowdowns and linear motion plan direction errors as low as 1.23E-4 rad. The manuscript also outlines how these methods for rapid constrained-object motion estimation can be leveraged for other applications.
Concentrating solar power (CSP) plants with integrated thermal energy storage (TES) have successfully been coupled with photovoltaics (PV) + chemical battery energy storage (BES) in recent commercial-scale projects to balance system cost and diurnal power availability. Sandia National Laboratories has been tasked with designing an advanced solar energy system to power Kirtland Air Force Base (KAFB), where Sandia is co-located in Albuquerque, NM, USA. This design process requires optimization of the individual components and capacities of the hybrid system. Preliminary modeling efforts have shown that a hybrid CSP+TES/PV+BES system in Albuquerque, NM is sufficient for net-zero power generation for Sandia/KAFB for the next decade. However, the ability to meet the load in real time (and minimize energy export) requires a balance of generation and storage assets. Our results also show that excess PV used to charge TES improves resilience and overall renewables-to-load performance for the system. Here we present the results of a parametric study varying the land-use proportions of CSP and PV and the TES and BES capacities. We evaluate the effects of these variables on energy generation, real-time load satisfaction, site resilience to grid outages, and levelized cost of energy (LCOE) to determine viable hybrid solar energy designs and their cost implications.
Geogenic gases often reside in intergranular pore space, fluid inclusions, and within mineral grains. In particular, helium-4 (4He) is generated by alpha decay of uranium and thorium in rocks. The emitted 4He nuclei can be trapped in the rock matrix or in fluid inclusions. Recent work has shown that plastic deformation of crustal rocks releases helium at concentrations above atmospheric background that are detectable in the field. However, it is unclear how rock type and deformation modality affect the cumulative gas released. This work seeks to address how the different deformation modalities observed in several rock types affect the release of helium. Axial compression tests with granite, rhyolite, tuff, dolostone, and sandstone, conducted under vacuum conditions, were used to measure the transient release of helium from each sample during crushing. It was found that, when crushed with forces up to 97,500 N, each rock type released helium at a rate quantifiable using a helium mass spectrometer leak detector. For plutonic rock like granite, the helium flow rate spikes with the application of force as the samples elastically deform until fracture, then decays slowly until grain breakdown and comminution begin to occur. The rhyolite and tuff do not experience such large spikes in helium flow rate, with the rhyolites fracturing at much lower force and the tuffs compacting instead of fracturing due to their high porosity; both instead exhibit a smaller but steady helium release as they are crushed. The cumulative helium release for the volcanic tuffs varies by as much as two orders of magnitude but is fairly consistent for the denser rhyolite and granite tested. The results indicate that there is a large degassing of helium as rocks are elastically and inelastically deformed prior to fracturing. For more porous and less brittle rocks, the cumulative release depends more on the degree of deformation applied. These results are compared with known U/Th radioisotope contents in the rocks to determine whether the trapped helium was produced in the rock or arrived by secondary migration of 4He.
The ability to accurately predict the structure and dynamics of pool fires using computational simulations is of great interest in a wide variety of applications, including accidental and wildland fires. However, the presence of physical processes spanning a broad range of spatial and temporal scales poses a significant challenge for simulations of such fires, particularly at conditions near the transition between laminar and turbulent flow. In this study, we examine the transition to turbulence in methane pool fires using high-resolution simulations with multi-step finite rate chemistry, where adaptive mesh refinement (AMR) is used to directly resolve small-scale flow phenomena. We perform three simulations of methane pool fires, each with increasing diameter, corresponding to increasing inlet Reynolds and Richardson numbers. As the diameter increases, the flow transitions from organized vortex roll-up via the puffing instability to much more chaotic mixing associated with finger formation along the shear layer and core collapse near the inlet. These effects combine to create additional mixing close to the inlet, thereby enhancing fuel consumption and causing more rapid acceleration of the fluid above the pool. We also make comparisons between the transition to turbulence and core collapse in the present pool fires and in inert helium plumes, which are often used as surrogates for the study of buoyant reacting flows.
Analytic relations that describe crack growth are vital for modeling experiments and building a theoretical understanding of fracture. Upon constructing an idealized model system for the crack and applying the principles of statistical thermodynamics, it is possible to formulate the rate of thermally activated crack growth as a function of load, but the result is analytically intractable. Here, an asymptotically correct theory is used to obtain analytic approximations of the crack growth rate from the fundamental theoretical formulation. These crack growth rate relations are compared to those that exist in the literature and are validated with respect to Monte Carlo calculations and experiments. The success of this approach is encouraging for future modeling endeavors that might consider more complicated fracture mechanisms, such as inhomogeneity or a reactive environment.
We investigate the kinetics and report the time-resolved concentrations of key chemical species in the oxidation of tetrahydrofuran (THF) at 7500 torr and 450-675 K. Experiments are carried out using high-pressure multiplexed photoionization mass spectrometry (MPIMS) combined with tunable vacuum ultraviolet radiation from the Berkeley Lab Advanced Light Source. Intermediates and products are quantified using reference photoionization (PI) cross sections, when available, and constrained by a global carbon-balance tracking approach applied at all experimental temperatures simultaneously for the species without reference cross sections. From carbon balancing, we determine time-resolved concentrations for the ROO˙ and ˙OOQOOH radical intermediates, butanedial, and the combined concentration of the ketohydroperoxide (KHP) and unsaturated hydroperoxide (UHP) products stemming from the ˙QOOH + O2 reaction. Furthermore, we quantify a product that we tentatively assign as fumaraldehyde, which arises from UHP decomposition via H2O or ˙OH + H loss. The experimentally derived species concentrations are compared with model predictions using the most recent literature THF oxidation mechanism of Fenard et al. (Combust. Flame, 2018, 191, 252-269). Our results indicate that the literature mechanism significantly overestimates THF consumption and the UHP + KHP concentration at our conditions. The model predictions are sensitive to the rate coefficient for the ROO˙ isomerization to ˙QOOH, which is the gateway for radical chain propagating and branching pathways. Comparisons with our recent results for cyclopentane (Demireva et al., Combust. Flame, 2023, 257, 112506) provide insights into the effect of the ether group on reactivity and highlight the need to determine accurate rate coefficients for ROO˙ isomerization and subsequent reactions.
Traditional electronics assemblies are typically packaged using physically or chemically blown potted foams to reduce the effects of shock and vibration. These potting materials have several drawbacks including manufacturing reliability, lack of internal preload control, and poor serviceability. A modular foam encapsulation approach combined with additively manufactured (AM) silicone lattice compression structures can address these issues for packaged electronics. These preloaded silicone lattice structures, known as foam replacement structures (FRSs), are an integral part of the encapsulation approach and must be properly characterized to model the assembly stresses and dynamics. In this study, dynamic test data is used to validate finite element models of an electronics assembly with modular encapsulation and a direct ink write (DIW) AM silicone FRS. A variety of DIW compression architectures are characterized, and their nominal stress-strain behavior is represented with hyperfoam constitutive model parameterizations. Modeling is conducted with Sierra finite element software, specifically with a handoff from assembly preloading and uniaxial compression in Sierra/Solid Mechanics to linear modal and vibration analysis in Sierra/Structural Dynamics. This work demonstrates the application of this advanced modeling workflow, and results show good agreement with test data for both static and dynamic quantities of interest, including preload, modal, and vibration response.
Although fire events inside nuclear power plants (NPPs) are infrequent, when they occur they can affect the safe operation of the plant if there is not sufficient protection addressing the risk. As mitigation for fire events, NPPs have comprehensive fire protection systems intended to reduce the likelihood of a fire event and the associated consequences. An electrical arcing fault involving components made of aluminum is one such hazard that could lead to a significant consequence. Because the original evaluation of high-energy arcing faults (HEAF) was performed on components made of copper, there is interest in understanding the effects of aluminum in these incidents. The Nuclear Regulatory Commission (NRC) has led a series of HEAF experiments at a facility near Philadelphia, PA, in conjunction with the National Institute of Standards and Technology (NIST), European and Japanese partners, and Sandia National Laboratories (SNL). To capture a range of different HEAF events, Sandia has provided high-speed visible and IR videography from multiple angles during this series of experiments. One of the data products provided by Sandia is the combination and synchronization of infrared and visible data from the multiple cameras used in the tests. This multispectral fusion of information (visible, MWIR, and LWIR) allows the customer to visualize the tests and understand when different events happen during the 2- to 4-second duration of a test. The presentation will dissect three experiments and describe the different events occurring during their duration. The presentation will compare the behavior of equipment that contains aluminum components versus equipment containing copper or steel. Finally, data from a switchgear experiment will be presented to complement the bus duct data.
2024 IEEE International Power Modulator and High Voltage Conference, IPMHVC 2024
Graves, David Z.; Lehmann, Megan; Bilbao, Argenis V.; Bayne, Stephen B.; Schrock, Emily A.
This paper builds upon previous research in developing a SiC Drift Step Recovery Diode (DSRD) model in Silvaco Victory Device. For this research, the DSRD is based on an N-type substrate for improved manufacturability. The model described in this paper was developed by characterizing DSRD devices under DC and transient conditions. The details of the pulsed power testbed developed for the transient characterization are outlined in this paper. The goal of this model is to enable the rapid development of future pulsed power systems and further device structure optimization.
Folsom, Matthew; Sewell, Steven; Cumming, William; Zimmerman, Jade; Sabin, Andy; Downs, Christine; Hinz, Nick; Winn, Carmen; Schwering, Paul C.
Blind geothermal systems are believed to be common in the Basin and Range province and represent an underutilized source of renewable green energy. Their discovery has historically been by chance, but more systematic strategies for exploring these resources are being developed. One characteristic of blind systems is that they are often overlain by near-surface zones of low resistivity caused by alteration of the overlying sediments to swelling clays. These zones can be imaged by resistivity-based geophysical techniques to facilitate discovery and characterization. Here we present a side-by-side comparison of resistivity models produced from helicopter transient electromagnetic (HTEM) and ground-based broadband magnetotelluric (MT) surveys over a previously discovered blind geothermal system with measured shallow temperatures of ~100°C in East Hawthorne, NV. The HTEM and MT data were collected as part of the BRIDGE project, an initiative for improving methodologies for discovering blind geothermal systems. HTEM data were collected and modelled along profiles, and the results suggest the method can resolve the resistivity structure to depths of 300-500 m. A 61-station MT survey was collected on an irregular grid with ~800 m station spacing and modelled in 3D on a rotated mesh aligned with the HTEM flight directions. The resistivity models are compared with results from potential-fields datasets, shallow temperature surveys, and available temperature gradient data in the area of interest. We find that the superior resolution of HTEM can reveal near-surface details often missed by MT. However, MT is sensitive to depths of several kilometers and can resolve 3D structures, and it is thus better suited for single-prospect characterization. We conclude that HTEM is a more practical subregional prospecting tool than MT because it is highly scalable and can rapidly discover shallow zones of low resistivity that may indicate the presence of a blind geothermal system. Other factors such as land access and ground-disturbance considerations may also be decisive in choosing the best method for a particular prospect. Resistivity methods in general cannot fully characterize the structural setting of a geothermal system, so we used potential fields and other datasets to guide the creation of a diagrammatic structural model at East Hawthorne.
Underground caverns in salt formations are promising geologic features for storing hydrogen (H2) because of salt's extremely low permeability and self-healing behavior. Successful salt-cavern H2 storage schemes must maximize the efficiency of cyclic injection-production while minimizing H2 loss through adjacent damaged salt. The salt cavern storage community, however, has not fully understood the geomechanical behavior of salt rocks driven by rapid operational cycles of H2 injection-production, which may significantly impact cost-effective storage-recovery performance. Our field-scale generic model captures the impact of combined drag and back stressing on salt creep behavior corresponding to cycles of compression and extension, which may lead to substantial loss of cavern volume over time and diminish cavern performance for H2 storage. Our preliminary findings indicate that it is essential to develop a new salt constitutive model, based on geomechanical tests of site-specific salt rock, to probe the cyclic behaviors of salt both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and fatigue.
The impact of more extreme climate conditions under global warming on soil organic carbon (SOC) dynamics remains unquantified. Here we estimate the response of SOC to climate extreme shifts under 1.5 °C warming by combining a space-for-time substitution approach and global SOC measurements (0–30 cm soil). Most extremes (22 out of 33 assessed extreme types) exacerbate SOC loss under warming globally, but their effects vary among ecosystems. Only decreasing duration of cold spells exerts consistent positive effects, and increasing extreme wet days exerts negative effects in all ecosystems. Temperate grasslands and croplands negatively respond to most extremes, while positive responses are dominant in temperate and boreal forests and deserts. In tundra, 21 extremes show neutral effects, but 11 extremes show negative effects with stronger magnitude than in other ecosystems. Our results reveal distinct, biome-specific effects of climate extremes on SOC dynamics, promoting more reliable SOC projection under climate change.
A single Synthetic Aperture Radar (SAR) image is a 2-dimensional projection of a 3-dimensional scene, with very limited ability to estimate surface topography. However, multiple SAR images collected from suitably different geometries may be compared using multilateration calculations to estimate characteristics of the missing dimension. The ability to employ effective multilateration algorithms is highly dependent on the geometry of the data collections and can be cast as a least-squares exercise. A measure of Dilution of Precision (DOP) can be used to compare the relative merits of various collection geometries.
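The sketch below shows one way such a DOP measure can be computed from the collection geometry, mirroring the GPS-style least-squares formulation; the look vectors are hypothetical examples.

```python
import numpy as np

def dilution_of_precision(look_vectors):
    """DOP from unit look vectors (one row per SAR collection geometry).

    The trace of (G^T G)^{-1} shapes the least-squares position error
    covariance, so smaller values indicate a better-conditioned geometry.
    """
    G = np.asarray(look_vectors, dtype=float)
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# nearly collinear collects (poor height sensitivity) vs. diverse geometry
poor = dilution_of_precision([[1, 0, 0], [0.99, 0.14, 0.02], [0.98, 0.17, 0.05]])
good = dilution_of_precision([[1, 0, 0], [0.6, 0.8, 0], [0.5, 0.5, 0.707]])
```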
In this paper, we develop a nested chi-squared likelihood ratio test for selecting among shrinkage-regularized covariance estimators for background modeling in hyperspectral imagery. Critical to many target and anomaly detection algorithms is the modeling and estimation of the underlying background signal present in the data. This is especially important in hyperspectral imagery, wherein the signals of interest often represent only a small fraction of the observed variance, for example when targets of interest are subpixel. This background is often modeled by a local or global multivariate Gaussian distribution, which necessitates estimating a covariance matrix. Maximum likelihood estimation of this matrix often overfits the available data, particularly in high dimensional settings such as hyperspectral imagery, yielding subpar detection results. Instead, shrinkage estimators are often used to regularize the estimate. Shrinkage estimators linearly combine the overfit covariance with an underfit shrinkage target, thereby producing a well-fit estimator. These estimators introduce a shrinkage parameter, which controls the relative weighting between the covariance and shrinkage target. There have been many proposed methods for setting this parameter, but comparing these methods and shrinkage values is often performed with a cross-validation procedure, which can be computationally expensive and highly sample inefficient. Drawing from Bayesian regression methods, we compute the degrees of freedom of a covariance estimate using eigenvalue thresholding and employ a nested chi-squared likelihood ratio test for comparing estimators. This likelihood ratio test requires no cross-validation procedure and enables direct comparison of different shrinkage estimates, which is computationally efficient.
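For reference, a linear shrinkage estimator of the kind compared in the paper takes the following form; the diagonal and scaled-identity shrinkage targets shown are common choices rather than the specific targets studied.

```python
import numpy as np

def shrinkage_covariance(X, alpha, target="diag"):
    """(1 - alpha) * sample covariance + alpha * shrinkage target.

    X: (n_samples, n_bands) hyperspectral pixels; alpha in [0, 1] is the
    shrinkage parameter weighting the overfit estimate against the target.
    """
    S = np.cov(X, rowvar=False)                   # overfits in high dimensions
    if target == "diag":
        T = np.diag(np.diag(S))                   # keep variances only
    else:
        T = (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
    return (1.0 - alpha) * S + alpha * T
```

The proposed test then compares Gaussian log-likelihoods of such estimates, with degrees of freedom obtained from eigenvalue thresholding, in place of a cross-validation procedure.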
Sandia National Laboratories (SNL) has completed a comparative evaluation of three design assessment approaches for a 2-liter (2L) capacity containment vessel (CV) of a novel plutonium air transport (PAT) package designed to survive the hypothetical accident condition (HAC) test sequence defined in Title 10 of the United States (US) Code of Federal Regulations (CFR) Part 71.74(a), which includes a 129 meter per second (m/s) impact of the package into an essentially unyielding target. CVs for hazardous materials transportation packages certified in the US are typically designed per the requirements defined in the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (B&PVC) Section III Division 3 Subsection WB “Class TC Transportation Containments.” For accident conditions, the level D service limits and analysis approaches specified in paragraph WB-3224 are applicable. Data derived from finite element analyses of the 129 m/s impact of the 2L-PAT package were utilized to assess the adequacy of the CV design. Three different CV assessment approaches were investigated and compared, one based on stress intensity limits defined in subparagraph WB-3224.2 for plastic analyses (the stress-based approach), a second based on strain limits defined in subparagraph WB-3224.3, subarticle WB-3700, and Section III Nonmandatory Appendix FF for the alternate strain-based acceptance criteria approach (the strain-based approach), and a third based on failure strain limits derived from a ductile fracture model with dependencies on the stress and strain state of the material, and their histories (the Xue-Wierzbicki (X-W) failure-integral-based approach). This paper gives a brief overview of the 2L-PAT package design, describes the finite element model used to determine stresses and strains in the CV generated by the 129 m/s impact HAC, summarizes the three assessment approaches investigated, discusses the analyses that were performed and the results of those analyses, and provides a comparison between the outcomes of the three assessment approaches.
Single-axis solar trackers are typically simulated under the assumption that all modules on a given section of torque tube are at a single orientation. In reality, various mechanical effects can cause twisting along the torque tube length, creating variation in module orientation along the row. Simulation of the impact of this on photovoltaic system performance reveals that the performance loss resulting from torque tube twisting is significant at twists as small as fractions of a degree per module. The magnitude of the loss depends strongly on the design of the photovoltaic module, but does not vary significantly across climates. Additionally, simple tracker control setting tweaks were found to substantially reduce the loss for certain types of twist.
This study introduces the Progressive Improved Neural Operator (p-INO) framework, aimed at advancing machine-learning-based reduced-order models within geomechanics for underground resource optimization and carbon sequestration applications. The p-INO method transcends traditional transfer learning limitations through progressive learning, enhancing the capability of transferring knowledge from many sources. Through numerical experiments, the performance of p-INO is benchmarked against standard Improved Neural Operators (INO) in scenarios varying by data availability (different numbers of training samples). The research utilizes simulation data reflecting scenarios such as single-phase, two-phase, and two-phase flow with mechanics, inspired by the Illinois Basin Decatur Project. Results reveal that p-INO significantly surpasses conventional INO models in accuracy, particularly in data-constrained environments. Moreover, adding more a priori information (more trained models used by p-INO) can further enhance performance. These experiments demonstrate p-INO's robustness in leveraging sparse datasets for precise predictions across complex subsurface physics scenarios. The findings underscore the potential of p-INO to revolutionize predictive modeling in geomechanics, presenting a substantial improvement in computational efficiency and accuracy for large-scale subsurface simulations.
Network Operation Centers (NOCs) and Security Operation Centers (SOCs) play a critical role in addressing a wide range of threats in critical infrastructure systems such as the electric grid. However, when considering the electric grid and related industrial control systems (ICSs), visibility into the information technology (IT), operational technology (OT), and underlying physical process systems are often disconnected and standalone. As the electric grid becomes increasingly cyber-physical and faces dynamic, cyber-physical threats, it is vital that cyber-physical situational awareness (CPSA) across the interconnected system is achieved. In this paper, we review existing NOC and SOC capabilities and visualizations, motivate the need for CPSA, and define design principles with example visualizations for a next-generation grid cyber-physical integrated SOC (CP-ISOC).
More than 90% of utility-scale photovoltaic (PV) power plants in the US use single-axis trackers (SATs) due to their potential for substantially higher power production over fixed-array systems. However, they are subject to software misconfigurations and mechanical failures, leading to suboptimal tracking accuracy. If failures are left undetected, the overall power yield of the PV power plant is reduced significantly. Robust detection and diagnosis of SAT faults is needed to minimize downtime and ensure continuous and efficient operation. This work presents analytic tools based on machine learning to detect deviations in SAT tracking performance and classify SAT faults.
The power grid, traditionally perceived as an independent physical network, has undergone a significant transformation in recent years due to its integration with cyber communication networks and modern digital components. Cyber situations, including cyber-attacks and network anomalies, can directly affect the physical operation of the grid; therefore, studying this intricate relationship between the physical and cyber systems is pivotal for enhancing the resilience and security of modern power systems. In this digest, a novel Long Short-Term Memory (LSTM)-based autoencoder (AE) model for cyber-physical data fusion and threat detection is proposed. The scenario under consideration includes the effective detection of a physical disturbance and of a Denial-of-Service (DoS) attack that obstructs control commands during the physical disturbance in the power grid. Detailed analysis and quantitative results regarding the LSTM-based AE model's training and evaluation phases are provided, which highlight its key operational features and benefits for guaranteeing security and resilience in the power grid.
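A minimal PyTorch sketch of an LSTM-based autoencoder for this kind of detection is given below; the layer sizes and the reconstruction-error anomaly score are illustrative assumptions, not the digest's exact architecture.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Reconstructs windows of fused cyber-physical features; large
    reconstruction error flags disturbances or DoS-like anomalies."""
    def __init__(self, n_features, hidden=64, latent=16):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.from_latent = nn.Linear(latent, hidden)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):                       # x: (batch, time, features)
        enc, _ = self.encoder(x)
        z = self.to_latent(enc[:, -1, :])       # compressed window summary
        h = self.from_latent(z).unsqueeze(1).repeat(1, x.size(1), 1)
        out, _ = self.decoder(h)
        return out

model = LSTMAutoencoder(n_features=8)
x = torch.randn(32, 50, 8)                      # fused grid + network features
score = torch.mean((model(x) - x) ** 2, dim=(1, 2))  # per-window anomaly score
```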
Multiple scattering is a common phenomenon in acoustic media that arises from the interaction of the acoustic field with a network of scatterers. This mechanism is dominant in problems such as the design and simulation of acoustic metamaterial structures, which are often used to achieve acoustic control for sound isolation and remote sensing. In this study, we present a physics-informed neural network (PINN) capable of simulating the propagation of acoustic waves in an infinite domain in the presence of multiple rigid scatterers. This approach integrates a deep neural network architecture with the mathematical description of the physical problem in order to obtain predictions of the acoustic field that are consistent with both the governing equations and the boundary conditions. The predictions from the PINN are compared with those from a commercial finite element software model in order to assess the performance of the method.
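The core of such a PINN is a loss built from the PDE residual evaluated by automatic differentiation; the sketch below assumes a 2D Helmholtz model of time-harmonic acoustics with a hypothetical wavenumber, and omits the rigid-boundary and radiation-condition terms that complete the loss.

```python
import torch
import torch.nn as nn

# network maps (x, y) to the real and imaginary parts of the pressure field
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))

def helmholtz_residual(xy, k=2.0):
    """Residual of (Laplacian + k^2) p = 0 at collocation points xy."""
    xy = xy.clone().requires_grad_(True)
    p = net(xy)
    res = []
    for i in range(2):  # real and imaginary components
        g = torch.autograd.grad(p[:, i].sum(), xy, create_graph=True)[0]
        lap = sum(torch.autograd.grad(g[:, j].sum(), xy, create_graph=True)[0][:, j]
                  for j in range(2))
        res.append(lap + k ** 2 * p[:, i])
    return torch.stack(res, dim=1)

# training would minimize this residual plus rigid-scatterer (zero normal
# velocity) boundary terms and a far-field radiation condition
xy = torch.rand(256, 2)
loss = helmholtz_residual(xy).pow(2).mean()
loss.backward()
```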
Numerical simulations were performed in 3D Cartesian coordinates to examine the post-detonation processes produced by the detonation of a 12 mm-diameter hemispherical PETN explosive charge in air. The simulations captured air dissociation by the Mach 20+ shock, chemical equilibration, and afterburning using finite-rate chemical kinetics with a skeletal chemical reaction mechanism. The Becker-Kistiakowsky-Wilson real-gas equation of state is used for the gas phase. A simplified programmed-burn model is used to seamlessly couple the detonation propagation through the explosive charge to the post-detonation reaction processes inside the fireball. Four charge sizes were considered, with diameters of 12 mm, 38 mm, 120 mm, and 1200 mm. The computed blast, shock structures, and chemical composition within the fireball agree with the literature. The evolution of the flow at early times is shown to be gas-dynamically driven and nearly self-similar when time and space are scaled. The flow fields were azimuthally averaged, and a mixing layer analysis was performed. The results show differences in the temperature and chemical composition with increasing charge size, implying a transition from a chemical-kinetics-limited to a mixing-limited regime.
Coherent anti-Stokes Raman scattering (CARS) and nitric oxide molecular tagging velocimetry (NO-MTV) are used to characterize the freestream in Sandia’s Hypersonic Shock Tunnel (HST) using a burst-mode laser operated at 100 kHz. Experiments are performed at nominal freestream velocities of 3 and 4 km/s using both air and N2 test gas. The CARS diagnostic provides nonequilibrium characterization of the flow by measuring vibrational and rotational temperatures of N2 and O2, which are compared to NO temperatures from separate laser absorption experiments. Simultaneous, collinear freestream velocities are measured using NO-MTV along with pitot pressures. This extensive freestream dataset is compared to nonequilibrium CFD capable of modeling species-specific vibrational temperatures throughout the nozzle expansion. Significant nonequilibrium between vibrational and rotational temperatures is measured at each flow condition. N2 exhibits the most nonequilibrium, followed by O2 and NO. The CFD model captures this trend, although it consistently overpredicts N2 vibrational temperatures. The modeled temperatures agree with the O2 data. At 3 km/s, the modeled NO nonequilibrium is underpredicted, whereas at 4 km/s it is overpredicted. Good agreement is seen between CFD and the velocity and rotational temperature measurements. Experiments with water added to the test gas yielded no discernible difference in vibrational relaxation.
Proceedings of ISMA 2024 International Conference on Noise and Vibration Engineering and USD 2024 International Conference on Uncertainty in Structural Dynamics
In general, multiple-input/multiple-output (MIMO) vibration testing utilizes a response-controlled test methodology where specifications are in the form of response quantities at various locations distributed on the device under test (DUT). There are some advantages to this approach, namely that DUT response could be measured in some field environment and directly used as MIMO specifications for subsequent MIMO vibration tests on similar DUTs. However, in some cases it may be advantageous to control the MIMO vibration test at the inputs rather than the responses. One such case is free-flight environments, where the DUT is unconstrained, and all loads come from aerodynamic pressures. In this case, the force-controlled test method is much more robust to system changes such as unit-to-unit variability as compared to a response-controlled test method. This could make force-controlled MIMO test specifications more generalizable and easier to derive. This is exactly akin to transfer path analysis, where pseudo-forces are applicable in special circumstances. This paper will explore the force-controlled test concept and demonstrate it with a numerical example, comparing performance under various conditions vs. the traditional response-controlled test method.
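In a numerical setting, the relationship between the two control strategies can be summarized as follows: response control derives the drive inputs from the FRF pseudo-inverse at each frequency line, whereas force control specifies those inputs directly. The sketch below is a generic illustration with assumed array shapes, not the paper's example.

```python
import numpy as np

def response_controlled_inputs(H, y_spec):
    """Inputs that reproduce target responses through the measured FRFs.

    H: (n_freq, n_resp, n_in) complex FRF matrix;
    y_spec: (n_freq, n_resp) target response spectra.
    A force-controlled test would instead specify the returned u directly,
    making the specification insensitive to DUT-to-DUT changes in H.
    """
    u = np.empty((H.shape[0], H.shape[2]), dtype=complex)
    for i in range(H.shape[0]):
        u[i] = np.linalg.pinv(H[i]) @ y_spec[i]   # least-squares drive per line
    return u
```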
Multifidelity emulators have found wide-ranging applications in both forward and inverse problems within the computational sciences. Thanks to recent advancements in neural architectures, they provide significant flexibility for integrating information from multiple models, all while retaining substantial efficiency advantages over single-fidelity methods. In this context, existing neural multifidelity emulators operate by separately resolving the linear and nonlinear correlation between equally parameterized high- and low-fidelity approximants. However, many complex model ensembles in science and engineering applications exhibit only a limited degree of linear correlation between models. In such cases, the effectiveness of these approaches is impeded, i.e., larger datasets are needed to obtain satisfactory predictions. In this work, we present a general strategy that seeks to maximize the linear correlation between two models through input encoding. We showcase the effectiveness of our approach on six numerical test problems, and we show the ability of the proposed multifidelity emulator to accurately recover the high-fidelity model response under an increasing number of quasi-random samples. In our experiments, we show that input encoding in many cases produces emulators with significantly simpler nonlinear correlations. Finally, we demonstrate how input encoding can be leveraged to facilitate the fusion of information between low- and high-fidelity models with dissimilar parametrization, i.e., situations in which the number of inputs differs between the low- and high-fidelity models.
We present a materials study of AlGaInP grown on GaAs leveraging deep-level optical spectroscopy and time resolved photoluminescence. Our materials may serve as the basis for wide-bandgap analogs of silicon photomultipliers optimized for short wavelength sensing.
With the number of neuromorphic tools and frameworks growing, we recognize a need to increase interoperability within our field. As an illustration of this, we explore linking two independently constructed tools. Specifically, we detail the construction of an execution backend based on STACS (Simulation Tool for Asynchronous Cortical Streams) for the Fugu spiking neural algorithms framework. STACS extends the computational scope of Fugu, enabling fast simulation of large-scale neural networks. Combining these two tools is shown to be mutually beneficial, ultimately enabling more functionality than either tool on its own. We discuss design considerations, including recognizing the advantages of straightforward standards. Further, we provide benchmark results showing drastic improvements in execution time.
Full-scale testing of pipes is costly and requires significant infrastructure investments. Subscale testing offers the potential to substantially reduce experimental costs and provides testing flexibility when transferable test conditions and specimens can be established. To this end, a subscale pipe testing platform was developed to pressure cycle 60 mm diameter pipes (Nominal Pipe Size 2) to failure with gaseous hydrogen. Engineered defects were machined into the inner surface or outer surface to represent pre-existing flaws. The pipes were pressure cycled to failure with gaseous hydrogen at pressures to match operating stresses in large diameter pipes (e.g., stresses comparable to similar fractions of the specified minimum yield stress in transmission pipelines). Additionally, the pipe specimens were instrumented to identify crack initiation, such that crack growth could be compared to fracture mechanics predictions. Predictions leverage an extensive body of materials testing in gaseous hydrogen (e.g., ASME B31.12 Code Case 220) and the recently developed probabilistic fracture mechanics framework for hydrogen (Hydrogen Extremely Low Probability of Rupture, HELPR). In this work, we evaluate the failure response of these subscale pipe specimens and assess the conservatism of fracture mechanics-based design strategies (e.g., API 579/ASME FFS). This paper describes the subscale hydrogen testing capability, compares experimental outcomes to predictions from the probabilistic hydrogen fracture framework (HELPR), and discusses the complement to full-scale testing.
Intermolecular Coulombic decay (ICD) in liquid water is a relatively novel type of nonlocal electronic decay mechanism, competing with the traditional mechanism of proton transfer between neighboring water molecules. Key features of ICD are its ultrafast non-radiative decay process and its ultralong range for excess energy transfer from the excited atom/molecule to its neighbors. Since detecting unambiguous ICD signatures in bulk liquid water is technically challenging, small water clusters have often been utilized to gain insights into ICD and other ionization processes in aqueous environments. Here, we present results from quantum mechanical calculations of the electronic structures of the neutral to multiply-ionized water monomer, dimer, trimer, and tetramer. Core-level electrons of water are also considered here, since recent studies demonstrated that the emission site and energy of the electrons released during a resonant-Auger-ICD cascade can be controlled by coupling ICD to resonant core excitation. Previous studies of ICD and the electronic structures of neutral and ionized small water clusters and liquid water are briefly discussed.
Accurate understanding of the behavior of commercial-off-the-shelf electrical devices is important in many applications. This paper discusses methods for the principled statistical analysis of electrical device data. We present several recent successful efforts and describe two current areas of research that we anticipate will produce widely applicable methods. Because much electrical device data is naturally treated as functional, and because such data introduces some complications in analysis, we focus on methods for functional data analysis.
National Security Presidential Memorandum-20 defines three tier levels for launch approval of space nuclear systems. The two main factors determining the tier level are the total quantity and type of radioactive sources and the probability of any member of the public receiving doses above certain thresholds. The total quantity of radioactive sources is compared with International Atomic Energy Agency transportation regulations. The dose probability is determined by the product of three terms: 1) the probability of a launch accident occurring; 2) the probability of a release of radioactive material given an accident; and 3) the probability of exceeding the dose threshold to any member of the public given a release. This paper provides a methodology for evaluating these values and applies it to an example mission as a demonstration. For the example mission, a preliminary determination of Tier III was reached.
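In notation, the dose-probability factorization described above is
\[
P(D > D_{\mathrm{th}}) \;=\; P(\mathrm{accident}) \times P(\mathrm{release} \mid \mathrm{accident}) \times P(D > D_{\mathrm{th}} \mid \mathrm{release}),
\]
where \(D_{\mathrm{th}}\) denotes the applicable dose threshold for members of the public.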
The use of surrogate models in computational mechanics is an area of high interest due to the potential for significant savings in computational cost. However, the assessment and presentation of evidence for surrogate model credibility has yet to reach a standard form. The present study utilizes a deep neural network as a surrogate for a computational fluid dynamics simulation in order to predict the coefficients of lift and drag on a NACA 0012 airfoil for various Reynolds numbers and angles of attack. Using best practices, the credibility of the underlying simulation predictions and of the surrogate model predictions is analyzed. Conclusions are drawn that should better inform future uses of surrogate models in the context of their credibility.
The explosive BTF (benzotrifuroxan) is an interesting molecule for sub-millimeter studies of initiation and detonation. It has no hydrogen, thus no water in the detonation products and a subsequently high temperature in the reaction zone. The material has an impact sensitivity comparable to or less than that of PETN (pentaerythritol tetranitrate) and slightly greater than that of RDX, HMX, and CL-20. Physical vapor deposition (PVD) can be used to grow high-density films of pure explosives with precise control over geometry, and we apply this technique to BTF to study detonation and initiation behavior as a function of sample thickness. The geometrical effects on detonation and corner-turning behavior are studied with the critical detonation thickness experiment and the micromushroom test, respectively. Initiation behavior is studied with the high-throughput initiation experiment. Vapor-deposited films of BTF show detonation failure, corner turning, and initiation consistent with a heterogeneous explosive. Scaling of failure thickness to failure diameter shows that BTF has a very small failure diameter.
We demonstrate evanescently coupled waveguide-integrated silicon photonic avalanche photodiodes designed for single-photon detection in quantum applications. Simulation results, high responsivity, and record-low dark currents for evanescently coupled devices are presented.
Multifidelity (MF) uncertainty quantification (UQ) seeks to leverage and fuse information from a collection of models to achieve greater statistical accuracy relative to a single-fidelity counterpart, while maintaining an efficient use of computational resources. Despite many recent advancements in MF UQ, several challenges remain, and these often limit its practical impact in certain application areas. In this manuscript, we focus on the challenges introduced by nondeterministic models to sampling-based MF UQ estimators. Nondeterministic models produce different responses for the same inputs, which means their outputs are effectively noisy. MF UQ is complicated by this noise since many state-of-the-art approaches rely on statistics, e.g., the correlation among models, to optimally fuse information and allocate computational resources. We demonstrate how the statistics of the quantities of interest, which impact the design, effectiveness, and use of existing MF UQ techniques, change as functions of the noise. With this in hand, we extend the unifying approximate control variate framework to account for nondeterminism, providing for the first time a rigorous means of comparing the effect of nondeterminism on different multifidelity estimators and analyzing their performance with respect to one another. Numerical examples are presented throughout the manuscript to illustrate and discuss the consequences of the presented theoretical results.
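As background for the control-variate machinery discussed above, the sketch below shows a minimal two-fidelity control-variate mean estimator applied to two hypothetical noisy models; it is illustrative only, and all function names and constants are invented (the manuscript's approximate control variate framework generalizes this considerably).

```python
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):  # hypothetical high-fidelity model with intrinsic noise
    return np.sin(x) + 0.05 * rng.standard_normal(x.shape)

def f_lo(x):  # hypothetical cheap low-fidelity model, also noisy
    return 0.9 * np.sin(x) + 0.1 + 0.10 * rng.standard_normal(x.shape)

N, M = 100, 10000                       # high- and low-fidelity sample counts
x_hi = rng.uniform(0.0, np.pi, N)
x_lo = rng.uniform(0.0, np.pi, M)

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)                # low-fidelity run at the shared inputs
y_lo_extra = f_lo(x_lo)                 # many extra cheap evaluations

# Control-variate weight from the sampled covariance; intrinsic model noise
# inflates the variance terms and weakens the usable correlation, which is
# precisely the complication the manuscript analyzes.
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

mu_cv = y_hi.mean() + alpha * (y_lo_extra.mean() - y_lo_paired.mean())
print(mu_cv)
```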
Existing natural gas (NG) pipeline infrastructure can be used to transport gaseous hydrogen (GH2) or blends of NG and hydrogen as low-carbon alternatives to NG. Pipeline steels exhibit accelerated fatigue crack growth rates and reduced fracture resistance in the presence of GH2. The hydrogen-assisted fatigue crack growth (HAFCG) rates and hydrogen-assisted fracture (HAF) resistance of pipeline steels depend on the hydrogen gas pressure. This study aims to correlate and compare the HAFCG rates of pipeline steels tested at different pressures in two gaseous environments: high-purity hydrogen (99.9999% H2) and a blend of nitrogen with 3% hydrogen gas (N2+3%H2). K-controlled FCG tests were performed using compact tension (CT) samples extracted from a vintage X52 (installed in 1962) and a modern X70 (2021) pipeline steel in the different gaseous environments. Subsequently, monotonic fracture tests were performed in the GH2 environment. The HAFCG rates increased with increasing GH2 pressure for both steels in the ΔK range explored in this study. Nearly identical HAFCG rates were observed for the steels tested in different environments with equivalent fugacity (34.5 bar pure GH2 and 731 bar blend with 3% H2). The fracture resistance of pipeline steels was significantly reduced in the presence of GH2, even at pressures as low as 1 bar. The reduction in HAF resistance tends to saturate with increasing GH2 pressure. While the fracture resistance of the modern steel is substantially higher than that of the vintage steel in air, in high-pressure GH2 the HAF resistance is comparable. Similar HAF resistance values were obtained for the respective steels in the pure and blended GH2 environments with similar fugacity. This study confirms that the fugacity parameter can be used to correlate the HAFCG and HAF behavior of different hydrogen blends. The fracture surface features of the pipeline steels tested in the different environments are compared to rationalize the observed behavior in GH2.
Natural gas pipelines could be an important pathway to transport gaseous hydrogen (GH2) as a cleaner alternative to fossil fuels. However, a comprehensive understanding of hydrogen-assisted fatigue and fracture resistance in pipeline steels is needed, including an assessment of the diverse microstructures present in natural gas infrastructure. In this study, we focus on modern steel pipe and consider both welded and seamless pipe. In-situ fatigue crack growth (FCG) and fracture tests were conducted on compact tension samples extracted from the base metal, seam weld, and heat-affected zone of an X70 pipe steel in high-purity GH2 (210 bar pressure). Additionally, a seamless X65 pipeline microstructure (with comparable strength) was evaluated for comparison against the distinct microstructure of seamless pipe. The different microstructures had comparable FCG rates in GH2, with crack growth rates up to 30 times faster in hydrogen compared to air. In contrast, the fracture resistance in GH2 depended on the characteristics of the microstructure, varying in the range of approximately 80 to 110 MPa√m.
Heat waves are increasing in severity, duration, and frequency. The Multi-Scenario Extreme Weather Simulator (MEWS) models this using historical data, climate model outputs, and heat wave multipliers. In this study, MEWS is applied to the planning of a community resilience hub in Hau’ula, Hawaii. The hub will have a normal operations mode and a resilience operations mode, both of which were modeled using EnergyPlus. The resilience operations mode includes cutting off air conditioning for many spaces to decrease power requirements during emergencies. Results were simulated for 300 future weather files generated by MEWS for 2020, 2040, 2060, and 2080, using shared socioeconomic pathways 2–4.5, 3–7.0, and 5–8.5. The resilience operations mode results show a two- to six-fold increase in hours of exceedance above 32.2 °C relative to present conditions, depending on climate scenario and future year. The accompanying decrease in thermal resilience enables an average 26% reduction in energy use intensity, with little sensitivity to climate change. The decreased thermal resilience predicted for the future is undesirable but was not severe enough to require a more energy-intensive resilience mode. Instead, planning is needed to ensure vulnerable individuals are given prioritized access to air-conditioned parts of the hub if worst-case heat waves occur.
Engineers are interested in the ability to compare dynamic environments for many reasons. Current methods compare the acceleration measured directly at the same physical point during the two environments. Comparing the acceleration at a defined point, however, only provides a comparison of the response at that location, whereas the stress and strain of the structure are defined by the global response of all points in the structure. This chapter uses modal filtering to transform a set of measurements at physical degrees of freedom into modal degrees of freedom that quantify the global response of the structure. Once the global response is quantified, two environments can be compared more reliably and accurately. This chapter compares the response of an aerospace component in a service environment to the response of the same component in a laboratory test environment. The mode shapes are first compared between the two environments; once it is determined that the same mode shapes are present in both configurations, the modal accelerations are compared to determine the similarity of the global response of the component.
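A minimal sketch of the modal-filtering step described above, assuming a known mode-shape matrix and using a least-squares (pseudoinverse) projection; all dimensions and data below are invented for illustration.

```python
import numpy as np

# Hypothetical data: n_sensors physical accelerations and n_modes known mode shapes.
rng = np.random.default_rng(1)
n_sensors, n_modes, n_time = 12, 3, 2048
Phi = rng.standard_normal((n_sensors, n_modes))    # mode-shape matrix (from test/model)
q_true = rng.standard_normal((n_modes, n_time))    # underlying modal accelerations
x = Phi @ q_true + 0.01 * rng.standard_normal((n_sensors, n_time))  # measurements

# Modal filter: least-squares projection of physical DOF onto modal DOF.
q_est = np.linalg.pinv(Phi) @ x

# Two environments can then be compared mode-by-mode, e.g., via RMS modal levels.
print(np.sqrt((q_est**2).mean(axis=1)))
```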
Additive manufacturing has ushered in a new paradigm of bottom-up materials-by-design of spatially non-uniform materials. Functionally graded materials have locally tailored compositions that provide optimized global properties and performance. In this letter, we propose an opportunity for the application of graded magnetic materials as lens elements for charged particle optics. A Hiperco50/Hymu80 (FeCo-2V/Fe-80Ni-5Mo) graded magnetic alloy was successfully additively manufactured via Laser Directed Energy Deposition with spatially varying magnetic properties. The measured compositional gradient is then used in computational simulations to demonstrate how a tailored material can enhance the magnetic performance of a critical, image-forming component of a transmission electron microscope.
The diesel-piloted dual-fuel compression ignition combustion strategy is well-suited to accelerate the decarbonization of transportation by adopting hydrogen as a renewable energy carrier in existing internal combustion engines with minimal engine modifications. Despite the simplicity of the engine modification, many questions remain unanswered regarding the optimal pilot injection strategy for reliable ignition with minimum pilot fuel consumption. The present study uses a single-cylinder heavy-duty optical engine to explore the phenomenology and underlying mechanisms governing pilot fuel ignition and the subsequent combustion of a premixed hydrogen-air charge. The engine is operated in dual-fuel mode with hydrogen premixed into the engine intake charge and a direct pilot injection of n-heptane as a diesel pilot fuel surrogate. Optical diagnostics used to visualize in-cylinder combustion phenomena include high-speed IR imaging of the pilot fuel spray evolution as well as high-speed HCHO* and OH* chemiluminescence as indicators of low-temperature and high-temperature heat release, respectively. Three pilot injection strategies are compared to explore the effects of pilot fuel mass, injection pressure, and injection duration on the probability and repeatability of successful ignition. The thermodynamic and imaging data analysis, supported by zero-dimensional chemical kinetics simulations, revealed a complex interplay between the physical and chemical processes governing pilot fuel ignition in a hydrogen-containing charge. Hydrogen strongly inhibits the ignition of pilot fuel mixtures and therefore requires a longer injection duration to create zones with sufficiently high pilot fuel concentration for successful ignition. Results show that ignition tends to rely on stochastic pockets with high pilot fuel concentration, which results in poor repeatability of combustion and frequent misfiring. This work improves the understanding of how the unique chemical properties of hydrogen pose a challenge for maximizing hydrogen's energy share in hydrogen dual-fuel engines and highlights a potential mitigation pathway.
The radar generalized image quality equation (RGIQE) is a metric used to measure both monostatic and bistatic synthetic aperture radar (BSAR) image quality; it is a function of signal-to-noise ratio (SNR) and 2-D bandwidth. The 2-D bandwidth is equal to the area of the transfer function's (TF) passband region. With the exception of side-looking monostatic geometries, almost all monostatic and bistatic geometries have skewed passband shapes when waveform frequency parameters remain unchanged from pulse to pulse. Most synthetic aperture radar (SAR) applications require a rectangular passband region; this is achieved by inscribing a rectangular region within the skewed intrinsic passband region. Increasing skewness results in less inscription area, reducing 2-D bandwidth, image SNR, and thus RGIQE capacity. In this article, a waveform with frequency agility is used to rectify the skewness that degrades RGIQE capacity. By changing the waveform's center frequency and instantaneous bandwidth from pulse to pulse in a particular manner, the intrinsic passband region can be de-skewed. The de-skewed shape maximizes the inscription area, thus maximizing 2-D bandwidth, image SNR, and RGIQE capacity. Three examples are given in this article: one monostatic geometry and two bistatic geometries. RGIQE capacity is increased by 52.02%, 44.42%, and 79.09% for the three examples.
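Schematically (an assumed Shannon-style form given for illustration only; the article's exact RGIQE definition may differ in constants and normalization), the capacity grows with both quantities that de-skewing recovers:
\[
C \;\propto\; A_{2\mathrm{D}} \, \log_2\!\left(1 + \mathrm{SNR}\right),
\]
where \(A_{2\mathrm{D}}\) is the area of the inscribed rectangular passband, so a loss of inscription area degrades capacity both directly and through the accompanying SNR reduction.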
The MACCS code was created by Sandia National Laboratories for the U.S. Nuclear Regulatory Commission and has been used for emergency planning, level 3 probabilistic risk assessments, consequence analyses, and other scientific and regulatory research for over half a century. Specializing in modeling the transport of nuclear material into the environment, MACCS accounts for atmospheric transport and dispersion, wet and dry deposition, probabilistic treatment of meteorology, exposure pathways, varying protective actions for the emergency, intermediate, and long-term phases, dosimetry, health effects (including but not limited to population dose, acute radiation injury, and increased cancer risk), and economic impacts. Routine updates and recent enhancements to the MACCS code, such as the inclusion of a higher-fidelity atmospheric transport and dispersion model, the addition of a new economic impact model, and the application of near-field modeling, have continuously increased the code's capabilities in consequence analysis. Additionally, investigations of MACCS capabilities for advanced reactor applications have shown that MACCS can provide realistic and informative risk assessments for the new generation of reactor designs. Even so, areas of improvement as well as gaps have been identified that, if resolved, can increase the usefulness of MACCS in any application regarding a release of nuclear material into the environment.
Explosives exposed to conditions above the Chapman-Jouguet (CJ) state exhibit an overdriven response that is transient. Reactive flow models are often fit to the CJ conditions, and they transition to detonation based on inputs lower than or near CJ, but these models may also be used to predict explosive behavior in the overdriven regime. One scenario that can create a strongly overdriven state is a Mach stem shock interaction. These interactions can drive an already detonating or transitioning explosive to an overdriven state, and they can also cause detonation at the interaction location where the separate shocks may be insufficient to detonate the material. In this study, the reactive flow model XHVRB, utilizing a Mie-Grüneisen equation of state (EOS) for the unreacted explosive and a SESAME table for the reacted products, will be used to examine Mach stem interactions from multi-point detonation schemes in CTH. The overdriven response driven by PETN-based explosive pellets will be tracked to determine the transient detonation behavior, and the predicted states from the burn model will be compared to previously published data.
We use complete polarization tomography of photon pairs generated in semiconductor metasurfaces via spontaneous parametric down-conversion to show how bound states in the continuum resonances affect the polarization state of the emitted photons.
We present a novel design of a III-V-on-silicon heterogeneously integrated tunable ring laser, achieving >80 nanometers of tuning bandwidth, the widest reported using only two rings, enabling applications such as spectroscopy and beam steering.
Different data pipelines and statistical methods are applied to photovoltaic (PV) performance datasets to quantify the performance loss rate (PLR). Since the true values of PLR are unknown, a variety of unvalidated values are reported. As such, the PV industry commonly assumes PLR based on statistically extracted ranges from the literature. However, the accuracy and uncertainty of PLR depend on several parameters, including seasonality, local climatic conditions, and the response of a particular PV technology; the specific data pipeline and statistical method used also affect the accuracy and uncertainty. To provide insights, a framework of (≈200 million) synthetic simulations of PV performance datasets using data from different climates is developed. Time series with known PLR and data quality are synthesized, and large parametric studies are conducted to examine the accuracy and uncertainty of different statistical approaches over the contiguous US, with an emphasis on the publicly available and “standardized” library, RdTools. The results confirm that PLR estimates from RdTools are unbiased on average, but the accuracy and uncertainty of individual PLR estimates vary with climate zone, data quality, PV technology, and choice of analysis workflow. Best practices and improvement recommendations based on the findings of this study are provided.
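For orientation, the sketch below illustrates the year-on-year principle that underlies RdTools' degradation analysis, implemented in plain numpy/pandas on an invented series with a known loss rate; it is not RdTools itself, and the window choices are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical daily normalized-performance series with a known -0.5 %/yr loss.
rng = np.random.default_rng(2)
idx = pd.date_range("2015-01-01", periods=6 * 365, freq="D")
perf = pd.Series((1 - 0.005) ** (np.arange(idx.size) / 365.0), index=idx)
perf = perf * (1 + 0.02 * rng.standard_normal(idx.size))   # measurement noise

# Year-on-year principle: compare each point with the point one year later,
# then take the median of the pointwise annual slopes.
later = perf.shift(-365)                                   # value one year ahead
yoy_pct_per_year = 100.0 * (later / perf - 1.0)
print(f"median PLR estimate: {np.nanmedian(yoy_pct_per_year):.2f} %/yr")
```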
Glass wedges are used to increase the dimensionality of various optical measurements. Light refracted through the wedges can be focused to closely spaced points, lines, or planes, as shown in the applications herein.
A mesoscale model for the shock initiation of pentaerythritol tetranitrate (PETN) films has been utilized to elucidate changes in initiation thresholds due to aging conditions and surface roughness, as observed in a series of high-throughput initiation (HTI) experiments. The HTI experiment has generated a wealth of thin-pulse, sub-millimeter shock initiation data for vapor-deposited PETN films with thicknesses of 67-125 μm and varying accelerated aging conditions, because it provides access to growth-to-detonation information for explosives that exhibit a shock-to-detonation transition (SDT) with length and time scales too short to be resolved by conventional experiments. Mesoscale modeling results using experimentally characterized PETN microstructures capture the general trend observed in experiments, in that increasing flyer impact velocity increases reactions until full detonation is reached. Moreover, the varying degrees of surface roughness that were considered were found to produce only minor variances in the peak particle velocity at the explosive output. The model did not predict a shift in the initiation threshold due to aged microstructures alone, indicating that additional mesoscale model improvements are necessary.
This chapter presents the results of a study in which component-based transfer path analysis was used to translate vibration environments between versions of the round-robin structure. This was done to evaluate a hybrid approach in which the responses were measured experimentally but the frequency response functions were derived analytically. The chapter describes the test setup, force estimation process, and response prediction (on the new system), and shows comparisons between the predicted and measured responses. Observations are also made on the applicability of this hybrid approach to more complex systems.
Polymeric materials are commonplace in the natural gas infrastructure as distribution pipes, coatings, seals, and gaskets. Under the auspices of the U.S. Department of Energy HyBlend program, one of the means to reduce greenhouse gas emissions is to replace natural gas, either partially or completely, with hydrogen. This approach makes it imperative to conduct near-term and long-term materials compatibility research in these relevant environments. Insights into the effects of hydrogen and hydrogen gas blends on polymer integrity can be gained through both ex-situ and in-situ analytical methods. The work presented here highlights a study of the behavior of pipeline polyethylene (PE) materials, including HDPE (Dow 2490 and GDB50) and MDPE (Ineos and legacy DuPont Aldyl A), when exposed to hydrogen, by means of in-situ X-ray scattering and ex-situ Raman spectroscopy; these methods complement each other in analyzing polymer microstructure. The data collected revealed that the aforementioned polymers did not show significant changes in crystallinity or morphology under the exposure conditions tested. These studies help establish techniques to study real-time effects of hydrogen gas on polymer structure and chemistry, which is directly related to pipeline mechanical strength and longevity of service.
The Explosive Destruction System (EDS) V31 containment vessel was procured by the US Army Recovered Chemical Materiel Directorate (RCMD) as a third-generation system used to destroy chemical munitions. It is the fifth individual EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for this vessel, based on the code case, is twenty-four (24) pounds TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design-basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lb of Composition C-4 (30 lb TNT equivalent); this test was considered the maximum load case, based on modeling and simulation performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge of 19.2 lb of Composition C-4 (24 lb TNT equivalent), located central to the vessel interior. Qualification test (3) consisted of a 12-pack of regular, right circular cylinders distributed evenly inside the vessel (totaling 19.2 lb of C-4, or 24 lb TNT equivalent). The ASME certification was based exclusively on the analytical predictions because the data required for certification cannot be obtained through testing: strains through the thickness of the wall and on the inside surface of the cylinder were required and could only be obtained through analysis. Strain gages were placed on the outside of the vessel in three locations, and the displacement of the door was measured using a Photonic Doppler Velocimetry (PDV) system. These measured values are compared to analytical predictions to help ensure the accuracy of the predicted strains and displacements throughout the rest of the model.
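The TNT-equivalence factor implied by the charge weights above is 1.25 for Composition C-4:
\[
24~\text{lb C-4} \times 1.25 = 30~\text{lb TNT}, \qquad 19.2~\text{lb C-4} \times 1.25 = 24~\text{lb TNT},
\]
so tests (2) and (3) sit exactly at the vessel's 24 lb TNT-equivalent rating, while test (1) exceeds it as the maximum load case.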
The Rotor Aerodynamics, Aeroelastics, and Wake (RAAW) project's main objective was collecting data for validation of aerodynamic and aeroelastic codes for large, flexible rotors. These data come from scanning lidars of the inflow and wake, a met tower, a profiling lidar, blade deflection from photogrammetry, turbine SCADA data (including root bending loads), and hub-mounted SpinnerLidar inflow measurements. The goal of the present work is to analyze various methods to align the SpinnerLidar inflow data in time and space with individual blade loading. These methods provide a way of analyzing turbine response while estimating the flow field at each blade, improving understanding of turbine response using field data in real time rather than from simulations alone. The hub-mounted SpinnerLidar measures the inflow in the rotor frame, meaning the locations of the blades relative to the measurement pattern do not change. The present work outlines methods for correlating the SpinnerLidar inflow measurements with root bending loads in the rotor frame of reference, accounting for both changes in wind speed and rotor speed from the measurement location one diameter upstream to each blade.
Vapor-deposited PETN films undergo significant microstructure evolution when exposed to elevated temperatures, even for short periods of time. This accelerated aging impacts initiation behavior and can lead to chemical changes as well. In this study, as-deposited and aged PETN films are characterized using scanning electron microscopy and ultra-high performance liquid chromatography and compared with changes in initiation behavior measured via a high-throughput experimental platform that uses laser-driven flyers to sequentially impact an array of small explosive samples. Accelerated aging leads to rapid coarsening of the grain structure. At longer times, little additional coarsening is evident, but the distribution of porosity continues to evolve. These changes in microstructure correspond to shifts in the initiation threshold and onset of reactions to higher flyer impact velocities.
Hail poses a significant threat to photovoltaic (PV) systems due to the potential for both cell and glass cracking. This work experimentally investigates hail-related failures in Glass/Backsheet and Glass/Glass PV modules with varying ice ball diameters and velocities. Post-impact electroluminescence (EL) imaging revealed the damage extent and location, while high-speed digital image correlation (DIC) measured the out-of-plane module displacements. The findings indicate that impacts of 20 J or less result in negligible damage to the modules tested. The thinner glass in Glass/Glass modules cracked at lower impact energies (~25 J) than Glass/Backsheet modules (~40 J). Furthermore, both module types showed cell and glass cracking at lower energies when impacted at the module's edges compared to central impacts. At the time of presentation, we will use DIC to determine whether out-of-plane displacements are responsible for the impact location discrepancy and to provide more insights into the mechanical response of hail-impacted modules. This study provides essential insights into the correlation between impact energy, impact location, displacements, and resulting damage. The findings may inform critical decisions regarding module type, site selection, and module design to contribute to more reliable PV systems.
Contact mechanics, or the modeling of the impenetrability of solid objects, is fundamental to computational solid mechanics (CSM) applications yet is often the most challenging aspect in terms of computational efficiency and performance. These challenges arise from the irregularity and highly dynamic nature of contact simulation, particularly with algorithms designed for distributed memory architectures. First among these challenges is the inherent load imbalance when distributing contact work across compute nodes. This imbalance is highly problem-dependent and relates to the surface area of contact manifolds and the volume around them, rather than the distribution of the mesh over compute nodes, meaning the application load can vary drastically over different phases. The dynamic nature of contact problems motivates the use of distributed asynchronous many-tasking (AMT) frameworks to efficiently handle irregular workloads. In this paper, we present our work on distBVH, a distributed contact solution using the DARMA/vt library for asynchronous tasking that is also capable of running on-node Kokkos-based kernels. We explore how distBVH addresses the various challenges of CSM contact problems. We evaluate the use of many of DARMA/vt's dynamic load balancers and demonstrate how our load balancing approach can provide significant performance improvements on various computational solid mechanics benchmarks. Additionally, we show how our approach can take advantage of DARMA/vt for tasking and efficient on-node kernels using Kokkos to scale over hundreds of processing elements.
A Marx generator module from the decommissioned RITS pulsed power machine at Sandia National Labs was modified to operate in an existing setup at Texas Tech University, which will ultimately be used as a testbed for laser-triggered gas switching. The existing experimental setup at Texas Tech University consists of a large Marx tank, an oil-filled coaxial pulse forming line, an adjustable peaking gap, and a load section, along with various diagnostics. The setup was previously operated at a lower voltage than the new experiment, so electrostatic modeling was done to ensure viability and drive needed modifications. The oil tank will house the modified RITS Marx, which contains half as many stages as the original RITS module and has an expected output of 1 MV. A trigger Marx generator consisting of 8 stages has been fabricated to trigger the RITS Marx. Charging and triggering of both Marx generators will be controlled through a fiber optic network. The output from the modified RITS Marx will be used to charge the oil-filled coaxial line acting as a low-impedance pulse forming line (PFL). Once charged, the self-breaking peaking gap will close, allowing the compressed pulse to be released into the load section. For testing of the Marx module and PFL, a matched 10 Ω water load was fabricated; the output pulse width is 55 ns. Diagnostics include two capacitive voltage probes on either side of the peaking gap, a quarter-turn Rogowski coil for load current measurement, and a Pearson coil for calibration purposes.
Tabulated chemistry models are widely used to simulate large-scale turbulent fires in applications including energy generation and fire safety. Tabulation via piecewise Cartesian interpolation suffers from the curse of dimensionality, leading to a prohibitive exponential growth in parameters and memory usage as more dimensions are considered. Artificial neural networks (ANNs) have attracted attention for constructing surrogates for chemistry models due to their ability to perform high-dimensional approximation. However, due to well-known pathologies regarding the realization of suboptimal local minima during training, in practice they do not converge and provide unreliable accuracy. Partition of unity networks (POUnets) are a recently introduced family of ANNs which preserve notions of convergence while performing high-dimensional approximation, discovering a mesh-free partition of space which may be used to perform optimal polynomial approximation. We assess their performance with respect to accuracy and model complexity in reconstructing unstructured flamelet data representative of nonadiabatic pool fire models. Our results show that POUnets can provide the desirable accuracy of classical spline-based interpolants with the low memory footprint of traditional ANNs while converging faster to significantly lower errors than ANNs. For example, we observe POUnets obtaining target accuracies in two dimensions with 40 to 50 times less memory and roughly double the compression in three dimensions. We also address the practical matter of efficiently training accurate POUnets by studying convergence over key hyperparameters, the impact of partition/basis formulation, and the sensitivity to initialization.
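To make the partition-of-unity construction concrete, here is a minimal forward-pass sketch under simplifying assumptions (1-D input, affine partition logits, and random coefficients standing in for trained parameters; in practice the partition parameters are trained and the per-partition polynomial coefficients can be found by least squares):

```python
import numpy as np

rng = np.random.default_rng(3)
n_parts, degree = 8, 2

W = rng.standard_normal((n_parts, 1))              # partition logits: affine in x
b = rng.standard_normal(n_parts)
coef = rng.standard_normal((n_parts, degree + 1))  # per-partition polynomial coeffs

def pounet(x):
    logits = x[:, None] * W.T + b                  # (N, n_parts)
    phi = np.exp(logits - logits.max(axis=1, keepdims=True))
    phi /= phi.sum(axis=1, keepdims=True)          # partition of unity: rows sum to 1
    basis = np.vander(x, degree + 1)               # (N, degree+1) polynomial basis
    local = basis @ coef.T                         # (N, n_parts) local polynomials
    return (phi * local).sum(axis=1)               # blended prediction

print(pounet(np.linspace(0.0, 1.0, 5)))
```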
The nonlinear viscoelastic Spectacular model is calibrated to the thermo-mechanical behavior of 828/D230/Alox with an alox volume fraction of 20 %. Legacy experimental data from Sandia’s polymer properties database (PPD) is used to calibrate the model. Based on the known densities of the epoxy 828/D230 and the alox filler, the alox volume fractions listed in the PPD were likely reported incorrectly, and they are recalculated here. Using the recalculated alox volume fractions, the PPD contains experimental data for 828/D230/Alox with alox volume fractions of 16 %, 24 %, and 33 %, so the thermo-mechanical behavior at 20 % alox volume fraction is estimated by interpolating between the bounding cases of 16 % and 24 %. Because the Spectacular model can be fairly challenging to calibrate, the calibration procedure is described in detail. Several of the calibration steps involve inverse parameter identification, where an experiment is simulated and parameters are iteratively updated until the model response matches the experimental data. As the PPD does not fully describe all experimental procedures, the experimental simulations use assumed thermal and mechanical loading rates that are typical for the viscoelastic characterization of epoxies. Spectacular uses four independent relaxation functions related to volumetric (ƒ1), shear (ƒ2), thermal strain (ƒ3), and thermal (ƒ4) relaxations. The previous SPEC model form, also known as the universal_polymer model, uses two independent relaxation functions related to volumetric and thermal relaxation (ƒν = ƒ1 = ƒ3 = ƒ4) and shear relaxation (ƒs = ƒ2). The two constitutive choices are briefly evaluated here, and it is found that the four-relaxation-function approach of Spectacular is better suited for fitting the coefficient of thermal expansion during both heating and cooling.
This work was conducted in support of the American Made Geothermal Prize. The following data summary report presents the testing conducted at Sandia National Labs to validate the performance of the Ultra-High Temperature Seismic Tool for Geothermal Wells. The goal of the testing was to measure the sensitivity of the device to seismic vibrations and reliability of the instrument at elevated temperatures. To this end, two tests were conducted: 1) Ambient Temperature Seismic Testing, which measured the response of the tool to a sweep of frequencies from 1 to 1000 Hz, and 2) Elevated Temperature Survivability Testing which measured the voltage response of the device at 225°C over a month-long testing window. The details of the testing methodology and summary of the tests are presented herein.
This report summarizes the collaboration between Sandia National Laboratories (SNL) and the Nuclear Regulatory Commission (NRC) to improve the state of knowledge on chloride-induced stress corrosion cracking (CISCC). The foundation of this work relied on using SNL's CISCC computer code to assess the current state of knowledge for probabilistically modeling CISCC on stainless steel canisters. This work is presented as three tasks. The first task explores and independently compares crack growth rate (CGR) models typically used by the CISCC research community. The second task implements two of the more conservative CGR models from the first task in SNL's full CISCC code to understand the impact of the different CGR models on a full probabilistic analysis while studying uncertainty from three key input parameters. The combined work of the first two tasks showed that properly measuring salt deposition rates is important for reducing uncertainty when modeling CISCC. The work in Task 2 also showed how probabilistic CGR models can be more appropriate for capturing aleatory uncertainty when modeling SCC. Lastly, appropriate and realistic input parameters relevant for CISCC modeling were documented in the last task as a product of the simulations considered in the first two tasks.
Infrasound, low-frequency sound below 20 Hz, is generated by both natural and anthropogenic sources. Infrasound sensors measure pressure fluctuations only in the vertical plane and are single channel. However, the most robust infrasound signal detection methods rely on stations with multiple sensors (arrays), despite such arrays being sparse. Automated methods developed for seismic data, such as the short-term average to long-term average ratio (STA/LTA), often have a high false alarm rate when applied to infrasound data. Leveraging single-channel infrasound stations has the potential to decrease signal detection limits, though this cannot be done without a reliable detection method. Therefore, this report presents initial results using (1) a convolutional neural network (CNN) to detect infrasound signals and (2) unsupervised learning to gain insight into source type.
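For reference, the baseline STA/LTA detector mentioned above can be sketched in a few lines; this is a minimal illustration with invented window lengths and synthetic data (production detectors, e.g., in ObsPy, add recursive variants and trigger on/off logic).

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Classic STA/LTA: ratio of short- to long-term average signal power."""
    power = x.astype(float) ** 2
    csum = np.cumsum(np.concatenate(([0.0], power)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short-term mean power
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long-term mean power
    m = min(sta.size, lta.size)                    # align at the trailing ends
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)

# Hypothetical pressure trace with a transient starting at sample 5000.
rng = np.random.default_rng(4)
trace = rng.standard_normal(10000)
trace[5000:5200] += 5.0
ratio = sta_lta(trace, n_sta=40, n_lta=800)
print(ratio.argmax())   # a detection triggers where the ratio exceeds a threshold
```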
The performance of the ORNL ASIC and its readout system was tested with pixelated organic scintillators. We use a pixelated trans-Stilbene scintillator array from Inrad Optics and a pixelated organic glass scintillator array developed at Sandia National Laboratories to characterize the energy and timing resolutions and the pulse-shape discrimination (PSD) figure-of-merit (FoM). The results are compared to previous work in which the same metrics were measured on waveforms digitized at 250 MHz with 14-bit resolution. We found that the PSD FoM at 340 keVee of the ASIC configuration compared to waveform data varied with the scintillator type. We measured a PSD FoM of 1.12 ± 0.14 with the ASIC configuration versus 1.39 ± 0.23 with waveform data using the trans-Stilbene array, and a PSD FoM of 0.52 ± 0.18 with the ASIC configuration versus 1.25 ± 0.19 with waveform data using the organic glass scintillator array. The coincidence timing resolution, measured using two 6x6x6 mm3 cubes of trans-Stilbene, was 805 ± 9 ps with the ASIC configuration versus 300 ps on average with waveform data.
Numerous types of pulsed power driven inertial confinement fusion (ICF) and high energy density (HED) systems rely on implosion stability to achieve desired temperatures, pressures, and densities. Sandia National Laboratories Pulsed Power Sciences Center’s main ICF platform, Magnetized Liner Inertial Fusion (MagLIF), suffers from implosion instabilities which limit attainable fuel conditions and can compromise fuel confinement. This Truman Fellowship research primarily focused on computationally exploring (a) methods for improving our understanding of hydrodynamic and magnetohydrodynamic instabilities that form during cylindrical liner implosions, (b) methods for mitigating implosion instabilities, particularly those that degrade performance of MagLIF targets, and (c) novel MagLIF target designs intended to improve target performance primarily via enhanced implosion stability. Several multi-dimensional computational tools were used, including the magnetohydrodynamics code ALEGRA, the radiation-magnetohydrodynamics code HYDRA, and the magnetohydrodynamics code KRAKEN. This research succeeded in executing and analyzing simulations of automagnetizing liner implosions, shockless MagLIF implosions, dynamic screw pinch driven cylindrical liner implosions, and cylindrically convergent HED instability studies. The methods and tools explored and developed in this Truman Fellowship research have been published in several peer-reviewed journal articles and will serve as useful contributions to the fields of pulsed power science and engineering, particularly pertaining to pulsed power ICF and HED science.
For the cylindrically symmetric targets that are normally fielded on the Z machine, two dimensional axisymmetric MHD simulations provide the backbone of our target design capability. These simulations capture the essential operation of the target and allow for a wide range of physics to be addressed at a substantially lower computational cost than 3D simulations. This approach, however, makes some approximations that may impact its ability to accurately provide insight into target operation. As an example, in 2D simulations, targets are able to stagnate directly to the axis in a way that is not entirely physical, leading to uncertainty in the impact of the dynamical instabilities that are an important source of degradation for ICF concepts. In this report, we have performed a series of 3D calculations in order to assess the importance of this higher fidelity treatment on MagLIF target performance.
Accurately locating seismoacoustic sources with geophysical observations helps to monitor natural and anthropogenic phenomena. Sparsely deployed infrasound arrays can readily locate large sources thousands of kilometers away, but small events typically produce signals observable only at local to regional distances. At such distances, accurate location efforts rely on observations across smaller regional or temporary deployments, which often consist of single-channel infrasound sensors that cannot record direction of arrival. Event locations can also be aided by the inclusion of ground-coupled airwaves (GCA). This study demonstrates how we can robustly locate a catalog of seismoacoustic events using infrasound, GCA, and seismic arrival times at local to near-regional distances. We employ a probabilistic location framework using simplified forward models. Our results indicate that both single-channel infrasound and GCA arrival times can provide accurate estimates of event location in the absence of array-based observations, even when using simple models. However, one must carefully choose model uncertainty bounds to avoid underestimation of confidence intervals.
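A hedged sketch of the kind of arrival-time location problem described above, using a deterministic grid search with a constant infrasound celerity (the study's probabilistic framework and forward models are more complete; all geometry and numbers here are invented):

```python
import numpy as np

stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0]])  # station x, y in km
c_air = 0.34                                                  # km/s nominal celerity
src_true = np.array([20.0, 10.0])                             # hypothetical source
t0_true = 12.0                                                # origin time, s
t_obs = np.linalg.norm(stations - src_true, axis=1) / c_air + t0_true

# Grid search over candidate positions; origin time solved in closed form.
xs = np.linspace(-50.0, 80.0, 261)
ys = np.linspace(-50.0, 80.0, 261)
X, Y = np.meshgrid(xs, ys)
d = np.sqrt((X[..., None] - stations[:, 0]) ** 2 + (Y[..., None] - stations[:, 1]) ** 2)
tt = d / c_air                               # predicted travel times, (ny, nx, nsta)
t0 = (t_obs - tt).mean(axis=-1)              # least-squares origin time per node
misfit = ((tt + t0[..., None] - t_obs) ** 2).sum(axis=-1)

i = np.unravel_index(misfit.argmin(), misfit.shape)
print(X[i], Y[i], t0[i])                     # recovers ~(20, 10) km and t0 ~ 12 s
```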
This report documents the results and findings of a one-year scoping study investigating multichannel readout application-specific integrated circuits (ASICs) for interfacing to, and processing data from, silicon photomultiplier (SiPM) arrays. We document desired and required ASIC specifications for four applications supporting national security mission areas: neutron radiography, associated particle imaging, and two versions of kinematic neutron imaging cameras. While each application has a few unique requirements that stress capability, there is generally good agreement among most. Two recently developed ASIC devices were evaluated in a system-like configuration by interfacing them to scintillator crystals exposed to gamma and neutron sources. The 64-channel ORNL device delivered functional capability while meeting most mission requirements for neutron radiography. The Nalu Scientific device, a 32-channel full waveform digitizer, did not demonstrate reliable neutron/gamma separation, but it is unclear whether this was an ASIC issue or a problem with the test setup or firmware. A literature survey of other commercial and academic ASICs was undertaken, with the conclusion that existing devices do not meet all requirements.
Long-term stable sealing elements are a basic component in the safety concept for a possible repository for heat-emitting radioactive waste in rock salt. The sealing elements will be part of the closure concept for drifts and shafts. They will be made from a well-defined crushed salt employing a specific manufacturing process. The use of crushed salt as a geotechnical barrier, as required by the German Site Selection Act from 2017 /STA 17/, represents a paradigm change in the safety function of crushed salt, since this material was formerly only considered as stabilizing backfill for the host rock. The demonstration of the long-term stability and impermeability of crushed salt is crucial for its use as a geotechnical barrier. The KOMPASS-II project is a follow-up to the KOMPASS-I project and continues the work with a focus on improving the understanding of the thermal-hydraulic-mechanical (THM) coupled processes in crushed salt compaction, with the objective of enhancing the scientific competence for using crushed salt for the long-term isolation of high-level nuclear waste within rock salt repositories. The project strives for an adequate characterization of the compaction process and the essential influencing parameters, as well as a robust and reliable long-term prognosis using validated constitutive models. For this purpose, experimental studies on long-term compaction tests are combined with microstructural investigations and numerical modeling. The long-term compaction tests in this project focused on the effect of mean stress, deviatoric stress, and temperature on the compaction behavior of crushed salt. A laboratory benchmark was performed, identifying a variability in compaction behavior. Microstructural investigations were executed with the objective of characterizing the influence of the pre-compaction procedure, humidity content, and grain size/grain size distribution on the overall compaction process of crushed salt with respect to the deformation mechanisms. The created database was used for benchmark calculations aiming at the improvement and optimization of a large number of constitutive models available for crushed salt. The models were calibrated, and the improvement process was made visible using the virtual demonstrator.
Optimization is a key tool for scientific and engineering applications; however, in the presence of models affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at several design locations; i.e., its overall computational cost is proportional to the cost of performing a forward uncertainty analysis at each design location. An OUU workflow has two main components: an inner-loop strategy for the computation of statistics of the quantity of interest, and an outer-loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner-loop statistics. In this work, we propose to alleviate the cost of the inner-loop uncertainty analysis by leveraging the multilevel Monte Carlo (MLMC) method, which is able to allocate resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several numerical test cases to showcase the features and performance of our approach with respect to its single-fidelity Monte Carlo counterpart.
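For the mean of a quantity of interest, the classical MLMC allocation mentioned above has a closed-form solution (a standard result quoted for context; the estimators for other statistics considered in the work require more general allocations). Minimizing total cost \(\sum_\ell N_\ell C_\ell\) subject to a variance target \(\sum_\ell V_\ell / N_\ell \le \varepsilon^2\) gives
\[
N_\ell \;=\; \varepsilon^{-2} \sqrt{\frac{V_\ell}{C_\ell}} \sum_k \sqrt{V_k C_k},
\]
where \(C_\ell\) and \(V_\ell\) are the cost and variance of the level-\(\ell\) correction.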
This report describes work originally performed in FY19 that assembled a workflow enabling formal verification of high-consequence digital controllers. The approach builds on an engineering analysis strategy using multiple abstraction levels (Model-Based Design) and performs exhaustive formal analysis of appropriate levels – here, state machines and C code – to assure always/never properties of digital logic that cannot be verified by testing alone. The operation of the workflow is illustrated using example models and code, including expected failures of verification when properties are violated.
The Storage Sizing and Placement Simulation (SSIM) application allows a user to define the possible sizes and locations of energy storage elements on an existing grid model defined in OpenDSS. Given these possibilities, the software will automatically search through them and attempt to determine which configurations result in the best overall grid performance. This quick-start guide will go through, in detail, the creation of an SSIM model based on a modified version of the IEEE 34 bus test feeder system. There are two primary parts of this document. The first is a complete list of instructions with little-to-no explanation of the meanings of the actions requested. The second is a detailed description of each input and action stating the intent and effect of each. There are links between the two sections.
A technique using the photon kerma cross section for a material in combination with the number fractions from a photon energy spectrum has been developed to estimate the subzone dimension needed to resolve an energy deposition profile in radiation transport calculations. The technique was verified using the ITS code for monoenergetic photon sources and a selection of photon spectra. A Python script was written to use the CEPXS cross-section file with a Rapture-calculated transmission spectrum to provide the dimensional estimates rapidly. The script is available to SNL users through the corporate GitLab server.
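The spectrum-weighting idea can be sketched as follows; note this is a heavily hedged illustration with placeholder data and an assumed subzone rule, not the script's actual method or the CEPXS data format.

```python
import numpy as np

# Placeholder spectrum and water-like attenuation coefficients (illustrative only).
E = np.array([0.1, 0.5, 1.0, 2.0])           # MeV, spectrum bin energies
n_frac = np.array([0.2, 0.4, 0.3, 0.1])      # number fractions (sum to 1)
mu = np.array([0.171, 0.097, 0.071, 0.049])  # 1/cm, total attenuation

mu_eff = (n_frac * mu).sum()                 # spectrum-averaged coefficient
mfp = 1.0 / mu_eff                           # mean free path, cm
subzone = mfp / 10.0                         # assumed rule: ~10 subzones per mfp
print(f"suggested subzone dimension: {subzone:.2f} cm")
```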
Near-term solutions are needed to allow for flexible engagement in future nuclear arms control discussions. This project developed a method for implementing an information barrier (IB) on commercial systems, shortening the research and development lifecycle for warhead verification technologies while offering improved and inherently flexible capabilities. The crux of the verification challenge remains the difficulty of developing an authenticatable IB that prevents sensitive host-country information from inadvertent transmission to an inspector. Many concepts for IBs rely on dedicated “trusted” processor modules developed with dedicated custom radiation detection systems and associated algorithms. Without a priori knowledge of the treaty item, the parameter space for measurements can be nearly infinite, and robustness against spoofing without the ability to view sensitive data is key. This project has produced an unclassified framework capable of ingesting data from common gamma detectors and identifying the presence of weapons-grade nuclear material at over 90% accuracy.
Wave energy converters (WECs) are designed to produce useful work from ocean waves. This useful work can take the form of electrical power or even pressurized water for, e.g., desalination. This report details the findings from a wave tank test focused on that production of useful work. To that end, the experimental system and test were specifically designed to validate models for power transmission throughout the WEC system. Additionally, the validity of co-design informed changes to the power take-off (PTO) were assessed and shown to provide the expected improvements in system performance.
The International Database of Reference Gamma-Ray Spectra of Various Nuclear Matter is designed to hold curated gamma spectral data and is hosted by the International Atomic Energy Agency on its public-facing web site. The database used to hold the spectral data was designed by Sandia National Labs under the auspices of the State Department’s Support Program. This document describes the tables and entity relationships that make up the database.
Exploding bridgewire detonators (EBWs) containing pentaerythritol tetranitrate (PETN) exposed to high temperatures may not function following discharge of the design electrical firing signal from a charged capacitor. Knowing the functionality of these arbitrarily oriented EBWs is crucial when making safety assessments of detonators in accidental fires. Orientation effects are only significant when the PETN is partially melted; the melting temperature can be measured with a differential scanning calorimeter. Nonmelting EBWs will be fully functional provided the detonator never exceeds 406 K (133 °C) for at least 1 h. Conversely, EBWs will not be functional once the average input pellet temperature exceeds 414 K (141 °C) for at least 1 min, which is long enough to cause the PETN input pellet to completely melt. Functionality of EBWs at temperatures between 406 and 414 K depends on orientation and can be predicted using a stratification model for downward-facing detonators, but is more complex for arbitrary orientations. A conservative rule of thumb is to assume that the EBWs are fully functional unless the PETN input pellet has completely melted.
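The stated rules lend themselves to a direct encoding; the sketch below captures only the temperature thresholds above (the orientation-dependent regime would require the stratification model, which is not reproduced here).

```python
# Thresholds taken directly from the abstract above.
T_FUNCTIONAL_K = 406.0      # never exceeded (for the >= 1 h condition): functional
T_MELTED_K = 414.0          # exceeded for >= 1 min: input pellet fully melted

def ebw_functionality(avg_pellet_temp_k: float) -> str:
    """Classify EBW functionality from average PETN input pellet temperature."""
    if avg_pellet_temp_k <= T_FUNCTIONAL_K:
        return "fully functional"
    if avg_pellet_temp_k >= T_MELTED_K:
        return "not functional (pellet completely melted)"
    # 406-414 K: orientation dependent; the conservative rule of thumb is to
    # assume functionality until complete melt.
    return "orientation dependent (conservatively assume functional)"

print(ebw_functionality(410.0))
```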
Brillouin scattering spectroscopy has been used to obtain an accurate (<1%) ρ-P equation of state (EOS) of 1:1 and 9:1 H2-He molar mixtures from 0.5 to 5.4 GPa at 296 K. Our calculated equations of state are in close agreement with the experimental data up to the freezing pressure of hydrogen at 5.4 GPa. The measured velocities agree, on average, within 0.5% with an ideal mixing model. The ρ-P EOSs presented have a standard deviation of under 0.3% from the measured densities and under 1% deviation from ideal mixing. Furthermore, a detailed discussion of the accuracy, precision, and sources of error in the measurement and analyses of our equations of state is presented.
Porous liquids (PLs) are an attractive class of materials for gas separation and carbon sequestration due to their permanent internal porosity and high adsorption capacity. PLs that contain zeolitic imidazolate frameworks (ZIFs), such as ZIF-8, form through the exclusion of aqueous solvents from the framework pore due to its hydrophobicity. The gas adsorption sites in ZIF-8-based PLs have historically been unknown; gas molecules could be captured in the ZIF-8 pore or adsorb at the ZIF-8 interface. To address this question, ab initio molecular dynamics was used to predict CO2 binding sites in a PL composed of a ZIF-8 particle solvated in a water, ethylene glycol, and 2-methylimidazole solvent system. The results show that CO2 energetically prefers to reside inside the ZIF-8 pore aperture due to strong van der Waals interactions with the terminal imidazoles. However, this binding site can be blocked by larger solvent molecules that have stronger adsorption interactions. CO2 molecules were unable to diffuse into the ZIF-8 pore, with CO2 adsorption instead occurring through binding with the ZIF-8 surface. Therefore, future design of ZIF-based PLs for enhanced CO2 adsorption should be based on the strength of gas binding at the solvated particle surface.
High-entropy alloys (HEAs) represent an interesting alloying strategy that can yield the exceptional performance properties needed across a variety of technology applications, including hydrogen storage. Examples include ultrahigh volumetric capacity materials (BCC alloys → FCC dihydrides) with improved thermodynamics relative to conventional high-capacity metal hydrides (like MgH2), but further destabilization is still needed to reduce operating temperature and increase system-level capacity. In this work, we demonstrate efficient hydride destabilization strategies by synthesizing two new Al0.05(TiVNb)0.95-xMox (x = 0.05, 0.10) compositions. We specifically evaluate the effect of molybdenum (Mo) addition on the phase structure, microstructure, and hydrogen absorption and desorption properties. Both alloys crystallize in a bcc structure with decreasing lattice parameters as the Mo content increases. The alloys can rapidly absorb hydrogen at 25 °C with capacities of 1.78 H/M (2.79 wt %) and 1.79 H/M (2.75 wt %) with increasing Mo content. Pressure-composition isotherms suggest a two-step reaction for hydrogen absorption to a final fcc dihydride phase. The experiments demonstrate that increasing Mo content results in a significant hydride destabilization, which is consistent with predictions from a gradient-boosting-tree data-driven model for metal hydride thermodynamics. Furthermore, improved desorption properties with increasing Mo content and reversibility were observed by in situ synchrotron X-ray diffraction, in situ neutron diffraction, and thermal desorption spectroscopy.
Accurate distribution system models are becoming increasingly critical for grid modernization tasks, and inaccurate phase labels are one type of modeling error that can have broad impacts on analyses using the distribution system models. This work demonstrates a phase identification methodology that leverages advanced metering infrastructure (AMI) data and additional data streams from sensors (relays in this case) placed throughout the medium-voltage sector of distribution system feeders. Intuitive confidence metrics are employed to increase the credibility of the algorithm predictions and reduce the incidence of false-positive predictions. The method is first demonstrated on a synthetic dataset under known conditions for robustness testing with measurement noise, meter bias, and missing data. Then, four utility feeders are tested, and the algorithm’s predictions are proven to be accurate through field validation by the utility. Lastly, the ability of the method to increase the accuracy of simulated voltages using the corrected model compared to actual measured voltages is demonstrated through quasi-static time-series (QSTS) simulations. The proposed methodology is a good candidate for widespread implementation because it is accurate on both the synthetic and utility test cases and is robust to measurement noise and other issues.
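A hedged sketch of the correlation idea behind phase identification from AMI voltage data (illustrative only; the paper's method and confidence metrics are more sophisticated, and all dimensions and noise levels below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n_t = 96 * 30                                   # a month of 15-min voltage samples
ref = rng.standard_normal((3, n_t))             # per-phase reference sensor profiles
true_phase = rng.integers(0, 3, size=200)       # hidden phase labels for 200 meters
meters = ref[true_phase] + 0.5 * rng.standard_normal((200, n_t))

# Correlate each meter with each phase reference; assign the best match.
z_ref = (ref - ref.mean(1, keepdims=True)) / ref.std(1, keepdims=True)
z_met = (meters - meters.mean(1, keepdims=True)) / meters.std(1, keepdims=True)
corr = z_met @ z_ref.T / n_t                    # (meters, phases) Pearson r
pred = corr.argmax(axis=1)

# A simple confidence metric: margin between best and second-best correlation.
top2 = np.sort(corr, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]
print((pred == true_phase).mean(), margin.mean())
```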
As the prospect of exceeding the global temperature targets set forth in the Paris Agreement becomes more likely, methods of climate intervention are increasingly being explored. With this increased interest comes a need for an assessment process to understand the range of impacts across different scenarios against a set of performance goals in order to support policy decisions. The methodology and tools developed for Performance Assessment (PA) of nuclear waste repositories share many similarities with the needs and requirements of a framework for climate intervention. Building on PA, we outline and test an evaluation framework for climate intervention, called Performance Assessment for Climate Intervention (PACI), with a focus on Stratospheric Aerosol Injection (SAI). We define a set of key technical components for the example PACI framework: defining performance goals, delineating the extent of the system, and identifying which features, events, and processes are relevant and impactful to calculating model output for the system given the performance goals. Having identified a set of performance goals, the performance of the system, including uncertainty, can then be evaluated against those goals. Using the Geoengineering Large Ensemble (GLENS) scenario, we develop a set of performance goals for monthly temperature, precipitation, drought index, soil water, solar flux, and surface runoff. The assessment assumes that targets may be framed in a risk-risk context via a risk ratio: the ratio of the risk of exceeding the performance goal under the SAI scenario to the risk of exceeding the performance goal under the emissions scenario. From regional responses across multiple climate variables, it is then possible to assess which pathway carries lower risk relative to the goals. The assessment is not comprehensive but rather a demonstration of the evaluation of an SAI scenario. Future work is needed to develop a more complete assessment that would provide additional simulations to cover parametric and aleatory uncertainty, enable a deeper understanding of impacts, inform scenario selection, and allow further refinement of the approach.
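For concreteness, the risk ratio described above can be written (in our notation, not quoted from the assessment) as

    \mathrm{RR}_g \;=\; \frac{\Pr\left(X_{\mathrm{SAI}} > x_g\right)}{\Pr\left(X_{\mathrm{emissions}} > x_g\right)},

where x_g is the performance goal for a climate variable X; RR_g < 1 indicates that the SAI pathway carries the lower risk relative to that goal.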
Dynamic shockless compression experiments provide the ability to explore material behavior at extreme pressures but relatively low temperatures. Typically, the data from these experiments are interpreted through an analytic method called Lagrangian analysis. In this work, alternative analysis methods are explored using modern statistical techniques. Specifically, Bayesian model calibration is applied to a new set of platinum data shocklessly compressed to 570 GPa. Several platinum equation-of-state models are evaluated, including traditional parametric forms as well as a novel non-parametric model concept. The results are compared to those in Paper I obtained by inverse Lagrangian analysis. The comparisons suggest that Bayesian calibration not only provides a viable framework for precise quantification of the compression path, but also reveals insights pertaining to trade-offs surrounding model-form selection, sensitivities to the relevant experimental uncertainties, and assumptions and limitations within Lagrangian analysis. The non-parametric model, in particular, is found to give precise, unbiased results and is expected to be useful over a wide range of applications. The calibration estimates the platinum principal isentrope over the full range of experimental pressures to a standard error of 1.6%, which extends the results from Paper I while maintaining the high precision required for the platinum pressure standard.
pvlib python is a community-developed, open-source software toolbox for simulating the performance of solar photovoltaic (PV) energy components and systems. It provides reference implementations of over 100 empirical and physics-based models from the peer-reviewed scientific literature, including solar position algorithms, irradiance models, thermal models, and PV electrical models. In addition to individual low-level model implementations, pvlib python provides high-level workflows that chain these models together like building blocks to form complete “weather-to-power” photovoltaic system models. It also provides functions to fetch and import a wide variety of weather datasets useful for PV modeling. pvlib python has been developed since 2013 and follows modern best practices for open-source python software, with comprehensive automated testing, standards-based packaging, and semantic versioning. Its source code is developed openly on GitHub and releases are distributed via the Python Package Index (PyPI) and the conda-forge repository. pvlib python’s source code is made freely available under the permissive BSD-3 license. Here we (the project’s core developers) present an update on pvlib python, describing capability and community development since our 2018 publication (Holmgren, Hansen, & Mikofski, 2018).
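To make the "weather-to-power" idea concrete, the following minimal sketch chains pvlib's clear-sky, module, and inverter models into an AC power time series. The module and inverter, taken as the first entries of the bundled SAM databases, are arbitrary examples, and library defaults stand in for site-specific losses:

    # Minimal pvlib "weather-to-power" sketch: clear-sky irradiance to AC power.
    import pandas as pd
    import pvlib

    location = pvlib.location.Location(latitude=35.05, longitude=-106.54, altitude=1600)
    times = pd.date_range("2023-06-01", periods=24, freq="1h", tz="Etc/GMT+7")
    weather = location.get_clearsky(times)  # columns: ghi, dni, dhi

    # Arbitrary example module/inverter from the bundled SAM databases.
    modules = pvlib.pvsystem.retrieve_sam("CECMod")
    inverters = pvlib.pvsystem.retrieve_sam("cecinverter")
    system = pvlib.pvsystem.PVSystem(
        surface_tilt=35, surface_azimuth=180,
        module_parameters=modules.iloc[:, 0],
        inverter_parameters=inverters.iloc[:, 0],
        temperature_model_parameters=pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS[
            "sapm"]["open_rack_glass_glass"],
    )

    # ModelChain links the low-level models into a complete system model.
    mc = pvlib.modelchain.ModelChain(system, location, aoi_model="physical",
                                     spectral_model="no_loss")
    mc.run_model(weather)
    print(mc.results.ac.head())  # modeled AC power for the first hours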
Absolute measurements of solid-material compressibility by magnetically driven shockless dynamic compression experiments to multi-megabar pressures have the potential to greatly improve the accuracy and precision of pressure calibration standards for use in diamond anvil cell experiments. To this end, we apply characteristics-based inverse Lagrangian analysis (ILA) to 11 sets of ramp-compression data on pure platinum (Pt) metal and then reduce the resulting weighted-mean stress-strain curve to the principal isentrope and room-temperature isotherm using simple models for yield stress and Grüneisen parameter. We introduce several improvements to methods for ILA and quasi-isentrope reduction, the latter including calculation of corrections in wave speed instead of stress and pressure to render results largely independent of initial yield stress while enforcing thermodynamic consistency near zero pressure. More importantly, we quantify in detail the propagation of experimental uncertainty through ILA and model uncertainty through quasi-isentrope reduction, considering all potential sources of error except the electrode and window material models used in ILA. Compared to previous approaches, we find larger uncertainty in longitudinal stress. Monte Carlo analysis demonstrates that uncertainty in the yield-stress model constitutes by far the largest contribution to uncertainty in quasi-isentrope reduction corrections. We present a new room-temperature isotherm for Pt up to 444 GPa, with 1-sigma uncertainty at that pressure of just under ±1.2%; the latter is about a factor of three smaller than the uncertainty previously reported for multi-megabar ramp-compression experiments on Pt. The result is well represented by a Vinet-form compression curve with (isothermal) bulk modulus K0 = 270.3 ± 3.8 GPa, pressure derivative K0′ = 5.66 ± 0.10, and correlation coefficient R(K0, K0′) = −0.843.
The thorium fuel cycle is emerging as an attractive alternative to conventional nuclear fuel cycles, as it does not require the enrichment of uranium for long-term sustainability. The operating principle of this fuel cycle is the irradiation of 232Th to produce 233U, which is fissile and sustains the fission chain reaction. 233U poses unique challenges for nuclear safeguards: it is associated with a uniquely extreme γ-ray environment from 232U contamination, which limits the feasibility of γ-ray-based assay, and it is subject to more conservative accountability requirements than those set for 235U by the International Atomic Energy Agency. Consequently, instrumentation used for safeguarding 235U in traditional fuel cycles may be inapplicable. It is essential that the nondestructive signatures of 233U be characterized so that nuclear safeguards can be applied to thorium fuel-cycle facilities as they come online. In this work, a set of 233U3O8 plates containing 984 g of 233U was measured at the National Criticality Experiments Research Center. A high-pressure 4He gaseous scintillation detector, which is insensitive to γ-rays, was used to perform a passive fast-neutron spectral signature measurement of the 233U3O8, and was used in conjunction with a pulsed deuterium-tritium neutron generator to demonstrate the differential die-away signature of this material. Furthermore, an array of 3He detectors was used in conjunction with the same neutron generator to measure the delayed-neutron time profile of 233U, which is unique to this nuclide. These measurements provide a benchmark for future nondestructive assay instrumentation development and demonstrate a set of key neutron signatures to be leveraged for nuclear safeguards in the thorium fuel cycle.
The Roadrunner ion trap is a micro-fabricated surface-electrode ion trap based on silicon technology. This trap has one long linear section and a junction to allow for chain storage and reconfiguration. It uses a symmetric rf-rail design with segmented inner and outer control electrodes and independent control in the junction arms. The trap is fabricated on Sandia’s High Optical Access (HOA) platform to provide good optical access for tightly focused laser beams skimming the trap surface. It is packaged on our custom Bowtie-102 ceramic pin or land grid array packages using a 2.54 mm pitch for backside pins or pads. This trap also includes an rf sensing capacitive divider and tungsten wires for heating or temperature monitoring. The Roadrunner builds on the knowledge gained from previous surface traps fabricated at Sandia while improving ion control capabilities.
Li metal anodes are highly sought after for high-energy-density applications in both primary commercial batteries and next-generation rechargeable batteries. In this research, Li metal electrodes are aged in coin cells for a year with electrolytes relevant to both types of batteries. The aging response is monitored via electrochemical impedance spectroscopy, and the Li electrodes are characterized post-mortem. The carbonate-based electrolytes were found to exhibit the most severe aging effects, despite the use of LiBF4-based carbonate electrolytes in Li/CFx primary batteries. Highly concentrated LiFSI electrolytes exhibit the mildest aging effects, with only a small impedance increase over time, likely because the concentrated nature of the electrolyte leaves fewer free solvent molecules available to react with the electrode surface. LiI-based electrolytes also show improved aging behavior, both on their own and as an additive, with an impedance response over time similar to that of the concentrated LiFSI electrolytes. Since I− is already in its most reduced state, it likely prevents further reaction and may help protect the Li electrode surface with a primarily organic solid electrolyte interphase.
The precise positioning of dopants in semiconductors using scanning tunneling microscopes has led to the development of planar dopant-based devices, also known as δ-layer-based devices, facilitating the exploration of new concepts in classical and quantum computing. Recently, it has been shown that two distinct conductivity regimes (low- and high-bias regimes) exist in δ-layer tunnel junctions due to the presence of quasi-discrete and continuous states in the conduction band of δ-layer systems. Furthermore, discrete charged impurities in the tunnel junction region significantly influence the tunneling rates. Here we demonstrate that electrical dipoles, i.e., zero-charge defects, present in the tunnel junction region can also significantly alter the tunneling rate, depending on the specific conductivity regime and on the orientation and moment of the dipole. In the low-bias regime, a high-resistance tunneling mode, dipoles of nearly all orientations and moments can alter the current, indicating the extreme sensitivity of the tunneling current to the slightest imperfection in the tunnel gap. In the high-bias regime, a low-resistance mode, only dipoles with high moments oriented perpendicular to the electron tunneling direction can significantly affect the current, making this conductivity regime much less susceptible to dipole defects with low moments or with orientations parallel to the tunneling direction.
As additive manufacturing (AM) has become a reliable method for creating complex and unique hardware rapidly, the quality assurance of printed parts remains a priority. In situ process monitoring offers an approach for performing quality control while simultaneously minimizing post-production inspection. For extrusion printing processes, direct linkages between extrusion pressure fluctuations and print defects can be established by integrating pressure sensors onto the print head. In this work, the sensitivity of process monitoring is tested using engineered spherical defects. Pressure and force sensors located near an ink reservoir and just before the nozzle are shown to assist in identification of air bubbles, changes in height between the print head and build surface, clogs, and particle aggregates with a detection threshold of 60–70% of the nozzle diameter. Visual evidence of printed bead distortion is quantified using optical image analysis and correlated to pressure measurements. Importantly, this methodology provides an ability to monitor the quality of AM parts produced by extrusion printing methods and can be accomplished using commonly available pressure-sensing equipment.
Non-stoichiometric perovskite oxides have been studied as a new family of redox oxides for solar thermochemical hydrogen (STCH) production owing to their favourable thermodynamic properties. However, conventional perovskite oxides suffer from limited phase stability, limited kinetic properties, and poor cyclability. Here, we report a strategy of introducing A-site multi-principal-component mixing to develop a high-entropy perovskite oxide, (La1/6Pr1/6Nd1/6Gd1/6Sr1/6Ba1/6)MnO3 (LPNGSB_Mn), which shows desirable thermodynamic and kinetic properties as well as excellent phase stability and cycling durability. LPNGSB_Mn exhibits enhanced hydrogen production (∼77.5 mmol per mol oxide) compared to (La2/3Sr1/3)MnO3 (∼53.5 mmol per mol oxide) in a short 1-hour redox duration, and maintains high STCH performance and phase stability over 50 cycles. LPNGSB_Mn possesses a moderate enthalpy of reduction (252.51–296.32 kJ (mol O)⁻¹), a high entropy of reduction (126.95–168.85 J (mol O)⁻¹ K⁻¹), and fast surface oxygen exchange kinetics. None of the A-site cations show observable valence changes during the reduction and oxidation processes. This research preliminarily explores the use of one A-site high-entropy perovskite oxide for STCH.
This paper explores the concept of predictive maturity for non-linear concrete constitutive models employed in the computational prediction of the structural response of reinforced concrete structures to impact from free-flying missiles. Such concrete constitutive models vary widely in complexity. Three constitutive models were utilized within the same finite element structural model to simulate the response of the IRIS III experiment. Each of the models was individually calibrated with available material testing data and also re-calibrated assuming limited availability of test data. When full calibration is possible, more sophisticated constitutive models appear to provide greater predictive maturity; however, when such data are not available (e.g., for an existing structure where representative test specimens may not be obtainable), the expected maturity is reduced. This hypothesis is supported by the simulations, which show good agreement with measured experimental response quantities from the IRIS III tests when complex constitutive models are fully calibrated, and correspondingly poor predictions when less complex models are used or when the more sophisticated models are poorly calibrated. Thus, predictions of structural response made without complete material testing data should be treated with reduced confidence.
A key challenge in inverse problems is the selection of sensors to gather the most informative data. In this paper, we consider the problem of inferring the initial condition of a linear dynamical system and develop an efficient control-theoretic approach for greedily selecting sensors. Our method employs a Galerkin projection to reduce the size of the inverse problem, resulting in a computationally efficient algorithm for sensor selection. As a byproduct of our algorithm, we obtain a preconditioner for the inverse problem that enables rapid recovery of the initial condition. We analyze the theoretical performance of our greedy sensor selection algorithm as well as the performance of the associated preconditioner. Finally, we verify our theoretical results on various inverse problems involving partial differential equations.
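The paper's projection-based algorithm is more involved, but the underlying greedy pattern can be sketched in a few lines (our own illustration, not the authors' implementation): repeatedly add the candidate sensor whose observation row most increases the log-determinant of the information matrix for the initial condition.

    # Greedy sensor selection sketch: choose k rows of a candidate observation
    # matrix C (one row per sensor) to maximize log det of the regularized
    # information matrix for recovering an initial condition x0.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 50, 200, 5                            # state dim, candidates, budget
    A = rng.standard_normal((n, n)) / np.sqrt(n)    # toy linear dynamics
    C = rng.standard_normal((m, n)) @ A             # candidate sensor rows

    selected = []
    info = 1e-6 * np.eye(n)                         # regularized information matrix
    for _ in range(k):
        best_gain, best_i = -np.inf, None
        for i in range(m):
            if i in selected:
                continue
            # Log-determinant after tentatively adding sensor i.
            gain = np.linalg.slogdet(info + np.outer(C[i], C[i]))[1]
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
        info += np.outer(C[best_i], C[best_i])

    print("selected sensors:", selected)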
Foulk, James W.; Nemani, Venkat; Fink, Olga; Biggio, Luca; Huan, Xun; Wang, Yan; Du, Xiaoping; Zhang, Xiaoge; Hu, Chao
On top of machine learning (ML) models, uncertainty quantification (UQ) functions as an essential layer of safety assurance that can lead to more principled decision making by enabling sound risk assessment and management. The safety and reliability improvements that UQ brings to ML models have the potential to significantly facilitate the broad adoption of ML solutions in high-stakes decision settings, such as healthcare, manufacturing, and aviation, to name a few. In this tutorial, we aim to provide a holistic lens on emerging UQ methods for ML models, with a particular focus on neural networks and on the application of these UQ methods to engineering design as well as prognostics and health management problems. Towards this goal, we start with a comprehensive classification of uncertainty types, sources, and causes pertaining to UQ of ML models. Next, we provide a tutorial-style description of several state-of-the-art UQ methods: Gaussian process regression, Bayesian neural networks, neural network ensembles, and deterministic UQ methods, focusing on spectral-normalized neural Gaussian processes. Building on the mathematical formulations, we then examine the soundness of these UQ methods quantitatively and qualitatively (via a toy regression example) to assess their strengths and shortcomings from different dimensions. Then, we review quantitative metrics commonly used to assess the quality of predictive uncertainty in classification and regression problems. Afterward, we discuss the increasingly important role of UQ of ML models in solving challenging problems in engineering design and health prognostics. Two case studies with source code available on GitHub are used to demonstrate these UQ methods and compare their performance in early-stage life prediction of lithium-ion batteries (case study 1) and remaining-useful-life prediction of turbofan engines (case study 2).
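As a flavor of the first method on that list, the sketch below (generic scikit-learn, not the tutorial's GitHub code) fits a Gaussian process regressor and extracts a predictive mean together with a one-standard-deviation uncertainty that could feed downstream risk assessment:

    # Gaussian process regression with predictive uncertainty (generic sketch).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(30, 1))            # toy inputs
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)  # noisy observations

    # RBF kernel for smooth trends plus a white-noise term for aleatory noise.
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    X_test = np.linspace(0, 10, 5).reshape(-1, 1)
    mean, std = gpr.predict(X_test, return_std=True)  # predictive mean, 1-sigma
    for x, mu, sigma in zip(X_test.ravel(), mean, std):
        print(f"x={x:4.1f}  mean={mu: .3f}  std={sigma:.3f}")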
This article characterises the effects of cathode photoemission leading to electrical discharges in argon gas. We perform breakdown experiments under pulsed laser illumination of a flat cathode and observe Townsend-to-glow discharge transitions. The breakdown process is recorded by high-speed imaging, and time-dependent voltage and current across the electrode gap are measured for different reduced electric fields and laser intensities. We employ a 0D transient discharge model to interpret the experimental measurements. The fitted values of transferred photoelectron charge are compared with calculations from a quantum model of photoemission. The breakdown voltage is found to be lower with photoemission than without. When the applied voltage is insufficient for ion-induced secondary electron emission to sustain the plasma, laser-driven photoemission can still create a breakdown in which a sheath (i.e., a region near the electrode surfaces consisting of positive ions and neutrals) is formed. This photoemission-induced plasma persists and decays on a much longer time scale (∼10s of μs) than the laser pulse length (30 ps). The effects of different applied voltages and laser energies on the breakdown voltage and current waveforms are investigated. The discharge model accurately predicts the measured breakdown voltage curves, despite discrepancies in quantitatively describing the transient discharge current and voltage waveforms.
A major hurdle in utilizing carbon dioxide (CO2) lies in separating it from industrial flue gas mixtures and finding suitable storage methods that enable its application in various industries. To address this issue, we utilized a combination of molecular dynamics simulations and experiments to investigate the behavior of CO2 in common room-temperature ionic liquids (RTILs) in contact with aqueous interfaces. Our investigation of two RTILs, [EMIM][TFSI] and [OMIM][TFSI], and their interaction with a pure water layer mimics the environment of a previously developed ultrathin enzymatic liquid membrane for CO2 separation. Analysis of diffusion constants and viscosity reveals that CO2 molecules exhibit faster mobility within the selected ILs than would be predicted solely from the viscosity of the liquids using the standard Einstein-Stokes relation. Moreover, we calculated the free energy of translocation for various species across the aqueous-IL interface, including CO2 and HCO3−. The free energy profiles demonstrate that CO2 partitions more favorably into the RTILs than into pure water, while a significant barrier hinders the movement of HCO3− out of the aqueous layer. Experimental measurement of CO2 transport in the RTILs corroborates the model. These findings strongly suggest that hydrophobic RTILs could serve as a promising option for selectively transporting CO2 from aqueous media and concentrating it as a preliminary step toward storage.
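For reference, the Einstein-Stokes relation invoked above estimates the diffusion coefficient of a spherical solute from the bulk shear viscosity:

    D = \frac{k_B T}{6 \pi \eta r},

where k_B is the Boltzmann constant, T the temperature, η the viscosity, and r the hydrodynamic radius of the solute. CO2 mobilities exceeding this estimate therefore indicate that solute transport in these ILs is partially decoupled from the bulk viscosity.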
Drinking water infrastructure in urban settings is increasingly affected by population growth and disruptions like extreme weather events. This study explores how the integration of direct wastewater reuse can help to maintain drinking water service when the system is compromised.
This study investigates the nonlinear frequency response of a shaft-bearing assembly with vibro-impacts occurring at the bearing clearances. The formation of nonlinear behavior as system parameters change is examined, along with the effects of asymmetries in the nominal, inherently symmetric system. The primary effect of increasing the forcing magnitude or decreasing the contact gap sizes is the formation of grazing-induced chaotic solution branches occurring over a wide frequency range near each system resonance. The nominal configuration has very hard contact stiffness and shows no evidence of isolas or superharmonic resonances over the frequency ranges of interest. Moderate contact stiffnesses cause symmetry breaking and introduce superharmonic resonance branches of the primary resonances. Even when some primary resonances are suppressed by the system's inherent symmetry, their superharmonic resonances still manifest. Branches of quasiperiodic isolas (isolated resonance branches) are also discovered, along with a cloud of isolas near a high-frequency resonance. Parameter asymmetries are found to produce several significant changes in behavior: asymmetries in linear stiffness, contact stiffness, and gap size can each affect the behavior of the primary resonant frequencies and the isolas.
Aqueous electrolytes composed of 0.1 M zinc bis-(trifluoromethyl-sulfonyl)-imide (Zn(TFSI)2) with acetonitrile (ACN) as a co-solvent were studied using combined experimental and simulation techniques. The electrolyte was found to be electrochemically stable when the ACN content is higher than 74.4 V%. In addition, the ionic conductivity of the mixed-solvent electrolytes changes as a function of ACN content, with a maximum observed at 91.7 V% ACN even though the salt concentration is fixed. This behavior was qualitatively reproduced by molecular dynamics (MD) simulations. Detailed analyses based on experiments and MD simulations show that at high ACN content the water network present in water-rich solutions breaks down. As a result, the screening effect of the solvent weakens and the correlation among ions increases, which causes the decrease in ionic conductivity at high ACN V%. This study thus provides a fundamental understanding of a complex mixed-solvent electrolyte system.
A two-step solar thermochemical looping cycle based on Co3Mo3N/Co6Mo6N reduction/nitridation reactions offers a pathway for green NH3 production that utilizes concentrated solar irradiation, H2O, and air as feedstocks. Both cycle steps derive process heat from concentrated solar irradiation: 1) reduction of Co3Mo3N in H2 to Co6Mo6N and NH3, and 2) nitridation of Co6Mo6N back to Co3Mo3N with N2. The Co3Mo3N reduction/nitridation reactions are examined at different H2 and/or N2 partial pressures and temperatures. NH3 production is quantified in situ using liquid conductivity measurements coupled with mass spectrometry (MS). Solid-state characterization is performed to identify a surface oxygen layer that necessitates the addition of H2 during cycling to prevent surface oxidation by trace amounts of O2. H2 concentrations of >5% H2/Ar and temperatures >500 °C are required to reduce Co3Mo3N to Co6Mo6N and form NH3 at 1 bar. Complete regeneration of Co3Mo3N from Co6Mo6N is achieved at 700 °C under 25–75% H2/N2. H2 pressure swings are observed to increase NH3 production during Co3Mo3N reduction. These results represent the first comprehensive characterization and definitive demonstration of non-catalytic NH3 production via chemical looping with metal nitrides and provide insights for technology development.
Before residential photovoltaic (PV) systems are interconnected with the grid, various planning and impact studies are conducted on detailed models of the system to ensure safety and reliability are maintained. However, these model-based analyses can be time-consuming and error-prone, representing a potential bottleneck as the pace of PV installations accelerates. Data-driven tools and analyses provide an alternate pathway to supplement or replace their model-based counterparts. In this article, a data-driven algorithm is presented for assessing the thermal limitations of PV interconnections. Using input data from residential smart meters, and without any grid models or topology information, the algorithm can determine the nameplate capacity of the service transformer supplying those customers. The algorithm was tested on multiple datasets and predicted service transformer capacity with >98% accuracy, regardless of existing PV installations. This algorithm has various applications from model-free thermal impact analysis for hosting capacity studies to error detection and calibration of existing grid models.
Quantum cascade lasers (QCLs) have emerged as promising candidates for generating chip-scale frequency combs at mid-infrared and terahertz wavelengths. In this work, we demonstrate frequency comb formation in ring terahertz QCLs using the injection of light from a distributed feedback (DFB) laser. The DFB design frequency is chosen to match the modes of the ring cavity (near 3.3 THz), and light from the DFB is injected into the ring QCL via a bus waveguide. By controlling the power and frequency of the optical injection, we show that combs can be selectively formed and controlled in the ring cavity. Numerical modeling suggests that this comb is primarily frequency-modulated in character, with the injection serving to trigger comb formation. We also show that the ring can be used as a filter to control the output of the DFB QCL, which is potentially of interest for terahertz photonic integrated circuits. Our work demonstrates that waveguide couplers are a compelling approach for injecting radiation into and extracting it from ring terahertz combs, and it offers exciting possibilities for the generation of new comb states in the terahertz, such as frequency-modulated waves, solitons, and more.
Access to accurate solar resource data is critical for numerous applications, including estimating the yield of solar energy systems, developing radiation models, and validating irradiance datasets. However, lack of standardization in data formats and access interfaces across providers constitutes a major barrier to entry for new users. pvlib python's iotools subpackage aims to solve this issue by providing standardized Python functions for reading local files and retrieving data from external providers. All functions follow a uniform pattern and return convenient data outputs, allowing users to seamlessly switch between data providers and explore alternative datasets. The pvlib package is community-developed on GitHub: https://github.com/pvlib/pvlib-python. As of pvlib python version 0.9.5, the iotools subpackage supports 12 different datasets, including ground measurement, reanalysis, and satellite-derived irradiance data. The supported ground measurement networks include the Baseline Surface Radiation Network (BSRN), NREL MIDC, SRML, SOLRAD, SURFRAD, and the US Climate Reference Network (CRN). Additionally, satellite-derived and reanalysis irradiance data from the following sources are supported: PVGIS (SARAH & ERA5), NSRDB PSM3, and CAMS Radiation Service (including McClear clear-sky irradiance).
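To illustrate the uniform calling pattern, the sketch below pulls satellite-derived irradiance from PVGIS, one of the supported providers that requires no API key. The coordinates are arbitrary, and the exact shape of the returned tuple can differ between pvlib versions, so only the first element is unpacked here:

    # Retrieve satellite-derived irradiance with pvlib.iotools (sketch).
    import pvlib

    # PVGIS hourly data for an arbitrary location; the first element of the
    # returned tuple is a DataFrame with standardized variable names, and the
    # remaining element(s) carry request inputs and source metadata.
    out = pvlib.iotools.get_pvgis_hourly(latitude=40.0, longitude=-105.0,
                                         start=2019, end=2019)
    data = out[0]
    print(data.head())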
Computer vision models have great potential as tools for international nuclear safeguards verification activities, but off-the-shelf models require fine-tuning through transfer learning to detect relevant objects. Because open-source examples of safeguards-relevant objects are rare, and to evaluate the potential of synthetic training data for computer vision, we present the Limbo dataset. Limbo includes both real and computer-generated images of uranium hexafluoride containers for training computer vision models. We generated these images iteratively based on results from data validation experiments that are detailed here. The findings from these experiments are applicable both for the safeguards community and the broader community of computer vision research using synthetic data.
Behind-the-meter (BTM) battery energy storage systems (BESS) are undergoing rapid deployment. Simple equations to estimate the installed cost of BTM BESS are often necessary when a rigorous, bottom-up cost estimate is not available or not appropriate, in applications such as energy system modeling, informing a BESS sizing decision, and cost benchmarking. Drawing on project-level data from California, I estimate several predictive regression models of the installed cost of a BTM BESS as a function of energy capacity and power capacity. The models are evaluated for in-sample goodness-of-fit and out-of-sample predictive accuracy. The results of these analyses indicate stronger empirical support for models with natural log transformations of installed cost, energy, and power as compared against widely-used models that posit a linear relationship among the untransformed versions of these variables. Building on these results, I present a logarithmic model that can predict installed cost conditional on energy capacity, power capacity, AC or DC coupling with distributed generation, customer sector, and local wages for electricians. I document how the model can be easily extrapolated to future years, either with forecasts from other sources or by re-estimating the parameters with the latest data.
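The logarithmic specification at the core of the analysis is straightforward to reproduce. The sketch below uses synthetic data standing in for the California project records; all variable names and coefficient values are illustrative, not estimates from the paper:

    # Log-log cost model sketch: ln(cost) = b0 + b1*ln(energy) + b2*ln(power).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    energy_kwh = rng.uniform(5, 40, n)               # synthetic project sizes
    power_kw = energy_kwh / rng.uniform(2, 4, n)     # implied 2-4 h durations
    cost = np.exp(6.0 + 0.6 * np.log(energy_kwh) + 0.25 * np.log(power_kw)
                  + 0.15 * rng.standard_normal(n))   # synthetic installed costs

    # Ordinary least squares on the log-transformed variables.
    X = np.column_stack([np.ones(n), np.log(energy_kwh), np.log(power_kw)])
    b0, b1, b2 = np.linalg.lstsq(X, np.log(cost), rcond=None)[0]
    print(f"intercept={b0:.2f}, energy elasticity={b1:.2f}, power elasticity={b2:.2f}")

    # Predict cost for a new 20 kWh / 7 kW system. Note that exponentiating a
    # log-scale prediction gives the conditional median rather than the mean.
    pred = np.exp(b0 + b1 * np.log(20) + b2 * np.log(7))
    print(f"predicted installed cost: ${pred:,.0f}")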
The properties of electrons in matter are of fundamental importance. They give rise to virtually all material properties and determine the physics at play in objects ranging from semiconductor devices to the interior of giant gas planets. Modeling and simulation of such diverse applications rely primarily on density functional theory (DFT), which has become the principal method for predicting the electronic structure of matter. While DFT calculations have proven to be very useful, their computational scaling limits them to small systems. We have developed a machine learning framework for predicting the electronic structure on any length scale. It shows up to three orders of magnitude speedup on systems where DFT is tractable and, more importantly, enables predictions on scales where DFT calculations are infeasible. Our work demonstrates how machine learning circumvents a long-standing computational bottleneck and advances materials science to frontiers intractable with any current solutions.
Leenheer, Andrew J.; Dominguez, Daniel; Eichenfield, Matt; Dong, Mark; Boyle, Julia M.; Palm, Kevin J.; Zimmermann, Matthew; Witte, Alex; Gilbert, Gerald; Englund, Dirk
Programmable photonic integrated circuits (PICs) are emerging as powerful tools for control of light, with applications in quantum information processing, optical range finding, and artificial intelligence. Low-power implementations of these PICs involve micromechanical structures driven capacitively or piezoelectrically but are often limited in modulation bandwidth by mechanical resonances and high operating voltages. Here we introduce a synchronous, micromechanically resonant design architecture for programmable PICs and a proof-of-principle 1×8 photonic switch using piezoelectric optical phase shifters. Our design purposefully exploits high-frequency mechanical resonances and optically broadband components for larger modulation responses on the order of the mechanical quality factor Qm while maintaining fast switching speeds. We experimentally show switching cycles of all 8 channels spaced by approximately 11 ns and operating at 4.6 dB average modulation enhancement. Future advances in micromechanical devices with high Qm, which can exceed 10000, should enable an improved series of low-voltage and high-speed programmable PICs.
Interface resistance has become a significant bottleneck for solid-state batteries (SSBs). Most studies of interface resistance have focused on extrinsic mechanisms such as interface reactions and imperfect contact between electrodes and solid electrolytes. Interface potentials are an important intrinsic mechanism that is often ignored. Here, we highlight Kelvin probe force microscopy (KPFM) as a tool to image the local potential at interfaces inside SSBs, examining the existing literature and discussing challenges in interpretation. Drawing analogies with electron transport in metal/semiconductor interfaces, we showcase a formalism that predicts intrinsic ionic resistance based on the properties of the contacting phases, and we emphasize that future battery designs should start from material pairs with low intrinsic resistance. We conclude by outlining future directions in the study of interface potentials through both theory and experiment.
The fraction of tritium converted to the water form in a fire scenario is one of the metrics of greatest interest for radiological safety assessments. The conversion fraction is one of the prime variables contributing to the hazard assessment. This paper presents measurements of oxidation rates for the non-radioactive hydrogen isotopes (protium and deuterium) at sub-flammable concentrations that are typical of many of the most likely tritium release scenarios. These measurements are fit to a simplified 1-step kinetic rate expression, and the isotopic trends for protium and deuterium are extrapolated to produce a model appropriate for tritium. The effects of the new kinetic models are evaluated via CFD simulations of an ISO-9705 standard room fire that includes a trace release of hydrogen isotope (tritium), illustrating the high importance of the correct (measurement-based) kinetics to the outcome of the simulated conversion.
A reduced-order, nonlocal model is proposed for the contact force between initially spherical particles under compression. The model in effect provides the normal component of the interaction force between elements in the discrete element method (DEM). It is applicable to the high relative densities and large stresses of powder compaction. It takes into account the mutual interaction between multiple points of contact, in contrast to the usual assumption of pair interactions in DEM. The mathematical form of the model is derived from a variational formulation that leads to the momentum balance for the forces on each grain. The model is calibrated mainly using detailed three-dimensional peridynamic simulations of single grains under compressive loading by rigid plates that move radially with prescribed velocity. This calibration takes into account the large deformation and fracture of the grains. The interaction model also includes terms for unloading behavior and adhesion. As validation, the model is applied to test data on the compaction of microcrystalline cellulose bulk powder.
International Journal of Networked and Distributed Computing
Shrestha, Madhukar; Kim, Yonghyun; Oh, Jeehyun; Rhee, Junghwan (John); Choe, Yung R.; Zuo, Fei; Park, Myungah; Qian, Gang
System provenance forensic analysis has been studied by a large body of research work. This area needs fine-granularity data, such as system calls along with event fields, to track the dependencies of events. While prior work on security datasets exists, we found that datasets with realistic attacks and the details needed for high-quality provenance tracking are lacking. We created a new dataset of eleven vulnerable cases for system forensic analysis. It includes the full details of system calls, including syscall parameters. Realistic attack scenarios based on real software vulnerabilities and exploits are used. For each case, we created two sets of scenarios, benign and adversarial, which are manually labeled for supervised machine-learning analysis. In addition, we present an algorithm to improve data quality in system provenance forensic analysis. We demonstrate the details of the dataset events and the dependency analysis of our dataset cases.
Across many industries and engineering disciplines, systems of components are designed and deployed into their operational environments. Engineers want to predict whether a component or system will survive its operational environment or fail due to mechanical stresses. One method to make this determination is to expose the component to a simulation of the environment in a laboratory. One difficulty in executing such a test is that the component may not have the same boundary condition in the laboratory as in the operational configuration. This paper presents a novel method of quantifying, in the modal domain, the error that arises from the impedance difference between the laboratory test fixture and the operational configuration. The error is calculated from the projection of one mode-shape space onto the other and is expressed in terms of each mode of the operational configuration. The error provides insight into the effectiveness of the test fixture with respect to its ability to recreate the individual mode shapes of the operational configuration. A case study shows that the error in the modal projection between two configurations is a lower limit on the error that can be achieved by a laboratory test.
Lifetime-encoded materials are particularly attractive as optical tags; however, examples are rare, and practical application is hindered by complex interrogation methods. Here, we demonstrate a design strategy towards multiplexed, lifetime-encoded tags by engineering intermetallic energy transfer in a family of heterometallic rare-earth metal-organic frameworks (MOFs). The MOFs are derived from a combination of a high-energy donor (Eu), a low-energy acceptor (Yb), and an optically inactive ion (Gd) with the 1,2,4,5-tetrakis(4-carboxyphenyl)benzene (TCPB) organic linker. Precise manipulation of the luminescence decay dynamics over a wide microsecond regime is achieved via control over the metal distribution in these systems. The platform's relevance as a tag is demonstrated via a dynamic double-encoding method that uses the braille alphabet, and by incorporation into photocurable inks patterned on glass and interrogated via digital high-speed imaging. This study reveals true orthogonality in encoding using independently variable lifetime and composition, and highlights the utility of this design strategy, which combines facile synthesis and interrogation with complex optical properties.
Perspectives for understanding the brain vary across disciplines, and this variety has challenged our ability to describe the brain's functions. In this comment, we discuss how emerging theoretical computing frameworks that bridge top-down algorithmic and bottom-up physics approaches may be ideally suited for guiding the development of neural computing technologies such as neuromorphic hardware and artificial intelligence. Furthermore, we discuss how this balanced perspective may be necessary to incorporate the neurobiological details that are critical for describing the neural computational disruptions underlying mental health and neurological disorders.
Efficient conversion of pentose sugars remains a significant barrier to the replacement of petroleum-derived chemicals with plant biomass-derived bioproducts. While the oleaginous yeast Rhodosporidium toruloides (also known as Rhodotorula toruloides) has a relatively robust native metabolism of pentose sugars compared to other wild yeasts, faster assimilation of those sugars will be required for industrial utilization of pentoses. To increase the rate of pentose assimilation in R. toruloides, we leveraged previously reported high-throughput fitness data to identify potential regulators of pentose catabolism. Two genes were selected for further investigation: a putative transcription factor (RTO4_12978, Pnt1) and a homolog of a glucose transceptor involved in carbon catabolite repression (RTO4_11990). Overexpression of Pnt1 increased the specific growth rate approximately twofold early in cultures on xylose and increased the maximum specific growth rate by 18%, while decreasing accumulation of arabitol and xylitol in fast-growing cultures. Improved growth dynamics on xylose translated to a 120% increase in the overall rate of xylose conversion to fatty alcohols in batch culture. Proteomic analysis confirmed that Pnt1 is a major regulator of pentose catabolism in R. toruloides. Deletion of RTO4_11990 increased the growth rate on xylose but did not relieve carbon catabolite repression in the presence of glucose. Carbon catabolite repression signaling networks remain poorly characterized in R. toruloides and likely comprise a different set of proteins than those characterized in ascomycete fungi.
Single-molecule stretching experiments are widely utilized within the fields of physics and chemistry to characterize the mechanics of individual bonds or molecules, as well as chemical reactions. Analytic relations describing these experiments are valuable, and these relations can be obtained through the statistical thermodynamics of idealized model systems representing the experiments. Since the specific thermodynamic ensembles manifested by the experiments affect the outcome, primarily for small molecules, the stretching device must be included in the idealized model system. Though the model for the stretched molecule might be exactly solvable, including the device in the model often prevents analytic solutions. In the limit of large or small device stiffness, the isometric or isotensional ensembles can provide effective approximations, but the device effects are missing. Here a dual set of asymptotically correct statistical thermodynamic theories are applied to develop accurate approximations for the full model system that includes both the molecule and the device. The asymptotic theories are first demonstrated to be accurate using the freely jointed chain model and then using molecular dynamics calculations of a single polyethylene chain.
Pseudomonads are ubiquitous bacteria with importance in medicine, soil, agriculture, and biomanufacturing. We report a novel Pseudomonas putida phage, MiCath, which is the first known phage infecting P. putida S12, a strain increasingly used as a synthetic biology chassis. MiCath was isolated from garden soil under a tomato plant using P. putida S12 as a host and was also found to infect four other P. putida strains. MiCath has a ~61 kbp double-stranded DNA genome encoding 97 predicted open reading frames (ORFs); functions could be predicted for only 48 ORFs using comparative genomics. Predicted functions include structural phage proteins, other common phage proteins (e.g., terminase), a queuosine gene cassette, a cas4 exonuclease, and an endosialidase. Restriction digestion analysis suggests the queuosine gene cassette encodes a pathway capable of modifying guanine residues. When compared to other phage genomes, MiCath shares at most 74% nucleotide identity over 2% of the genome with any sequenced phage. Overall, MiCath is a novel phage with no close relatives, encoding many unique gene products.
An interpretable machine learning method, physics-informed genetic programming-based symbolic regression (P-GPSR), is integrated into a continuum thermodynamic approach to developing constitutive models. The proposed strategy for combining a thermodynamic analysis with P-GPSR is demonstrated by generating a yield function for an idealized material with voids, i.e., the Gurson yield function. First, a thermodynamic-based analysis is used to derive model requirements that are exploited in a custom P-GPSR implementation as fitness criteria or are strongly enforced in the solution. The P-GPSR implementation improved accuracy, generalizability, and training time compared to the same GPSR code without physics-informed fitness criteria. The yield function generated through the P-GPSR framework is in the form of a composite function that describes a class of materials and is characteristically more interpretable than GPSR-derived equations. The physical significance of the input functions learned by P-GPSR within the composite function is acquired from the thermodynamic analysis. Fundamental explanations of why the implemented P-GPSR capabilities improve results over a conventional GPSR algorithm are provided.
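For reference, the idealized target that the P-GPSR framework recovers is the classical Gurson yield function for a porous ductile material with void volume fraction f (quoted here in its original form, without the later Tvergaard-Needleman q-parameters):

    \Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^2 + 2 f \cosh\!\left(\frac{3 \sigma_m}{2 \sigma_y}\right) - 1 - f^2 = 0,

where σ_eq is the von Mises equivalent stress, σ_m the mean (hydrostatic) stress, and σ_y the yield stress of the matrix material; setting f = 0 recovers the von Mises criterion.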
Future machine learning strategies for materials process optimization will likely replace human capital-intensive artisan research with autonomous and/or accelerated approaches. Such automation enables accelerated multimodal characterization that simultaneously minimizes human errors, lowers costs, enhances statistical sampling, and allows scientists to allocate their time to critical thinking instead of repetitive manual tasks. Previous efforts to accelerate the synthesis and evaluation of materials have often employed elaborate robotic self-driving laboratories or used specialized strategies that are difficult to generalize. Herein we describe an implemented workflow for accelerating the multimodal characterization of a combinatorial set of 915 electroplated Ni and Ni–Fe thin films, resulting in a data cube with over 160,000 individual data files. Our acceleration strategies do not require manufacturing-scale resources and are thus amenable to typical materials research facilities in academic, government, or commercial laboratories. The workflow demonstrated the acceleration of six characterization modalities: optical microscopy, laser profilometry, X-ray diffraction, X-ray fluorescence, nanoindentation, and tribological (friction and wear) testing, each with speedup factors ranging from 13x to 46x. In addition, automated data upload to a repository following FAIR data principles was accelerated by 64x.
Real-time time-dependent density functional theory (TDDFT) is presently the most accurate available method for computing electronic stopping powers from first principles. However, obtaining application-relevant results often involves either costly averages over multiple calculations or ad hoc selection of a representative ion trajectory. We consider a broadly applicable, quantitative metric for evaluating and optimizing trajectories in this context. This methodology enables rigorous analysis of the failure modes of various common trajectory choices in crystalline materials. Although randomly selecting trajectories is common practice in stopping power calculations in solids, we show that nearly 30% of random trajectories in an FCC aluminum crystal will not representatively sample the material over the time and length scales feasibly simulated with TDDFT, and unrepresentative choices incur errors of up to 60%. We also show that finite-size effects depend on ion trajectory via "ouroboros" effects beyond the prevailing plasmon-based interpretation, and we propose a cost-reducing scheme to obtain converged results even when expensive core-electron contributions preclude large supercells. This work helps to mitigate poorly controlled approximations in first-principles stopping power calculations, enabling cost reductions of one to two orders of magnitude for obtaining representatively averaged and converged results.
R. toruloides is an oleaginous yeast with diverse metabolic capacities and high tolerance for inhibitory compounds abundant in plant biomass hydrolysates. While R. toruloides grows on several pentose sugars and alcohols, further engineering of the native pathway is required for efficient conversion of biomass-derived sugars to higher-value bioproducts. A previous high-throughput study inferred that R. toruloides possesses a non-canonical L-arabinose and D-xylose metabolism proceeding through D-arabitol and D-ribulose. In this study, we present a combination of genetic and metabolite data that refine and extend that model. Chiral separations definitively show that D-arabitol is the enantiomer that accumulates during pentose metabolism. Deletion of the putative D-arabitol-2-dehydrogenase (RTO4_9990) results in >75% conversion of D-xylose to D-arabitol, and growth on pentoses is complemented by heterologous xylulose kinase expression. Deletion of the putative D-ribulose kinase (RTO4_14368) arrests all growth on every pentose tested. Analysis of several pentose dehydrogenase mutants elucidates a complex pathway in which multiple enzymes mediate different reactions in differing combinations, from which we also inferred a putative L-ribulose utilization pathway. Our results suggest that we have identified the enzymes responsible for the majority of pathway flux, with additional unknown enzymes providing accessory activity at multiple steps. Further biochemical characterization of the enzymes described here will enable a more complete and quantitative understanding of R. toruloides pentose metabolism. These findings add to a growing understanding of the diversity and complexity of microbial pentose metabolism.
The frequency, severity, and extent of future climate extremes will have an impact on human well-being, ecosystems, and the effectiveness of emissions mitigation and carbon sequestration strategies. The specific objectives of this study were to downscale climate data for US weather stations and analyze future trends in meteorological drought and temperature extremes over the continental United States (CONUS). We used data from 4161 weather stations across CONUS to downscale future precipitation projections from three Earth System Models (ESMs) participating in the Coupled Model Intercomparison Project Phase Six (CMIP6), specifically for the high-emission scenario SSP5-8.5. Comparing historical observations with climate model projections revealed a significant bias in total annual precipitation days and total precipitation amounts. The average number of annual precipitation days across CONUS was projected to be 205 ± 26, 184 ± 33, and 181 ± 25 days in the BCC, CanESM, and UKESM models, respectively, compared to 91 ± 24 days in the observed data. Analyzing the duration of drought periods in different ecoregions of CONUS showed an increase in the number of drought months in the future (2023–2052) compared to the historical period (1989–2018). The analysis of precipitation and temperature changes in various ecoregions of CONUS revealed an increased frequency of droughts in the future, along with longer durations of warm spells. Eastern temperate forests and the Great Plains, which encompass the majority of CONUS agricultural lands, are projected to experience higher drought counts in the future. Drought projections show an increasing trend in future drought occurrences due to rising temperatures and changes in precipitation patterns. Our high-resolution climate projections can inform policy makers about drought hotspots and their anticipated future trajectories.
Emerging and re-emerging viral pathogens present a unique challenge for anti-viral therapeutic development. Anti-viral approaches with high flexibility and rapid production times are essential for combating these high-pandemic risk viruses. CRISPR-Cas technologies have been extensively repurposed to treat a variety of diseases, with recent work expanding into potential applications against viral infections. However, delivery still presents a major challenge for these technologies. Lipid-coated mesoporous silica nanoparticles (LCMSNs) offer an attractive delivery vehicle for a variety of cargos due to their high biocompatibility, tractable synthesis, and amenability to chemical functionalization. Here, we report the use of LCMSNs to deliver CRISPR-Cas9 ribonucleoproteins (RNPs) that target the Niemann–Pick disease type C1 gene, an essential host factor required for entry of the high-pandemic risk pathogen Ebola virus, demonstrating an efficient reduction in viral infection. We further highlight successful in vivo delivery of the RNP-LCMSN platform to the mouse liver via systemic administration.
Hierarchical optimization modeling in an algebraic modeling environment facilitates construction of large models with many interchangeable sub-models. However, for dynamic simulation and optimization applications, a flattened structure that preserves time indexing is preferred. To convert from a structure that facilitates model construction to a structure that facilitates dynamic optimization, the concept of reshaping an optimization model is introduced along with the recently developed utilities in the Pyomo algebraic modeling environment that make this possible. The application of these utilities to model predictive control simulations and partial differential equation (PDE) discretization stability analysis is discussed, and two challenging nonlinear model predictive control case studies are presented to demonstrate the advantages of this approach.
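A toy model conveys the structural tension the paper addresses (generic Pyomo constructs only; the reshaping utilities themselves are described in the paper): sub-models are naturally written as Blocks, one per interchangeable unit, while dynamic simulation and optimization prefer a flat, time-first view of the same variables.

    # Hierarchical Pyomo model: interchangeable unit sub-models as Blocks,
    # each holding its own time-indexed variables and dynamics constraints.
    import pyomo.environ as pyo

    m = pyo.ConcreteModel()
    m.time = pyo.Set(initialize=range(5), ordered=True)
    m.units = pyo.Set(initialize=["tank1", "tank2"])

    def unit_rule(b, u):
        parent = b.model()
        b.holdup = pyo.Var(parent.time, initialize=1.0)
        b.inflow = pyo.Var(parent.time, initialize=0.0)

        def balance_rule(blk, t):
            if t == parent.time.first():
                return blk.holdup[t] == 1.0  # initial condition
            return blk.holdup[t] == blk.holdup[t - 1] + blk.inflow[t]
        b.balance = pyo.Constraint(parent.time, rule=balance_rule)

    m.unit = pyo.Block(m.units, rule=unit_rule)

    # Construction is hierarchical (unit -> variable -> time), but dynamic
    # workflows want to iterate time-first across all units:
    for t in m.time:
        print(t, [pyo.value(m.unit[u].holdup[t]) for u in m.units])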
Recent developments integrating micromechanics and neural networks offer promising paths for rapid predictions of the response of heterogeneous materials with accuracy similar to direct numerical simulations. The deep material network is one such approach, featuring a multi-layer network and micromechanics building blocks trained on anisotropic linear elastic properties. Once trained, the network acts as a reduced-order model that can extrapolate the material's behavior to more general constitutive laws, including nonlinear behaviors, without the need to be retrained. However, current training methods initialize network parameters randomly, incurring inevitable training and calibration errors. Here, we introduce a way to visualize the network parameters as an analogous unit cell and use this visualization to "quilt" patches of shallower networks to initialize deeper networks for a recursive training strategy. The result is an improvement in the accuracy and calibration performance of the network and an intuitive visual representation of the network for better explainability.
We report on the substantial advancement of long-wavelength InAs-based interband cascade lasers (ICLs) utilizing advanced waveguides formed from hybrid cladding layers and targeting the 10–12 μm wavelength region. Modifications in the hole injector have improved carrier transport in these ICLs, resulting in significantly reduced threshold voltages (Vth) as low as 3.62 V at 80 K. Consequently, much higher voltage efficiencies were observed, peaking at about 73% at 10.3 μm and allowing for large output powers of more than 100 mW/facet. Also observed were low threshold current densities (Jth) of 8.8 A/cm2 in cw mode and 7.6 A/cm2 in pulsed mode near 10 μm, a result of adjustments in the GaInSb hole-well composition intended to reduce the overall strain accumulation in the ICL. Furthermore, an ICL from the second wafer operating at a longer wavelength achieved a peak voltage efficiency of 57% at 11.7 μm, with a peak output power of more than 27 mW/facet. This ICL went on to lase beyond 12 μm in both cw and pulsed modes, representing a new milestone in long-wavelength coverage for ICLs with the standard W-QW active region.
To understand the role of the grain boundary (GB) in plasticity at small scales, a concurrently coupled mesoscale plasticity model was developed to simulate micro-bending of bicrystalline micron-sized beams. By coupling dislocation dynamics (DD) with a finite element model (FEM), this novel defect dynamics model provides the means to investigate intricate interactions between dislocations and GBs under various loading conditions. Our simulations of micro-bending agree well with corresponding micro-bending experiments, and they show that the mechanical response of bicrystals can exhibit not only hardening but also softening, depending on the character of the GB. In addition, changing the location of the GB in the microbeams results in different mechanical responses; GBs located at the neutral plane show softening compared to single crystals, while inclined GBs located halfway along the length of the beam show little effect. The simulation results provide a clear picture of detailed dislocation-GB interactions, and quantitative resolved shear stress analysis supplemented by dislocation density distributions is used to analyze the mechanical response of bicrystalline samples.
PyApprox is a Python-based one-stop shop for probabilistic analysis of numerical models such as those used in the earth, environmental, and engineering sciences. Easy-to-use and extendable tools are provided for constructing surrogates, sensitivity analysis, Bayesian inference, experimental design, and forward uncertainty quantification. The algorithms implemented represent a wide range of methods for model analysis developed over the past two decades, including recent advances in multi-fidelity approaches that use multiple model discretizations and/or simplified physics to significantly reduce the computational cost of various types of analyses. An extensive set of benchmarks from the literature is also provided to facilitate easy comparison of new or existing algorithms for a wide range of model analyses. This paper introduces PyApprox and its various features, and presents results demonstrating the utility of PyApprox on a benchmark problem modeling the advection of a tracer in groundwater.
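As a minimal illustration of the benchmark interface, the snippet below follows the setup_benchmark pattern from PyApprox's documentation; the exact names, signatures, and return attributes should be treated as assumptions rather than a verified API reference.

```python
# Hedged sketch of exercising a PyApprox benchmark. The setup_benchmark
# entry point and benchmark attributes follow the project's documented
# pattern but are assumptions here, not verified API.
from pyapprox.benchmarks import setup_benchmark

benchmark = setup_benchmark("ishigami", a=7, b=0.1)
samples = benchmark.variable.rvs(100)  # random inputs, shape (nvars, 100)
values = benchmark.fun(samples)        # evaluate the benchmark model
print(values.shape)
```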
A series of extensively instrumented tests was performed on the Structural Evaluation Test Unit in the early 1990s. The purpose of these tests was to determine the response of a minimally designed cask to impacts that were more severe than the design basis impact. This test series provides an excellent opportunity for benchmarking explicit dynamic finite element analysis programs for behaviors that may be experienced by casks during regulatory and extra-regulatory impact events. This report provides the results of the four tests that were conducted. It is meant to accompany a companion report that defines the benchmark problem and gives the locations of the instrumentation and inspection points.
Validation and verification of engineering models is important for understanding potential weaknesses and issues in a model. This is accomplished through the application of constraint logic to the model. These models and the constraints put upon them can be represented through a graph structure. Here we present a visualization system to aid users in understanding, locating, and fixing constraint violations in their systems. Users are given several ways to narrow in on the specific errors and parts of the graph they are interested in. Users can choose the types of errors that will be shown in the graph. Clustering is applied to the graph to help users narrow their searches. Several other graph interactions are provided to support discovery of constraint violations.
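As a rough sketch of the underlying data structure, the snippet below represents model elements and constraints as a networkx graph, flags violated constraints, and clusters the violation neighborhood into connected components so a user can inspect one group of errors at a time. All names are hypothetical; the actual system's implementation is not described at this level in the source.

```python
# Hypothetical sketch: model elements and constraints as a graph, with
# clustering used to narrow the search for violations.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["mass", "thrust", "drag"], kind="parameter")
G.add_nodes_from(["c1", "c2"], kind="constraint")
G.add_edges_from([("c1", "mass"), ("c1", "thrust"), ("c2", "drag")])

violations = {"c1"}  # constraints flagged by the checker (illustrative)

# Keep only nodes adjacent to a violation, then split into connected
# components so each cluster of related errors can be viewed separately.
near = set().union(*(set(G.neighbors(v)) | {v} for v in violations))
clusters = list(nx.connected_components(G.subgraph(near)))
print(clusters)  # e.g., [{'c1', 'mass', 'thrust'}]
```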
The 2-year Puerto Rico Grid Resilience and Transition to 100% Renewable Energy Study analyzed stakeholder-driven pathways to Puerto Rico’s clean energy future. Outputs relating to electricity demand modeling were partially informed by estimates of electric vehicle adoption across all classes of medium- and heavy-duty vehicles (MHDVs), and the ensuing charging loads. To create these estimates, the team developed a transportation model for MHDVs in Puerto Rico to estimate the amount and geospatial distribution of energy used. Charging schedules for the different end uses of MHDVs were then used to construct electric load shapes assuming a portion of those vehicles would be replaced by battery electric counterparts. Study results showed that, by 2050, electric vehicles may constitute roughly 50% of the MHDV population in Puerto Rico. The resulting electrical demand curve attributable to MHDV charging showed that, for solar energy-based electrical systems with limited energy storage, this demand may create challenges unless appropriately managed either on the demand or supply side.
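As an illustration of how charging schedules translate into load shapes, the toy calculation below aggregates per-fleet charging windows into an hourly system load. All fleet sizes, charger powers, and windows are invented for demonstration and do not come from the study.

```python
# Toy construction of an hourly charging load shape from vehicle schedules.
# Every number here is illustrative, not a study input.
import numpy as np

load_kw = np.zeros(24)

# (number of vehicles, charger power in kW, start hour, duration in hours)
schedules = [(500, 19.2, 22, 6),   # overnight depot charging
             (120, 150.0, 12, 1)]  # midday opportunity charging

for n_veh, power, start, duration in schedules:
    for h in range(start, start + duration):
        load_kw[h % 24] += n_veh * power  # wrap past midnight

print(f"{load_kw.max() / 1e3:.1f} MW peak charging demand")
```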
This report covers an inquiry into seismoacoustic array processing using infrasound arrivals combined with the resulting ground-coupled airwaves (GCAs) that are present on collocated seismic sensors. In preparation, data calibration and denoising are completed for a seismoacoustic sensor array that was deployed at the Facility for Acceptance, Calibration, and Testing on Kirtland Air Force Base from August through September of 2021. The events of interest for this study are small, local explosive sources that lead to short-duration, impulsive signals on the instruments. The goal is to determine if combining infrasound signals with the corresponding GCAs on collocated seismic sensors can improve the results returned by automated signal detection and characterization (e.g., back azimuth estimates). Preparation of the seismic and infrasound data involves removing the instrument response so that sensors have flat power spectra over the frequency range 0.1-10 Hz, where signal from events of interest may be detected. After instrument response removal, deployment conditions specific to this array require a retrospective noise analysis to determine station emplacement characteristics. Once all data are calibrated, a manual search is performed for possible GCA arrivals across the seismoacoustic network. These arrivals are then processed through beamforming and subsequent event identification, resulting in a catalogue of seismoacoustic GCA arrivals with corresponding back azimuth and trace velocity estimates.
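The calibration step described here, removing the instrument response to obtain flat spectra over 0.1-10 Hz, can be sketched with ObsPy as below. The file and inventory names are placeholders; remove_response and filter are standard ObsPy calls, though the exact pre-filter corners used in the study are not stated.

```python
# Sketch of the calibration step: deconvolve the instrument response,
# then restrict to the 0.1-10 Hz band of interest.
# File and inventory names are placeholders.
from obspy import read, read_inventory

st = read("array_data.mseed")               # placeholder waveform file
inv = read_inventory("array_response.xml")  # placeholder StationXML metadata

# pre_filt tapers outside the band where response removal is stable.
st.remove_response(inventory=inv, output="VEL",
                   pre_filt=(0.05, 0.1, 10.0, 20.0))
st.filter("bandpass", freqmin=0.1, freqmax=10.0)
```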
There is currently very limited research into how experts analyze and assess potentially fraudulent content in their areas of expertise, and most research within the disinformation space involves very limited text samples (e.g., news headlines). The overarching goal of the present study was to explore how an individual’s psychological profile and the linguistic features in text might influence an expert’s ability to discern disinformation/fraudulent content in academic journal articles. At a high level, the current design tasked experts with reading journal articles from their area of expertise and indicating if they thought an article was deceptive or not. Half the articles they read were journal papers that had been retracted due to academic fraud. Demographic and psychological inventory data collected on the participants were combined with performance data to generate insights about individual expert susceptibility to deception. Our data show that our population of experts was unable to reliably detect deception in formal technical writing. Several psychological dimensions such as comfort with uncertainty and intellectual humility may provide some protection against deception. This work informs our understanding of expert susceptibility to potentially fraudulent content within official, technical information and can be used to inform future mitigative efforts and provide a building block for future disinformation work.
The Strategic Petroleum Reserve (SPR) is the world’s largest supply of emergency crude oil. The reserve consists of four sites in Louisiana and Texas. Each site stores crude in deep, underground salt caverns. It is the mission of the SPR’s Enhanced Monitoring Program to examine available sensing data to inform our understanding of each site. This report discusses the monitoring data, processes, and results for each of the four sites for fiscal year 2023.
A series of extensively instrumented tests was performed on the Structural Evaluation Test Unit in the early 1990s. The purpose of these tests was to determine the response of a minimally designed cask to impacts that were more severe than the design basis impact. This test series provides an excellent opportunity for benchmarking explicit dynamic finite element analysis programs for behaviors that may be experienced by casks during regulatory and extra-regulatory impact events. This report provides the parameters of the test unit, the locations of instrumentation, the locations of inspection points, and the parameters of the four tests that were conducted. A companion report provides the results of the tests.
The subject of Task F of DECOVALEX-2023 concerns performance assessment modeling of radioactive waste disposal in deep mined repositories. The primary objectives of Task F are to build confidence in the models, methods, and software used for performance assessment (PA) of deep geologic nuclear waste repositories, and/or to bring to the fore additional research and development needed to improve PA methodologies. In Task F2 (salt), these objectives have been accomplished through staged development and comparison of the models and methods used by participating teams in their PA frameworks. Coupled-process submodels and deterministic simulations of the entire PA model for a reference scenario for waste disposal in domal salt have been conducted. The task specification has been updated continuously since the initiation of the project to reflect the staged development of the conceptual repository model and performance metrics.
Achieving robust and efficient drilling is a critical part of reducing the cost of geothermal energy exploration and extraction. Drilling performance is often evaluated using one or more of three key metrics: depth of cut (DOC), rate of penetration (ROP), and mechanical specific energy (MSE). All three of these quantities are related to each other. DOC refers to the depth a bit penetrates into rock during drilling and is an important quantity for estimating bit behavior. ROP is simply the DOC multiplied by the rotational rate, and represents how quickly the drill bit is advancing through the ground. ROP is often the parameter used for drilling control and optimization. Finally, MSE provides insight into drilling efficiency and rock type. MSE calculations rely on ROP, drilling force, and drilling torque. Surface-based sensors at the top of the drill are often used to measure all these quantities. However, top-hole measurements can deviate substantially from the behavior at the bit due to lag, vibrations, and friction. Therefore, relying only on top-hole information can lead to suboptimal drilling control. In this work, we describe recent progress towards estimating ROP, DOC, and MSE using down-hole sensing. We assume down-hole measurements of torque and weight-on-bit (WOB). Our hypothesis is that these measurements can provide more rapid and accurate measures of drilling performance. We show how a multi-layer perceptron (MLP) machine learning algorithm can provide rapid and accurate performance when evaluated on experimental data taken from Sandia’s Hard Rock Drilling Facility. In addition, we implement our algorithms on an embedded system intended to emulate a bottom-hole assembly for sensing and estimation. Our experimental results show that DOC can be estimated accurately and in real time. These estimates, when combined with measurements of rotary speed, torque, and force, can provide improved estimates of ROP and MSE. These results have the potential to enable better drilling assessment, improved control, and extended component lifetimes.
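The relationships among DOC, ROP, and MSE described above can be written down directly. The sketch below uses the standard Teale-style formulation of MSE from WOB, torque, rotary speed, and ROP; this generic form is an assumption about, not a confirmation of, the exact expression used in this work.

```python
# Drilling performance metrics from down-hole measurements. The MSE formula
# follows the standard Teale formulation (thrust term + rotary term);
# the paper's exact form is assumed, not confirmed.
import math

def drilling_metrics(wob_n, torque_nm, rpm, doc_m_per_rev, bit_diameter_m):
    area = math.pi * (bit_diameter_m / 2.0) ** 2   # bit cross-section, m^2
    rop_m_h = doc_m_per_rev * rpm * 60.0           # ROP = DOC x rotational rate
    rop_m_s = doc_m_per_rev * rpm / 60.0           # same quantity in m/s
    omega = 2.0 * math.pi * rpm / 60.0             # rotary speed, rad/s
    mse = wob_n / area + omega * torque_nm / (area * rop_m_s)  # Pa
    return rop_m_h, mse

rop, mse = drilling_metrics(wob_n=50e3, torque_nm=2e3, rpm=120,
                            doc_m_per_rev=1e-3, bit_diameter_m=0.2159)
print(f"ROP = {rop:.1f} m/h, MSE = {mse / 1e6:.0f} MPa")
```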
This technical report summarizes a literature search on confidence calibration. It is meant to serve as a solid starting reference for individuals interested in learning more about the confidence calibration domain, as well as a summarizing document for those more familiar with this work, since a summary of calibration metrics is notably lacking in the literature. This report is not meant to be a comprehensive review of everything that has been done in this field; indeed, the reader is encouraged to look further into the domain. We describe confidence and calibration and discuss properties of good calibration metrics. We detail various calibration and calibration-tangential metrics, presenting equations, algorithms, parameters, and an analysis of strengths and weaknesses. We apply a subset of these metrics to eight proxy confidence assessment datasets. We examine the various metrics in the context of model confidence. Finally, we discuss promising future directions and outstanding questions.
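As a concrete example of the kind of metric the report catalogs, a minimal binned expected calibration error (ECE) computation, the most widely used calibration metric, is sketched below.

```python
# Minimal binned expected calibration error (ECE): the weighted average gap
# between mean confidence and accuracy across equal-width confidence bins.
import numpy as np

def ece(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.mean() * gap  # weight by the bin's sample fraction
    return total

print(ece([0.9, 0.8, 0.95, 0.6], [1, 0, 1, 1]))
```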
Redox flow batteries (RFBs) that incorporate solid energy-storing materials are attractive for high-capacity grid-scale energy storage due to their markedly higher theoretical energy densities compared to their fully liquid counterparts. However, this promise of higher energy density comes at the expense of rate capability. In this work we exploit a ZnO nanorod-decorated Ni foam scaffold to create a high-surface-area Li metal anode capable of rates up to 10 mA cm−2, a 10× improvement over traditional planar designs. The ZnO nanorods enhance Li metal wettability and promote uniform Li nucleation, allowing the RFB to be operated initially with a prelithiated (charged) anode, or with a safety-conscious, Li-less, fully discharged anode. A sulfur loading of 5 mgS cm−2 was cycled using a mediated S cathode, whereby redox mediators help oxidize and reduce solid S particles. At 2.4 mgS cm−2 and 10 mA cm−2, the RFB becomes limited by the mediation of solid S. Nevertheless, a respectable energy density of 20.3 Wh L−1 is demonstrated, with room for considerable increase if the S mediation rate can be improved. Lessons learned here may be broadly applied to RFBs with alkali metal anodes, offering an avenue for safe, dense, grid-scale energy storage.
Sandia National Laboratories is a premier United States national security laboratory that develops science-based technologies in areas such as nuclear deterrence, energy production, and climate change. Computing plays a key role in its diverse missions, and within that environment, Research Software Engineers (RSEs) and other scientific software developers utilize testing automation to ensure the quality and maintainability of their work. We conducted a Participatory Action Research study to explore the challenges and strategies for testing automation through the lens of academic literature. Through the experiences collected and comparison with the open literature, we identify challenges in testing automation and then present strategies for mitigation grounded in evidence-based practice and experience reports that other, similar institutions can assess for their automation needs.
This report summarizes the water inputs associated with four technologies playing diverse roles in energy transitions: hydrogen, solar photovoltaics (PV), wind, and batteries. Information in this report is drawn from multiple sources, including peer-reviewed literature, industry and international agency reports, the EcoInvent life cycle inventory database, and subject matter expert (SME) consultations. Where possible, insights that characterized water requirements for specific stages of the technology development (e.g., operations, manufacturing, and mining) were prioritized over broader cradle-to-gate assessment values. Furthermore, both direct and indirect water requirements (i.e., those associated with energy inputs) were considered in this literature review.
This report is a comprehensive guide to the nonlinear viscoelastic Spectacular model, which is an isotropic, thermo-rheologically simple constitutive model for glass-forming materials, such as amorphous polymers. Spectacular is intermediate in complexity between the previous PEC and SPEC models (Potential Energy Clock and Simplified Potential Energy Clock models, respectively). The model form consists of two parts: a Helmholtz free energy functional and a nonlinear material clock that controls the rate of viscoelastic relaxation. The Helmholtz free energy is derived from a series expansion about a reference state. Expressions for the stress and entropy functionals are derived from the Helmholtz free energy following the Rational Mechanics approach. The material clock depends on a simplified expression for the potential energy, which itself is a functional of the temperature and strain histories. This report describes the thermo-mechanical theory of Spectacular, the numerical methods for time-integrating the model, model verification for its implementation in LAMÉ, a user guide for its implementation in LAMÉ, and ideas for future work. A number of appendices provide supplementary mathematical details and a description of the procedure used to derive the simplified potential energy from the full expression for the potential energy. The goal of this report is to create a convenient point of entry for engineers who wish to learn more about Spectacular and to serve as a reference manual for advanced users of the model.
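For orientation, the defining structure of a thermo-rheologically simple material clock is the reduced-time integral below. This is the generic textbook form, shown only for context; Spectacular's specific clock additionally depends on the simplified potential energy rather than on temperature alone.

$$ t^{*}(t) = \int_{0}^{t} \frac{ds}{a(s)}, \qquad \sigma(t) = \int_{0}^{t} G\!\left(t^{*}(t) - t^{*}(s)\right) \frac{d\varepsilon}{ds}\, ds, $$

where relaxation kernels such as $G$ are evaluated in the reduced time $t^{*}$ and the shift factor $a$ accelerates or retards relaxation according to the material's state history.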
This report represents completion of milestone deliverable M2SF-24SN010309082 Annual Status Update for OWL due on November 30, 2023. It contains the status of fiscal year 2023 (FY2023) updates for the Online Waste Library (OWL).
Here, a review of current trends in scientific computing reveals a broad shift to open-source and higher-level programming languages such as Python and growing career opportunities over the next decade. Open-source modeling tools accelerate innovation in equation-based and data-driven applications. Significant resources have been deployed to develop data-driven tools (PyTorch, TensorFlow, Scikit-learn) from tech companies that rely on machine learning services to meet business needs while keeping the foundational tools open. Open-source equation-based tools such as Pyomo, CasADi, Gekko, and JuMP are also gaining momentum according to user community and development pace metrics. Integration of data-driven and principles-based tools is emerging. New compute hardware, productivity software, and training resources have the potential to radically accelerate progress. However, long-term support mechanisms are still necessary to sustain the momentum and maintenance of critical foundational packages.
Wuestefeld, Andreas; Spica, Zack J.; Aderhold, Kasey; Huang, Hsin-Hua; Ma, Kuo-Fong; Lai, Voon H.; Miller, Meghan; Urmantseva, Lena; Zapf, Daniel; Bowden, Daniel C.; Edme, Pascal; Kiers, Tjeerd; Rinaldi, Antonio P.; Tuinstra, Katinka; Jestin, Camille; Diaz-Meza, Sergio; Jousset, Philippe; Wollin, Christopher; Ugalde, Arantza; Ruiz Barajas, Sandra; Gaite, Beatriz; Currenti, Gilda; Prestifilippo, Michele; Araki, Eiichiro; Tonegawa, Takashi; De Ridder, Sjoerd; Nowacki, Andy; Lindner, Fabian; Schoenball, Martin; Wetter, Christoph; Zhu, Hong-Hu; Baird, Alan F.; Rorstadbotnen, Robin A.; Ajo-Franklin, Jonathan; Ma, Yuanyuan; Abbott, Robert; Hodgkinson, Kathleen M.; Porritt, Robert W.; Stanciu, Adrian C.; Podrasky, Agatha; Hill, David; Biondi, Biondo; Yuan, Siyuan; Luo, Bin; Nikitin, Sergei; Morten, Jan P.; Dumitru, Vlad-Andrei; Lienhart, Werner; Cunningham, Erin; Wang, Herbert
During February 2023, a total of 32 individual distributed acoustic sensing (DAS) systems acted jointly as a global seismic monitoring network. The aim of this Global DAS Month campaign was to coordinate a diverse network of organizations, instruments, and file formats to gain knowledge and move toward the next generation of earthquake monitoring networks. During this campaign, 156 earthquakes of magnitude 5 or larger were reported by the U.S. Geological Survey, and contributors shared data for 60 min after each event’s origin time. Participating systems represent a variety of manufacturers, a range of recording parameters, and varying cable emplacement settings (e.g., shallow burial, borehole, subaqueous, and dark fiber). Monitored cable lengths vary between 152 and 120,129 m, with channel spacing between 1 and 49 m. The data have a total size of 6.8 TB and are available for free download. Finally, organizing and executing the Global DAS Month has produced a unique dataset for further exploration and highlighted areas of further development for the seismological community to address.
Computational simulation is increasingly relied upon for high-consequence engineering decisions, which necessitates a high confidence in the calibration of and predictions from complex material models. However, the calibration and validation of material models is often a discrete, multi-stage process that is decoupled from material characterization activities, which means the data collected do not always align with the data that are needed. To address this issue, an integrated workflow for delivering an enhanced characterization and calibration procedure, Interlaced Characterization and Calibration (ICC), is introduced and demonstrated. This framework leverages Bayesian optimal experimental design (BOED) to create a line of communication between model calibration needs and data collection capabilities, optimizing the information content that the experiments gather for calibration. Eventually, the ICC framework will be used in quasi real-time to actively control experiments on complex specimens for the calibration of a high-fidelity material model. This work presents the critical first piece of algorithm development and a demonstration in determining the optimal load path of a cruciform specimen with simulated data. Calibration results, obtained via Bayesian inference, from the integrated ICC approach are compared to calibrations performed by choosing the load path a priori based on human intuition, as is traditionally done. The calibration results are communicated through parameter uncertainties, which are propagated to the model output space (i.e., stress–strain). In these exemplar problems, data generated within the ICC framework resulted in calibrated model parameters with reduced measures of uncertainty compared to the traditional approaches.
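The BOED objective that creates this line of communication is typically the expected information gain of a candidate design $d$; a standard form, given here for context and not necessarily the exact objective used in the ICC framework, is

$$ U(d) = \mathbb{E}_{y \mid d}\!\left[ D_{\mathrm{KL}}\!\left( p(\theta \mid y, d) \,\middle\|\, p(\theta) \right) \right] = \iint \log \frac{p(\theta \mid y, d)}{p(\theta)}\, p(y \mid \theta, d)\, p(\theta)\, d\theta\, dy, $$

so that the selected load path $d^{*} = \arg\max_{d} U(d)$ maximizes the expected reduction in parameter uncertainty.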
Bimetallic, reactive multilayers are uniformly structured materials composed of alternating sputter-deposited layers that may be ignited to produce self-propagating mixing and formation reactions. These nanolaminates are most commonly used as rapid-release heat sources. The specific chemical composition at each metal/metal interface determines the rate of mass transport in a mixing and formation reaction. The inclusion of engineered diffusion barriers at each interface will not only inhibit solid-state mixing but may also impede the self-propagating reactions by introducing instabilities to the wavefront morphology. This work examines the effect of adding diffusion barriers on the propagation of reaction waves in Co/Al multilayers. The Co/Al system has been shown to exhibit a reaction propagation instability that is dependent on the bilayer thickness, such that the inclusion of diffusion barriers can induce unstable modes in otherwise stable designs. Based on the known stability criteria in the Co/Al multilayer system, the ways in which the inclusion of diffusion barriers changes a multilayer's heat of reaction, thermal conductivity, and material mixing mechanisms can be determined. These factors, in aggregate, lead to changes in the wavefront velocity and stability.
Photonic topological insulators exhibit bulk-boundary correspondence, which requires that boundary-localized states appear at the interface formed between topologically distinct insulating materials. However, many topological photonic devices share a boundary with free space, which raises a subtle but critical problem as free space is gapless for photons above the light line. Here, we use a local theory of topological materials to resolve bulk-boundary correspondence in heterostructures containing gapless materials and in radiative environments. In particular, we construct the heterostructure’s spectral localizer, a composite operator based on the system’s real-space description that provides a local marker for the system’s topology and a corresponding local measure of its topological protection; both quantities are independent of the material’s bulk band gap (or lack thereof). Moreover, we show that approximating radiative outcoupling as material absorption overestimates a heterostructure’s topological protection. Importantly, as the spectral localizer is applicable to systems in any physical dimension and in any discrete symmetry class (i.e., any Altland-Zirnbauer class), our results show how to calculate topological invariants, quantify topological protection, and locate topological boundary-localized resonances in topological materials that interface with gapless media in general.
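For a two-dimensional system with position operators $X$, $Y$ and Hamiltonian $H$, the spectral localizer takes the standard form (reproduced here for context; conventions for the scaling $\kappa$ vary in the literature)

$$ L_{(x,y,E)}(X,Y,H) = \begin{pmatrix} H - E & \kappa\,[(X - x) - i(Y - y)] \\ \kappa\,[(X - x) + i(Y - y)] & -(H - E) \end{pmatrix}, $$

with the local topological marker given by the half-signature $\tfrac{1}{2}\,\operatorname{sig} L_{(x,y,E)}$ and the local measure of protection by the smallest singular value of $L_{(x,y,E)}$.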
Here, using atomistic molecular dynamics simulations, we investigate the morphology and transport properties of a new family of fluorine-free terpolymers designed as proton-exchange membranes. Simulated random terpolymers consist of three monomer types, each with a 5-carbon backbone and bearing a phenylsulfonate, a phenyl, or no pendant group, and have ion exchange capacities (IECs) ranging from 1.06–4.14 mmol/g. At a hydration level of 9, cluster analysis reveals macrophase separation between water and terpolymers with IEC < 2.1 mmol/g and continuous, percolated hydrophilic and hydrophobic nanoscale domains at higher IECs. Channel width distribution analysis of the percolated morphologies revealed that more hydrophobic units produce less uniform channels. Decreasing the surface area per sulfonate group and increasing the fractal dimension of the hydrophilic domains correlate with increased water diffusivity, due to a more acidic interface and more isotropic water channels. Relative to the previously studied phenylsulfonate homopolymer, these terpolymers with lower IECs have only modestly lower water diffusion, and we anticipate other advantages related to processability.
Nonlinear topological insulators have garnered substantial recent attention as they have enabled the discovery of new physics arising from interparticle interactions and may have applications in photonic devices such as topological lasers and frequency combs. However, due to the local nature of nonlinearities, previous attempts to classify the topology of nonlinear systems have required significant approximations that must be tailored to individual systems. Here, we develop a general framework for classifying the topology of nonlinear materials in any discrete symmetry class and any physical dimension. Our approach is rooted in a numerical $K$-theoretic method called the spectral localizer, which leverages a real-space perspective of a system to define local topological markers and a local measure of topological protection. Our nonlinear spectral localizer framework yields a quantitative definition of topologically nontrivial nonlinear modes that are distinguished by the appearance of a topological interface surrounding the mode. Moreover, we show how the nonlinear spectral localizer can be used to understand a system's topological dynamics, i.e., the time evolution of nonlinearly induced topological domains within a system. We anticipate that this framework will enable the discovery and development of novel topological systems across a broad range of nonlinear materials.
Derived from renewable feedstocks, such as biomass, polylactic acid (PLA) is considered a more environmentally friendly plastic than conventional petroleum-based polyethylene terephthalate (PET). However, PLA must still be recycled, and its growing popularity and mixture with PET plastics at the disposal stage poses a cross-contamination threat in existing recycling facilities and results in low-value and low-quality recycled products. Hybrid upcycling has been proposed as a promising sustainable solution for mixed plastic waste, but its techno-economic and life cycle environmental performance remain understudied. Here we propose a hybrid upcycling approach using a biocompatible ionic liquid (IL) to first chemically depolymerize plastics and then convert the depolymerized stream via biological upgrading with no extra separation. We show that over 95% of mixed PET/PLA was depolymerized into the respective monomers, which then served as the sole carbon source for the growth of Pseudomonas putida, enabling the conversion of the depolymerized plastics into biodegradable polyhydroxyalkanoates (PHAs). In comparison to conventional commercial PHAs, the estimated optimal production cost and carbon footprint are reduced by 62% and 29%, respectively.
The United States Department of Energy’s (DOE) Office of Nuclear Energy’s Spent Fuel and Waste Science and Technology Campaign seeks to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the United States has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of DOE (Nuclear Waste Policy Act of 1982, as amended). Any repository licensed to dispose of SNF must meet requirements regarding the long-term performance of that repository. The evaluation of long-term performance of the repository may need to consider the SNF achieving a critical configuration during the postclosure period. Of particular interest is the potential for this situation to occur in dual-purpose canisters (DPCs), which are currently licensed and being used to store and transport SNF but were not designed for permanent geologic disposal. DOE has been considering disposing of SNF in DPCs to avoid the costs and worker dose associated with repackaging the SNF currently stored in DPCs into repository-specific canisters. This report examines the consequences of postclosure criticality to provide technical support to DOE in developing a disposal plan.
Sheldon, Craig S.; Salazar, Jorge; Palacios Diaz, Teresa; Morton, Katie; Davis, Ryan; Davies, James F.
Aerosol particles are known to exist in highly viscous amorphous states at low relative humidity and temperature. The slow diffusion of molecules in viscous particles impacts the uptake and loss of volatile and semivolatile species and the rate of heterogeneous chemistry. Recent work has demonstrated that in particles containing organic molecules and salts, the formation of two-phase gel states is possible, leading to observations of rigid particles that resist coalescence. The way that molecules diffuse and transport in gel systems is not well characterized. In this work, we use an electrodynamic balance to levitate sample particles containing a range of organic compounds in mixtures with calcium chloride and measure the rate of water diffusion. Particles of the pure organics have been shown to form viscous amorphous states, while in mixtures with divalent salts, coalescence measurements have revealed the apparent solidification of particles, consistent with the formation of a gel state facilitated by ion-molecule interactions. We report in several cases that water transport can actually be increased in the rigid gel state relative to the pure compound that forms a viscous state under similar conditions. These measurements reveal the limitations of using viscosity as a metric for predicting molecular diffusion and show that the gel structure that forms is a much stronger controlling factor in the rate of diffusion. This underscores the need for diffusion measurements as well as a deeper understanding of the noncovalent molecular assembly that leads to supramolecular structures in aerosol particles.
An optically recording velocity interferometer system has been used to measure acceleration histories and maximum velocities for laser-driven aluminum foil targets launched from the output face of optical fibers. Peak flyer velocities have been determined as a function of various parameters, including driving laser fluence, laser pulse duration, and target thickness. The results at high fluences are consistent with a nearly constant efficiency of coupling optical energy into flyer kinetic energy and a small ablated mass fraction; however, the coupling efficiency falls off rapidly at fluences < 15 J/cm2. Measurements of the time delay between laser pulse arrival at the target and the onset of flyer motion have also been performed. Significant delays are observed at low fluences, arising from the increased time required for plasma formation at the fiber/foil interface under these conditions.
Howard, Amanda A.; Perego, Mauro; Karniadakis, George E.; Stinis, Panos
Operator learning for complex nonlinear systems is increasingly common in modeling multi-physics and multi-scale systems. However, training such high-dimensional operators requires a large amount of expensive, high-fidelity data, either from experiments or simulations. In this work, we present a composite Deep Operator Network (DeepONet) for learning using two datasets with different levels of fidelity to accurately learn complex operators when sufficient high-fidelity data is not available. Additionally, we demonstrate that the presence of low-fidelity data can improve the predictions of physics-informed learning with DeepONets. We demonstrate the new multi-fidelity training in diverse examples, including modeling of the ice-sheet dynamics of the Humboldt glacier, Greenland, using two different fidelity models and also using the same physical model at two different resolutions.
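A minimal sketch of one way to compose two DeepONets for multi-fidelity training is given below in PyTorch. The architecture, layer widths, and the residual composition (a correction network that sees the low-fidelity prediction) are assumptions for illustration, not the authors' exact design.

```python
# Sketch of a composite, multi-fidelity DeepONet: a low-fidelity DeepONet
# is trained first on abundant LF data, and a second network learns the
# LF -> HF correction from scarce high-fidelity data.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_sensors, p=32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, 64), nn.Tanh(),
                                    nn.Linear(64, p))
        self.trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                                   nn.Linear(64, p))

    def forward(self, u, y):
        # u: (batch, n_sensors) input-function samples; y: (batch, 1) query point
        return (self.branch(u) * self.trunk(y)).sum(-1, keepdim=True)

n_sensors = 20
lf_net = DeepONet(n_sensors)        # trained on plentiful low-fidelity data
corr_net = DeepONet(n_sensors + 1)  # learns the correction to high fidelity

def hf_prediction(u, y):
    lf = lf_net(u, y)
    # The correction network sees both the input function and the LF output.
    return lf + corr_net(torch.cat([u, lf], dim=-1), y)

u = torch.randn(8, n_sensors)
y = torch.rand(8, 1)
print(hf_prediction(u, y).shape)  # -> torch.Size([8, 1])
```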
The purpose of pvOps is to support empirical evaluations of data collected in the field related to the operations and maintenance (O&M) of photovoltaic (PV) power plants. pvOps presently contains modules that address the diversity of field data, including text-based maintenance logs, current-voltage (IV) curves, and time series of production information. The package functions leverage machine learning, visualization, and other techniques to enable cleaning, processing, and fusion of these datasets. These capabilities are intended to facilitate easier evaluation of field patterns and extraction of relevant insights to support reliability-related decision-making for PV sites. The open-source code, examples, and instructions for installing the package through PyPI can be accessed through the GitHub repository.
Electric fields are commonplace in plasmas and affect transport by driving currents and, in some cases, instabilities. The necessary condition for instability in collisionless plasmas is commonly understood to be described by the Penrose criterion, which quantifies the relative drift between different populations of particles that must be present for wave amplification via inverse Landau damping. For example, electric fields generate drifts between electrons and ions that can excite the ion-acoustic instability. Here, we use particle-in-cell simulations and linear stability analysis to show that the electric field can drive a fundamentally different type of kinetic instability, named the electron-field instability. This instability excites electron plasma waves with wavelengths ≳30λDe, has a growth rate that is proportional to the electric field strength, and does not require a relative drift between electrons and ions. The Penrose criterion does not apply when the electric field is accounted for. Furthermore, the large value of the observed frequency, near the electron plasma frequency, further distinguishes this instability from the standard ion-acoustic instability, which oscillates near the ion plasma frequency. The ubiquity of macroscopic electric fields in quasineutral plasmas suggests that this instability is possible in a host of systems, including low-temperature and space plasmas. In fact, damping from neutral collisions in such systems is often not enough to completely suppress the instability, adding to its robustness across plasma conditions.
Sandia National Laboratories (SNL) and the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) have collaborated on the design and execution of a set of critical experiments that explore the effects of molybdenum in water-moderated fuel-rod arrays. The molybdenum is included as sleeves (tubes) on some of the fuel rods in the arrays. The fuel used in the experiments is known at Sandia as the Seven Percent Critical Experiment (7uPCX) fuel. This fuel has been used in several published benchmark evaluations, including LEU-COMP-THERM-078 and LEU-COMP-THERM-080.
Characterizing and quantifying microstructure evolution is critical to forming quantitative relationships between material processing conditions, resulting microstructure, and observed properties. Machine-learning methods are increasingly accelerating the development of these relationships by treating microstructure evolution as a pattern recognition problem, discovering relationships explicitly or implicitly. These methods often rely on identifying low-dimensional microstructural fingerprints as latent variables. However, using inappropriate latent variables can lead to challenges in learning meaningful relationships. In this work, we survey and discuss the ability of various linear and nonlinear dimensionality reduction methods, including principal component analysis, autoencoders, and diffusion maps, to quantify and characterize the learned latent-space microstructural representations and their time evolution. We characterize latent spaces by the compression achieved as a function of the number of latent dimensions required to represent the data accurately, by their reconstruction performance, and by the smoothness of the microstructural trajectories in the latent space. We quantify these metrics for common microstructure evolution problems in materials science, including spinodal decomposition of a binary metallic alloy, thin-film deposition of a binary metallic alloy, dendritic growth, and grain growth in a polycrystal. This study provides considerations and guidelines for choosing dimensionality reduction methods for materials problems that involve high-dimensional data and a variety of features over a range of length and time scales.
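As a minimal example of the workflow evaluated here, the snippet below fits PCA to flattened microstructure snapshots and reports reconstruction error as a function of latent dimension; the synthetic random fields stand in for real phase-field outputs.

```python
# Compression vs. reconstruction accuracy for a linear latent space (PCA).
# Synthetic random fields stand in for microstructure snapshots.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
snapshots = rng.random((200, 64 * 64))  # 200 flattened 64x64 microstructures

for k in (2, 8, 32):
    pca = PCA(n_components=k).fit(snapshots)
    recon = pca.inverse_transform(pca.transform(snapshots))
    err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
    print(f"{k} latent dims: relative reconstruction error {err:.3f}")
```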
Journal of Physical Chemistry A: Molecules, Spectroscopy, Kinetics, Environment, and General Theory
Cho, Jaeyoung; Rosch, Daniel; Tao, Yujie; Osborn, David L.; Klippenstein, Stephen J.; Sheps, Leonid; Sivaramakrishnan, Raghu
Methyl formate (MF; CH3OCHO) is the smallest representative of the esters, which are common components of biodiesel. The present study characterizes the thermal dissociation kinetics of the radicals formed by H atom abstraction from MF (CH3OCO and CH2OCHO) through a combination of modeling, experiment, and theory. For the experimental effort, excimer laser photolysis of Cl2 was used as a source of Cl atoms to initiate reactions with MF in the gas phase. Time-resolved species profiles of MF, Cl2, HCl, CO2, CH3, CH3Cl, CH2O, and CH2ClOCHO were measured and quantified using photoionization mass spectrometry at temperatures of 400–750 K and 10 Torr. The experimental data were simulated using a kinetic model, which was informed by ab initio-based theoretical kinetics calculations and included chlorine chemistry and secondary reactions of radical decomposition products. Here, we calculated the rate coefficients for the H-abstraction reactions Cl + MF → HCl + CH3OCO (R1a) and Cl + MF → HCl + CH2OCHO (R1b): k1a,theory = 6.71 × 10^-15 · T^1.14 · exp(-606/T) cm3/(molecule·s) and k1b,theory = 4.67 × 10^-18 · T^2.21 · exp(-245/T) cm3/(molecule·s) over T = 200–2000 K. Electronic structure calculations indicate that the barriers to CH3OCO and CH2OCHO dissociation are 13.7 and 31.6 kcal/mol, leading to CH3 + CO2 (R3) and CH2O + HCO (R5), respectively. The master-equation-based theoretical rate coefficients are k3,theory (P = ∞) = 2.94 × 10^9 · T^1.21 · exp(-6209/T) s^-1 and k5,theory (P = ∞) = 8.45 × 10^8 · T^1.39 · exp(-15132/T) s^-1 over T = 300–1500 K. The calculated branching fractions into R1a and R1b and the rate coefficient for R5 were validated by modeling the experimental species time profiles, with experiment and theory found to be in excellent agreement. Additionally, we found that the bimolecular reactions CH2OCHO + Cl, CH2OCHO + Cl2, and CH3 + Cl2 were critical to accurately model the experimental data and constrain the kinetics of the MF radicals. Inclusion of the kinetic parameters determined in this study had a significant impact on combustion simulations of larger methyl esters, which are considered biodiesel surrogates.
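For readers who want to evaluate these modified-Arrhenius fits directly, the short helper below computes k(T) = A · T^n · exp(-theta/T) using the coefficients reported above; the example temperature is arbitrary but within the fits' stated validity ranges.

```python
# Evaluate the reported modified-Arrhenius fits k(T) = A * T**n * exp(-theta/T).
import math

def k(A, n, theta, T):
    return A * T**n * math.exp(-theta / T)

T = 500.0  # K, within the stated validity ranges of the R1a/R1b fits
k1a = k(6.71e-15, 1.14, 606.0, T)  # cm^3/(molecule s)
k1b = k(4.67e-18, 2.21, 245.0, T)  # cm^3/(molecule s)
print(f"k1a({T:.0f} K) = {k1a:.2e}, branching into R1a = {k1a / (k1a + k1b):.2f}")
```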