Binary Code Similarity Analysis (BCSA) has a wide spectrum of applications, including plagiarism detection, vulnerability discovery, and malware analysis, thus drawing significant attention from the security community. However, conventional techniques often struggle to achieve accuracy and scalability simultaneously. To overcome these problems, a surge of deep learning-based work has recently been proposed. Unfortunately, many researchers still find it extremely difficult to conduct relevant studies or extend existing approaches. First, prior work typically relies on proprietary benchmarks without making the entire dataset publicly accessible. Consequently, large-scale, well-labeled datasets for binary code similarity analysis remain scarce. Moreover, previous work has primarily focused on comparison at the function level rather than exploring finer granularities. Therefore, we argue that the lack of a fine-grained dataset for BCSA leaves a critical gap in current research. To address these challenges, we construct a benchmark dataset for fine-grained binary code similarity analysis called BinSimDB, which contains equivalent pairs of smaller binary code snippets, such as basic blocks. Specifically, we propose the BMerge and BPair algorithms to bridge the discrepancies between two binary code snippets caused by different optimization levels or platforms. Furthermore, we empirically study the properties of our dataset and evaluate its effectiveness for BCSA research. The experimental results demonstrate that BinSimDB significantly improves the performance of binary code similarity comparison.
Ship tracks, long thin artificial cloud features formed from the pollutants in ship exhaust, are satellite-observable examples of aerosol-cloud interactions (ACI) that can lead to increased cloud albedo and thus increased solar reflectivity, phenomena of interest in solar radiation management. In addition to ship tracks being of interest to meteorologists and policy makers, their observed cloud perturbations provide benchmark evidence of ACI that remain poorly captured by climate models. To broadly analyze the effects of ship tracks, high-resolution satellite imagery data highlighting their presence are required. To support this, we provide a hand-labelled dataset to serve as a benchmark for a variety of subsequent analyses. Built on a previous dataset that identified ship-track presence using NASA's MODIS Aqua satellite imager, our first-of-its-kind dataset comprises image masks capturing full ship-track regions, including their contours, emission points, and dispersive patterns. In total, 300 images, or around 2,500 masked ship tracks, observed under varying conditions are provided; these may facilitate the training of machine learning algorithms to automate ship-track extraction.
The trusted inertial terrain-aided navigation (TITAN) algorithm leverages an airborne vertical synthetic aperture radar to measure the range to the closest ground points along several prescribed iso-Doppler contours. These TITAN minimum-range, prescribed-Doppler measurements are the result of a constrained nonlinear optimization problem whose optimization function and constraints both depend on the radar position and velocity. Owing to the complexity of this measurement definition, analysis of the TITAN algorithm is lacking in prior work. This publication offers such an analysis, making the following three contributions: (1) an analytical solution to the TITAN constrained optimization measurement problem, (2) a derivation of the TITAN measurement function Jacobian, and (3) a derivation of the Cramér–Rao lower bound on the estimated position and velocity error covariance. These three contributions are verified via Monte Carlo simulations over synthetic terrain, which further reveal two remarkable properties of the TITAN algorithm: (1) the along-track positioning errors tend to be smaller than the cross-track positioning errors, and (2) the cross-track positioning errors are independent of the terrain roughness.
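In notation not taken from the abstract, one plausible way to write the TITAN measurement problem is as a terrain-constrained minimum-range problem along a prescribed iso-Doppler contour: with radar position $\mathbf r$, velocity $\mathbf v$, wavelength $\lambda$, prescribed Doppler frequency $f_D$, and terrain surface $\mathcal T$,

$$\mathbf p^{\star} \;=\; \arg\min_{\mathbf p \in \mathcal T} \;\lVert \mathbf p - \mathbf r\rVert \quad \text{subject to} \quad \frac{\mathbf v^{\top}(\mathbf p - \mathbf r)}{\lVert \mathbf p - \mathbf r\rVert} \;=\; \frac{\lambda f_D}{2},$$

with the scalar measurement $z = \lVert \mathbf p^{\star} - \mathbf r\rVert$. Both the objective and the Doppler constraint depend on $\mathbf r$ and $\mathbf v$, as the abstract notes; this is a sketch of the problem structure, not the paper's exact formulation.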
The June 1991 Mt. Pinatubo eruption resulted in a massive increase of sulfate aerosols in the atmosphere, absorbing radiation and leading to global changes in surface and stratospheric temperatures. A volcanic eruption of this magnitude serves as a natural analog for stratospheric aerosol injection, a proposed solar radiation modification method to combat a warming climate. The impacts of such an event are multifaceted and region-specific. Our goal is to characterize the multivariate and dynamic nature of the atmospheric impacts following the Mt. Pinatubo eruption. We developed a multivariate space-time dynamic linear model to understand the full extent of the spatially- and temporally-varying impacts. Specifically, spatial variation is modeled using a flexible set of basis functions for which the basis coefficients are allowed to vary in time through a vector autoregressive (VAR) structure. This novel model is cast in a Dynamic Linear Model (DLM) framework and estimated via a customized MCMC approach. We demonstrate how the model quantifies the relationships between key atmospheric parameters prior to and following the Mt. Pinatubo eruption with reanalysis data from MERRA-2 and highlight when such a model is advantageous over univariate models.
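A minimal sketch of the model structure described above, in notation of my own choosing: stacking the atmospheric variables at all locations at time $t$ into $\mathbf y_t$, with spatial basis matrix $\boldsymbol\Phi$ and basis coefficients $\boldsymbol\alpha_t$,

$$\mathbf y_t = \boldsymbol\Phi\,\boldsymbol\alpha_t + \boldsymbol\epsilon_t,\quad \boldsymbol\epsilon_t \sim \mathcal N(\mathbf 0, \boldsymbol\Sigma_\epsilon); \qquad \boldsymbol\alpha_t = \mathbf A\,\boldsymbol\alpha_{t-1} + \boldsymbol\eta_t,\quad \boldsymbol\eta_t \sim \mathcal N(\mathbf 0, \boldsymbol\Sigma_\eta),$$

where the VAR(1) transition matrix $\mathbf A$ couples the basis coefficients of the different variables over time, and the customized MCMC targets the posterior of $\mathbf A$ and the covariance parameters.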
New concepts of symmetry related to topological order emerged from the discovery of the fractional quantum Hall effect and high-temperature superconductivity in strongly correlated electron systems. This led to the study of quantum materials: materials exhibiting emergent quantum phenomena with no classical analogues. While these materials have engendered exciting basic materials science and physics, realizing novel devices is a key challenge in the field. The goal of this proposal is to harness
Coulomb drag is a powerful tool to study interactions in coupled low-dimensional systems. Historically, Coulomb drag has been attributed to a frictional force arising from momentum transfer whose direction is dictated by the current flow. In the absence of electron-electron correlations, treating the Coulomb drag circuit as a rectifier of noise fluctuations yields similar conclusions about the reciprocal nature of Coulomb drag. In contrast, recent findings in one-dimensional systems have identified a nonreciprocal contribution to Coulomb drag that is independent of the current flow direction. In this work, we present Coulomb drag measurements between vertically coupled GaAs/AlGaAs quantum wires separated by a hard barrier only 15 nm wide, where both reciprocal and nonreciprocal contributions to the drag signal are observed simultaneously, and whose relative magnitudes are temperature and gate tunable. Our study opens up the possibility of studying the physical mechanisms behind the onset of both Coulomb drag contributions simultaneously in a single device, ultimately leading to a better understanding of Luttinger liquids in multi-channel wires and paving the way for the creation of energy harvesting devices.
Characterization of induced microseismicity at a carbon dioxide (CO2) storage site is critical for preserving reservoir integrity and mitigating seismic hazards. We apply a multilevel machine learning (ML) approach that combines the nonnegative matrix factorization and hidden Markov model to extract spectral representations of microseismic events and cluster them to identify seismic patterns at the Illinois Basin-Decatur Project. Unlike traditional waveform correlation methods, this approach leverages spectral characteristics of first arrivals to improve event classification and detect previously undetected planes of weakness. By integrating ML-based clustering with focal mechanism analysis, we resolve small-scale fault structures that are below the detection limits of conventional seismic imaging. Our findings reveal temporal bursts of microseismicity associated with brittle failure, providing insights into the spatio-temporal evolution of fault reactivation during CO2 injection. This approach enhances seismic monitoring capabilities at CO2 injection sites by improving fault characterization beyond the resolution of standard geophysical surveys.
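As a rough illustration of the first stage of such a pipeline, the sketch below factorizes synthetic event spectrograms with NMF and clusters the resulting activation patterns; k-means stands in for the paper's hidden Markov model stage, and all data, sizes, and hyperparameters are placeholders.

```python
# Hedged sketch: NMF spectral decomposition + clustering of microseismic
# first-arrival spectrograms (k-means stands in for the HMM stage).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# X: one row per event, columns are spectrogram bins (nonnegative); synthetic here
X = rng.gamma(shape=2.0, scale=1.0, size=(200, 64))

nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)          # per-event activations of spectral patterns
H = nmf.components_               # learned spectral basis patterns

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(W)
print("events per cluster:", np.bincount(labels))
```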
In this paper, we present a method for estimating the infection-rate of a disease as a spatial-temporal field. Our data comprises time-series case-counts of symptomatic patients in various areal units of a region. We extend an epidemiological model, originally designed for a single areal unit, to accommodate multiple units. The field estimation is framed within a Bayesian context, utilizing a parameterized Gaussian random field as a spatial prior. We apply an adaptive Markov chain Monte Carlo method to sample the posterior distribution of the model parameters conditioned on COVID-19 case-count data from three adjacent counties in New Mexico, USA. Our results suggest that the correlation between epidemiological dynamics in neighboring regions helps regularize estimations in areas with high variance (i.e., poor quality) data. Using the calibrated epidemic model, we forecast the infection-rate over each areal unit and develop a simple anomaly detector to signal new epidemic waves. Our findings show that an anomaly detector based on estimated infection-rates outperforms a conventional algorithm that relies solely on case-counts.
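The sketch below shows the core of an adaptive random-walk Metropolis sampler of the kind referenced above, with the proposal covariance adapted from the accumulated chain history (Haario-style scaling); the target here is a toy stand-in, not the paper's epidemiological posterior.

```python
# Hedged toy sketch of adaptive random-walk Metropolis: the proposal
# covariance is periodically re-estimated from the chain so far.
import numpy as np

def log_post(theta):
    return -0.5 * np.sum(theta**2 / np.array([1.0, 0.1]))  # toy stand-in posterior

rng = np.random.default_rng(1)
d, n_steps = 2, 5000
chain = np.zeros((n_steps, d))
theta, lp = np.zeros(d), log_post(np.zeros(d))
cov = 0.1 * np.eye(d)
for k in range(1, n_steps):
    if k > 500 and k % 100 == 0:                      # periodic adaptation
        cov = np.cov(chain[:k].T) * 2.38**2 / d + 1e-8 * np.eye(d)
    prop = rng.multivariate_normal(theta, cov)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[k] = theta
print("posterior mean estimate:", chain[1000:].mean(axis=0))
```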
To date, careful data treatment workflows and statistical detectors are used to perform hyperspectral image (HSI) detection of any gas contained in a spectral library, which is often expanded with physics models to incorporate different spectral characteristics. In general, surrounding evidence or known gas-release parameters are used to provide confidence in or confirm detection capability, respectively. This makes quantifying detection performance difficult, as it is nearly impossible to develop an absolute ground truth for gas target pixel presence in collected HSI. Consequently, developing and comparing new detection methods, especially machine learning (ML)-based methods, is susceptible to subjectivity in derived detection map quality. In this work, we demonstrate the first use of transformer-based paired neural networks (PNNs) for one-shot gas target detection for multiple gases while providing quantitative classification and detection metrics for their use on labeled data. Terabytes of training data are generated from a database of long-wave infrared HSI obtained from historical Mako sensor campaigns over Los Angeles. By incorporating labels, singular signature representations, and a model development pipeline, we can tune and select PNNs to detect, on a quantitative basis, multiple gas targets that are not seen in training. We additionally assess our test set detections using interpretability techniques widely employed with ML-based predictors, but less common with detection methods relying on learned latent spaces.
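The sketch below illustrates the paired-network idea in miniature: twin shared-weight encoders embed a library signature and candidate pixel spectra, and a similarity score flags the gas. The architecture, sizes, and threshold are illustrative assumptions; the paper uses transformer encoders rather than the small dense network shown here.

```python
# Hedged torch sketch of a paired (twin-encoder) network for one-shot detection.
import torch, torch.nn as nn

class PairedNet(nn.Module):
    def __init__(self, n_bands=128, d=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                     nn.Linear(64, d))
    def forward(self, sig, pix):
        a, b = self.encoder(sig), self.encoder(pix)   # shared-weight twins
        return nn.functional.cosine_similarity(a, b, dim=-1)

net = PairedNet()
signature = torch.randn(1, 128)        # library gas signature (one-shot target)
pixels = torch.randn(1000, 128)        # candidate HSI pixel spectra
scores = net(signature, pixels)        # detection score per pixel
detections = scores > 0.8              # threshold would be tuned on labeled data
print(int(detections.sum()), "pixels flagged")
```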
This study investigates the fatigue crack growth rate (FCGR) behavior of pipeline and low-alloy pressure vessel steels in high-pressure gaseous hydrogen. Despite a broad range of yield strengths and microstructures, including ferrite/pearlite, acicular ferrite, bainite, and martensite, the FCGR in gaseous hydrogen remained consistent (falling within a factor of 2–3). Steels with higher fractions of pearlite, typical of older vintage pipeline steels, exhibited modestly lower crack growth rates in gaseous hydrogen compared to steels with lower fractions of pearlite. Crack growth rates in these materials exhibit a systematic dependence on stress ratio and partial pressure of hydrogen, as captured in the recently published fatigue design curves in ASME B31 code case 220 for pipeline steels and ASME BPVC code case 2938 for pressure-vessel steels.
Garner, Sean; Silling, Stewart; Ketterhagen, William; Strong, John
The pharmaceutical drug product development process can be greatly accelerated through the use of modeling and simulation techniques to predict the manufacturability and performance of a given formulation. The anticipation and possible mitigation of tablet damage due to manufacturing stresses represents a specific area of interest in the pharmaceutical industry for predicting formulation and tableting performance. While the finite element method (FEM) has been extensively used for predicting the mechanical behavior of powder material in the compaction processes, a shortcoming of the approach is the inherent difficulty of predicting discontinuities (e.g., damage or cracking) within a tablet, as FEM is a continuum-based approach. In this work, we propose a novel method utilizing peridynamics (PD), a numerical method that can capture discontinuities such as tablet fracture, to predict the evolution of damage and breakage in pharmaceutical tablets. The approach links (1) the finite element method – to elucidate the behavior of powders during die compaction – with (2) the peridynamics modeling technique – to model the discontinuous nature of damage and predict tablet breakage during the critical stages of unloading and ejection from the compression die. This short communication presents a proof of concept including a workflow to calibrate the linked FEM-PD simulation models. It demonstrates promising results from a preliminary experimental validation of the approach. Following further development, this approach could be used to guide the optimization of compression processes through targeted changes to formulation material properties, compression process conditions, and/or tooling geometries to deliver improved process efficiency and tablet robustness.
Background/Objectives: Children's biological age does not always correspond to their chronological age. In the case of BMI trajectories, this can appear as phase variation, seen as shifts, stretches, or shrinkage between trajectories. With maturation thought of as a process moving towards the final state, adult BMI, we assessed whether children can be divided into latent groups reflecting similar maturational age of BMI. The groups were characterised by early factors and time-related features of the trajectories. Subjects/Methods: We used data from two general population birth cohort studies, Northern Finland Birth Cohorts 1966 and 1986 (NFBC1966 and NFBC1986). Height (n = 6329) and weight (n = 6568) measurements were interpolated at 34 shared time points using B-splines, and BMI values were calculated from 3 months to 16 years. Pairwise phase distances of 2999 females and 3163 males were used as a similarity measure in k-medoids clustering. Results: We identified three clusters of trajectories in females and males (Type 1: females, n = 1566, males, n = 1669; Type 2: females, n = 1028, males, n = 973; Type 3: females, n = 405, males, n = 521). Similar distinct timing patterns were identified in males and females. The clusters did not differ by sex or by the early growth determinants studied. Conclusions: Trajectory cluster Type 1 reflected the shape of what is typically illustrated as the childhood BMI trajectory in the literature. However, the other two have not been identified previously. The Type 2 pattern was more common in the NFBC1966, suggesting a generational shift in BMI maturational patterns.
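The clustering step described above can be sketched as follows: k-medoids accepts a precomputed pairwise distance matrix directly, so the phase distances can be fed in without embedding the trajectories in a vector space. The distance matrix below is a synthetic stand-in, and the KMedoids estimator comes from the scikit-learn-extra package.

```python
# Hedged sketch: k-medoids on a precomputed pairwise distance matrix,
# as one might do with phase distances from registered BMI trajectories.
import numpy as np
from sklearn_extra.cluster import KMedoids

rng = np.random.default_rng(0)
pts = rng.normal(size=(300, 5))                           # stand-in features
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # pairwise distances

km = KMedoids(n_clusters=3, metric="precomputed", random_state=0).fit(D)
print("cluster sizes:", np.bincount(km.labels_))
```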
This work presents a data-driven method for learning low-dimensional time-dependent physics-based surrogate models whose predictions are endowed with uncertainty estimates. We use the operator inference approach to model reduction that poses the problem of learning low-dimensional model terms as a regression of state space data and corresponding time derivatives by minimizing the residual of reduced system equations. Standard operator inference models perform well with accurate training data that are dense in time, but producing stable and accurate models when the state data are noisy and/or sparse in time remains a challenge. Another challenge is the lack of uncertainty estimation for the predictions from the operator inference models. Our approach addresses these challenges by incorporating Gaussian process surrogates into the operator inference framework to (1) probabilistically describe uncertainties in the state predictions and (2) procure analytical time derivative estimates with quantified uncertainties. The formulation leads to a generalized least-squares regression and, ultimately, reduced-order models that are described probabilistically with a closed-form expression for the posterior distribution of the operators. The resulting probabilistic surrogate model propagates uncertainties from the observed state data to reduced-order predictions. We demonstrate the method is effective for constructing low-dimensional models of two nonlinear partial differential equations representing a compressible flow and a nonlinear diffusion–reaction process, as well as for estimating the parameters of a low-dimensional system of nonlinear ordinary differential equations representing compartmental models in epidemiology.
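For orientation, the sketch below shows plain operator inference for a linear-quadratic reduced model, dq/dt ≈ A q + H (q ⊗ q), fit by least squares to reduced states and time-derivative estimates. The paper's contribution, replacing the plain derivative and regression steps with Gaussian-process surrogates to obtain posterior uncertainty, is omitted here; all data are synthetic placeholders.

```python
# Hedged sketch of standard (deterministic) operator inference.
import numpy as np

rng = np.random.default_rng(0)
r, k = 3, 400
Q = rng.normal(size=(r, k))                     # reduced states (e.g., POD coeffs)
Qdot = rng.normal(size=(r, k))                  # time-derivative estimates

Q2 = np.einsum("ik,jk->ijk", Q, Q).reshape(r * r, k)   # quadratic terms q (x) q
D = np.vstack([Q, Q2]).T                        # regression data matrix
# Symmetric duplicates in Q2 make D rank deficient; lstsq returns the
# minimum-norm solution, which is fine for this sketch.
O, *_ = np.linalg.lstsq(D, Qdot.T, rcond=None)  # solve D O = Qdot^T
A, H = O[:r].T, O[r:].T                         # learned reduced operators
print(A.shape, H.shape)                         # (3, 3) (3, 9)
```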
Miniature atomic clocks based on the interrogation of the ground-state hyperfine splitting of buffer-gas-cooled ions confined in radio frequency Paul traps have shown great promise as high-precision prototype clocks. We report on the performance of two miniature ion trap vacuum packages after being sealed for as much as 10 years. We find the lifetime of the ions within the trap has increased over time for both traps and can be as long as 50 days. We form two clocks using the two traps and compare their relative frequency instability to demonstrate a short-term instability of $5\times10^{-13}\,\tau^{-1/2}$, integrating down to $1\times10^{-14}$ after 2 ks of integration. The trapped-ion lifetime and clock instability demonstrated by these miniature devices, despite only being passively pumped for many years, represent a critical advance toward their proliferation in the clock community.
Chapare virus (CHAPV) is an emerging New World arenavirus that is the causative agent of Chapare hemorrhagic fever (CHHF), responsible for recent outbreaks with alarmingly high case fatality rates in Bolivia near the Brazilian border. Here, we describe a nonhuman primate (NHP) model of CHHF infection, which represents an essential tool to understand this emerging biological threat agent. Cynomolgus macaques challenged intravenously with CHAPV develop clinical disease that recapitulates several key features of human CHHF. All subjects lost weight and accrued clinical scores following CHAPV challenge. Notably, one of four NHPs developed lethal disease with viral hepatitis and hemorrhagic features. Clinical chemistry and hematology revealed leukopenia, anemia, thrombocytopenia, and increased transaminase levels. In all four subjects, viremia was detectable for the first week following challenge, and viral RNA was detectable in serum and many tissues, persisting to 35 days post-challenge. Several medical countermeasures (MCMs) have efficacy against CHAPV infection in vitro, but the current model for MCM testing and approval of new drugs is reliant on the availability of animal models. This work lays the foundation for future CHHF MCM development.
Tamper-indicating devices (TIDs), also known as seals, play a crucial role in various sectors, including international nuclear safeguards, arms control, domestic security, and commercial products, by ensuring that monitored or high-value items are not accessed undetected. These devices do not block access but alert to unauthorized tampering. With adversaries' capabilities evolving, there is a pressing need for seals to advance in terms of effectiveness (e.g., better tamper indication and unique identification), and new technology can improve the efficiency of installation and verification. Passive loop seals, widely used in international nuclear safeguards to ensure that continuity of knowledge is maintained on declared items, face stringent International Atomic Energy Agency (IAEA) requirements that surpass those met by commercial products. The metal cup seal (Figure 1, left), a staple IAEA seal, is robust but requires significant resources for post-use verification; specifically, the seal's unique identity can only be verified at IAEA headquarters after removal from facilities. Further, the seal has been in use for decades, and seal types should periodically be replaced to counter adversarial efforts to defeat seals. In 2020, the IAEA outlined about 40 requirements for a new passive loop seal, aiming for in-situ verification, minimal external tool use, unique identification (UID), and clear tamper indication. In response, research and development efforts focused on creating a new passive loop seal that meets these criteria, and in 2022 the IAEA announced the completion of the Field Verifiable Passive Loop Seal (FVPS) (Figure 1, right). Concurrently with the IAEA's efforts, Sandia National Laboratories (SNL) and Oak Ridge National Laboratory (ORNL) designed, developed, and tested two seal versions, Puck and Puck/SAW: Puck is based on the IAEA's requirements and includes a novel visually obvious tamper response, while Puck/SAW adds beneficial capabilities such as the ability to receive a unique identifier from a standoff distance and monitoring of the wire integrity. Puck/SAW was specifically designed and developed to address sealing applications in dry spent fuel storage facilities, where the number of sealed spent fuel containers results in a heavy verification burden and inspector safety issues related to radiation exposure. These efforts are described in this Executive Summary.
Public-facing solar hosting capacity (HC) maps, which show the maximum amount of solar energy that can be installed at a location without adverse effects, have proven to be a key driver of solar soft-cost reductions through a variety of pathways (e.g., streamlining interconnection, siting, and customer acquisition processes). However, current methods for generating HC maps require detailed grid models and time-consuming simulations that limit both their accuracy and scalability; today, only a handful out of almost 2,000 utilities provide these maps. This project developed and validated data-driven algorithms for calculating solar HC using advanced metering infrastructure (AMI) data, without the need for detailed grid models or simulations. The algorithms were validated on utility datasets and incorporated as an application into NRECA's Open Modeling Framework (OMF.coop), which is free and open source, for use by the more than 260 cooperatives and vendors throughout the US.
Using a belt as a replacement for a rope in rotary power take-off (PTO) systems has become more common for wave energy converters, improving cyclic bend-over-sheave performance owing to the smaller bending thickness of belts. However, service life prediction of PTOs is a major design concern, because belt performance in harsh underwater environments remains largely understudied. In this work, the effect of fleet and twist angles on wear life is investigated both experimentally and numerically. Two three-dimensional equivalent static finite element models are constructed to evaluate the complex stress state of polyurethane-steel belts around steel drums: the first captures the response of the experimental wear-life investigation, and the second predicts the wear life of an existing functional PTO. The results show a significant effect of fleet and twist angles on stress concentrations and estimated service life.
Autonomous manipulation is a challenging problem in field robotics due to uncertainty in object properties, constraints, and coupling phenomena with robot control systems. Humans learn motion primitives over time to effectively interact with the environment. We postulate that autonomous manipulation can likewise be enabled by basic sets of motion primitives, though these need not mimic human motion primitives. This work presents an approach to generalized optimal motion primitives using physics-informed neural networks. Our simulated and experimental results demonstrate that optimality is notionally maintained: the mean maximum observed final-position percent error was 0.564%, and the average mean error across all trajectories was 1.53%. These results indicate that notional generalization is attained using a physics-informed neural network approach that enables near-optimal real-time adaptation of primitive motion profiles.
Development of a defensible source-term model (STM), usually a thermodynamic model for radionuclide solubility calculations, is critical to a performance assessment (PA) of a geologic repository for nuclear waste disposal. Such a model is generally subjected to rigorous regulatory scrutiny. In this article, we highlight key guiding principles for STM development and validation in nuclear waste management. We illustrate these principles by closely examining three recently developed thermodynamic models with the Pitzer formalism for aqueous H+—Nd3+—NO3−(—oxalate) systems, in reverse alphabetical order of the authors: the XW model developed by Xiong and Wang, the OWC model developed by Oakes et al., and the GLC model developed by Guignot et al. Among these, the XW model deals with trace activity coefficients for Nd(III), while the OWC and GLC models are for concentrated Nd(NO3)3 electrolyte solutions. The principles highlighted include the following: (1) Validation against independent experimental data: A model should be validated against experimental data or field observations that have not been used in the original model parameterization. We tested the XW model against multiple independent experimental data sets, including electromotive force (EMF), solubility, water vapor, and water activity measurements. The results show that the XW model is accurate and valid for its intended use of predicting trace activity coefficients and therefore Nd solubility in repository environments. (2) Testing for relevant and sensitive variables: Solution pH is such a variable for an STM and is easily acquirable. All three models are checked for their ability to predict pH conditions in Nd(NO3)3 electrolyte solutions. The OWC model fails to provide a reasonable estimate for solution pH conditions, casting serious doubt on its validity for a source-term calculation. In contrast, both the XW and GLC models predict close-to-neutral pH values, in agreement with experimental measurements. (3) Honoring physical constraints: Upon close examination, it is found that the Nd(III)-NO3 association scheme in the OWC model suffers from two shortcomings. First, its second stepwise stability constant for Nd(NO3)2+ (log K2) is much higher than the first stepwise stability constant for NdNO32+ (log K1), violating the general rule that (log K2 – log K1) < 0, or equivalently $K_2/K_1 < 1$. Second, the OWC model predicts abnormally high activity coefficients for Nd(NO3)2+ (up to ~900) as the concentration increases. (4) Minimizing degrees of freedom for model fitting: The OWC model with nine fitted parameters is compared with the GLC model with five fitted parameters, as both models apply to the concentrated region for Nd(NO3)3 electrolyte solutions. The latter appears superior because it fits the osmotic coefficient data equally well with fewer model parameters. The work presented here thus illustrates the salient points of geochemical model development, selection, and validation in nuclear waste management.
Brazing and soldering are metallurgical joining techniques that use a wetting molten metal to create a joint between two faying surfaces. The quality of the brazing process depends strongly on the wetting properties of the molten filler metal, namely the surface tension and contact angle, and the resulting joint can be susceptible to various defects, such as run-out and underfill, if the material properties or joining conditions are not suitable. In this work, we implement a finite element simulation to predict the formation of such defects in braze processes. This model incorporates both fluid–structure interaction through an arbitrary Lagrangian–Eulerian technique and free surface wetting through conformal decomposition finite element modeling. Upon validating our numerical simulations against experimental run-out studies on a silver-Kovar system, we then use the model to predict run-out and underfill in systems with variable surface tension, contact angles, and applied pressure. Finally, we consider variable joint/surface geometries and show how different geometrical configurations can help to mitigate run-out. This work aims to understand how brazing defects arise and validate a coupled wetting and fluid–structure interaction simulation that can be used for other industrial problems.
A striking example of frustration in physics is Hofstadter's butterfly, a fractal structure that emerges from the competition between a crystal's lattice periodicity and the magnetic length of an applied field. Current methods for predicting the topological invariants associated with Hofstadter's butterfly are challenging or impossible to apply to a range of materials, including those that are disordered or lack a bulk spectral gap. Here, we demonstrate a framework for predicting a material's local Chern markers using its position-space description and validate it against experimental observations of quantum transport in artificial graphene in a semiconductor heterostructure, inherently accounting for fabrication disorder strong enough to close the bulk spectral gap. By resolving local changes in the system's topology, we reveal the topological origins of antidot-localized states that appear in artificial graphene in the presence of a magnetic field. Moreover, we show the breadth of this framework by simulating how Hofstadter's butterfly emerges from an initially unpatterned 2D electron gas as the system's potential strength is increased and predict that artificial graphene becomes a topological insulator at the critical magnetic field. Overall, we anticipate that a position-space approach to determine a material's Chern invariant without requiring prior knowledge of its occupied states or bulk spectral gaps will enable a broad array of fundamental inquiries and provide a novel route to material discovery, especially in metallic, aperiodic, and disordered systems.
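One position-space construction consistent with the abstract's description (notation mine; details may differ from the paper) is the spectral localizer index: with position operators $X$, $Y$, Hamiltonian $H$, and a scaling constant $\kappa$,

$$L_{(x,y,E)}(X,Y,H)=\begin{pmatrix} H-E & \kappa(X-x)-i\kappa(Y-y)\\ \kappa(X-x)+i\kappa(Y-y) & -(H-E)\end{pmatrix},\qquad C_{\mathrm L}(x,y,E)=\tfrac12\,\operatorname{sig}\,L_{(x,y,E)},$$

where $\operatorname{sig}$ denotes the matrix signature. Because $C_{\mathrm L}$ is evaluated at a chosen position $(x,y)$ and energy $E$, it requires neither the occupied-state projector nor a bulk spectral gap, matching the capabilities claimed above.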
This paper develops a novel method for reconstructing the full-field response of structural dynamic systems using sparse measurements. The singular value decomposition is applied to a frequency response matrix relating the structural response to physical loads, base motion, or modal loads. The left singular vectors form a non-physical reduced basis that can be used for response reconstruction with far fewer sensors than existing methods. The contributions of the singular vectors to measured response are termed singular-vector loads (SVLs) and are used in a regularized Bayesian framework to generate full-field response estimates and confidence intervals. The reconstruction framework is applicable to the estimation of single data records and power spectral densities from multiple records. Reconstruction is successfully performed in configurations where the number of SVLs to identify is less than, equal to, and greater than the number of sensors used for reconstruction. In a simulation featuring a seismically excited shear structure, SVL reconstruction significantly outperforms modal FRF-based reconstruction and successfully estimates full-field responses with as few as two uniaxial accelerometers. SVL reconstruction is further verified in a simulation featuring an acoustically excited cylinder. Finally, response reconstruction and uncertainty quantification are performed on an experimental structure with three shaker inputs and 27 triaxial accelerometer outputs.
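A numpy sketch of the core idea is below: take the SVD of an FRF matrix at one frequency, estimate the singular-vector load amplitudes from a few sensors by Tikhonov-regularized least squares, and reconstruct the full field. The matrix, sensor set, and regularization weight are synthetic placeholders, and the Bayesian machinery of the paper is omitted.

```python
# Hedged sketch: SVD-based reduced basis + regularized least squares
# for full-field response reconstruction from sparse measurements.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_in, n_svl = 100, 6, 4
H = rng.normal(size=(n_dof, n_in)) + 1j * rng.normal(size=(n_dof, n_in))  # FRF
U, s, Vh = np.linalg.svd(H, full_matrices=False)
basis = U[:, :n_svl]                               # non-physical reduced basis

x_true = H @ (rng.normal(size=n_in) + 1j * rng.normal(size=n_in))  # full field
sensors = rng.choice(n_dof, size=5, replace=False)                 # sparse set
y = x_true[sensors]

Bs, lam = basis[sensors], 1e-3                     # Tikhonov-regularized LS
q = np.linalg.solve(Bs.conj().T @ Bs + lam * np.eye(n_svl), Bs.conj().T @ y)
x_hat = basis @ q                                  # full-field estimate
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```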
Hydrogen geo-storage is attracting substantial interdisciplinary interest as a cost-effective and sustainable option for medium- and long-term storage. Hydrogen can be stored underground in diverse formations, including aquifers, salt caverns, and depleted oil and gas reservoirs. The wetting dynamics of the hydrogen-brine-rock system are critical for assessing both structural and residual storage capacities, and ensuring containment safety. Through molecular dynamics simulations, we explore how varying concentrations of cushion gases (CO2 or CH4) influence the wetting properties of hydrogen-brine-clay systems under geological conditions (15 MPa and 333 K). We employed models of talc and the hydroxylated basal face of kaolinite (kaoOH) as clay substrates. Our findings reveal that the effect of cushion gases on hydrogen-brine-clay wettability is strongly dependent on the clay-brine interactions. Notably, CO2 and CH4 reduce the water wettability of talc in hydrogen-brine-talc systems, while exerting no influence on the wettability of hydrogen-brine-kaoOH systems. Detailed analysis of free energy of cavity formation near clay surfaces, clay-brine interfacial tensions, and the Willard-Chandler surface for gas-brine interfaces elucidate the molecular mechanisms underlying wettability changes. Our simulations identify empirical correlations between wetting properties and the average free energy required to perturb a flat interface when clay-brine interactions are less dominant. Our thorough thermodynamic analysis of rock-fluid and fluid-fluid interactions, aligning with key experimental observations, underscores the utility of simulated interfacial properties in refining contact angle measurements and predicting experimentally relevant properties. These insights significantly enhance the assessment of gas geo-storage potential. Prospectively, the approaches and findings obtained from this study could form a basis for more advanced multiscale simulations that consider a range of geological and operational variables, potentially guiding the development and improvement of geo-storage systems in general, with a particular focus on hydrogen storage.
We consider numerical approaches for deterministic, finite-dimensional optimal control problems whose dynamics depend on unknown or uncertain parameters. We seek to amortize the solution over a set of relevant parameters in an offline stage to enable rapid decision-making and the ability to react to changes in the parameter in the online stage. To tackle the curse of dimensionality arising when the state and/or parameter are high-dimensional, we represent the policy using neural networks. We compare two training paradigms: First, our model-based approach leverages the dynamics and the definition of the objective function to learn the value function of the parameterized optimal control problem and obtain the policy using a feedback form. Second, we use actor-critic reinforcement learning to approximate the policy in a data-driven way. Using an example involving a two-dimensional convection-diffusion equation, which features high-dimensional state and parameter spaces, we investigate the accuracy and efficiency of both training paradigms. While both paradigms lead to a reasonable approximation of the policy, the model-based approach is more accurate and considerably reduces the number of PDE solves.
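A minimal torch sketch of the feedback form mentioned above, under assumptions not stated in the abstract (control-affine dynamics and quadratic control cost), in which case the policy follows from the gradient of the learned value function as $\mathbf u^\star = -R^{-1}B^\top\nabla_{\mathbf x}V(\mathbf x,\mathbf p)$. All sizes and matrices are placeholders.

```python
# Hedged sketch: policy from a value network via the feedback form.
import torch

value_net = torch.nn.Sequential(
    torch.nn.Linear(6, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
B = torch.randn(4, 2)            # control input matrix (stand-in)
R_inv = torch.eye(2)             # inverse control-cost weight (stand-in)

def policy(x, p):
    """u*(x, p) = -R^{-1} B^T grad_x V(x, p)."""
    x = x.clone().requires_grad_(True)
    V = value_net(torch.cat([x, p])).squeeze()
    (grad_x,) = torch.autograd.grad(V, x)
    return -(R_inv @ B.T @ grad_x)

u = policy(torch.randn(4), torch.randn(2))
print(u)
```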
Yu, Xi; Wilhelm, Benjamin; Holmes, Danielle; Vaartjes, Arjen; Schwienbacher, Daniel; Nurizzo, Martin; Kringhoj, Anders; Van Blankenstein, Mark R.; Jakob, Alexander M.; Gupta, Pragati; Hudson, Fay E.; Itoh, Kohei M.; Murray, Riley J.; Blume-Kohout, Robin; Ladd, Thaddeus D.; Dzurak, Andrew S.; Sanders, Barry C.; Jamieson, David N.; Morello, Andrea
High-dimensional quantum systems are a valuable resource for quantum information processing. They can be used to encode error-correctable logical qubits, which has been demonstrated using continuous-variable states in microwave cavities or the motional modes of trapped ions. For example, high-dimensional systems can be used to realize ‘Schrödinger cat’ states, which are superpositions of widely displaced coherent states that can be used to illustrate quantum effects at large scales. Recent proposals have suggested encoding qubits in high-spin atomic nuclei, which are finite-dimensional systems that can host hardware-efficient versions of continuous-variable codes. Here we demonstrate the creation and manipulation of Schrödinger cat states using the spin-7/2 nucleus of an antimony atom embedded in a silicon nanoelectronic device. We use a multi-frequency control scheme to produce spin rotations that preserve the symmetry of the qudit, and we construct logical Pauli operations for qubits encoded in the Schrödinger cat states. Our work demonstrates the ability to prepare and control non-classical resource states, which is a prerequisite for applications in quantum information processing and quantum error correction, using our scalable, manufacturable semiconductor platform.
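For reference, in the standard continuous-variable notation (not taken from this paper; the nuclear-spin realization replaces the displaced oscillator states with coherent spin states on the spin-7/2 Bloch sphere), a Schrödinger cat state is

$$|\mathrm{cat}_\pm\rangle = \frac{|\alpha\rangle \pm |-\alpha\rangle}{\sqrt{2\left(1 \pm e^{-2|\alpha|^2}\right)}},$$

where the normalization follows from $\langle\alpha|-\alpha\rangle = e^{-2|\alpha|^2}$, so the two branches become effectively orthogonal for large $|\alpha|$.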
We introduce physics-informed multimodal autoencoders (PIMA), a variational inference framework for discovering shared information in multimodal datasets. Individual modalities are embedded into a shared latent space and fused through a product-of-experts formulation, enabling a Gaussian mixture prior to identify shared features. Sampling from clusters allows cross-modal generative modeling, with a mixture-of-experts decoder that imposes inductive biases from prior scientific knowledge and thereby imparts structured disentanglement of the latent space. This approach enables cross-modal inference and the discovery of features in high-dimensional heterogeneous datasets. Consequently, this approach provides a means to discover fingerprints in multimodal scientific datasets and to avoid traditional bottlenecks related to high-fidelity measurement and characterization of scientific datasets.
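The product-of-experts fusion named above has a standard closed form for Gaussian experts (my notation, consistent with but not copied from the paper): if each modality encoder outputs $q_m(\mathbf z \mid \mathbf x_m) = \mathcal N(\boldsymbol\mu_m, \boldsymbol\Sigma_m)$, the fused posterior is Gaussian with

$$\boldsymbol\Sigma^{-1} = \sum_m \boldsymbol\Sigma_m^{-1}, \qquad \boldsymbol\mu = \boldsymbol\Sigma\sum_m \boldsymbol\Sigma_m^{-1}\boldsymbol\mu_m,$$

so each modality sharpens the shared latent estimate in proportion to its precision, and missing modalities simply drop out of the sums.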
The objective of this work was to develop a machine learning (ML) ensemble that could assist pebble bed reactor (PBR) verification by evaluating whether a given pebble circulating through a PBR was normal or anomalous using gamma spectroscopy measurements from a notional PBR burnup measurement system (BUMS). Using a PBR reference design, data sets of synthetic gamma spectra representative of BUMS measurements of normal and anomalous pebbles that may be used to produce special fissile material were generated to train and test an ML anomaly detection ensemble on two reference scenarios: substitution of normal pebbles with target pebbles for production of Pu or 233U. The ML ensemble correctly identified all anomalous pebbles in the testing data set; while perfect ensemble performance is normally indicative of overfitting, it was concluded that the significantly lower photon intensity of the target pebbles produced distinctly less intense photon spectra, such that perfect ensemble performance was expected.
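The skeleton of such an ensemble can be sketched as below; the abstract does not specify the ensemble members or voting scheme, so the two detectors and the unanimity vote here are illustrative stand-ins, and the spectra are synthetic.

```python
# Hedged sketch: a two-member anomaly-detection ensemble on synthetic spectra.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.gamma(2.0, 1.0, size=(500, 64))        # stand-in normal spectra
anom = 0.3 * rng.gamma(2.0, 1.0, size=(20, 64))     # fainter target spectra

models = [IsolationForest(random_state=0), OneClassSVM(nu=0.05)]
for m in models:
    m.fit(normal)                                   # train on normal pebbles only
votes = sum((m.predict(anom) == -1).astype(int) for m in models)
print("anomalies flagged by both detectors:", int((votes == 2).sum()))
```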
The purpose of this protocol is to define procedures and practices to be used by the PACT center for field testing of metal halide perovskite (MHP) photovoltaic (PV) modules. The protocol defines the physical, electrical, and analytical configuration of the tests and applies equally to mounting systems at a fixed orientation or sun tracking systems. While standards exist for outdoor testing of conventional PV modules, these do not anticipate the unique electrical behavior of perovskite cells. Further, the existing standards are oriented toward mature, relatively stable products with lifetimes that can be measured on the scale of years to decades. The state of the art for MHP modules is still immature, with considerable sample-to-sample variation among nominally identical modules. Version 0.0 of this protocol does not define a minimum test duration, although the intent is for modules to be fielded for periods ranging from weeks to months. This protocol draws from relevant parts of existing standards, and where necessary includes modifications specific to the behavior of perovskites.
Underground caverns in a salt dome are promising geologic features to store hydrogen because of salt's extremely low permeability and self-healing behavior. The salt cavern storage community, however, has not fully understood the geomechanical behaviors of salt rock driven by quick operation cycles of injection–production, which may significantly impact the cost-effective storage-recovery performance of multiple caverns. Our field-scale generic model captures the impact of cyclic loading–unloading on the salt creep behavior and deformation under different cycle frequencies, operating pressure, and spatial order of operating cavern(s). This systematic simulation study indicates that the initial operation cycle and arrangement of multiple caverns play a significant role in the creep-driven loss of cavern volumes and cavern deformation. Our future study will develop a new salt constitutive model based on geomechanical tests of site-specific salt rock to probe the cyclic behaviors of salt precisely both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and damage-healing mechanism.
The ⟨a⟩-type screw dislocations are known to be significant mediators of plasticity in hexagonal-close-packed (HCP) metals. These dislocations have polymorphic core structures, and subtle changes in the relative energies of these core structures are known to have a large impact on the dynamics of the dislocations. This work identifies a previously neglected long-range elastic interstitial-solute/dislocation interaction that influences the core structures. Essentially, interstitial solutes induce a change in the dislocation core structure to minimize the energy of interaction between the solutes and the dislocation. Molecular dynamics simulations, continuum linear elasticity, and statistical analysis show that this long-range interaction can locally alter the dislocation cores so that many different polymorphs appear along a single dislocation not only because of direct contact between interstitials and the dislocation core but also because of this long-range elastic interaction.
This article describes the theory, analysis, and initial bench-top testing of a minimally invasive rotational resonator designed to produce small amounts of electrical energy for use in oceanic observation buoys. This work details the system of equations that governs such a resonator, its potential power production, and its predicted effects on the modified motion of the buoy. Finally, a bench-top test apparatus is designed and tested to identify the system and verify the system of equations empirically.
Low-velocity impact of 2D woven glass fiber reinforced polymer (GFRP) and carbon fiber reinforced polymer (CFRP) composite laminates was studied experimentally and numerically. Hybrid laminates containing blocked layers of GFRP/CFRP/GFRP with all plies oriented at 0° were investigated. Relatively high impact energies were used to obtain full perforation of the laminate in a low-velocity impact setup. Numerical simulations were carried out using the in-house transient dynamics finite element code, Sierra/SM, developed at Sandia National Laboratories. A three-dimensional continuum damage model was used to describe the response of a woven composite ply. Two methods for handling delamination were considered and compared: (1) cohesive zone modeling and (2) continuum damage mechanics. The reduced model size achieved by omission of the cohesive zone elements produced acceptable results at reduced computational cost. The comparison between different modeling techniques can be used to inform modeling decisions relevant to low velocity impact scenarios. The modeling was validated by comparing with the experimental results and showed good agreement in terms of predicted damage mechanisms and impactor velocity and force histories.
Epitaxial regrowth processes are presented for achieving Al-rich aluminum gallium nitride (AlGaN) high electron mobility transistors (HEMTs) with p-type gates with large, positive threshold voltage for enhancement mode operation and low resistance Ohmic contacts. Utilizing a deep gate recess etch into the channel and an epitaxial regrown p-AlGaN gate structure, an Al0.85Ga0.15N barrier/Al0.50Ga0.50N channel HEMT with a large positive threshold voltage (VTH = +3.5 V) and negligible gate leakage is demonstrated. Epitaxial regrowth of AlGaN avoids the use of gate insulators, which can suffer from the charge trapping effects observed in typical dielectric layers deposited on AlGaN. Low resistance Ohmic contacts (minimum specific contact resistance = 4 × 10−6 Ω cm2, average = 1.8 × 10−4 Ω cm2) are demonstrated in an Al0.85Ga0.15N barrier/Al0.68Ga0.32N channel HEMT by employing epitaxial regrowth of a heavily doped, n-type, reverse compositionally graded epitaxial structure. The combination of low-leakage, large positive threshold p-gates and low resistance Ohmic contacts by the described regrowth processes provides a pathway to realizing high-current, enhancement-mode, Al-rich AlGaN-based ultra-wide bandgap transistors.
Here we examine various forms of spectrum and associated pseudospectrum that can be defined for noncommuting d-tuples of Hermitian elements of a C$\ast$-algebra, focusing on the forms of multivariable pseudospectra that are finding applications in physics. The emphasis is on theoretical calculations of examples, particularly noncommuting pairs and triples of operators on infinite-dimensional Hilbert space: the universal pair of projections in a C$\ast$-algebra, the usual position and momentum operators, and triples of tridiagonal operators. We prove a relation between the quadratic pseudospectrum and Clifford pseudospectra, as well as results about how symmetries in a tuple of operators can lead to symmetries in the various pseudospectra.
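In the convention common in this literature (my notation; conventions vary), the quadratic pseudospectrum of a tuple $A = (A_1,\dots,A_d)$ is defined through

$$\mu^{\mathrm{qz}}_{\boldsymbol\lambda}(A) \;=\; \min_{\lVert x\rVert=1}\Big(\sum_{j=1}^{d}\big\lVert (A_j-\lambda_j)x\big\rVert^{2}\Big)^{1/2},\qquad \Lambda^{\mathrm{qz}}_{\epsilon}(A) \;=\; \big\{\boldsymbol\lambda\in\mathbb R^{d} : \mu^{\mathrm{qz}}_{\boldsymbol\lambda}(A)\le\epsilon\big\},$$

while the Clifford pseudospectrum is built instead from the spectral localizer $\sum_j (A_j-\lambda_j)\otimes\Gamma_j$ with Clifford generators $\Gamma_j$; the relation proved in the paper connects these two constructions.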
A new particle-based reweighting method is developed and demonstrated in the Aleph Particle-in-Cell with Direct Simulation Monte Carlo (PIC-DSMC) program. Novel splitting and merging algorithms ensure that modified particles maintain physically consistent positions and velocities. This method allows a single reweighting simulation to efficiently model plasma evolution over orders of magnitude variation in density, while accurately preserving energy distribution functions (EDFs). Demonstrations on electrostatic sheath and collisional rate dynamics show that reweighting simulations achieve accuracy comparable to fixed weight simulations with substantial computational time savings. This highly performant reweighting method is recommended for modeling plasma applications that require accurate resolution of EDFs or exhibit significant density variations in time or space.
Simulating subsurface contaminant transport at the kilometer-scale often entails modeling reactive flow and transport within and through complex geologic structures. These structures are typically meshed by hand, and as a result geologic structure is usually represented by one or a few deterministically generated geological models for uncertainty studies of flow and transport in the subsurface. Uncertainty in geologic structure can have a significant impact on contaminant transport. In this study, the impact of geologic structure on contaminant tracer transport in a shale formation is investigated for a simplified generic deep geologic repository for permanent disposal of spent nuclear fuel. An open-source modeling framework is used to perform a sensitivity analysis study on transport of two tracers from a generic spent nuclear fuel repository with uncertain location of the interfaces between the strata of the geologic structure. The automated workflow uses sampled realizations of the geological structural model in addition to uncertain flow parameters in a nested sensitivity analysis. Concentrations of the tracers at observation points within, in line with, and downstream of the repository are used as the quantities of interest for determining model sensitivity to input parameters and geological realization. Finally, the results of the study indicate that the location of strata interfaces in the geological structure has a first-order impact on tracer transport in the example shale formation, and that this impact may be greater than that of the uncertain flow parameters.
There is growing interest to extend low-rank matrix decompositions to multi-way arrays, or tensors. One fundamental low-rank tensor decomposition is the canonical polyadic decomposition (CPD). The challenge of fitting a low-rank, nonnegative CPD model to Poisson-distributed count data is of particular interest. Several popular algorithms use local search methods to approximate the maximum likelihood estimator (MLE) of the Poisson CPD model. This work presents two new algorithms that extend state-of-the-art local methods for Poisson CPD. Hybrid GCP-CPAPR combines Generalized Canonical Decomposition (GCP) with stochastic optimization and CP Alternating Poisson Regression (CPAPR), a deterministic algorithm, to increase the probability of converging to the MLE over either method used alone. Restarted CPAPR with SVDrop uses a heuristic based on the singular values of the CPD model unfoldings to identify convergence toward optimizers that are not the MLE and restarts within the feasible domain of the optimization problem, thus reducing overall computational cost when using a multi-start strategy. We provide empirical evidence that indicates our approaches outperform existing methods with respect to converging to the Poisson CPD MLE.
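The objective both methods target can be sketched compactly: for a 3-way count tensor, the Poisson negative log-likelihood of a rank-R nonnegative CP model is evaluated below with numpy (the optimization algorithms themselves, GCP, CPAPR, and the SVDrop heuristic, are beyond this sketch, and all data are synthetic).

```python
# Hedged sketch: Poisson NLL of a rank-R CP model for a 3-way count tensor.
import numpy as np

def cp_to_tensor(A, B, C):
    # m[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum("ir,jr,kr->ijk", A, B, C)

def poisson_cp_nll(X, A, B, C, eps=1e-12):
    M = cp_to_tensor(A, B, C)
    # Poisson NLL up to the constant sum(log X!):  sum(m) - sum(x * log m)
    return M.sum() - (X * np.log(M + eps)).sum()

rng = np.random.default_rng(0)
shape, R = (10, 12, 14), 3
A, B, C = (rng.uniform(0.1, 1.0, size=(n, R)) for n in shape)
X = rng.poisson(cp_to_tensor(A, B, C))          # synthetic count data
print("NLL at the generating factors:", poisson_cp_nll(X, A, B, C))
```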
Engineering and applied science rely on computational experiments to rigorously study physical systems. The mathematical models used to probe these systems are highly complex, and sampling-intensive studies often require prohibitively many simulations for acceptable accuracy. Surrogate models provide a means of circumventing the high computational expense of sampling such complex models. In particular, polynomial chaos expansions (PCEs) have been successfully used for uncertainty quantification studies of deterministic models where the dominant source of uncertainty is parametric. We discuss an extension to conventional PCE surrogate modeling to enable surrogate construction for stochastic computational models that have intrinsic noise in addition to parametric uncertainty. We develop a PCE surrogate on a joint space of intrinsic and parametric uncertainty, enabled by Rosenblatt transformations, which are evaluated via kernel density estimation of the associated conditional cumulative distributions. Furthermore, we extend the construction to random field data via the Karhunen-Loève expansion. We then take advantage of closed-form solutions for computing PCE Sobol indices to perform a global sensitivity analysis of the model which quantifies the intrinsic noise contribution to the overall model output variance. Additionally, the resulting joint PCE is generative in the sense that it allows generating random realizations at any input parameter setting that are statistically approximately equivalent to realizations from the underlying stochastic model. The method is demonstrated on a chemical catalysis example model and a synthetic example controlled by a parameter that enables a switch from unimodal to bimodal response distributions.
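For readers unfamiliar with the machinery, the sketch below builds a plain regression-based PCE for two uniform inputs and computes main-effect Sobol indices in closed form from the coefficients. The joint intrinsic-plus-parametric construction, Rosenblatt/KDE transforms, and Karhunen-Loève extension from the paper are omitted; the toy model and degree are placeholders.

```python
# Hedged sketch: Legendre PCE by least squares + closed-form Sobol indices.
import numpy as np
from numpy.polynomial import legendre as L
from itertools import product

def legendre_val(n, x):                      # P_n(x) via unit coefficient vector
    return L.legval(x, [0] * n + [1])

p = 3
multi = [(i, j) for i, j in product(range(p + 1), repeat=2) if i + j <= p]

def model(x1, x2):                           # toy stand-in for the simulator
    return np.sin(np.pi * x1) + 0.5 * x2**2 * x1

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = model(X[:, 0], X[:, 1])
Psi = np.column_stack([legendre_val(i, X[:, 0]) * legendre_val(j, X[:, 1])
                       for i, j in multi])
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Var of P_i(x1)P_j(x2) under U(-1,1)^2 is 1/((2i+1)(2j+1)); (0,0) is the mean.
w = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in multi])
var_terms = c**2 * w
total_var = var_terms[1:].sum()              # exclude the constant term
S1 = sum(v for (i, j), v in zip(multi, var_terms) if i > 0 and j == 0) / total_var
S2 = sum(v for (i, j), v in zip(multi, var_terms) if j > 0 and i == 0) / total_var
print(f"main-effect Sobol indices: S1={S1:.3f}, S2={S2:.3f}")
```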
Water security and climate change are important priorities for communities and regions worldwide. The intersections between water and climate change extend across many environmental and human activities. This Primer is intended as an introduction, grounded in examples, for students and others considering the interactions between climate, water, and society. In this Primer, we summarize key intersections between water and climate across four sectors: environment; drinking water, sanitation, and hygiene; food and agriculture; and energy. We begin with an overview of the fundamental water dynamics within each of these four sectors, and then discuss how climate change is impacting water and society within and across these sectors. Emphasizing the relationships and interconnectedness between water and climate change can encourage systems thinking, which can show how activities in one sector may influence activities or outcomes in other sectors. We argue that to achieve a resilient and sustainable water future under climate change, proposed solutions must consider the water–climate nexus to ensure the interconnected roles of water across sectors are not overlooked. Toward that end, we offer an initial set of guiding questions that can be used to inform the development of more holistic climate solutions. This article is categorized under: Science of Water > Water and Environmental Change Engineering Water > Water, Health, and Sanitation Human Water > Value of Water.
We demonstrate magnetic anomaly detection (MAD) using an array of 24 commercial induction coil magnetometers at stand-off distances of 260-1200 m from a pulsed 99.8(3) kA·m² magnetic dipole source. The sparse array is used to estimate the magnetic dipole location, magnitude, and orientation. We demonstrate how independent component analysis (ICA) improves the accuracy and precision of the magnetometer array when estimating the dipole parameters. Using sensor responses recorded from individual source pulses, we estimate the dipole location to within 29 ± 2 m, the magnitude to within 3 ± kA·m², and the dipole orientation error to within 19 ± 0.6°.
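The sketch below illustrates the ICA step on synthetic data: a dipole pulse mixed with drift-like interference across 24 channels is unmixed with FastICA, after which the component most correlated with the pulse would feed the dipole parameter estimation. All signals and sizes are stand-ins.

```python
# Hedged sketch: FastICA unmixing of multichannel magnetometer records.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
pulse = np.exp(-((t - 0.5) / 0.01) ** 2)                      # stand-in pulse
noise = np.cumsum(rng.normal(size=(2000, 2)), axis=0) * 0.01  # drift/interference
S = np.column_stack([pulse, noise[:, 0], noise[:, 1]])        # latent sources
A = rng.normal(size=(24, 3))                                  # mixing to 24 sensors
X = S @ A.T + 0.01 * rng.normal(size=(2000, 24))

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)                                  # recovered sources
best = np.argmax([abs(np.corrcoef(pulse, S_hat[:, k])[0, 1]) for k in range(3)])
print("component most correlated with the pulse:", best)
```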
In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex and nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition—a bound that ensures the trial iterate produces sufficient decrease of the subproblem model. In this paper, we expound on various proximal trust-region subproblem solvers that generalize traditional trust-region methods for smooth unconstrained and convex-constrained problems. We introduce a simplified spectral proximal gradient solver, a truncated nonlinear conjugate gradient solver, and a dogleg method. We compare algorithm performance on examples from data science and PDE-constrained optimization.
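A minimal sketch of one solver family named above, a spectral (Barzilai-Borwein steplength) proximal gradient iteration for a smooth term plus an l1 term, with the trust region imposed as an l-infinity ball; for 1-D convex pieces, clipping the soft-threshold output onto the interval is the exact composite prox. The example problem and parameters are placeholders, not from the paper.

```python
# Hedged sketch: spectral proximal gradient for min f(x) + ||x||_1, ||x||_inf <= Delta.
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)   # soft threshold

def solve_subproblem(grad_f, x0, Delta, t0=1.0, iters=100):
    x, t = x0.copy(), t0
    x_prev, g_prev = None, None
    for _ in range(iters):
        g = grad_f(x)
        if x_prev is not None:                           # Barzilai-Borwein step
            s, ydiff = x - x_prev, g - g_prev
            if abs(s @ ydiff) > 1e-12:
                t = abs((s @ s) / (s @ ydiff))
        x_prev, g_prev = x, g
        x = prox_l1(x - t * g, t)
        x = np.clip(x, -Delta, Delta)                    # trust-region projection
    return x

A = np.diag([1.0, 4.0, 9.0]); b = np.array([1.0, 2.0, 3.0])
x = solve_subproblem(lambda x: A @ x - b, np.zeros(3), Delta=1.0)
print(x)
```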
Carbon capture, utilization, and storage (CCUS) is an important pathway for meeting climate mitigation goals. While the economic viability of CCUS is well understood, previous studies do not evaluate the economic feasibility of carbon capture and storage (CCS) in the Permian Basin specifically with regard to the new Section 45Q tax credits. We developed a technoeconomic analysis method, evaluated the economic feasibility of CCS at acid gas injection (AGI) wells, and assessed the implications of Section 45Q tax credits for CCS at the AGIs. We find that the compressors, well depth, and the permit and monitoring costs drive the facility costs. Compressors are the predominant contributors to the capital and operating expenditures driving the levelized cost of CO2 storage. Strategic cost reduction measures identified include 1) sourcing of low-cost electricity and 2) optimizing operational efficiency in well operations. In evaluating the impact of the tax credits on CCS projects, facility scale proved decisive. We found that facilities with an annual injection rate exceeding 10,000 MT demonstrate economic viability contingent upon the procurement of inputs at the least cost. New construction of AGI wells was found to be economically viable at a storage capacity of 100,000 MT. The basin is heavily focused on CCUS (tax credit of $65/MT CO2), which overshadows CCS ($85/MT CO2) opportunities. Balancing the dual objectives of CCS and CCUS requires planning and coordination for optimal resource and pore space utilization to attain the basin's decarbonization potential. We also found that CCS on AGI is a lower-cost CCS option compared to CCS in other industries.
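The viability test at the heart of such an analysis reduces to comparing a levelized cost of storage against the 45Q credit, as in the back-of-the-envelope sketch below; every input number is an illustrative placeholder, not a value from the study.

```python
# Hedged sketch: levelized cost of CO2 storage vs. the Section 45Q CCS credit.
capex = 12e6            # $ up-front (compressors, well, permitting) -- placeholder
opex = 1.2e6            # $/yr (electricity, monitoring, operations) -- placeholder
rate, years = 0.08, 12  # discount rate and project life -- placeholders
tonnes_per_year = 100_000

crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)  # capital recovery
lcos = (capex * crf + opex) / tonnes_per_year                 # $/MT CO2 stored
credit_ccs = 85.0                                             # $/MT, 45Q for CCS
print(f"levelized cost ${lcos:.2f}/MT vs 45Q credit ${credit_ccs:.0f}/MT -> "
      f"{'viable' if lcos < credit_ccs else 'not viable'} under these inputs")
```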
Legacy and modern-day ablation codes typically assume equilibrium pyrolysis gas chemistry. Yet, experimental data suggest that speciation from resin decomposition is far from equilibrium. A thermal and chemical kinetic study was performed on pyrolysis gas advection through a porous char, using the Theoretical Ablative Composite for Open Testing (TACOT) as a demonstrator material. The finite-element tool SIERRA/Aria simulated the ablation of TACOT under various conditions. Temperature and phenolic decomposition rates generated from Aria were applied as inputs to a simulated network of perfectly stirred reactors (PSRs) in the chemical solver Cantera. A high-fidelity combustion mechanism computed the gas composition and thermal properties of the advecting pyrolyzate. The results indicate that pyrolysis gases do not rapidly achieve chemical equilibrium while traveling through the simulated material. Instead, a highly chemically reactive zone exists in the ablator between 1400 and 2500 K, wherein the modeled pyrolysis gases transition from a chemically frozen state to chemical equilibrium. These finite-rate results demonstrate a significant departure in computed pyrolysis gas properties from those derived from equilibrium solvers. Under the same conditions, finite-rate-derived gas is estimated to provide up to 50% less heat absorption than equilibrium-derived gas. This discrepancy suggests that nonequilibrium pyrolysis gas chemistry could substantially impact ablator material response models.
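The question the study poses can be probed in miniature with Cantera: does a pyrolysis gas reach chemical equilibrium within its residence time at a given temperature? In the sketch below, GRI-Mech 3.0, the inlet composition, and the residence time are illustrative stand-ins, not the mechanism, pyrolyzate, or conditions from the paper.

```python
# Hedged Cantera sketch: finite-rate (fixed-T batch) vs. equilibrium composition.
import cantera as ct

for T in (1400.0, 1800.0, 2500.0):   # temperatures spanning the reactive zone
    gas = ct.Solution("gri30.yaml")
    gas.TPX = T, ct.one_atm, "CH4:1, H2:2, CO:1"      # stand-in pyrolyzate
    reactor = ct.IdealGasConstPressureReactor(gas, energy="off")  # hold T fixed
    net = ct.ReactorNet([reactor])
    net.advance(1e-3)                                 # ~1 ms residence (placeholder)
    h_finite_rate = reactor.thermo.enthalpy_mass

    eq = ct.Solution("gri30.yaml")
    eq.TPX = T, ct.one_atm, "CH4:1, H2:2, CO:1"
    eq.equilibrate("TP")                              # equilibrium at same T, P
    print(f"T={T:.0f} K: finite-rate h={h_finite_rate:.3e} J/kg, "
          f"equilibrium h={eq.enthalpy_mass:.3e} J/kg")
```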
Krack, Malte; Brake, Matthew R.W.; Schwingshackl, Christoph; Gross, Johann; Hippold, Patrick; Lasen, Matias; Dini, Daniele; Salles, Loic; Allen, Matthew S.; Shetty, Drithi; Payne, Courtney A.; Willner, Kai; Lengger, Michael; Khan, Moheimin Y.; Ortiz, Jonel; Najera-Flores, David A.; Kuether, Robert J.; Miles, Paul R.; Xu, Chao; Yang, Huiyi; Jalali, Hassan; Taghipour, Javad; Khodaparast, Hamed H.; Friswell, Michael I.; Tiso, Paolo; Morsy, Ahmed A.; Bhattu, Arati; Hermann, Svenja; Jamia, Nidhal; Ozguven, H.N.; Muller, Florian; Scheel, Maren
The present article summarizes the submissions to the Tribomechadynamics Research Challenge announced in 2021. The task was a blind prediction of the vibration behavior of a system comprising a thin plate clamped on two sides via bolted joints. Both geometric and frictional contact nonlinearities are expected to be relevant. Provided were the CAD models and technical drawings of all parts, as well as assembly instructions. The main objective was to predict the frequency and damping ratio of the lowest-frequency mode as a function of amplitude. Many different prediction approaches were pursued, ranging from well-known methods to very recently developed ones. After the submission deadline, the system was fabricated and tested. The aim of this article is to evaluate the current state of the art in modeling and vibration prediction, and to provide directions for future methodological advancements.
The sensitivity analysis algorithms developed by the radiation transport community in neutron transport codes such as MCNP and SCALE are used extensively in fields such as nuclear criticality safety. However, these techniques have seldom been considered for electron transport applications. Previously, the differential-operator method with single-scatter capability was implemented in Sandia National Laboratories’ Integrated TIGER Series (ITS) coupled electron-photon transport code. This work extends the available sensitivity estimation techniques in ITS by implementing an adjoint-based sensitivity method, GEAR-MC, to strengthen its sensitivity analysis capabilities. To verify the accuracy of this method when extended to coupled electron-photon transport, it is compared against the central-difference and differential-operator methodologies in estimating sensitivity coefficients for an experiment performed by McLaughlin and Hussman. Energy deposition sensitivities were calculated using all three methods, and their agreement provides confidence in the accuracy of the newly implemented method. Unlike the current implementation of the differential-operator method in ITS, the GEAR-MC method was implemented with the option to calculate energy-dependent energy deposition sensitivities, i.e., the sensitivity coefficients of energy deposition tallies to energy-dependent cross sections. The energy-dependent cross sections may be those of the material, of elements in the material, or of reactions of interest for an element. These sensitivities were compared to the energy-integrated sensitivity coefficients and exhibited a maximum percentage difference of 2.15%.
A variational phase field model for dynamic ductile fracture is presented. The model is designed for elasto-viscoplastic materials subjected to rapid deformations in which the effects of heat generation and material softening are dominant. The variational framework allows for the consistent inclusion of plastic dissipation in the heat equation as well as thermal softening. It employs a coalescence function to degrade fracture energy during regimes of high plastic flow. A variationally consistent form of the Johnson–Cook model is developed for use with the framework. Results from various benchmark problems in dynamic ductile fracture are presented to demonstrate capabilities. In particular, the ability of the model to regularize shear band formation and subsequent damage evolution in two- and three-dimensional problems is demonstrated. Importantly, these phenomena are naturally captured through the underlying physics without the need for phenomenological criteria such as stability thresholds for the onset of shear band formation.
Shands, Emerson W.; Morel, Jim E.; Ahrens, Cory D.; Franke, Brian C.
We derive a new Galerkin quadrature (GQ) method for S_N calculations that differs from the two methods preceding it in that a matrix inverse of an N x N matrix, where N is the number of directions in the quadrature set, is no longer required. Galerkin quadrature methods are designed for calculations with highly anisotropic scattering. Such methods are not simply special angular quadratures but also methods for representing the S_N scattering source that offer several advantages relative to the standard scattering source representation when highly truncated Legendre cross-section expansions must be used. Galerkin quadrature methods are also useful when the scattering is moderately anisotropic, but the quadrature being used is not sufficiently accurate for the order of the scattering source expansion that is required. We derive the new method and present computational results showing that its performance on two challenging problems is comparable to that of the two GQ methods that preceded it.
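To make the role of the N x N inverse concrete, the sketch below builds the classical Galerkin quadrature operators in 1D slab geometry, the setting where the construction is easiest to verify. This illustrates the older GQ construction that the new method avoids; all variable names are ours.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

N = 8                                  # number of discrete directions
mu, w = leggauss(N)                    # Gauss-Legendre quadrature set

# Moment-to-discrete matrix M: psi_i = sum_l M[i, l] * phi_l
M = np.column_stack([(2 * l + 1) / 2 * Legendre.basis(l)(mu)
                     for l in range(N)])

# Standard discrete-to-moment operator: phi_l = sum_i w_i P_l(mu_i) psi_i
D_std = np.array([[w[i] * Legendre.basis(l)(mu[i]) for i in range(N)]
                  for l in range(N)])

# Classical Galerkin quadrature instead takes D = M^{-1}, which treats the
# retained scattering moments exactly but costs an N x N matrix inverse
# (the expense the new method eliminates).
D_gq = np.linalg.inv(M)
print(np.allclose(D_gq, D_std))        # True for Gauss-Legendre in 1D
```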
This work introduces a comprehensive simulation tool that provides a robust 1D Schrödinger–Poisson solver for modeling the electrostatics of heterostructures with an arbitrary number of layers and non-uniform doping profiles, along with treatment of the partial ionization of dopants at low temperatures. The effective masses are derived from first-principles calculations. The solver is used to characterize three Ge1-xSnx/Ge heterostructures with non-uniform doping profiles and to determine the subband structure at various temperatures. The simulated sheet carrier densities show excellent agreement with the experimentally extracted data, demonstrating the capabilities of the solver.
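A minimal sketch of the self-consistent Schrödinger–Poisson loop that such a solver iterates is shown below; the grid, effective mass, sheet density, sign conventions, and mixing are deliberately simplified toy choices, not the tool's implementation.

```python
import numpy as np

# Toy self-consistent loop (illustrative units and values; Dirichlet BCs).
n, Lw = 200, 30e-9                          # grid points, well width [m]
dx = Lw / (n - 1)
hbar, m = 1.055e-34, 0.06 * 9.11e-31        # effective mass ~0.06 m0
q, eps = 1.602e-19, 12.0 * 8.854e-12
Ns = 1e15                                   # sheet density [m^-2]

lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
V = np.zeros(n)                             # potential energy [J]

for _ in range(50):
    # 1) Schrodinger: lowest subband in the current potential
    H = -hbar**2 / (2 * m) * lap + np.diag(V)
    E, psi = np.linalg.eigh(H)
    rho = Ns * psi[:, 0]**2 / dx            # electron density [m^-3]
    # 2) Poisson (Hartree): lap V = -q^2 rho / eps, V = 0 at both walls
    A = lap.copy()
    A[0, :] = A[-1, :] = 0.0
    A[0, 0] = A[-1, -1] = 1.0
    rhs = -q**2 * rho / eps
    rhs[0] = rhs[-1] = 0.0
    V = 0.9 * V + 0.1 * np.linalg.solve(A, rhs)   # damped update

print("ground subband energy [eV]:", E[0] / q)
```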
We introduce a new training algorithm for deep neural networks that utilize random complex exponential activation functions. Our approach employs a Markov chain Monte Carlo sampling procedure to iteratively train network layers, avoiding global and gradient-based optimization while maintaining error control. It consistently attains the theoretical approximation rate for residual networks with complex exponential activation functions, determined by network complexity. Additionally, it enables efficient learning of multiscale and high-frequency features, producing interpretable parameter distributions. Despite using sinusoidal basis functions, we do not observe Gibbs phenomena in approximating discontinuous target functions.
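As a toy of the random complex-exponential feature idea (not the paper's MCMC sampler), the sketch below draws frequencies at several scales, fits the layer coefficients by least squares, and keeps the best draw; the target function and sizes are our choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 400)
y = np.sign(np.sin(6 * np.pi * x))            # discontinuous target

def fit(freqs):
    # One layer of random complex-exponential features exp(i*w_k*x)
    Phi = np.exp(1j * np.outer(x, freqs))
    c, *_ = np.linalg.lstsq(Phi, y.astype(complex), rcond=None)
    return c, np.linalg.norm(Phi @ c - y)

best = None
for scale in [5.0, 20.0, 80.0]:               # crude multiscale proposals
    freqs = rng.normal(0.0, scale, size=64)   # (the paper samples by MCMC)
    c, err = fit(freqs)
    if best is None or err < best[2]:
        best = (freqs, c, err)
print("best residual:", best[2])
```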
The impact of high-altitude electromagnetic pulse events on the electric grid is not fully understood, and validated modeling of mitigations, such as lightning surge arresters (LSAs), is necessary to predict the propagation of very fast transients on the grid. Experimental validation of high-frequency models for surge arresters is an active area of research. This article experimentally validates a previously defined ZnO LSA model using four metal-oxide varistor pucks and nanosecond-scale pulses to measure voltage and current responses. The SPICE circuit models of the pucks showed good predictability compared to the measured arrester response when accounting for a testbed inductance of approximately 100 nH. Additionally, the comparatively high capacitance of low-profile arresters shows a favorable response to high-speed transients, indicating the potential for effective electromagnetic pulse mitigation with future materials design.
The Direct Simulation Monte Carlo (DSMC) method is used to numerically simulate test conditions in the Sandia Hypersonic Shock Tunnel (HST) facility. The setup consists of hypersonic flow over a cylinder, with freestream flow speeds of 4-5 km/s in a state of thermal non-equilibrium. We present comparisons of temperatures derived from spectrographic measurements of nitric oxide (NO) emission in the ultraviolet (UV) region with predictions from the DSMC solver. Furthermore, we present differences between spectrally banded imaging measurements taken during experiments in the infrared (IR) and UV regions and those obtained from numerical simulations.
Traditional Monte Carlo methods for particle transport utilize source iteration to express the solution, the flux density, of the transport equation as a Neumann series. Our contribution is to show that the particle paths simulated within source iteration are associated with the adjoint flux density and the adjoint particle paths are associated with the flux density. We make our assertion rigorous through the use of stochastic calculus by representing the particle path used in source iteration as a solution to a stochastic differential equation (SDE). The solution to the adjoint Boltzmann equation is then expressed in terms of the same SDE, and the solution to the Boltzmann equation is expressed in terms of the SDE associated with the adjoint particle process. An important consequence is that the particle paths used within source iteration simultaneously provide Monte Carlo samples of the flux density and adjoint flux density in the detector and source regions, respectively. The significant practical implication is that particle trajectories can be reused to obtain both forward and adjoint quantities of interest. To the best of our knowledge, the reuse of entire particle paths has not appeared in the literature. Monte Carlo simulations are presented to support the reuse of the particle paths.
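In our notation (not the authors'), the duality being exploited can be summarized as follows:

```latex
% Duality underlying the path reuse: source iteration expands the flux
% in a Neumann series in the transport operator K,
\phi = \sum_{n \ge 0} K^{n} q, \qquad
\phi^{\dagger} = \sum_{n \ge 0} (K^{\dagger})^{n} q^{\dagger},
% and each term can be scored against either the detector q^\dagger or
% the source q,
\langle \phi, q^{\dagger} \rangle
  = \sum_{n \ge 0} \langle K^{n} q, q^{\dagger} \rangle
  = \sum_{n \ge 0} \langle q, (K^{\dagger})^{n} q^{\dagger} \rangle
  = \langle q, \phi^{\dagger} \rangle,
% so the same sampled particle paths furnish Monte Carlo estimates of
% both the forward and adjoint quantities of interest.
```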
Tank farm workers involved in nuclear cleanup activities perform physically demanding tasks, typically while wearing heavy personal protective equipment (PPE). Exoskeleton devices have the potential to bring considerable benefit to this industry but have not been thoroughly studied in the context of nuclear cleanup. In this paper, we examine the performance of exoskeletons during a series of tasks emulating jobs performed on tank farms while participants wore PPE commonly deployed by tank farm workers. The goal of this study was to evaluate the effects of commercially available lower-body exoskeletons on a user’s gait kinematics and user perceptions. Three participants each tested three lower-body exoskeletons in a 70-min protocol consisting of level treadmill walking, incline treadmill walking, weighted treadmill walking, a weight lifting session, and a hand tool dexterity task. Results were compared to a no exoskeleton baseline condition and evaluated as individual case studies. The three participants showed a wide spectrum of user preferences and adaptations toward the devices. Individual case studies revealed that some users quickly adapted to select devices for certain tasks while others remained hesitant to use the devices. Temporal effects on gait change and perception were also observed for select participants in device usage over the course of the device session. Device benefit varied between tasks, but no conclusive aggregate trends were observed across devices for all tasks. Evidence suggests that device benefits observed for specific tasks may have been overshadowed by the wide array of tasks used in the protocol.
Vertical-axis wind turbines (VAWTs) have been the subject of research and development for nearly a century. However, this turbine architecture has fallen in and out of favor on multiple occasions. Beginning in the late 1970s, the U.S. Department of Energy sponsored an extensive experimental program through Sandia National Laboratories that produced a large body of experimental data from several highly instrumented turbines. The turbines designed, built, and tested included the 2-meter, 5-meter, 17-meter, and 34-meter machines in their respective configurations. This program spurred a commercial collaboration and resulted in the FloWind turbines. The FloWind turbines had several notable design changes from the experimental turbines that, in conjunction with a general lack of understanding at the time regarding fatigue prediction, led to the majority of the turbines failing prematurely during the late 1980s.
As quantum computing hardware becomes more complex with ongoing design innovations and growing capabilities, the quantum computing community needs increasingly powerful techniques for fabrication failure root-cause analysis. This is especially true for trapped-ion quantum computing. As trapped-ion quantum computing aims to scale to thousands of ions, the electrode numbers are growing to several hundred, with likely integrated photonic components also adding to the electrical and fabrication complexity, making faults even harder to locate. In this work, we used a high-resolution quantum magnetic imaging technique, based on nitrogen-vacancy centers in diamond, to investigate short-circuit faults in an ion trap chip. We imaged currents from these short-circuit faults to ground and compared them to intentionally created faults, finding that the root cause of the faults was failures in the on-chip trench capacitors. This work, where we exploited the performance advantages of a quantum magnetic sensing technique to troubleshoot a piece of quantum computing hardware, is a unique example of the evolving synergy between emerging quantum technologies to achieve capabilities that were previously inaccessible.
Stochastic collocation (SC) is a well-known non-intrusive method of constructing surrogate models for uncertainty quantification. In dynamical systems, SC is especially suited for full-field uncertainty propagation that characterizes the distributions of the high-dimensional solution fields of a model with stochastic input parameters. However, due to the highly nonlinear nature of the parameter-to-solution map in even the simplest dynamical systems, the constructed SC surrogates are often inaccurate. This work presents an alternative approach, where we apply the SC approximation over the dynamics of the model, rather than the solution. By combining the data-driven sparse identification of nonlinear dynamics framework with SC, we construct dynamics surrogates and integrate them through time to construct the surrogate solutions. We demonstrate that the SC-over-dynamics framework leads to smaller errors, both in terms of the approximated system trajectories as well as the model state distributions, when compared against full-field SC applied to the solutions directly. We present numerical evidence of this improvement using three test problems: a chaotic ordinary differential equation, and two partial differential equations from solid mechanics.
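The sketch below illustrates the idea on a scalar ODE: a sequentially thresholded least-squares (SINDy-style) regression recovers sparse dynamics coefficients at each collocation node, which the SC surrogate would then interpolate over the parameter and integrate. The library, nodes, and thresholds are our toy choices.

```python
import numpy as np

def stlsq(Theta, dxdt, lam=0.1, n_sweeps=10):
    # Sequentially thresholded least squares (the SINDy regression)
    xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(n_sweeps):
        small = np.abs(xi) < lam
        xi[small] = 0.0
        big = ~small
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
    return xi

nodes = np.array([0.5, 1.0, 1.5])              # collocation nodes theta_j
models = []
for theta in nodes:
    t = np.linspace(0, 4, 400)
    x = np.exp(-theta * t)                     # data from x' = -theta * x
    dxdt = np.gradient(x, t)
    Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
    models.append(stlsq(Theta, dxdt))          # recovers ~[0, -theta, 0, 0]

# The SC surrogate would now interpolate these sparse coefficients over
# theta and integrate the surrogate dynamics at new parameter samples.
print(np.round(models, 3))
```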
Statistical analysis of tensor-valued data has largely used the tensor-variate normal (TVN) distribution that may be inadequate for data arising from distributions with heavier or lighter tails. We study a general family of elliptically contoured (EC) TV distributions and derive its characterizations, moments, marginal, and conditional distributions. We describe procedures for maximum likelihood estimation from data that are (1) uncorrelated draws from an EC distribution, (2) from a scale mixture of the TVN distribution, and (3) from an underlying but unknown EC distribution, for which we extend Tyler’s robust estimator. A detailed simulation study highlights the benefits of choosing an EC distribution over the TVN for heavier-tailed data. We develop TV classification rules using discriminant analysis and EC errors and show that they better predict cats and dogs from images in the Animal Faces-HQ dataset than the TVN-based rules. A novel tensor-on-tensor regression and TV analysis of variance (TANOVA) framework under EC errors is also demonstrated to better characterize gender, age, and ethnic origin than the usual TVN-based TANOVA in the celebrated labeled faces of the wild dataset.
Entropy is a state variable that may be obtained from any thermodynamically complete equation of state (EOS). However, hydrocode calculations that output the entropy often contain numerical errors; this is not because of the EOS, but rather the solution techniques that are used in hydrocodes (especially Eulerian) such as convection, remapping, and artificial viscosity. In this work, empirical correlations are investigated to reduce the errors in entropy without altering the solution techniques for the conservation of mass, momentum, and energy. Specifically, these correlations are developed for the function of entropy ZS, and they depend upon the net artificial viscous work, as determined via Sandia National Laboratories’ shock physics hydrocode CTH. These results are a continuation of a prior effort to implement the entropy-based CREST reactive burn model in CTH, and they are presented here to stimulate further interest from the shock physics community. Future work is planned to study higher-dimensional shock waves, shock wave interactions, and possible ties between the empirical correlations and a physical law.
Ab initio molecular dynamics (AIMD) simulations were carried out to investigate the equation of state of Nb2O5 and its pressure-density relationship under shock conditions. The focus of this study is on the monoclinic B−Nb2O5 (C2/c) polymorph. Enthalpy calculations from AIMD trajectories at 300 K show that the pressure-induced transformation between the thermodynamically most stable crystalline monoclinic parent phase H−Nb2O5 (P2/m) and B−Nb2O5 occurs at ∼1.9 GPa. This H→B transition is energetically more favorable than the H→L(Pmm2) pressure-induced transition recently observed at ∼5.9−9.0 GPa. The predicted shock properties of Nb2O5 polymorphs are also compared to their Nb and NbO2 counterparts to assess the impact of niobium oxidation on shock response.
Using coarse graining, the upscaled mechanical properties of a solid with small scale heterogeneities are derived. The method maps internal forces at the small scale onto peridynamic bond forces in the coarse grained mesh. These upscaled bond forces are used to calibrate a peridynamic material model with position-dependent parameters. These parameters incorporate mesoscale variations in the statistics of the small scale system. The upscaled peridynamic model can have a much coarser discretization than the original small scale model, allowing larger scale simulations to be performed efficiently. The convergence properties of the method are investigated for representative random microstructures. A bond breakage criterion for the upscaled peridynamic material model is also demonstrated.
Information security and computing, two critical technological challenges for post-digital computation, pose opposing requirements – security (encryption) requires a source of unpredictability, while computing generally requires predictability. Each of these contrasting requirements presently necessitates distinct conventional Si-based hardware units with power-hungry overheads. This work demonstrates Cu0.3Te0.7/HfO2 (‘CuTeHO’) ion-migration-driven memristors that satisfy the contrasting requirements. Under specific operating biases, CuTeHO memristors generate truly random and physically unclonable functions, while under other biases, they perform universal Boolean logic. Using these computing primitives, this work experimentally demonstrates a single system that performs cryptographic key generation, universal Boolean logic operations, and encryption/decryption. Circuit-based calculations reveal the energy and latency advantages of the CuTeHO memristors in these operations. This work illustrates the functional flexibility of memristors in implementing operations with varying component-level requirements.
Laser powder bed fusion (LPBF) additive manufacturing makes near-net-shaped parts with reduced material cost and time, rising as a promising technology to fabricate Ti-6Al-4V, a widely used titanium alloy in aerospace and medical industries. However, LPBF Ti-6Al-4V parts produced with 67° rotation between layers, a scan strategy commonly used to reduce microstructure and property inhomogeneity, have varying grain morphologies and weak crystallographic textures that change depending on processing parameters. This study predicts LPBF Ti-6Al-4V solidification at three energy levels using a finite difference-Monte Carlo method and validates the simulations with large-area electron backscatter diffraction (EBSD) scans. The developed model accurately shows that a 〈001〉 texture forms at low energy and a 〈111〉 texture occurs at higher energies parallel to the build direction, but with a lower strength than the textures observed from EBSD. A validated and well-established method of combining spatial correlation and general spherical harmonics representation of texture is developed to calculate a difference score between simulations and experiments. The quantitative comparison enables effective fine-tuning of the nucleation density (N0) input, which shows a nonlinear relationship with increasing energy level. Future improvements in the texture prediction code and a more comprehensive study of N0 at different energy levels will further advance the optimization of LPBF Ti-6Al-4V components. These developments contribute a novel understanding of crystallographic texture formation in LPBF Ti-6Al-4V, the development of robust model validation and calibration pipeline methodologies, and provide a platform for mechanical property prediction and process parameter optimization.
Modern lens designs are capable of resolving greater than 10 gigapixels, while advances in camera frame-rate and hyperspectral imaging have made data acquisition rates of Terapixel/second a real possibility. The main bottlenecks preventing such high data-rate systems are power consumption and data storage. In this work, we show that analog photonic encoders could address this challenge, enabling high-speed image compression using orders-of-magnitude lower power than digital electronics. Our approach relies on a silicon-photonics front-end to compress raw image data, foregoing energy-intensive image conditioning and reducing data storage requirements. The compression scheme uses a passive disordered photonic structure to perform kernel-type random projections of the raw image data with minimal power consumption and low latency. A back-end neural network can then reconstruct the original images with structural similarity exceeding 90%. This scheme has the potential to process data streams exceeding Terapixel/second using less than 100 fJ/pixel, providing a path to ultra-high-resolution data and image acquisition systems.
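The mathematical core, stripped of the photonics, is a fixed random projection followed by a learned decoder. In this hedged sketch the decoder is plain ridge regression on synthetic low-rank "images", whereas the paper uses a neural network; all sizes are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 1024, 256                      # pixels in, measurements out (4:1)
Phi = rng.normal(size=(m, d)) / np.sqrt(m)   # fixed random "photonic" kernel

# Synthetic low-rank "images" so a linear decoder suffices for the demo
X = rng.normal(size=(500, 40)) @ rng.normal(size=(40, d))
Y = X @ Phi.T                                 # compressed measurements

# Ridge-regression decoder W minimizing ||Y W - X||^2 + alpha ||W||^2
alpha = 1e-3
W = np.linalg.solve(Y.T @ Y + alpha * np.eye(m), Y.T @ X)
rel_err = np.linalg.norm(Y @ W - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")
```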
X-rays can provide images when an object is visibly obstructed, allowing for motion measurements via x-ray digital image correlation (DIC). However, x-ray images are path-integrated and contain data for all objects between the source and detector. If multiple objects are present in the x-ray path, conventional DIC algorithms may fail to correlate the x-ray images. A new DIC algorithm called path-integrated (PI)-DIC addresses this issue by reformulating the matching criterion for DIC to account for multiple, independently-moving objects. PI-DIC requires a set of reference x-ray images of each independent object. However, due to experimental constraints, such reference images might not be obtainable from the experiment. This work focuses on the reliability of synthetically-generated reference images in such cases. A simplified exemplar is used for demonstration purposes, consisting of two aluminum plates with tantalum x-ray DIC patterns undergoing independent rigid translations. Synthetic reference images based on the “as-designed” DIC patterns were generated. However, PI-DIC with the synthetic images suffered some biases due to manufacturing defects of the patterns. A systematic study of seven identified defect types found that an incorrect feature diameter was the most influential defect. Synthetic images were re-generated with the corrected feature diameter, and PI-DIC errors were reduced by a factor of 3-4. Final biases ranged from 0.00-0.04 px, and standard uncertainties ranged from 0.06-0.11 px. In conclusion, PI-DIC accurately measured the independent displacement of two plates from a single series of path-integrated x-ray images using synthetically-generated reference images, and the methods and conclusions derived here can be extended to more generalized cases involving stereo PI-DIC for arbitrary specimen geometry and motion. This work thus extends the application space of x-ray imaging for full-field DIC measurements of multiple surfaces or objects in extreme environments where optical DIC is not possible.
Additive manufacturing (AM) technology, specifically 3D printing, holds great promise for in-orbit manufacturing. In-space printing can significantly reduce the mass, cost, and risk of long-term space exploration by enabling replacement parts to be made as needed and reducing dependence on Earth. However, printing in a zero-gravity environment poses challenges due to the absence of a rigid ground for the print platform, which can result in vibrational and rotational forces that may impact printing integrity. To address this issue, this paper proposes a novel linear magnetic position tracking algorithm, named Navigation Integrating Magnets By Linear Estimation (NIMBLE), for dynamic vibration compensation during 3D printing of truss structures in space. Compared to the most commonly used nonlinear optimization method, the NIMBLE algorithm is more than two orders of magnitude faster. With only a single 3-axis magnet sensor and a small NdFeB magnet, the NIMBLE algorithm provides a simple and easily implemented tracking solution for in-orbit 3D printing.
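The NIMBLE estimator itself is not detailed in the abstract; for context, the sketch below gives the point-dipole field model that any single-magnet tracker must invert, plus a finite-difference position Jacobian one could use for linearization. Geometry and moment values are hypothetical.

```python
import numpy as np

def dipole_field(r, m):
    """B(r) of a point dipole with moment m, SI units (mu0/4pi = 1e-7)."""
    mu0_4pi = 1e-7
    rn = np.linalg.norm(r)
    return mu0_4pi * (3 * r * (m @ r) / rn**5 - m / rn**3)

def position_jacobian(r, m, h=1e-6):
    # Finite-difference sensitivity of the field to the magnet position
    J = np.zeros((3, 3))
    for k in range(3):
        dr = np.zeros(3); dr[k] = h
        J[:, k] = (dipole_field(r + dr, m) - dipole_field(r - dr, m)) / (2 * h)
    return J

r = np.array([0.02, 0.01, 0.03])     # sensor-to-magnet offset [m] (hypothetical)
m = np.array([0.0, 0.0, 0.1])        # magnet moment [A m^2] (hypothetical)
print(dipole_field(r, m))
print(position_jacobian(r, m))
```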
The bulk-boundary correspondence in topological crystalline insulators (TCIs) links the topological properties of the bulk to robust observables on the edges, e.g., the existence of robust edge modes or fractional charge. In one dimension, TCIs protected by reflection symmetry have been realized in a variety of systems in which each unit cell has spatially distributed degrees of freedom (SDOF). However, these realizations exhibit sensitivity of the resulting edge modes to variations in edge termination and to the local breaking of the protective spatial symmetries by inhomogeneity. Here we demonstrate topologically protected edge states in a monoatomic, orbital-based TCI that mitigates both of these issues. By collapsing all SDOF within the unit cell to a singular point in space, we eliminate the ambiguity in unit-cell definition and hence remove a prominent source of boundary termination variability. The topological observables are also more tolerant to disorder in the orbital energies. To validate this concept, we experimentally realize a lattice of mechanical resonators where each resonator acts as an "atom" that harbors two key orbital degrees of freedom having opposite reflection parity. Our measurements of this system provide direct visualization of the sp-hybridization between orbital modes that leads to a nontrivial band inversion in the bulk.
Composite materials with different microstructural material symmetries are common in engineering applications where grain structure, alloying, and particle/fiber packing are optimized via controlled manufacturing. In fact, these microstructural tunings can be applied throughout a part to achieve functional gradation and optimization at the structural level. To predict the performance of a particular microstructural configuration, and thereby overall performance, constitutive models of materials with microstructure are needed. In this work we develop neural network architectures that provide effective homogenization models of materials with anisotropic components. These models satisfy equivariance and material symmetry principles inherently through a combination of equivariant and tensor basis operations. We demonstrate them on datasets of stochastic volume elements with different textures and phases where the material undergoes elastic and plastic deformation, and show that these network architectures provide significant performance improvements.
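A minimal example of the tensor-basis mechanism (isotropic case only; the paper handles anisotropic symmetry classes with structural tensors) is sketched below: coefficients depend on strain invariants, outputs are expanded in {I, E, E^2}, and rotational equivariance holds by construction. The weights are random stand-ins, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(4)

def invariants(E):
    # Isotropic invariants of a symmetric second-order tensor
    return np.array([np.trace(E), np.trace(E @ E), np.linalg.det(E)])

W = rng.normal(size=(3, 3)) * 0.1          # stand-in for trained weights

def stress(E):
    c = np.tanh(W @ invariants(E))         # coefficients from invariants
    return c[0] * np.eye(3) + c[1] * E + c[2] * (E @ E)

E = rng.normal(size=(3, 3)); E = 0.5 * (E + E.T)   # symmetric strain
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # random orthogonal map
lhs = stress(Q @ E @ Q.T)
rhs = Q @ stress(E) @ Q.T
print(np.allclose(lhs, rhs))               # True: equivariant by design
```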
Rimsza, Jessica; Maksimov, Vasilii; Welch, Rebecca S.; Potter, Arron R.; Mauro, John C.; Wilkinson, Collin J.
Decarbonizing the glass industry requires alternative melting technology, as current industrial melting practices rely heavily on fossil fuels. Hydrogen has been proposed as an alternative to carbon-based fuels, but the ensuing consequences on the mechanical behavior of the glass remain to be clarified. A critical distinction between hydrogen and carbon-based fuels is the increased generation of water during combustion, which raises the equilibrium solubility of water in the melt and alters the behavior of the resulting glass. A series of five silicate glasses with 80% silica and variable [Na2O]/([H2O] + [Na2O]) ratios were simulated using molecular dynamics to elucidate the effects of water on fracture. Several fracture toughness calculation methods were used in combination with atomistic fracture simulations to examine the effects of hydroxyl content on fracture behavior. This study reveals that the crack propagation pathway is a key metric to understanding fracture toughness. Notably, the fracture propagation path favors hydrogen sites over sodium sites, offering a possible explanation of the experimentally observed effects of water on fracture properties.
Material Testing 2.0 (MT2.0) is a paradigm that advocates for the use of rich, full-field data, such as from digital image correlation and infrared thermography, for material identification. By employing heterogeneous, multi-axial data in conjunction with sophisticated inverse calibration techniques such as finite element model updating and the virtual fields method, MT2.0 aims to reduce the number of specimens needed for material identification and to increase confidence in the calibration results. To support continued development, improvement, and validation of such inverse methods—specifically for rate-dependent, temperature-dependent, and anisotropic metal plasticity models—we provide here a thorough experimental data set for 304L stainless steel sheet metal. The data set includes full-field displacement, strain, and temperature data for seven unique specimen geometries tested at different strain rates and in different material orientations. Commensurate extensometer strain data from tensile dog bones is provided as well for comparison. We believe this complete data set will be a valuable contribution to the experimental and computational mechanics communities, supporting continued advances in material identification methods.
Ostrove, Corey I.; Rudinger, Kenneth M.; Blume-Kohout, Robin; Young, Kevin; Stemp, Holly G.; Asaad, Serwan; Van Blankenstein, Mark R.; Vaartjes, Arjen; Johnson, Mark A.I.; Madzik, Mateusz T.; Heskes, Amber J.A.; Firgau, Hannes R.; Su, Rocky Y.; Yang, Chih H.; Laucht, Arne; Hudson, Fay E.; Dzurak, Andrew S.; Itoh, Kohei M.; Jakob, Alexander M.; Johnson, Brett C.; Jamieson, David N.; Morello, Andrea
Scalable quantum processors require high-fidelity universal quantum logic operations in a manufacturable physical platform. Donors in silicon provide atomic size, excellent quantum coherence and compatibility with standard semiconductor processing, but no entanglement between donor-bound electron spins has been demonstrated to date. Here we present the experimental demonstration and tomography of universal one- and two-qubit gates in a system of two weakly exchange-coupled electrons, bound to single phosphorus donors introduced in silicon by ion implantation. We observe that the exchange interaction has no effect on the qubit coherence. We quantify the fidelity of the quantum operations using gate set tomography (GST), and we use the universal gate set to create entangled Bell states of the electron spins, with fidelity 91.3 ± 3.0% and concurrence 0.87 ± 0.05. These results form the necessary basis for scaling up donor-based quantum computers.
In the machine learning problem of multilabel classification, the objective is to determine for each test instance which classes the instance belongs to. In this work, we consider an extension of multilabel classification, called multilabel proportion prediction, in the context of radioisotope identification (RIID) using gamma spectra data. We aim to not only predict radioisotope proportions, but also identify out-of-distribution (OOD) spectra. We achieve this goal by viewing gamma spectra as discrete probability distributions, and based on this perspective, we develop a custom semi-supervised loss function that combines a traditional supervised loss with an unsupervised reconstruction error function. Our approach was motivated by its application to the analysis of short-lived fission products from spent nuclear fuel. In particular, we demonstrate that a neural network model trained with our loss function can successfully predict the relative proportions of 37 radioisotopes simultaneously. The model trained with synthetic data was then applied to measurements taken by Pacific Northwest National Laboratory (PNNL) to conduct analysis typically done by subject-matter experts. We also extend our approach to successfully identify when measurements are OOD, and thus should not be trusted, whether due to the presence of a novel source or novel proportions.
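A hedged sketch of the loss structure described, with our own names and a KL-divergence choice for both terms (the abstract does not specify the exact functional forms): a supervised term scores predicted proportions, and an unsupervised term scores how well those proportions reconstruct the measured spectrum from a library of per-isotope templates.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL divergence between discrete distributions (smoothed)
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def semi_supervised_loss(p_pred, p_true, spectrum, A, lam=1.0):
    """A: (bins x isotopes) library of template spectra (our construct)."""
    supervised = kl(p_true, p_pred)            # proportion error
    recon = A @ p_pred                         # predicted mixture spectrum
    unsupervised = kl(spectrum, recon / recon.sum())
    return supervised + lam * unsupervised

rng = np.random.default_rng(5)
A = rng.dirichlet(np.ones(128), size=37).T     # 37 isotope templates
p_true = rng.dirichlet(np.ones(37))
spectrum = A @ p_true
p_pred = rng.dirichlet(np.ones(37))
print(semi_supervised_loss(p_pred, p_true, spectrum, A))
# A large unsupervised term at test time is the natural OOD flag
# in this framing: the proportions cannot explain the measurement.
```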
The current present in a galvanic couple can define its resistance or susceptibility to corrosion. However, because the current depends on environmental, material, and geometrical parameters, it is experimentally costly to measure. To reduce these costs, finite element (FE) simulations can be used to assess the cathodic current, but these also require experimental inputs to define boundary conditions. Given these challenges, it is crucial to accelerate predictions and accurately predict the current output for the different environments and geometries representative of in-service conditions. Machine-learned surrogate models provide a means to accelerate corrosion predictions; however, a one-time cost is incurred in procuring the simulation and experimental dataset necessary to calibrate the surrogate model. Therefore, an active learning protocol is developed through calibration of a low-cost surrogate model for the cathodic current of an exemplar galvanic couple (AA7075-SS304) as a function of environmental and geometric parameters. The surrogate model is calibrated on a dataset of FE simulations and calculates an acquisition function that identifies specific additional inputs with the maximum potential to improve the current predictions. This is accomplished through a staggered workflow that not only improves and refines predictions, but also identifies the points at which the most information is gained, thus enabling expansion to a larger parameter space. The protocols developed and demonstrated in this work provide a powerful tool for screening various forms of corrosion under in-service conditions.
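The staggered loop can be caricatured as follows, with a bootstrap ensemble of cheap polynomial fits standing in for the surrogate and the ensemble spread as the acquisition function; the "FE model" here is a stand-in function, not the corrosion simulation.

```python
import numpy as np

rng = np.random.default_rng(6)
fe_model = lambda x: np.sin(3 * x) + 0.5 * x     # stand-in for the FE solver

X = rng.uniform(0, 2, size=6)                    # initial FE runs
y = fe_model(X)
grid = np.linspace(0, 2, 200)                    # candidate inputs

for _ in range(5):
    # Bootstrap ensemble of quadratic fits as a cheap surrogate
    preds = []
    for _ in range(20):
        idx = rng.integers(0, len(X), len(X))
        preds.append(np.polyval(np.polyfit(X[idx], y[idx], 2), grid))
    std = np.std(preds, axis=0)
    x_next = grid[np.argmax(std)]                # acquisition: max spread
    X = np.append(X, x_next)
    y = np.append(y, fe_model(x_next))           # "run" one more simulation

print(f"{len(X)} model runs; final max predictive std {std.max():.3f}")
```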
A combined Mode I-II cohesive zone (CZ) elasto-plastic constitutive model, and a two-dimensional (2D) cohesive interface element (CIE) are formulated and implemented at small strain within an ABAQUS User Element (UEL) for simulating 2D crack nucleation and propagation in fluid-saturated porous media. The CZ model mitigates problems of convergence for the global Newton-Raphson solver within ABAQUS, which when combined with a viscous stabilization procedure allows for simulation of post-peak response under load control for coupled poromechanical finite element analysis, such as concrete gravity dam stability analysis. Verification examples are presented, along with a more complex ambient limestone-concrete wedge fracture experiment, water-pressurized concrete wedge experiment, and concrete gravity dam stability analyses. A calibration procedure for estimating the CZ parameters is demonstrated with the limestone-concrete wedge fracture process. For the water-pressurized concrete wedge fracture experiment it is shown that the inherent time-dependence of the poromechanical CIE analysis provides a good match with experimental force versus displacement results at various crack mouth opening rates, yet misses the pore water pressure evolution ahead of the crack tip propagation. This is likely a result of the concrete being partially-saturated in the experiment, whereas the finite element analysis assumes fully water saturated concrete. For the concrete gravity dam analysis, it is shown that base crack opening and associated water uplift pressure leads to a reduced Factor of Safety, which is confirmed by separate analytical calculations.
Barium titanate (BTO) is a ferroelectric perovskite used in electronics and energy storage systems because of its high dielectric constant. Decreasing the BTO particle size was shown to increase the dielectric constant of the perovskite, which is an intriguing but contested result. We investigated this result by fabricating silicone-matrix nanocomposite specimens containing BTO particles of decreasing diameter. Furthermore, density functional theory modeling was used to understand the interactions at the BTO particle surface. Combining results from experiments and modeling indicated that polymer type, particle surface interactions, and particle surface structure can influence the dielectric properties of polymer-matrix nanocomposites containing BTO.
The spatial distribution of electric field due to an imposed electric charge density profile in an infinite slab of dielectric material is derived analytically by integrating Gauss's law. Various charge density distributions are considered, including exponential and power-law forms. The Maxwell stress tensor is used to compute a notional static stress in the material due to the charge density and its electric field. Characteristics of the electric field and stress distributions are computed for example cases in polyethylene, showing that field magnitudes exceeding the dielectric strength would be required in order to achieve a stress exceeding the ultimate tensile strength.
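In our notation, the one-dimensional construction reads:

```latex
% With the field vanishing at the reference face x = 0, Gauss's law
% integrates the imposed charge density to
E(x) = \frac{1}{\varepsilon} \int_{0}^{x} \rho(x')\,\mathrm{d}x',
% and the normal component of the Maxwell stress tensor supplies the
% notional static stress
\sigma_{xx}(x) = \frac{\varepsilon}{2}\, E(x)^{2},
% so reaching a stress sigma_u requires E = \sqrt{2\sigma_u/\varepsilon};
% for polyethylene this exceeds the dielectric strength, as the
% abstract notes.
```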
Searfus, O.; Meert, C.; Clarke, S.; Pozzi, S.; Jovanovic, I.
The use of photon active interrogation to detect special nuclear material has held significant theoretical promise, as the interrogating source particles, photons, are fundamentally different from one of the main signatures of special nuclear material: neutrons produced in nuclear fission. However, neutrons produced by photonuclear reactions in the accelerator target, collimator, and environment can obscure the fission neutron signal. These (γ,n) neutrons could be discriminated from fission neutrons by their energy spectrum, but common detectors sensitive to the neutron spectrum, like organic scintillators, are typically hampered by the intense photon background characteristic of photon-based active interrogation. In contrast, high-pressure 4He-based scintillation detectors are well-suited to photon active interrogation, as they are similarly sensitive to fast neutrons and can measure their spectrum, but show little response to gamma rays. In this work, a photon active interrogation system utilizing a 4He scintillation detector and a 9 MeV linac-bremsstrahlung x-ray source was experimentally evaluated. The detector was shown to be capable of operating in intense gamma-ray environments and detecting photofission neutrons from 238U when interrogated by this x-ray source. The photofission neutrons show clear spectral separation from (γ,n) neutrons produced in lead, a common shielding material.
We present a machine-learning strategy for finite element analysis of solid mechanics wherein we replace complex portions of a computational domain with a data-driven surrogate. In the proposed strategy, we decompose a computational domain into an “outer” coarse-scale domain that we resolve using a finite element method (FEM) and an “inner” fine-scale domain. We then develop a machine-learned (ML) model for the impact of the inner domain on the outer domain. In essence, for solid mechanics, our machine-learned surrogate performs static condensation of the inner domain degrees of freedom. This is achieved by learning the map from displacements on the inner-outer domain interface boundary to forces contributed by the inner domain to the outer domain on the same interface boundary. We consider two such mappings, one that directly maps from displacements to forces without constraints, and one that maps from displacements to forces by virtue of learning a symmetric positive semi-definite (SPSD) stiffness matrix. We demonstrate, in a simplified setting, that learning an SPSD stiffness matrix results in a coarse-scale problem that is well-posed with a unique solution. We present numerical experiments on several exemplars, ranging from finite deformations of a cube to finite deformations with contact of a fastener-bushing geometry. We demonstrate that enforcing an SPSD stiffness matrix drastically improves the robustness and accuracy of FEM–ML coupled simulations, and that the resulting methods can accurately characterize out-of-sample loading configurations with significant speedups over the standard FEM simulations.
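The SPSD construction can be made concrete in a few lines: predict the entries of a lower-triangular factor and assemble K = L L^T, which is symmetric positive semi-definite for any predicted entries. The random theta below stands in for ML outputs; this is our illustration of the parameterization, not the paper's network.

```python
import numpy as np

n = 4                                       # interface degrees of freedom
rng = np.random.default_rng(7)

theta = rng.normal(size=n * (n + 1) // 2)   # stand-in for ML outputs
L = np.zeros((n, n))
L[np.tril_indices(n)] = theta               # fill the lower triangle
K = L @ L.T                                 # SPSD regardless of theta

u = rng.normal(size=n)                      # interface displacements
f = K @ u                                   # condensed interface forces
eigs = np.linalg.eigvalsh(K)
print("min eigenvalue >= 0:", eigs.min() >= -1e-12)
```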
In this study we present a replication method to determine surface roughness and to identify surface features when a sample cannot be directly analyzed by conventional techniques. As a demonstration, the method was applied to an unused spent nuclear fuel dry storage canister to determine variation across different surface features. An initial material down-selection determined that non-modified Polytek PlatSil73-25 silicone rubber provided the most accurate representation of the surface while providing good usability. Other materials considered include Polygel Brush-On 35 polyurethane rubber (with and without Pol-ease 2300 release agent), PlatSil73-25 with PlatThix thickening agent and/or Pol-ease 2300 release agent, and Express STD vinylpolysiloxane impression putty. The ability of PlatSil73-25 to create an accurate surface replica was evaluated by creating surface molds of several locations on surface roughness standards representing ISO grade surfaces N3, N5, N7, and N8. Overall, the molds accurately reproduced the expected roughness average (Ra) values but systematically over-estimated the peak-valley maximum roughness (Rz) values. Using a 3D-printed sample cell, several locations across the stainless steel spent nuclear fuel canister were sampled to determine the surface roughness. These measurements provided information regarding variability in normal surface roughness across the canister as well as a detailed evaluation of specific surface features (e.g., welds, grind marks, etc.). The results of these measurements can support development of dry storage canister ageing management programs, as surface roughness is an important factor for surface dust deposition and accumulation. This method can be applied more broadly to surfaces beyond stainless steel to provide rapid, accurate surface replications for analytical evaluation by profilometry.
Cyber-physical systems have behaviour that crosses domain boundaries during events such as planned operational changes and malicious disturbances. Traditionally, the cyber and physical systems are monitored separately and use very different toolsets and analysis paradigms. The security and privacy of these cyber-physical systems requires improved understanding of the combined cyber-physical system behaviour and methods for holistic analysis. Therefore, the authors propose leveraging clustering techniques on cyber-physical data from smart grid systems to analyse differences and similarities in behaviour during cyber-, physical-, and cyber-physical disturbances. Since clustering methods are commonly used in data science to examine statistical similarities in order to sort large datasets, these algorithms can assist in identifying useful relationships in cyber-physical systems. Through this analysis, deeper insights can be shared with decision-makers on what cyber and physical components are strongly or weakly linked, what cyber-physical pathways are most traversed, and the criticality of certain cyber-physical nodes or edges. This paper presents several types of clustering methods for cyber-physical graphs of smart grid systems and their application in assessing different types of disturbances for informing cyber-physical situational awareness. The collection of these clustering techniques provide a foundational basis for cyber-physical graph interdependency analysis.
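As one concrete instance of the clustering techniques surveyed, the sketch below applies basic spectral bipartitioning (the Fiedler vector of the graph Laplacian) to a toy cyber-physical adjacency matrix of our own invention; it is not the authors' pipeline.

```python
import numpy as np

# Toy adjacency: a "cyber" triangle (nodes 0-2) bridged to a "physical"
# triangle (nodes 3-5) through the edge 2-3.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)

L = np.diag(A.sum(1)) - A                  # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                       # second-smallest eigenvector
print(fiedler > 0)                         # cluster membership per node
```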
Ilgen, Anastasia G.; Borguet, Eric; Geiger, Franz M.; Gibbs, Julianne M.; Grassian, Vicki H.; Jun, Young S.; Kabengi, Nadine; Kubicki, James D.
Solid–water interfaces are crucial for clean water, conventional and renewable energy, and effective nuclear waste management. However, reflecting the complexity of reactive interfaces in continuum-scale models is a challenge, leading to oversimplified representations that often fail to predict real-world behavior. This is because these models use fixed parameters derived by averaging across a wide physicochemical range observed at the molecular scale. Recent studies have revealed the stochastic nature of molecular-level surface sites that define a variety of reaction mechanisms, rates, and products even across a single surface. To bridge the molecular knowledge and predictive continuum-scale models, we propose to represent surface properties with probability distributions rather than with discrete constant values derived by averaging across a heterogeneous surface. This conceptual shift in continuum-scale modeling requires exponentially rising computational power. By incorporating our molecular-scale understanding of solid–water interfaces into continuum-scale models we can pave the way for next generation critical technologies and novel environmental solutions.
Downhole logging tools are commonly used to characterize multi-thousand-foot geothermal wells. The elevated temperatures, pressures, and harsh chemical environments present significant challenges for the long-term operation of these tools, especially when real-time data transmission to the surface is required via data cable lines. Teflon-based single or multi-conductor cables with grease-filled cable heads are typically used for downhole tools. However, over extended periods of operation, the grease used to seal the conductors can slowly dissolve into the well fluid, creating electrical shorts and disabling data transmission. Additionally, when temperatures exceed 260 °C, Teflon can soften, potentially allowing parallel conductors to make contact and cause shorts. Between 2009 and 2015, Draka Cableteq USA, now part of the Prysmian Group, developed a multi-conductor/fiber cable and a four-conductor cable capable of operating above 300 °C. While a full study was conducted on the conductor/fiber cable, the evaluation of the four-conductor cable remained incomplete. With the increasing need for long-term high-temperature (HT) operation of logging tools, Sandia National Laboratories is now completing the evaluation of the four-conductor cable. The four-conductor cable has two major novel aspects. Firstly, its glass braid insulation can operate above 300 °C, eliminating the potential for shorts. Secondly, the insulated conductors are encased in metal tubing along the full length of the cable, creating a high-pressure seal between the cable and the tool. This metal tubing eliminates the need for a grease seal, a major limiting factor in the operation time of common cable lines. Sandia National Laboratories will conduct multiple tests to characterize the cable at temperatures above 300 °C and pressures up to 5,000 psi. This cable would enable tools to operate continuously at elevated temperatures, pressures, and in harsh fluids for extended periods, potentially lasting months.
Herein, we report on the ultrafast photodissociation of nickel tetracarbonyl─a prototypical metal-ligand model system─at 197 nm. Using mid-infrared transient absorption spectroscopy to probe the bound C≡O stretching modes, we find evidence for the picosecond time scale production of highly vibronically excited nickel dicarbonyl and nickel monocarbonyl, in marked contrast with a prior investigation at 193 nm. Further spectral evolution with a 50 ps time constant suggests an additional dissociation step; the absence of any corresponding growth in signal strongly indicates the production of bare Ni, a heretofore unreported product from single-photon excitation of nickel tetracarbonyl. Thus, by probing the deep UV-induced photodynamics of a prototypical metal carbonyl, this Letter adds time-resolved spectroscopic signatures of these dynamics to the sparse literature at high excitation energies.
High-entropy ceramics have garnered interest due to their remarkable hardness, compressive strength, thermal stability, and fracture toughness; yet the discovery of new high-entropy ceramics (out of a tremendous number of possible elemental permutations) still largely requires costly, inefficient, trial-and-error experimental and computational approaches. The entropy forming ability (EFA) factor was recently proposed as a computational descriptor that positively correlates with the likelihood that a 5-metal high-entropy carbide (HEC) will form the desired single-phase, homogeneous solid solution; however, discovery of new compositions is computationally expensive. Considering 8 candidate metals, the HEC EFA approach uses 49 optimizations for each of the 56 unique 5-metal carbides, requiring a total of 2744 costly density functional theory calculations. Here, we describe an orders-of-magnitude more efficient active learning (AL) approach for identifying novel HECs. To begin, we compared numerous methods for generating composition-based feature vectors (e.g., magpie and mat2vec), deployed an ensemble of machine learning (ML) models to generate an average and distribution of predictions, and then utilized the distribution as an uncertainty. We then deployed an AL approach to extract new training data points where the ensemble of ML models predicted a high EFA value or was uncertain of the prediction. Our approach has the combined benefit of decreasing the amount of training data required to reach acceptable prediction qualities and biasing the predictions toward identifying HECs with the desired high EFA values, which are tentatively correlated with the formation of single-phase HECs. Using this approach, we increased the number of 5-metal carbides screened from 56 to 15,504, revealing 4 compositions with record-high EFA values that were previously unreported in the literature. Our AL framework is also generalizable and could be modified to rationally predict optimized candidate materials/combinations with a wide range of desired properties (e.g., mechanical stability, thermal conductivity).
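A hedged caricature of the dual exploit/explore selection rule described, complementing the sequential loop sketched earlier in this listing (array shapes and the batch size are ours):

```python
import numpy as np

rng = np.random.default_rng(8)
preds = rng.normal(size=(10, 15504))       # 10 ML models x candidate HECs
mean, std = preds.mean(axis=0), preds.std(axis=0)

k = 8
exploit = np.argsort(mean)[-k:]            # highest predicted EFA
explore = np.argsort(std)[-k:]             # most uncertain predictions
next_batch = np.union1d(exploit, explore)  # send to DFT, retrain, repeat
print(next_batch)
```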
Carbon dots have attracted widespread interest for sensing applications based on their low cost, ease of synthesis, and robust optical properties. We investigate structure-function evolution on multiemitter fluorescence patterns for model carbon-nitride dots (CNDs) and their implications on trace-level sensing. Hydrothermally synthesized CNDs with different reaction times were used to determine how specific functionalities and their corresponding fluorescence signatures respond upon the addition of trace-level analytes. Archetype explosives molecules were chosen as a testbed due to similarities in substituent groups or inductive properties (i.e., electron withdrawing), and solution-based assays were performed using ratiometric fluorescence excitation-emission mapping (EEM). Analyte-specific quenching and enhancement responses were observed in EEM landscapes that varied with the CND reaction time. We then used self-organizing map models to examine EEM feature clustering with specific analytes. The results reveal that interactions between carbon-nitride frameworks and molecular-like species dictate response characteristics that may be harnessed to tailor sensor development for specific applications.
There is growing interest in material candidates with properties that can be engineered beyond traditional design limits. Compositionally complex oxides (CCO), often called high entropy oxides, are excellent candidates, wherein a lattice site shares more than four cations, forming single-phase solid solutions with unique properties. However, the nature of compositional complexity in dictating properties remains unclear, with characteristics that are difficult to calculate from first principles. Here, compositional complexity is demonstrated as a tunable parameter in the spin-transition oxide semiconductor La1-x(Nd, Sm, Gd, Y)x/4CoO3, by varying the population x of rare earth cations over 0.00 ≤ x ≤ 0.80. Across the series, increasing complexity is revealed to systematically improve crystallinity, increase the amount of electron versus hole carriers, and tune the spin transition temperature and on-off ratio. At a high population (x = 0.8), Seebeck measurements indicate a crossover from hole-majority to electron-majority conduction without the introduction of conventional electron donors, and tunable complexity is proposed as a new method to dope semiconductors. First-principles calculations combined with angle-resolved photoemission reveal an unconventional doping mechanism of lattice distortions leading to asymmetric hole localization over electrons. Thus, tunable complexity is demonstrated as a facile knob to improve crystallinity, tune electronic transitions, and dope semiconductors beyond traditional means.
Harmonic and subharmonic RF injection locking is demonstrated in a terahertz (THz) quantum-cascade vertical-external-cavity surface-emitting laser (QC-VECSEL). By tuning the RF injection frequency around integer multiples and submultiples of the cavity round-trip frequency, different harmonic and subharmonic orders can be excited in the same device. Modulation-dependent behavior of the device has been studied with recorded lasing spectral broadening and locking bandwidths in each case. In particular, harmonic injection locking results in the observation of harmonic spectra with bandwidths over 200 GHz. A semiclassical Maxwell-density matrix formalism has been applied to interpret QC-VECSEL dynamics, which aligns well with experimental observations.
Conceptual models of smectite hydration include planar (flat) clay layers that undergo stepwise expansion as successive monolayers of water molecules fill the interlayer regions. However, X-ray diffraction (XRD) studies indicate the presence of interstratified hydration states, suggesting non-uniform interlayer hydration in smectites. Additionally, recent theoretical studies have shown that clay layers can adopt bent configurations over nanometer-scale lateral dimensions with minimal effect on mechanical properties. Therefore, in this study we used molecular simulations to evaluate structural properties and water adsorption isotherms for montmorillonite models composed of bent clay layers in mixed hydration states. Results are compared with models consisting of planar clay layers with interstratified hydration states (e.g. 1W–2W). The small degree of bending in these models (up to 1.5 Å of vertical displacement over a 1.3 nm lateral dimension) had little or no effect on bond lengths and angle distributions within the clay layers. Except for models that included dry states, porosities and simulated water adsorption isotherms were nearly identical for bent or flat clay layers with the same averaged layer spacing. Similar agreement was seen with Na- and Ca-exchanged clays. In conclusion, while the small bent models did not retain their configurations during unconstrained molecular dynamics simulation with flexible clay layers, we show that bent structures are stable at much larger length scales by simulating a 41.6×7.1 nm2 system that included dehydrated and hydrated regions in the same interlayer.
The stochastic weighted particle method (SWPM) is a generalization of the Direct Simulation Monte Carlo (DSMC) method where particle weights are variable and dynamic. SWPM is backed by a strong theoretical foundation but has not been critically evaluated for problems of practical interest. A thorough assessment of SWPM for boundary-driven flows reveals significant numerical artifacts near the boundary, notably a diverging heat flux. To correct the boundary heat flux, two modifications to SWPM are proposed: separated grouping and a spatially-dependent weight transfer function. To gauge the relative efficiency of SWPM in comparison to DSMC, a high-Mach-number wheel flow which forms a strong density gradient is also simulated.
Researchers are exploring adding wave energy converters to existing oceanographic buoys to provide a predictable source of renewable power. A "pitch resonator" power take-off system has been developed that generates power using a geared flywheel system designed to match resonance with the pitching motion of the buoy. However, the novelty of the concept leaves researchers uncertain about various design aspects of the system. This work presents a novel design study of a pitch resonator to inform design decisions for an upcoming deployment of the system. The assessment uses control co-design via WecOptTool to optimize control trajectories for maximal electrical power production while varying five design parameters of the pitch resonator. Given the large search space of the problem, the control trajectories are optimized within a Monte Carlo analysis to identify optimal designs, followed by parameter sweeps around the optimum to identify trends between the design parameters. The gear ratio between the pitch resonator spring and flywheel is found to be the design variable to which power performance is most sensitive. The assessment also finds similar power generation for various sizes of resonator components, suggesting that correctly designing for optimal control trajectories at resonance is more critical to the design than component sizing.
A neutron fluence map and a total ionizing dose map of the Los Alamos National Laboratory Godiva IV fast burst critical assembly were generated using passive reactor dosimetry composed of sulfur pellets and thermoluminescent dosimeters. Godiva IV is an unmoderated, fast burst critical assembly constructed of approximately 65 kg of highly enriched uranium fuel alloyed with 1.5 % molybdenum for strength [1]. The mapping was performed during a single 75.6 °C temperature-rise burst operation, with the top and sides of the cylindrical Godiva IV Top Hat covered in passive dosimetry. Dosimetry was placed in a symmetric pattern around the Top Hat, with higher concentrations near the control rods and burst rod. A specific portion of the lower quadrant of the burst rod was mapped to confirm a testing region where the neutron fluence varied by no more than ±5%. The results will be used to assess the neutron, gamma, and total ionizing dose environment in three-dimensional space around the assembly for higher fidelity experiment placement, active dosimetry positioning, and radiation field characterization.
The neutron, prompt gamma-ray, and delayed gamma-ray radiation fields of the White Sands Missile Range (WSMR) Fast Burst Reactor, also known as molybdenum-alloy Godiva (Molly-G), have been characterized at the 6-inch irradiation location. The neutron energy spectra, uncertainties, and common radiation metrics are presented. Code-dependent recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. The Molly-G core was designed and configured similarly to Godiva II, as an unreflected, unmoderated, cylindrical annulus of uranium-molybdenum-alloy fuel with a molybdenum loading of 10%. At the 6-inch position, the axial fluence maximum is about 2.4×10¹³ n/cm² per MJ of reactor energy; about 0.1% of the neutron fluence is below 1 keV and 96% is above 100 keV. The 1-MeV Damage-Equivalent Silicon (DES) fluence is estimated at 2.2×10¹³ n/cm² per MJ of reactor energy. The prompt gamma-ray dose is roughly 2.5×10³ rad(Si) per MJ and the delayed gamma-ray dose is about 1.3×10³ rad(Si) per MJ.
The neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the University of Texas at Austin Nuclear Engineering Teaching Laboratory (NETL) TRIGA reactor have been characterized for the beam port (BP) 1/5 free-field environment at the 128-inch location adjacent to the core centerline. NETL is being explored as an auxiliary neutron test facility for the Sandia National Laboratories radiation effects sciences research and development campaigns. The NETL reactor is a TRIGA Mark-II pulse and steady-state, above-ground pool-type reactor. It is a university research reactor typically used for student and customer irradiation experiments, radioisotope production, and operator training. Initial criticality of the NETL TRIGA reactor was achieved on March 12, 1992, making it one of the newest test reactor facilities in the US. The neutron energy spectra, uncertainties, and covariance matrices are presented, as well as a neutron fluence map of the experiment area of the cavity. For an unmoderated condition, the neutron fluence at the center of BP 1/5, at the adjacent core axial centerline, is about 8.2×10¹² n/cm² per MJ of reactor energy. About 67% of the neutron fluence is below 1 keV and 22% is above 100 keV. The 1-MeV Damage-Equivalent Silicon (DES) fluence is roughly 1.6×10¹² n/cm² per MJ of reactor energy.
High-entropy materials (HEMs) have emerged as promising candidates for a diverse array of chemical transformations, including CO2 utilization. However, traditional HEM catalysts are nonporous, limiting their activity to surface sites. Designing HEMs with intrinsic porosity can open the door toward enhanced reactivity while maintaining the many benefits of high configurational entropy. Here, a synergistic experimental, analytical, and theoretical approach is implemented to design the first high-entropy metal-organic frameworks (HEMOFs) derived from polynuclear metal clusters, a novel class of porous HEMs that is highly active for CO2 fixation under mild conditions and short reaction times, outperforming existing heterogeneous catalysts. HEMOFs with up to 15 distinct metals are synthesized (the highest number of metals ever incorporated into a single MOF) and, for the first time, homogeneous metal mixing within individual clusters is directly observed via high-resolution scanning transmission electron microscopy. Importantly, density functional theory studies provide unprecedented insight into the electronic structures of HEMOFs, demonstrating that the density of states in heterometallic clusters is highly sensitive to metal composition. This work dramatically advances HEMOF materials design, paving the way for further exploration of HEMs and opening new avenues for the development of multifunctional materials with tailored properties for a wide range of applications.
The cells in battery energy storage systems are monitored, protected, and controlled by battery management systems whose sensors are susceptible to cyberattacks. False data injection attacks (FDIAs) targeting batteries' voltage sensors affect cell protection functions and the estimation of critical battery states like the state of charge (SoC). Inaccurate SoC estimation could result in battery overcharging and over-discharging, which can have disastrous consequences for grid operations. This paper proposes a three-pronged online and offline method to detect, identify, and classify FDIAs corrupting the voltage sensors of a battery stack. A single particle model is used to accurately model the dynamics of the series-connected cells, and an unscented Kalman filter is employed to estimate the SoC. FDIA detection, identification, and classification were accomplished using a tuned cumulative sum (CUSUM) algorithm, which was compared with a baseline method, the chi-squared error detector. Online simulations and offline batch simulations were performed to determine the effectiveness of the proposed approach. Throughout the batch simulations, the CUSUM algorithm detected attacks, with no false positives, in 99.83% of cases, identified the corrupted sensor in 97% of cases, and determined whether the attack was positively or negatively biased in 97% of cases.
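For concreteness, a generic two-sided CUSUM detector over sensor residuals (e.g., measured minus UKF-predicted voltage) can be sketched as below. This is the textbook statistic, not the paper's tuned implementation; the drift k and threshold h values are illustrative.

```python
import numpy as np

def cusum_detect(residuals, k=0.5, h=5.0):
    """Two-sided CUSUM over a residual sequence scaled to unit variance.

    k : allowable drift (slack) per sample.
    h : decision threshold; an alarm fires when either statistic exceeds it.
    Returns (alarm index, bias sign) or (None, 0) if no alarm fires.
    """
    g_pos = g_neg = 0.0
    for i, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - k)   # accumulates positively biased attacks
        g_neg = max(0.0, g_neg - r - k)   # accumulates negatively biased attacks
        if g_pos > h:
            return i, +1
        if g_neg > h:
            return i, -1
    return None, 0

# Example: a positive bias injected halfway through a noisy residual stream.
rng = np.random.default_rng(1)
res = rng.normal(size=200)
res[100:] += 1.5
print(cusum_detect(res))
```

The returned sign also serves as the positive/negative bias classification described above.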
Thermal spray processes can benefit from cooling to maintain substrate temper, reduce processing times, and manage thermally induced residual stresses. "Plume quenching" is a plume-targeted cooling technique that has been shown to reduce substrate temperatures by redirecting hot plume gases with a lateral argon curtain injected into the plume, while limiting interaction with the substrate and leaving coating properties unaffected. Here, this study explores the use of this technique for residual stress management by reducing the thermally driven component in nickel and tantalum coatings on titanium and aluminum substrates. The in-situ residual stress profiles were measured for all substrate and coating pairings during spraying and cooling, and the deposition and thermal stresses were recorded. For substrate and coating pairings where the predominant component of residual stress was thermal (driven by a large difference in coefficient of thermal expansion, Δα, between coating and substrate), plume quenching reduced both the thermal stress and the final stress state of the coating. This was seen primarily in tantalum-on-aluminum coatings, where Δα was −17 × 10⁻⁶/°C and the thermal stress was reduced by 7.5% and 22.4% for plume quenching rates of 50 and 100 slpm, respectively.
Pozzolans rich in silica and alumina react with lime to form cementing compounds and are incorporated into portland cement as supplementary cementitious materials (SCMs). However, pozzolanic reactions progress slower than portland cement hydration, limiting their use in modern construction due to insufficient early-age strength. Hence, alternative SCMs that enable faster pozzolanic reactions are needed; synthetic zeolites, with their high surface areas and compositional purity, suggest the possibility of rapid pozzolanic reactivity. Synthetic zeolites with varying cation composition (Na-zeolite, H-zeolite), SiO2/Al2O3 ratio, and framework type were evaluated for pozzolanic reactivity via Ca(OH)2 consumption using ion exchange and in-situ X-ray diffraction experiments. Na-zeolites exhibited limited exchange reactions with KOH and Ca(OH)2 due to the occupancy of acid sites by Na+ and hydroxyl groups. Meanwhile, H-zeolites readily adsorbed K+ and Ca2+ from a hydroxide solution by exchanging cations with H+ at Brønsted acid sites or by cation adsorption at vacant acid sites. By adsorbing cations, the H-zeolite reduced the pH and increased Ca2+ solubility to promote pozzolanic reactions in a system where Ca(OH)2 dissolution/diffusion was a rate-limiting factor. The high reactivity of the H-zeolite resulted in 0.8 g of Ca(OH)2 consumed per 1 g of zeolite after 16 h of reaction, versus 0.4 g of Ca(OH)2 consumed per 1 g of Na-zeolite. The H-zeolite modulated the pore fluid alkalinity and created a low-density amorphous silicate phase via mechanisms analogous to two-step C-S-H nucleation experiments. Controlling these reaction mechanisms is key to developing next-generation pozzolanic cementitious systems with hydration rates comparable to portland cement.
Biaxial stress is identified to play an important role in the polar orthorhombic phase stability in hafnium oxide-based ferroelectric thin films. However, the stress state during various stages of wake-up has not yet been quantified. In this work, the stress evolution with field cycling in hafnium zirconium oxide capacitors is evaluated. The remanent polarization of a 20 nm thick hafnium zirconium oxide thin film increases from 9.80 to 15.0 µC cm⁻² following 10⁶ field cycles. This increase in remanent polarization is accompanied by a decrease in relative permittivity that indicates that a phase transformation has occurred. The presence of a phase transformation is supported by nano-Fourier transform infrared spectroscopy measurements and scanning transmission electron microscopy that show an increase in ferroelectric phase content following wake-up. The stress of individual devices field cycled between pristine and 10⁶ cycles is quantified using the sin²(ψ) technique, and the biaxial stress is observed to decrease from 4.3 ± 0.2 to 3.2 ± 0.3 GPa. The decrease in stress is attributed, in part, to a phase transformation from the antipolar Pbca phase to the ferroelectric Pca21 phase. This work provides new insight into the mechanisms controlling and/or accompanying polarization wake-up in hafnium oxide-based ferroelectrics.
Machine-learning function representations such as neural networks have proven to be excellent constructs for constitutive modeling due to their flexibility to represent highly nonlinear data and their ability to incorporate constitutive constraints, which also allows them to generalize well to unseen data. In this work, we extend a polyconvex hyperelastic neural network framework to (isotropic) thermo-hyperelasticity by specifying the thermodynamic and material theoretic requirements for an expansion of the Helmholtz free energy expressed in terms of deformation invariants and temperature. Different formulations which a priori ensure polyconvexity with respect to deformation and concavity with respect to temperature are proposed and discussed. The physics-augmented neural networks are furthermore calibrated with a recently proposed sparsification algorithm that not only aims to fit the training data but also penalizes the number of active parameters, which prevents overfitting in the low data regime and promotes generalization. The performance of the proposed framework is demonstrated on synthetic data, which illustrate the expected thermomechanical phenomena, and existing temperature-dependent uniaxial tension and tension-torsion experimental datasets.
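To make the construction concrete, the sketch below shows the core mechanism by which such networks can guarantee convexity: nonnegative weights composed with convex, nondecreasing activations. This is a toy numpy illustration, not the paper's framework; a full polyconvex, temperature-concave free energy would add passthrough connections, invariant-specific terms, and a separate concave branch in temperature.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # convex and nondecreasing

class ConvexPotential:
    """Toy input-convex network psi(I), convex in the deformation invariants I.

    Convexity holds because each layer applies nonnegative weights followed by
    a convex, nondecreasing activation, and such compositions preserve
    convexity. (Standard ICNN architectures also add skip connections from the
    raw inputs for expressiveness; omitted here for brevity.)
    """
    def __init__(self, sizes, rng):
        # Softplus of raw weights guarantees nonnegative weight matrices.
        self.weights = [softplus(rng.normal(size=(m, n)))
                        for m, n in zip(sizes[1:], sizes[:-1])]
        self.biases = [rng.normal(size=m) for m in sizes[1:]]

    def __call__(self, invariants):
        h = invariants
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = softplus(W @ h + b)
        return float(self.weights[-1] @ h + self.biases[-1])

rng = np.random.default_rng(0)
psi = ConvexPotential([3, 16, 16, 1], rng)  # e.g., I = (I1, I2, J) for isotropy
print(psi(np.array([3.0, 3.0, 1.0])))       # value at the undeformed state
```

Concavity in temperature can be enforced analogously, for example by subtracting such a convex construction evaluated on the temperature input.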
In this paper, we present a Riemannian geometric derivation of the governing equations of motion of nonholonomic dynamic systems. A geometric form of the work-energy principle is first derived. The geometric form can be realized in appropriate generalized quantities, and the independent equations of motion can be obtained if the subspace of generalized speeds allowable by nonholonomic constraints can be determined. We provide a geometric perspective of the governing equations of motion and demonstrate its effectiveness in studying dynamic systems subjected to nonholonomic constraints.
See, Judi E.; Handley, Holly A.H.; Savage-Knepshield, Pamela A.
The Human Readiness Level (HRL) scale is a simple nine-level scale that brings structure and consistency to the real-world application of user-centered design. It enables multidisciplinary consideration of human-focused elements during the system development process. Use of the standardized set of questions comprising the HRL scale results in a single human readiness number that communicates system readiness for human use. The Human Views (HVs) are part of an architecture framework that provides a repository for human-focused system information that can be used during system development to support the evaluation of HRL levels. This paper illustrates how HRLs and HVs can be used in combination to support user-centered design processes. A real-world example for a U.S. Army software modernization program is described to demonstrate application of HRLs and HVs in the context of user-centered design.
Distributed Acoustic Sensing (DAS) can record acoustic wavefields at high sampling rates and with dense spatial resolution difficult to achieve with seismometers. Using optical scattering induced by cable deformation, DAS can record strain fields with spatial resolution of a few meters. However, many experiments utilizing DAS have relied on unused, dark telecommunication fibers. As a result, the geophysical community has not fully explored DAS survey parameters to characterize the ideal array design. This limits our understanding of guiding principles in array design to deploy DAS effectively and efficiently in the field. A better quantitative understanding of DAS array behavior can improve the quality of the data recorded by guiding the DAS array design. Here we use steered response functions, which account for the DAS fiber's directional sensitivity, as well as beamforming and back-projection results from forward modeling calculations to assess the performance of varying DAS array geometries in recording regional and local sources. A regular heptagon DAS array demonstrated improved capabilities for recording regional sources over other polygonal arrays, with potential improvements in recording and locating local sources. These results help reveal DAS array performance as a function of geometry and can guide future DAS deployments.
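For intuition about how geometry enters such an assessment, recall the conventional array response for a plane wave with horizontal wavenumber vector k over sensors at positions r_n: B(k) = |(1/N) Σ_n exp(i k·r_n)|². The paper's steered response functions additionally weight each channel by the fiber's directional sensitivity, which this toy sketch (with made-up dimensions and channel positions) omits.

```python
import numpy as np

# Channel positions for a regular heptagon with a 1 km circumradius (illustrative).
n_side = 7
theta = 2 * np.pi * np.arange(n_side) / n_side
positions = 1000.0 * np.column_stack([np.cos(theta), np.sin(theta)])  # meters

def array_response(positions, kx, ky):
    """Conventional beam power for plane waves over a grid of wavenumbers."""
    phase = np.exp(1j * (np.outer(kx.ravel(), positions[:, 0])
                         + np.outer(ky.ravel(), positions[:, 1])))
    power = np.abs(phase.sum(axis=1) / positions.shape[0]) ** 2
    return power.reshape(kx.shape)

k = np.linspace(-0.02, 0.02, 201)   # rad/m; spans wavelengths of ~300 m and up
kx, ky = np.meshgrid(k, k)
B = array_response(positions, kx, ky)
print(B.max(), B[100, 100])          # unity at k = 0, by construction
```

The width of the main lobe and the height of the side lobes of B(k) are what distinguish one polygonal layout from another.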
Seelinger, Linus; Reinarz, Anne; Lykkegaard, Mikkel B.; Alghamdi, Amal M.A.; Aristoff, David; Bangerth, Wolfgang; Benezech, Jean; Diez, Matteo; Frey, Kurt; Jakeman, John D.; Jorgensen, Jakob S.; Kim, Ki-Tae; Martinelli, Massimiliano; Parno, Matthew; Pellegrini, Riccardo; Petra, Noemi; Riis, Nicolai A.B.; Rosenfeld, Katherine; Serani, Andrea; Tamellini, Lorenzo; Villa, Umberto; Dodwell, Tim J.; Scheichl, Robert
Uncertainty Quantification (UQ) is vital to safety-critical model-based analyses, but the widespread adoption of sophisticated UQ methods is limited by technical complexity. In this paper, we introduce UM-Bridge (the UQ and Modeling Bridge), a high-level abstraction and software protocol that facilitates universal interoperability of UQ software with simulation codes. It breaks down the technical complexity of advanced UQ applications and enables separation of concerns between experts. UM-Bridge democratizes UQ by allowing effective interdisciplinary collaboration, accelerating the development of advanced UQ methods, and making it easy to perform UQ analyses from prototype to High Performance Computing (HPC) scale. In addition, we present a library of ready-to-run UQ benchmark problems, all easily accessible through UM-Bridge. These benchmarks support UQ methodology research, enabling reproducible performance comparisons. We demonstrate UM-Bridge with several scientific applications, harnessing HPC resources even using UQ codes not designed with HPC support.
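To illustrate the separation of concerns UM-Bridge provides, the sketch below shows roughly what a client-side model evaluation looks like with the project's Python client. The URL, port, and model name are placeholders, and method signatures may vary slightly between client versions, so treat this as a schematic rather than a verbatim recipe.

```python
# pip install umbridge  (the project's Python client)
import umbridge

# Connect to a running UM-Bridge model server; the URL and model name
# below are placeholders for whatever server or benchmark is running.
model = umbridge.HTTPModel("http://localhost:4243", "forward")

# Models advertise their input/output dimensions through the protocol.
print(model.get_input_sizes())
print(model.get_output_sizes())

# Evaluate the model: inputs and outputs are lists of vectors, so the same
# client code works whether the server is a laptop prototype or an HPC job.
result = model([[0.0, 10.0]])
print(result)
```

Because the UQ code only sees this interface, swapping the simulation backend, or moving it to an HPC cluster, requires no changes on the UQ side.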
Welding processes used in the production of pressure vessels impart residual stresses in the manufactured component. Computational modeling is critical to predicting these residual stress fields and understanding how they interact with notches and flaws to impact pressure vessel durability. In this work, we present a finite element model for a resistance forge weld and validate it using laboratory measurements. Extensive microstructural changes, near-melt temperatures, and large localized deformations along the weld interface pose significant challenges to Lagrangian finite element modeling. The proposed modeling approach overcomes these roadblocks to provide a high-fidelity simulation that can predict the residual stress state in the manufactured pressure vessel: a rich microstructural constitutive model accounts for material recrystallization dynamics, a frictional-to-tied contact model is coordinated with the constitutive model to represent interfacial bonding, and adaptive remeshing is employed to alleviate severe mesh distortion. An interrupted-weld approach is applied in the simulation to facilitate comparison with displacement measurements. Several techniques are employed for residual stress measurement in order to validate the finite element model: neutron diffraction, the contour method, and the slitting method. Model-measurement comparisons are supplemented with detailed simulations that reflect the configurations of the residual-stress measurement processes themselves. The model results show general agreement with experimental measurements, and we observe some similarities in the features around the weld region. Factors that contribute to model-measurement differences are identified. Finally, we conclude with a discussion of the model development and residual stress measurement strategies, including how best to leverage the efforts put forth here for other weld problems.
Redox flow batteries (RFBs) are an attractive choice for stationary energy storage of renewables such as solar and wind. Non-aqueous redox flow batteries (NARFBs) have garnered broad interest due to their high-voltage operation compared to their aqueous counterparts. Further, the utilization of bipolar redox-active molecules (BRMs) is a practical way to alleviate the crossover faced by asymmetric RFBs. In this work, ferrocene (Fc) and phthalimide (PI) are covalently linked by tethering groups that vary in structure and length. The compiled results suggest that the length and steric shielding ability of the linker group can greatly influence the stability and overall performance of Fc-n-PI BRM-based NARFBs. The primary sources of capacity loss are found to be BRM degradation for straight-chain spacers of <6 carbons and membrane (Nafion) fouling. Fc-hexyl-PI provided the most stable battery cycling, with coulombic efficiencies of >98 % over 100 cycles (~13 days). A NARFB using Fc-hexyl-PI as the active material exhibited a high working voltage (1.93 V) and maximum capacity (1.28 Ah L⁻¹). Additionally, this work highlights rational strategies to improve cycling stability and optimize NARFB performance.
Data-consistent inversion is designed to solve a class of stochastic inverse problems where the solution is a pullback of a probability measure specified on the outputs of a quantity of interest (QoI) map. This work presents stability and convergence results for the case where finite QoI data result in an approximation of the solution as a density. Given their popularity in the literature, separate results are proven for three different approaches to measuring discrepancies between probability measures: f-divergences, integral probability metrics, and Lp metrics. In the context of integral probability metrics, we also introduce a pullback probability metric that is well-suited for data-consistent inversion. This fills a theoretical gap in the convergence and stability results for data-consistent inversion, which have mostly focused on convergence of solutions associated with approximate maps. Numerical results are included to illustrate key theoretical results with intuitive and reproducible test problems, including a demonstration of convergence in the measure-theoretic "almost" sense.
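For orientation, in the density form studied here the data-consistent solution updates an initial density by the ratio of the observed output density to the push-forward (predicted) density:

$$
\pi_{\mathrm{up}}(\lambda) \;=\; \pi_{\mathrm{init}}(\lambda)\,
\frac{\pi_{\mathrm{obs}}\!\big(Q(\lambda)\big)}{\pi_{\mathrm{pred}}\!\big(Q(\lambda)\big)},
$$

where $Q$ is the QoI map and $\pi_{\mathrm{pred}}$ is the push-forward of $\pi_{\mathrm{init}}$ through $Q$. The stability results then concern how errors in approximating $\pi_{\mathrm{obs}}$ and $\pi_{\mathrm{pred}}$ from finite data propagate to $\pi_{\mathrm{up}}$ under each choice of discrepancy measure.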
The performance and reliability of many structures and components depend on the integrity of interfaces between dissimilar materials. Interfacial toughness Γ is the key material parameter that characterizes resistance to interfacial crack growth, and Γ is known to depend on many factors including temperature. For example, previous work showed that the toughness of an epoxy/aluminum interface decreased by 40 % as the test temperature was increased from −60 °C to room temperature (RT). Interfacial integrity at elevated temperatures is of considerable practical importance. Recent measurements show that instead of continuing to decrease with increasing temperature, Γ increases when the test temperature is above RT. Cohesive zone finite element calculations of an adhesively bonded, asymmetric double cantilever beam specimen of the type used to measure Γ suggest that this increase in toughness may be a result of R-curve behavior generated by plasticity-enhanced toughening during stable subcritical crack growth with interfacial toughness defined as the critical steady-state limit value. In these calculations, which used an elastic-perfectly plastic epoxy model with a temperature-dependent yield strength, the plasticity-enhanced increase in Γ above its intrinsic value Γo depended on the ratio of interfacial strength σ* to the yield strength σyb of the bond material. There is a nonlinear relationship between Γ/Γo and σ*/σyb, with the value Γ/Γo increasing rapidly above a threshold value of σ*/σyb. The predicted increase in toughness can be significant. For example, there is nearly a factor of two predicted increase in Γ/Γo during micrometer-scale crack growth when σ*/σyb = 2 (a reasonable choice for σ*/σyb). Furthermore, contrary to other reported results, plasticity-enhanced toughening can occur prior to crack advance as the cohesive zone forms and the peak stress at the original crack tip translates to the tip of the fully formed cohesive zone. These results suggest that plasticity-enhanced toughening should be considered when modeling interfaces at elevated temperatures.
We investigate hydrodynamic fluctuations in the flow past a circular cylinder near the critical Reynolds number Rec for the onset of vortex shedding. Starting from the fluctuating Navier-Stokes equations, we perform a perturbation expansion around Rec to derive analytical expressions for the statistics of the fluctuating lift force. Molecular-level simulations using the direct simulation Monte Carlo method support the theoretical predictions of the lift power spectrum and amplitude distribution. Notably, we have been able to collect sufficient statistics at distances Re/Rec − 1 = O(10⁻³) from the instability that confirm the appearance of non-Gaussian fluctuations, and we observe that they are associated with intermittent vortex shedding. These results emphasize how unavoidable thermal-noise-induced fluctuations become dramatically amplified in the vicinity of oscillatory flow instabilities and that their onset is fundamentally stochastic.
A series of drained and undrained water-saturated constant mean-stress tests were performed to investigate the strength, elasticity, and poroelastic response of a water-saturated high porosity nonwelded tuff. Drained strengths are found to increase with increasing effective confining pressures. Elastic moduli increase with increasing mean stress. Undrained strengths are small due to development of high pore pressures that generate low effective confining pressures. Skempton’s values are pressure dependent and appear to reflect the onset of inelastic deformation. Permeabilities decrease after deformation from ∼10⁻¹⁴ to ∼10⁻¹⁶ m² and are a function of the applied confining pressure. Deformation is dominated by pore collapse, compaction, and intense microfracturing, with the undrained tests favoring microfracture-dominant deformation and the drained tests favoring compaction-dominant deformation. These property determinations and observations are used to develop/parameterize physics-based models for underground explosives testing.
Nematic liquid crystal elastomers (LCEs) are a unique class of network polymers with the potential for enhanced mechanical energy absorption and dissipation capacity over conventional network polymers because they exhibit both conventional viscoelastic behavior and soft-elastic behavior (nematic director changes under shear loading). This additional inelastic mechanism makes them appealing as candidate damping materials in a variety of applications from vibration to impact. Lattice structures made from LCEs provide further mechanical energy absorption and dissipation capacity associated with packing out the porosity under compressive loading. Understanding the extent of mechanical energy absorption, which is the work per unit mass (or volume) absorbed during loading, versus dissipation, which is the work per unit mass (or volume) dissipated during a loading cycle, requires measurement of both the loading and unloading response. In this study, a bench-top linear actuator was employed to characterize the loading-unloading compressive response of polydomain and monodomain LCE polymers and polydomain LCE lattice structures with two different porosities (nominally, 62% and 85%) at both low and intermediate strain rates at room temperature. As a reference material, a bisphenol-A (BPA) polymer with a glass transition temperature (9 °C) similar to that of the nematic LCE (4 °C) was also characterized under the same conditions for comparison with the LCE polymers. Based on the loading-unloading stress-strain curves, the energy absorption and dissipation for each material at different strain rates (0.001, 0.1, 1, 10, and 90 s⁻¹) were calculated with consideration of maximum stress and material mass/density. The strain-rate effect on the mechanical response and on energy absorption and dissipation behaviors was determined. The energy dissipation ratio was also calculated from the resultant loading and unloading stress-strain curves. All five materials showed significant but different strain-rate effects on the energy dissipation ratio. The solid LCE and BPA materials showed greater energy dissipation capabilities at both low (0.001 s⁻¹) and high (above 1 s⁻¹) strain rates, but not at the strain rates in between. The polydomain LCE lattice structure showed superior energy dissipation performance compared with the solid polymers, especially at high strain rates.
A novel algorithm for explicit temporal discretization of the variable-density, low-Mach Navier-Stokes equations is presented here. Recognizing that there is a redundancy between the mass conservation equation, the equation of state, and the transport equation(s) for the scalar(s) that characterize the thermochemical state, and that this redundancy destabilizes explicit methods, we demonstrate how to analytically eliminate the redundancy and propose an iterative scheme to solve the resulting transformed scalar equations. The method obtains second-order accuracy in time regardless of the number of iterations, so one can terminate this subproblem once stability is achieved. Hence, flows with larger density ratios can be simulated while still retaining the efficiency, low cost, and parallelizability of an explicit scheme. The temporal discretization algorithm is used within a pseudospectral direct numerical simulation which extends the method of Kim, Moin, and Moser for incompressible flow [17] to the variable-density, low-Mach setting, where we demonstrate stability for density ratios up to ∼25.7.
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
Fjeldsted, Aaron P.; Morrow, Tyler; Scott, Clayton; Zhu, Yilun; Holland, Darren E.; Hanks, Ephraim M.; Wolfe, Douglas E.
The adoption of machine learning approaches for gamma-ray spectroscopy has received considerable attention in the literature. Many studies have investigated the deployment of various algorithm architectures to a specific task. However, little attention has been afforded to the development of the datasets leveraged to train the models. Such training datasets typically span a set of environmental or detector parameters to encompass a problem space of interest to a user. Variations in these measurement parameters will also induce fluctuations in the detector response, including expected pile-up and ground scatter effects. Fundamental to this work is the understanding that (1) the underlying spectral shape varies as the measurement parameters change and (2) the statistical uncertainties associated with two spectra impact their level of similarity. Whereas previous studies attributed an arbitrary discretization to the measurement parameters when generating their synthetic training data, this work introduces a principled methodology for efficient spectral-based discretization of a problem space. A spectral comparison measure that respects the signal-to-noise ratio (SNR) and a Gaussian Process Regression (GPR) model are used to predict spectral similarity across a range of measurement parameters. The approach is demonstrated by dividing a problem space, ranging from 5 cm to 100 cm standoff distances and 5 μCi–100 μCi of ¹³⁷Cs, into three unique combinations of measurement parameters. The findings from this work will aid in creating more robust datasets that incorporate many possible measurement scenarios, reduce the number of required experimental test-set measurements, and possibly enable experimental training data collection for gamma-ray spectroscopy.
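A schematic of the GPR stage might look like the following; the design points, similarity scores, kernel, and length scales are illustrative placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training design: (standoff distance [cm], activity [uCi]) pairs
# with a spectral-similarity score against a chosen reference measurement.
X = rng.uniform([5, 5], [100, 100], size=(30, 2))
y = np.exp(-np.linalg.norm((X - X[0]) / [50.0, 50.0], axis=1))  # stand-in scores

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=[20.0, 20.0]) + WhiteKernel(1e-3),
    normalize_y=True,
).fit(X, y)

# Predict similarity (and its uncertainty) across the full problem space;
# regions where predicted similarity drops below a chosen threshold would be
# assigned their own discretization bin.
grid = np.column_stack([g.ravel() for g in
                        np.meshgrid(np.linspace(5, 100, 50),
                                    np.linspace(5, 100, 50))])
mean, std = gpr.predict(grid, return_std=True)
```

The predictive standard deviation also indicates where additional reference measurements would most improve the discretization.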
We propose a method to couple local and nonlocal diffusion models. By inheriting desirable properties such as patch tests, asymptotic compatibility and unintrusiveness from related splice and optimization-based coupling schemes, it enables the use of weak (or variational) formulations, is computationally efficient and straightforward to implement. We prove well-posedness of the coupling scheme and demonstrate its properties and effectiveness in a variety of numerical examples.
Rattlesnake is a combined-environments, multiple input/multiple output control system for dynamic excitation of structures under test. It provides capabilities to control multiple responses on the part using multiple exciters using various control strategies. Rattlesnake is written in the Python programming language to facilitate multiple input/multiple output vibration research by allowing users to prescribe custom control laws to the controller. Rattlesnake can target multiple hardware devices, or even perform synthetic control to simulate a test virtually. Rattlesnake has been used to execute control problems with up to 200 response channels and 24 shaker drives. This document describes the functionality, architecture, and usage of the Rattlesnake controller to perform combined environments testing.
We numerically investigate the mechanisms responsible for the induced seismicity associated with CO2 injection at the Illinois Basin–Decatur Project (IBDP). We build a geologically consistent model that honors key stratigraphic horizons and 3D fault surfaces interpreted using surface seismic data and microseismicity locations. We populate our model with reservoir and geomechanical properties estimated using well-log and core data. We then perform coupled multiphase flow and geomechanics modeling to investigate the impact of CO2 injection on fault stability using the Coulomb failure criterion. We calibrate our flow model using measured reservoir pressure during the CO2 injection phase. Our model results show that pore-pressure diffusion along faults connecting the injection interval to the basement is essential to explain the destabilization of the regions where microseismicity occurred, and that poroelastic stresses alone would result in stabilization of those regions. Slip tendency analysis indicates that, due to their orientations with respect to the maximum horizontal stress direction, the faults where the microseismicity occurred were very close to failure prior to injection. These model results highlight the importance of accurate subsurface fault characterization for CO2 sequestration operations.
Accident analysis and ensuring power plant safety are pivotal in the nuclear energy sector. Significant strides have been achieved over the past few decades regarding fire protection and safety, primarily centered on design and regulatory compliance. Yet, after the Fukushima accident a decade ago, the imperative to enhance measures against fire, internal flooding, and power loss has intensified. Hence, a comprehensive, multilayered protection strategy against severe accidents is needed. Consequently, gaining a deeper insight into pool fires and their behavior through extensive, validated data can greatly aid in improving these measures using advanced validation techniques. A model validation study was performed at Sandia National Laboratories (SNL) in which a 30-cm diameter methanol pool fire was modeled using the SIERRA/Fuego turbulent reacting flow code. This study compared model results against a standard validation experiment, and its conclusions have been published. The fire was modeled with a large eddy simulation (LES) turbulence model with subgrid turbulent kinetic energy closure. Combustion was modeled using a strained laminar flamelet library approach. Radiative heat transfer was accounted for with a model utilizing the gray-gas approximation. In the present study, additional validation analysis is performed using the area validation metric (AVM), applied to multiple datasets involving different variables and temporal/spatial ranges and intervals. The results provide insight into the use of the area validation metric on such temporally varying datasets and the importance of physics-aware use of the metric for proper analysis.
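The AVM itself is simple to state: it is the area between the model and experimental cumulative distribution functions of the compared quantity. A minimal numeric sketch (generic, with placeholder data, not the SIERRA/Fuego analysis pipeline) follows.

```python
import numpy as np

def area_validation_metric(model_samples, experiment_samples):
    """Area between two empirical CDFs.

    Integrates |F_model(x) - F_exp(x)| dx over the pooled support, which is the
    usual definition of the area validation metric for a scalar quantity.
    """
    xs = np.sort(np.concatenate([model_samples, experiment_samples]))
    F_m = np.searchsorted(np.sort(model_samples), xs, side="right") / len(model_samples)
    F_e = np.searchsorted(np.sort(experiment_samples), xs, side="right") / len(experiment_samples)
    # Both CDFs are right-continuous step functions; integrate between jumps.
    return float(np.sum(np.abs(F_m[:-1] - F_e[:-1]) * np.diff(xs)))

# Example with placeholder "temperature" samples from model and experiment.
rng = np.random.default_rng(0)
print(area_validation_metric(rng.normal(1250, 40, 500), rng.normal(1300, 60, 80)))
```

Because the metric carries the units of the compared quantity, it reads directly as a model-form error bound rather than a dimensionless score.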
Exploding bridgewire detonators (EBWs) containing pentaerythritol tetranitrate (PETN) that have been exposed to high temperatures may not function following discharge of the design electrical firing signal from a charged capacitor. Knowing the functionality of these arbitrarily oriented EBWs is crucial when making safety assessments of detonators in accidental fires. Orientation effects are significant only when the PETN is partially melted. The melting temperature can be measured with a differential scanning calorimeter. Nonmelting EBWs will be fully functional provided the detonator never exceeds 406 K (133 °C) for at least 1 h. Conversely, EBWs will not be functional once the average input pellet temperature exceeds 414 K (141 °C) for at least 1 min, which is long enough to cause the PETN input pellet to completely melt. Functionality of the EBWs at temperatures between 406 and 414 K depends on orientation and can be predicted using a stratification model for downward-facing detonators but is more complex for arbitrary orientations. A conservative rule of thumb is to assume that the EBWs are fully functional unless the PETN input pellet has completely melted.
This paper details a computational framework for producing automated, graphical workflows and shows how this framework can be deployed to support complex modeling problems like those in nuclear engineering. Key benefits of the framework include: automating previously manual workflows; intuitive construction and communication of workflows through a graphical interface; and automated file transfer and handling for workflows deployed across heterogeneous computing resources. This paper demonstrates the framework's application to probabilistic post-closure performance assessment of systems for deep geologic disposal of nuclear waste. However, the framework is a general capability that can help users run a variety of computational studies.
Koper, Keith D.; Burlacu, Relu; Murray, Riley; Baker, Ben; Tibi, Rigobert; Mueen, Abdullah
Determining the depths of small crustal earthquakes is challenging in many regions of the world because most seismic networks are too sparse to resolve trade-offs between depth and origin time with conventional arrival-time methods. Precise and accurate depth estimation is important because it can help seismologists discriminate between earthquakes and explosions, which is relevant to monitoring nuclear test ban treaties and producing earthquake catalogs that are uncontaminated by mining blasts. Here, we examine the depth sensitivity of several physics-based waveform features for ∼8000 earthquakes in southern California that have well-resolved depths from arrival-time inversion. We focus on small earthquakes (2 < ML < 4) recorded at local distances (<150 km), for which depth estimation is especially challenging. We find that differential magnitudes (ML − Mc) are positively correlated with focal depth, implying that coda-wave excitation decreases with focal depth. We analyze a simple proxy for relative frequency content, Φ ≡ log10(M0) + 3 log10(fc), and find that source spectra are preferentially enriched in high frequencies, or "blue-shifted," as focal depth increases. We also find that two spectral amplitude ratios, Rg(0.5–2 Hz)/Sg(0.5–8 Hz) and Pg/Sg at 3–8 Hz, decrease as focal depth increases. Using multilinear regression with these features as predictor variables, we develop models that can explain 11%–59% of the variance in depths within 10 subregions and 25% of the depth variance across southern California as a whole. We suggest that incorporating these features into a machine learning workflow could help resolve focal depths in regions that are poorly instrumented and lack large databases of well-located events. Some of the waveform features we evaluate in this study have previously been used as source discriminants, and our results imply that their effectiveness in discrimination is partially because explosions generally occur at shallower depths than earthquakes.
A series of reactive-transport models of Enhanced Geothermal Systems (EGS) was constructed using the reactive transport code PFLOTRAN to examine the effect of matrix thermal contraction and mineral dissolution/precipitation on fracture flow in the context of grid cell size and model complexity. It was found that, for thermal drawdown at the production well, the impact of fracture-zone grid cell size is negligible.
Although increasing numbers of distributed energy resources (DERs) and microgrids are being deployed, current IEEE and utility standards generally place strict limits on their interconnection inside secondary networks. Secondary networks are low-voltage meshed (non-radial) distribution systems that create redundancy in the path from the main grid source to each load. This redundancy provides a high level of immunity to disruptions in the distribution system, and thus extremely high reliability of electric power service. There are two main types of secondary networks, called grid and spot secondary networks, both of which are used worldwide. In the future, primary networks, that is, looped or meshed distribution systems at the primary-voltage (medium-voltage) level, may also become common as a means of improving distribution reliability and resilience.
Here, we used a combined molecular dynamics/active learning (AL) approach to create machine learning models that can predict the diffusion coefficients of epichlorohydrin and chloropropene carbonate, the reactant and product of a common CO2 cycloaddition reaction, in metal-organic frameworks (MOFs). Nanoporous MOFs are effective catalysts for the cycloaddition of CO2 to epoxides. The diffusion rates within nanoporous catalysts can control the rate of reaction, as the reactants and products must diffuse to the active sites within the MOF and then out of the nanoporous material for reusability. However, the diffusion process is routinely ignored when searching for new materials in catalytic applications. We verified improvement during the AL process by tracking metrics on the same groups of MOFs throughout. Metal identity was found to have little impact on diffusion rates, while structural features like the pore-limiting diameter act as a threshold, with a minimum value needed for high diffusion rates. We identified the MOFs with the highest epichlorohydrin and chloropropene carbonate diffusion coefficients, which can be used for further studies of reaction energetics.
Tin-lead-antimony (50Sn–47Pb–3Sb wt.%) soldered assemblies were mechanically tested approximately 30 years after initial production and found to have solder joints of reduced strength. The microstructure of this solder alloy exhibits a ternary eutectic structure with Sn-rich, Pb-rich, and SnSb phases. Accelerated aging was performed to evaluate solder microstructural coarsening and the associated strength of laboratory solder joints in order to correlate these properties to the "naturally aged" solder joints. Isothermal aging was conducted at room temperature, 55, 70, 100, and 135 °C for aging times that ranged from 0.1 to 365 days. The coarsening kinetics of the Pb-rich phase were determined through optical microscopy and image analysis methods established in previous studies on binary Sn–Pb solder. A kinetic equation was developed with a time exponent n of 0.43 and an activation energy of 24,000 J/mol, suggesting that grain boundary diffusion or other fast diffusion pathways control the microstructural evolution. Compression testing and Vickers microhardness showed significant strength loss within the first 20–30 days after soldering; thereafter, the microstructure and mechanical properties changed more slowly over long periods of time. Further, by combining the accelerated aging data and the microstructure-based kinetics, strength predictions were made that match well with the properties of the actual soldered assemblies naturally aged for 30 years. However, aging at the highest temperature of 135 °C produced anomalous behavior, suggesting that extraneous aging mechanisms are active; therefore, data obtained at this temperature or higher should not be used. Overall, the combined microstructural and mechanical property methods used in this study confirmed that the observed reduction in strength of ~30-year-old solder joints can be accounted for by the microstructural coarsening that takes place during long-term solid-state aging.
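As one common way to assemble the quoted parameters into a usable expression (the paper's exact functional form may differ), a coarsening law with a power-law time dependence and Arrhenius temperature dependence can be written as

$$
d(t) \;=\; d_0 + k_0\, t^{\,n} \exp\!\left(-\frac{Q}{RT}\right),
\qquad n \approx 0.43,\quad Q \approx 24{,}000\ \mathrm{J\,mol^{-1}},
$$

where $d$ is the characteristic Pb-rich phase size, $d_0$ its as-soldered value, $k_0$ a rate prefactor, $R$ the gas constant, and $T$ the absolute temperature. A law of this type is what allows the accelerated-aging data at elevated temperature to be extrapolated to the decades-long room-temperature exposure discussed above.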
Significant vibration amplitudes and cycles can be produced when traffic signal structures with low inherent damping are excited near one of their natural frequencies. For the mitigation of wind-induced vibrations, dynamic vibration absorbers coupled to the structure are often used. Here, this research investigates the performance of a tapered impact damper, consisting of a hanging spring-mass oscillator inside a housing capable of reducing vibration amplitude over a broader frequency range than the conventional tuned mass damper. A nonlinear, two degree-of-freedom model is developed with coordinates representing the traffic structure and the tapered impact damper. This research focuses on the application of the harmonic balance method to approximate the periodic solutions of the nonlinear equations to compute the nonlinear dynamics of the damped traffic signal structure. After designing and manufacturing a tapered impact damper, the traffic signal structure is tested with and without the damper using free vibration snapback tests. The experimental frequency and damping backbone curves are used to validate the analytical model, and the effectiveness of the damper is discussed.
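As a compact illustration of the harmonic balance idea invoked above (on a single-DOF Duffing surrogate rather than the paper's two-DOF impact-damper model, and with made-up parameter values), keeping only the fundamental harmonic x = A cos(ωt − φ) reduces the differential equation to an algebraic amplitude equation:

```python
import numpy as np
from scipy.optimize import brentq

# Single-harmonic balance for a Duffing oscillator,
#   x'' + 2*zeta*w0*x' + w0**2 * x + beta*x**3 = F*cos(w*t),
# used here as a one-DOF surrogate for the damped traffic signal structure.
w0, zeta, beta, F = 2 * np.pi * 1.0, 0.01, 50.0, 0.5   # illustrative parameters

def amplitude_residual(A, w):
    # Substituting x = A*cos(w*t - phi) and balancing the fundamental
    # harmonic yields one algebraic equation for the amplitude A.
    return ((w0**2 - w**2 + 0.75 * beta * A**2)**2
            + (2 * zeta * w0 * w)**2) * A**2 - F**2

freqs = np.linspace(0.8 * w0, 1.2 * w0, 200)
amps = [brentq(amplitude_residual, 1e-9, 10.0, args=(w,)) for w in freqs]
# Note: near resonance the response curve can be multivalued; a bracketing
# solver returns one branch, and a continuation scheme would trace them all.
print(max(amps))
```

The paper's analysis applies the same balancing procedure, with more harmonics and coordinates, to approximate the periodic solutions of the coupled structure-damper equations.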
Janicki, Tesia D.; Liu, Rui; Im, Soohyun; Wan, Zhongyi; Butun, Serkan; Lu, Shaoning; Basit, Nasir; Voyles, Paul M.; Evans, Paul G.; Schmidt, J.R.
Strontium titanate (SrTiO3, STO) is a complex metal oxide with a cubic perovskite crystal structure. Due to its easily described and understood crystal structure in the cubic phase, STO is an ideal model system for exploring the mechanistic details of solid-phase epitaxy (SPE) in complex oxides. SPE is a crystallization approach that aims to guide crystal growth at low homologous temperatures to achieve targeted microstructures. Beyond planar thin films, SPE can also exploit the addition of a chemically inert, noncrystallizing, amorphous obstacle in the path of crystallization to generate complex three-dimensional structures. The introduction of this mask fundamentally alters the SPE process, inducing a transition from two- to three-dimensional geometries and from vertical to lateral crystal growth under the influence of the crystal/mask/amorphous boundary. Using a combination of molecular dynamics simulations and experiments, we identify several unique phenomena in the nanoscale growth behaviors in both conventional (unmasked) and masked SPE. Examining conventional SPE of STO, we find that crystallization at the interface is strongly correlated to, and potentially driven by, density fluctuations in the region of the amorphous STO near the crystalline/amorphous interface with a strong facet dependence. In the masked case, we find that the crystalline growth front becomes nonplanar near contact with the mask. We also observe a minimum vertical growth requirement prior to lateral crystallization. Both phenomena depend on the relative bulk and interfacial free energies of the three-phase (crystal/mask/amorphous) system.
Many technologies require stable or metastable surface morphology. In this paper we study the factors that control the metastability of a common feature of rough surfaces: "hillocks." We use low energy electron microscopy to follow the evolution of the individual atomic steps in hillocks on Pd(111). We show that the uppermost island in the stack often adopts a static, metastable configuration. Modeling this result shows that the degree of the metastability depends on the configuration of steps dozens of atomic layers lower. Our model allows us to link surface metastability to the atomic processes of surface evolution.
Polymers are an effective test bed for studying topological constraints in condensed matter due to a wide array of synthetically available chain topologies. When linear and ring polymers are blended together, emergent rheological properties are observed, as the blend can be more viscous than either of the individual components. This emergent behavior arises because ring-linear blends can form long-lived topological constraints as the linear polymers thread the ring polymers. Here, we demonstrate how the Gauss linking integral can be used to efficiently evaluate the relaxation of topological constraints in ring-linear polymer blends. For majority-linear blends, the relaxation rate of topological constraints depends primarily on reptation of the linear polymers, resulting in the diffusive time τd,R for rings of length NR blended with linear chains of length NL scaling as τd,R ∼ NR^2 NL^3.4.
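For readers unfamiliar with the tool, the Gauss linking integral between two closed curves γ₁ and γ₂ is Lk = (1/4π) ∮∮ (r₁ − r₂)·(dr₁ × dr₂)/|r₁ − r₂|³. A minimal numeric sketch (a midpoint-rule discretization evaluated on a Hopf link, not the paper's production analysis) is:

```python
import numpy as np

def gauss_linking_number(curve1, curve2):
    """Midpoint-rule approximation of the Gauss linking integral.

    curve1, curve2 : (N, 3) arrays of points on closed curves (the last point
    wraps to the first). Converges to an integer, the linking number, for
    disjoint closed curves as the discretization is refined.
    """
    d1 = np.roll(curve1, -1, axis=0) - curve1          # segment vectors dr1
    d2 = np.roll(curve2, -1, axis=0) - curve2          # segment vectors dr2
    m1 = curve1 + 0.5 * d1                             # segment midpoints
    m2 = curve2 + 0.5 * d2
    r = m1[:, None, :] - m2[None, :, :]                # pairwise separations
    cross = np.cross(d1[:, None, :], d2[None, :, :])
    integrand = np.einsum('ijk,ijk->ij', r, cross) / np.linalg.norm(r, axis=2) ** 3
    return integrand.sum() / (4 * np.pi)

# Hopf link: two unit circles in orthogonal planes, offset so they interlock.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
ring1 = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
ring2 = np.column_stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)])
print(gauss_linking_number(ring1, ring2))   # approaches +/-1
```

In a blend simulation, tracking how such pairwise linking values decay in time provides the relaxation measure of topological constraints described above.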
The transmission interference fringe (TIF) technique was developed to visualize the dynamics of evaporating droplets, building on the reflection interference fringe (RIF) technique for micro-sized droplets. A geometric formulation was derived to determine the contact angle (CA) and height of macro-sized droplets without the prism required in RIF. TIF characteristics were analyzed through experiments and simulations, demonstrating a wider range of contact angles, from 0 to 90°, in contrast to RIF's limited range of 0–30°. TIF was used to visualize the dynamic evaporation of droplets in the constant contact radius (CCR) mode, observing from the interference fringe formation that the droplet profile changes from convex-only to convex-concave at the end of dry-out. TIF also captured the increase in contact angle through the increase in fringe radius. This observation is unique to the interference fringe (IF) approach, which detects the fringes formed on the far-field screen between the reflection from the central convex profile and the reflection from the edge concave profile. Unlike general microscopy techniques, TIF can detect far-field interference fringes because it focuses beyond the droplet-substrate interface. The formation of the convex-concave profile during CCR evaporation is believed to be influenced by the non-uniform evaporative flux along the droplet surface.
Caskey, Susan; Keating, Charles B.; Katina, Polinpapilinho F.; Bradley, Joseph M.; Hodge, Richard; Martin, James N.
The purpose of this paper is to explore the concept of ‘enterprise’ in the context of Systems Engineering (SE). The term ‘enterprise’ has been used extensively to describe large, complex entities with an extensive scope of operations. However, a deeper examination of the significance of ‘enterprise’ for SE can provide insights as the discipline continues to confront increasingly complex, uncertain, ambiguous, and integrated entities struggling to thrive in the future. The paper explores three central topics. First, the concept of enterprise is introduced as a central aspect of the future focus for SE, as recognized in the INCOSE SE Vision 2035. Second, a more detailed examination of the enterprise concept is developed in relationship to SE. The thrust of this examination is to understand the nature and role of ‘enterprise’ across a broad spectrum of literature and knowledge, ultimately providing a more informed perspective of enterprise for SE. As part of this exploration, a bibliometric analysis of the term ‘enterprise’ is performed, extracting key themes (clusters) in the ‘enterprise’ literature. Third, challenges for further development and inculcation of ‘enterprise’ within the SE discipline, and for supporting realization of the SE Vision 2035, are suggested. These challenges point out the need to ‘think differently’ about ‘enterprise’ within the SE context. ‘Enterprise’ is proposed as a central, albeit different, perspective for the SE discipline. Finally, the paper closes with a first-generation perspective on ‘enterprise’ in pursuit of the SE Vision 2035.
Organizations play a key role in supporting various societal functions, ranging from environmental governance to the manufacturing of goods. The behaviors of organizations are shaped by various influences, including information, technology, authority, economic leverage, historical experiences, and external factors such as regulations. This paper introduces a generalized framework, focused on the relative structure of an organization (tight vs. loose), that can be used to understand how different influence pathways can impact decision-making within differently structured organizations. This generalized framework is then translated into a modeling and simulation platform, built on a systems dynamics approach, to assess the implications of these structural differences for resilience to disinformation (measured by the organizational behaviors of timeliness and inclusion of quality information). Preliminary results indicate that a tightly structured organization may be less timely at processing information but could be more resilient against using poor-quality information in organizational decisions compared to a loosely structured organization. Ongoing work is underway to understand the robustness of these findings and to validate current model design activities with empirical insights.
Estimating spatially distributed properties such as permeability from available sparse measurements is a great challenge in efficient subsurface CO2 storage operations. In this paper, a deep generative model that can accurately capture complex subsurface structure is tested with an ensemble-based inversion method for accurate and accelerated characterization of CO2 storage sites. We chose the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) for its realistic reservoir property representation and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) for its robust data fitting and uncertainty quantification capability. The WGAN-GP is trained to generate high-dimensional permeability fields from a low-dimensional latent space, and ES-MDA then updates the latent variables by assimilating available measurements. Several subsurface site characterization examples, including Gaussian, channelized, and fractured reservoirs, are used to evaluate the accuracy and computational efficiency of the proposed method, and the main features of the unknown permeability fields are characterized accurately with reliable uncertainty quantification. Furthermore, the estimation performance is compared with a widely used variational, i.e., optimization-based, inversion approach, and the proposed approach outperforms the variational inversion method in several benchmark cases. We explain this superior performance by visualizing the objective function in the latent space: because of nonlinear and aggressive dimension reduction via generative modeling, the objective function surface becomes extremely complex, while the ensemble approximation can smooth out the multi-modal surface during the minimization. This suggests that, unless convergence-ensuring modifications are implemented in the variational inversion, the ensemble-based approach combines better with deep generative models than the variational approach does, at the cost of additional forward model runs.
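For context, in the standard ES-MDA form of Emerick and Reynolds, given $N_a$ assimilation steps with inflation coefficients $\alpha_i$ satisfying $\sum_i \alpha_i^{-1} = 1$, each ensemble member's latent vector $z_j$ is updated as

$$
z_j \;\leftarrow\; z_j + C_{zd}\left(C_{dd} + \alpha_i C_D\right)^{-1}
\left(d_{\mathrm{obs}} + \sqrt{\alpha_i}\,\varepsilon_j - d_j\right),
\qquad \varepsilon_j \sim \mathcal{N}(0, C_D),
$$

where $d_j = g(G(z_j))$ is the simulated data obtained by pushing the latent sample through the WGAN-GP generator $G$ and the flow simulator $g$, $C_{zd}$ and $C_{dd}$ are the ensemble cross- and auto-covariances, and $C_D$ is the measurement-error covariance. Because the update acts only through these ensemble covariances, it requires no gradients of the generator, which is precisely why it tolerates the rugged latent-space objective described above.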
The Gamma Detector Response and Analysis Software (GADRAS) package includes an inverse modeling tool that is helpful in identifying characteristics of unknown radioactive materials. Traditionally, uncertainties in this analysis were derived solely from the quality of the measurement data and the fit of the synthetic spectra. This paper aims to rigorously quantify additional sources of uncertainty, focusing on those arising from the measurements being analyzed, from Detector Response Function (DRF) characterization, and from DRF extrapolation. Applying these findings to the BeRP ball benchmark dataset, we demonstrate the impact of these uncertainties on plutonium and polyethylene estimates. The results underscore the importance of incorporating diverse uncertainty sources to enhance the accuracy and reliability of GADRAS's inverse modeling capabilities.
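For concreteness, when the three additional contributions are treated as independent, a standard way to combine them into a total uncertainty on an estimated quantity (e.g., a plutonium mass) is root-sum-square addition; whether GADRAS's inverse tool propagates them exactly this way is not asserted here:

```latex
% Illustrative combination, assuming independent uncertainty sources:
\sigma_{\text{total}} \;=\; \sqrt{\sigma_{\text{meas}}^{2} \;+\; \sigma_{\text{DRF}}^{2} \;+\; \sigma_{\text{extrap}}^{2}}
```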
A novel approach is presented for parametric analysis of remotely sensed ground and cloud clutter. A spatial-frequency-domain clutter model is generated from an extensive, one-year database of weather imagery, and statistics are given for each spatial frequency. This approach is useful for the analysis and design of spatial and temporal clutter-rejection filters, which can also be analyzed in this domain.
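As an illustration of the kind of per-spatial-frequency statistics involved (a sketch under assumed processing details such as mean removal and radial averaging; the paper's exact processing chain is not reproduced), one could accumulate radially averaged power spectra over an image database:

```python
import numpy as np

def clutter_spectrum_stats(images):
    """Sketch: accumulate per-spatial-frequency clutter statistics from a
    stack of equally sized weather images by radially averaging each 2D
    power spectrum. Details (mean removal, no windowing or calibration)
    are illustrative assumptions."""
    spectra = []
    for img in images:
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        p = np.abs(f) ** 2
        ny, nx = p.shape
        y, x = np.indices(p.shape)
        r = np.hypot(x - nx // 2, y - ny // 2).astype(int)  # radial frequency bin
        counts = np.bincount(r.ravel())
        radial = np.bincount(r.ravel(), weights=p.ravel()) / np.maximum(counts, 1)
        spectra.append(radial[: min(nx, ny) // 2])
    spectra = np.asarray(spectra)
    return spectra.mean(axis=0), spectra.std(axis=0)  # mean and spread per frequency
```

Statistics of this form could then parameterize clutter-rejection filter design directly in the same spatial-frequency domain.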
Anomalous behavior poses serious risks to assured performance and reliability of complex, high-consequence systems. For spaceborne assets and their state-of-health (SOH) telemetry, the challenges of high-dimensional data of varying data types are compounded by computational limitations from size, weight, and power (SWaP) constraints as well as data availability. Automated anomaly detection methods tend to perform poorly under these constraints, while current operational approaches can introduce delays in response time due to the manual, retrospective processes for understanding system failures. As a result, presently deployed space systems, and those deployed in the near future, face situations where mission operations might be delayed or only be able to operate under degraded capabilities. Here, we examine a near-term lightweight solution that provides real-time detection capabilities for rare events and assess state-of-the-art anomaly detection techniques against real SOH telemetry from space platforms. This report describes our methodology and research, which could support more automated capabilities for comprehensive space operations as well as for other resource-constrained edge applications.
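For context on what a "lightweight" detector can look like under SWaP constraints, here is a generic streaming z-score baseline with O(1) per-sample cost; it illustrates the resource regime discussed above and is not one of the report's assessed techniques:

```python
import math

class StreamingZScore:
    """Generic real-time anomaly detector sketch for SWaP-constrained
    platforms: an exponentially weighted running mean and variance per
    telemetry channel, flagging samples whose z-score exceeds a threshold."""

    def __init__(self, alpha=0.01, threshold=6.0, warmup=30):
        self.alpha, self.threshold, self.warmup = alpha, threshold, warmup
        self.n, self.mean, self.var = 0, 0.0, 0.0

    def update(self, x):
        """Process one sample in O(1) time and memory; return True if anomalous."""
        self.n += 1
        if self.n == 1:
            self.mean = x
            return False
        if self.n <= self.warmup:                     # warm-up: learn statistics only
            d = x - self.mean
            self.mean += d / self.n
            self.var += (d * (x - self.mean) - self.var) / self.n
            return False
        z = abs(x - self.mean) / math.sqrt(self.var + 1e-12)
        d = x - self.mean
        self.mean += self.alpha * d                   # exponentially weighted baseline
        self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return z > self.threshold
```

A per-channel detector of this kind fits comfortably within edge compute budgets, which is precisely the regime in which more capable learned methods must be benchmarked.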
We study the problem of multifidelity uncertainty propagation for computationally expensive models. In particular, we consider the general setting where the high-fidelity and low-fidelity models have dissimilar parameterizations, in terms of both the number of random inputs and their probability distributions, which can be either known in closed form or provided through samples. We derive novel multifidelity Monte Carlo estimators that rely on a shared subspace between the high-fidelity and low-fidelity models in which the parameters follow the same probability distribution, i.e., a standard Gaussian. We build this shared space by employing normalizing flows to map different probability distributions into a common one, together with linear and nonlinear dimensionality reduction techniques (active subspaces and autoencoders, respectively) that capture the subspaces in which the models vary the most. We then compose the existing low-fidelity model with these transformations and construct modified models with increased correlation with the high-fidelity model, which therefore yield multifidelity estimators with reduced variance. A series of numerical experiments illustrate the properties and advantages of our approaches.
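Once both models are composed with these transformations so that they share a standard-Gaussian input space, the resulting estimator takes a control-variate form. The following sketch assumes that composition has already been done (the flow and dimension-reduction maps are not reproduced here), with `f_hi` and `f_lo` as the transformed models:

```python
import numpy as np

def mf_estimate(f_hi, f_lo, dim, n_hi=50, n_lo=5000, seed=0):
    """Control-variate multifidelity Monte Carlo sketch. f_hi and f_lo are
    the high- and low-fidelity models *after* composition with the
    normalizing-flow and dimension-reduction maps, so both accept the
    same z ~ N(0, I) inputs."""
    rng = np.random.default_rng(seed)
    z_pair = rng.standard_normal((n_hi, dim))        # shared (paired) samples
    y_hi = np.array([f_hi(z) for z in z_pair])
    y_lo = np.array([f_lo(z) for z in z_pair])
    z_extra = rng.standard_normal((n_lo, dim))       # cheap extra LF-only samples
    y_lo_extra = np.array([f_lo(z) for z in z_extra])
    # Control-variate weight from the paired samples; the higher the HF-LF
    # correlation (improved by the transformations), the larger the
    # variance reduction relative to plain Monte Carlo.
    beta = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)
    return y_hi.mean() + beta * (y_lo_extra.mean() - y_lo.mean())
```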
In this study, different approaches to performance assessment (PA) of the long-term safety of a repository for radioactive waste were examined. This investigation was carried out as part of the DECOVALEX-2023 project, an international collaborative effort for research and model comparison. One task of the DECOVALEX-2023 project was the Salt Performance Assessment Modelling task (Salt PA), which aimed to compare the various models and methods employed in the performance assessment of deep geological repositories in salt. In the context of the Salt PA task, three teams, from SNL (United States), Quintessa Ltd (United Kingdom), and GRS (Germany), examined the consequences of employing different levels of abstraction when modelling the repository's geometry and implementing various features and processes, using the example of a simple hypothetical repository structure in domal salt. Each team applied its own tool: PFLOTRAN (SNL), QPAC (Quintessa), and LOPOS (GRS). These tools differ fundamentally in their numerical concepts and in the degree of detail with which they represent the underlying physical processes. The discussion focused on when simplifications can be appropriately applied and what consequences result from them. It was further explored when, and whether, a higher level of fidelity in geometry or physical processes is required.
For multi-scale, multi-physics applications, e.g., the turbulent combustion code Pele, robust and accurate dimensionality reduction is crucial to solving problems at exascale and beyond. A recently developed technique, Co-Kurtosis-based Principal Component Analysis (CoK-PCA), which leverages the principal vectors of the co-kurtosis tensor, is a promising alternative to traditional PCA for complex chemical systems. To improve the effectiveness of this approach, we employ artificial neural networks to reconstruct the thermo-chemical scalars, species production rates, and overall heat release rates corresponding to the full state space. Our focus is on bolstering confidence in this deep-learning-based nonlinear reconstruction through Uncertainty Quantification (UQ) and Sensitivity Analysis (SA). UQ involves quantifying uncertainties in inputs and outputs, while SA identifies influential inputs. A noteworthy challenge is the computational expense inherent in both endeavors. To address this, we employ Monte Carlo methods to quantify and propagate uncertainties in our reduced spaces while managing computational demands. This work has implications not only for combustion modeling but also for the broader UQ community: by demonstrating the reliability and robustness of CoK-PCA dimensionality reduction and the associated deep learning predictions, it enables researchers and decision-makers to navigate complex combustion systems with greater confidence.
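For readers unfamiliar with CoK-PCA, the sketch below shows one common realization: form the fourth-order co-kurtosis (moment) tensor of the centered data, unfold it along the first mode, and take the leading left singular vectors as the reduced basis. This illustrative route assumes modest feature counts and may differ in detail from the Pele workflow:

```python
import numpy as np

def cok_pca_basis(X, k):
    """One common realization of CoK-PCA: compute the fourth-order moment
    (co-kurtosis) tensor E[x_i x_j x_k x_l] of centered data X
    (n_samples x n_features), unfold it into an (d x d**3) matrix, and
    extract the k leading left singular vectors. Cost grows as d**4, so
    this sketch targets small feature counts."""
    Xc = X - X.mean(axis=0)
    n, d = Xc.shape
    K = np.zeros((d, d, d, d))
    for x in Xc:                                   # accumulate E[x_i x_j x_k x_l]
        K += np.einsum('i,j,k,l->ijkl', x, x, x, x)
    K /= n
    U, _, _ = np.linalg.svd(K.reshape(d, d ** 3), full_matrices=False)
    return U[:, :k]                                # principal co-kurtosis vectors
```

Because the fourth moment emphasizes extreme (heavy-tailed) events, the resulting basis tends to preserve stiff, localized features such as ignition kernels better than covariance-based PCA.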
U.S. nuclear power facilities face increasing challenges in meeting dynamic security requirements driven by evolving and expanding threats while keeping costs reasonable enough to make nuclear energy competitive. Past approaches have often implemented security features after a facility has been designed, and without attention to optimization, which can lead to cost overruns. Incorporating security into the design process can provide robust, cost-effective, and sufficient physical protection systems. The purpose of this report is to capture lessons learned by the Advanced Reactor Safeguards and Security (ARSS) program that may be beneficial for other advanced and small modular reactor (SMR) vendors to use when developing security systems and postures. The report captures relevant information that can be used in the security-by-design (SeBD) process for SMR and microreactor vendors.
Agafonov, Andrei; Pineda-Romero, Nayely; Witman, Matthew D.; Nassif, Vivian; Vaughan, Gavin B.M.; Lei, Lei; Ling, Sanliang; Grant, David M.; Dornheim, Martin; Allendorf, Mark; Stavila, Vitalie; Zlotea, Claudia
The vast chemical space of high entropy alloys (HEAs) makes trial-and-error experimental approaches for materials discovery intractable and often necessitates data-driven and/or first-principles computational insights to successfully target materials with desired properties. In the context of materials discovery for hydrogen storage applications, a theoretical prediction-experimental validation approach can vastly accelerate the search for substitution strategies to destabilize high-capacity hydrides based on benchmark HEAs, e.g., TiVNbCr alloys. Here, machine learning models, corroborated by density functional theory calculations, predict substantial hydride destabilization with increasing substitution of earth-abundant Fe in the (TiVNb)75Cr25-xFex system. The as-prepared alloys crystallize in a single-phase bcc lattice for limited Fe content (x < 7), while larger Fe content favors the formation of a secondary C14 Laves-phase intermetallic. The short-range order of alloys with x < 7 is well described by a random distribution of atoms within the bcc lattice, without lattice distortion. Hydrogen absorption experiments performed on selected alloys validate the predicted thermodynamic destabilization of the corresponding fcc hydrides and demonstrate promising lifecycle performance through reversible absorption/desorption. This work demonstrates the potential of computationally expedited hydride discovery and points to further opportunities for optimizing bcc alloy ↔ fcc hydride systems for practical hydrogen storage applications.
This report summarizes the work performed under the author's two-year John von Neumann LDRD project, which involves the non-intrusive surrogate modeling of dynamical systems with remarkable structural properties. After a brief introduction to the topic, technical accomplishments and project metrics are reviewed including peer-reviewed publications, software releases, external presentations and colloquia, as well as organized conference sessions and minisymposia. The report concludes with a summary of ongoing projects and collaborations which utilize the results of this work.
Barnard, James P.; Shen, Jianan; Tsai, Benson K.; Zhang, Yizhi; Chhabra, Max R.; Sarma, Raktim S.; Siddiqui, Aleem; Wang, Haiyan
Magnetic and ferroelectric oxide thin films have long been studied for their applications in electronics, optics, and sensors. The properties of these oxide thin films are highly dependent on the film growth quality and conditions. To maximize the film quality, epitaxial oxide thin films are frequently grown on single-crystal oxide substrates such as strontium titanate (SrTiO3) and lanthanum aluminate (LaAlO3) to satisfy lattice matching and minimize defect formation. However, these single-crystal oxide substrates cannot readily be used in practical applications due to their high cost, limited availability, and small wafer sizes. One leading solution to this challenge is film transfer. In this demonstration, a material from a new class of multiferroic oxides is selected, namely bismuth-based layered oxides, for the transfer. A water-soluble sacrificial layer of Sr3Al2O6 is inserted between the oxide substrate and the film, enabling the release of the film from the original substrate onto a polymer support layer. The films are transferred onto new substrates of silicon and lithium niobate (LiNbO3) and the polymer layer is removed. These substrates allow for the future design of electronic and optical devices as well as sensors using this new group of multiferroic layered oxide films.
In this letter, we present interfacial fracture toughness data for a polymer-metal interface, with tests conducted at various temperatures T and loading rates δ˙. An adhesively bonded asymmetric double cantilever beam (ADCB) specimen was utilized to measure toughness. ADCB specimens were created by bonding a thinner, upper adherend to a thicker, lower adherend (both 6061-T6 aluminum) using a thin layer of epoxy adhesive, such that the crack propagated along the interface between the thinner adherend and the epoxy layer. The specimens were tested at T from 25 to 65 °C and δ˙ from 0.002 to 0.2 mm/s. The measured interfacial toughness Γ increased as both T and δ˙ increased. For an ADCB specimen loaded at a constant δ˙, the energy release rate G increases as the crack length a increases; for this reason, we defined rate effects in terms of the rate of change of the energy release rate, G˙. Although not rigorously correct, a formal application of time–temperature superposition (TTS) analysis to the Γ data provided useful insights into the observed dependencies. In the TTS-shifted data, Γ decreased and then increased for monotonically increasing G˙; thus, the TTS analysis suggests that there is a minimum value of Γ. This minimum value could be used to define a lower bound on Γ when designing critical engineering applications that are subjected to T and δ˙ excursions.
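As an illustration of the TTS construction referred to above (illustrative only; the letter's actual shift procedure and parameters are not reproduced here), data at temperature T can be mapped to a reference temperature through a rate shift factor, often taken to be Arrhenius-type for bonded interfaces:

```latex
% Typical TTS form: toughness at (T, Gdot) is mapped onto a master curve
% at a reference temperature via the shift factor a_T.
\Gamma\bigl(T, \dot{G}\bigr) \;\approx\; \Gamma\bigl(T_{\mathrm{ref}},\, a_T\,\dot{G}\bigr),
\qquad
\ln a_T \;=\; \frac{E_a}{R}\!\left(\frac{1}{T} - \frac{1}{T_{\mathrm{ref}}}\right)
```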
Hydrogen is known to embrittle austenitic stainless steels, which are widely used in high-pressure hydrogen storage and delivery systems, but the mechanisms that lead to such material degradation are still being elucidated. The current work investigates the deformation behavior of single-crystal austenitic stainless steel 316L through combined uniaxial tensile testing, characterization, and atomistic simulations. Thermally precharged hydrogen is shown to increase the critical resolved shear stress (CRSS) without the previously reported deviations from Schmid's law. Molecular dynamics simulations further expose the statistical nature of the hydrogen and vacancy contributions to the CRSS in the presence of alloying. Slip distribution quantification over large in-plane distances (>1 mm), achieved via atomic force microscopy (AFM), highlights the role of hydrogen in increasing the degree of slip localization in both single- and multiple-slip configurations. The most active slip bands accumulate significantly more deformation in hydrogen-precharged specimens, with potential implications for damage nucleation. For 〈110〉 tensile loading, slip localization further enhances the activity of secondary slip, increases the density of geometrically necessary dislocations, and leads to a distinct lattice rotation behavior compared to hydrogen-free specimens, as evidenced by electron backscatter diffraction (EBSD) maps. The results of this study provide a more comprehensive picture of the deformation aspect of hydrogen embrittlement in austenitic stainless steels.
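For reference, Schmid's law, which the hydrogen-charged specimens are reported to obey without deviation, can be stated in its standard form (not specific to this study):

```latex
% Resolved shear stress under uniaxial stress sigma, where phi is the angle
% between the loading axis and the slip-plane normal and lambda is the angle
% between the loading axis and the slip direction; slip initiates when
% tau_RSS reaches the critical resolved shear stress.
\tau_{\mathrm{RSS}} \;=\; \sigma \cos\phi \cos\lambda,
\qquad
\tau_{\mathrm{RSS}} \;\ge\; \tau_{\mathrm{CRSS}} \;\Rightarrow\; \text{slip}
```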
Helium-4-based scintillation detector technology is emerging as a strong alternative to pulse-shape discrimination-capable organic scintillators for fast neutron detection and spectroscopy, particularly in extreme gamma-ray environments. The 4He detector is intrinsically insensitive to gamma radiation, as it has a relatively low cross-section for gamma-ray interactions, and the stopping power of electrons in the 4He medium is low compared to that of 4He recoil nuclei. Consequently, gamma rays can be discriminated by simple energy deposition thresholding instead of the more complex pulse shape analysis. The energy resolution of 4He scintillation detectors has not yet been well-characterized over a broad range of energy depositions, which limits the ability to deconvolve the source spectra. In this work, an experiment was performed to characterize the response of an Arktis S670 4He detector to nuclear recoils up to 9 MeV. The 4He detector was positioned in the center of a semicircular array of organic scintillation detectors operated in coincidence. Deuterium–deuterium and deuterium–tritium neutron generators provided monoenergetic neutrons, yielding geometrically constrained nuclear recoils ranging from 0.0925 to 8.87 MeV. The detector response provides evidence for scintillation linearity beyond the previously reported energy range. The measured response was used to develop an energy resolution function applicable to this energy range for use in high-fidelity detector simulations needed by future applications.
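A commonly used functional form for scintillator energy resolution, shown here for illustration only (the coefficients fitted to the Arktis S670 response are not reproduced), separates a constant term, a statistical (photon-counting) term, and a noise term:

```latex
% Generic resolution parameterization versus deposited energy E; the fitted
% resolution function in the paper may take a different form.
\frac{\sigma(E)}{E} \;=\; \sqrt{a^{2} \;+\; \frac{b^{2}}{E} \;+\; \frac{c^{2}}{E^{2}}}
```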
The shock invariant relationship, originally conceived for inert shock waves to derive the fourth-power relationship between shock pressure and maximum strain rate, is generalized to reactive shock waves such as Chapman-Jouguet detonation and shock-induced vaporization. The generalization, based on first-order reaction models, is a power-function relationship between the overall dissipated energy Δe_dis and the reaction time Δτ such that Δe_dis·Δτ^(1/α) = constant, where the power coefficient α is found to lie in the range 2/3 to 4. Experimental data, though scarce, are consistent with the generalization. Implications of the generalization for inert shocks are also considered; they suggest a broad range for the fourth-power coefficient, including an inequality that constrains the shock velocity-particle velocity relationship.
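Restating the generalization in display form alongside its inert limiting case, commonly known as the Swegle-Grady fourth-power law (the α range is quoted from the abstract; the inert proportionality is the classical result, not a new claim):

```latex
\Delta e_{\mathrm{dis}}\,\Delta\tau^{1/\alpha} \;=\; \mathrm{const},
\qquad \tfrac{2}{3} \le \alpha \le 4;
\qquad \dot{\varepsilon}_{\max} \propto P^{4}\ \ (\text{inert shocks})
```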
Shrestha, Shilva; Goswami, Shubhasish; Banerjee, Deepanwita; Garcia, Valentina; Zhou, Elizabeth; Olmsted, Charles N.; Majumder, Erica L.W.; Kumar, Deepak; Awasthi, Deepika; Mukhopadhyay, Aindrila; Singer, Steven W.; Gladden, John M.; Simmons, Blake A.; Choudhary, Hemant
The valorization of lignin, a currently underutilized component of lignocellulosic biomass, has attracted attention as a way to promote a stable and circular bioeconomy. Successful approaches, including thermochemical, biological, and catalytic lignin depolymerization, have been demonstrated, enabling opportunities for lignino-refineries and lignocellulosic biorefineries. Although significant progress in lignin valorization has been made, this review describes unexplored opportunities in chemical and biological routes for lignin depolymerization, thereby contributing to economically and environmentally sustainable lignin-utilizing biorefineries. The review also highlights the integration of chemical and biological lignin depolymerization, identifies research gaps, and recommends future directions for scaling processes to establish a lignino-chemical industry.
Morris, Joseph P.; Pyrak-Nolte, Laura J.; Yoon, Hongkyu; Bobet, Antonio; Jiang, Liyang
In this article, we present results from a recent exercise in which participating organizations were asked to provide model-based blind predictions of damage evolution in 3D-printed geomaterial analogue test articles. Participants were provided with a range of data characterizing both the undamaged state (e.g., ultrasonic measurements) and damage evolution (e.g., 3-point bending, unconfined compression, and Brazilian testing) of the material. Here, we focus on comparisons between the participants' predictions and the previously withheld challenge-problem experimental observations. We present valuable lessons learned for the application of numerical methods to deformation and failure in brittle-ductile materials. The exercise also enables us to identify which specific types of calibration data were of most utility to the participants in developing their predictions. Further, we identify additional data that would have been useful for improving the confidence of their predictions. Consequently, this work improves our understanding of how to better characterize a material so as to enable more accurate prediction of damage and failure propagation in natural and engineered brittle-ductile materials.
This research article presents a robust approach to optimizing the layout of pressure sensors around an airfoil. A genetic algorithm and a sequential quadratic programming algorithm are employed to derive a sensor layout best suited to represent the expected pressure distribution and, thus, the lift force. The fact that both optimization routines converge to nearly identical sensor layouts suggests that an optimum exists and is reached. Comparison against a cosine-spaced sensor layout demonstrates that the underlying pressure distribution can be captured more accurately with the presented layout optimization approach. Alternatively, a 39-55% reduction in the number of sensors relative to cosine spacing is achievable without loss of lift prediction accuracy. Given these benefits, an optimized sensor layout improves data quality, eliminates unnecessary equipment, and saves cost in experimental setups. While the optimization routine is demonstrated on the generic example of the IEA 15 MW reference wind turbine, it is suitable for a wide range of applications requiring pressure measurements around airfoils.
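To illustrate the optimization idea (a sketch under stated assumptions: SLSQP stands in for the paper's SQP routine, the genetic-algorithm stage is omitted, and a simple pressure-reconstruction error replaces the paper's lift-based objective; the reference distribution `cp_ref` is hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def optimize_sensor_layout(cp_ref, n_sensors=12, n_eval=400):
    """Choose chordwise sensor positions x in [0, 1] so that linear
    interpolation of the sampled pressure distribution best reproduces a
    reference distribution cp_ref (a callable returning Cp at x)."""
    x_eval = np.linspace(0.0, 1.0, n_eval)
    cp_true = cp_ref(x_eval)

    def objective(x):
        xs = np.sort(x)                              # keep positions ordered
        cp_interp = np.interp(x_eval, xs, cp_ref(xs))
        return np.mean((cp_interp - cp_true) ** 2)   # reconstruction error

    # Cosine spacing, the baseline layout in the paper, as the initial guess:
    x0 = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n_sensors)))
    res = minimize(objective, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n_sensors)
    return np.sort(res.x)

# Hypothetical suction-side pressure shape, for demonstration only:
layout = optimize_sensor_layout(lambda x: -4.0 * np.sqrt(x) * (1.0 - x))
```

The intuition matches the paper's finding: sensors migrate toward regions of high pressure-gradient (e.g., the leading edge), so fewer sensors can represent the distribution as well as a denser generic spacing.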
The list of standards, best practices, and regulations below is intended to give insight into what resources are available for developing a chemical control regime, as well as information on the regulations other countries have used to implement such a regime. This list is not intended to be all-inclusive; other regulations and standards related to controlling hazardous chemicals exist and should be consulted.
Remote radioactive source applications require frequent transportation of sources from storage locations to remote sites. This introduces a risk of theft of a source during transportation, with the level of risk proportional to the radioactivity of the source. Theft of smaller sources, such as microcurie-level moisture density gauges, is of minor concern, but larger sources, such as those used for radiography and well logging, present more risk. Radiography sources include 192Ir, 75Se, or 60Co radionuclides with activities at or exceeding IAEA Category 2. Well-logging sources, primarily 241Am/Be, are used for their neutron-emission properties; 137Cs is also used in well logging at lower activities than in radiography, but at levels that still present some risk. The vulnerability of such sources to malicious use for causing contamination and associated economic effects depends on the elemental chemical and physical properties, especially melting point and bulk modulus. Theft of radiography sources is somewhat common; theft of well-logging sources less so. Theft of a source commonly occurs in concert with theft of the vehicle, with the source subsequently abandoned, although there have been some instances where a source appears to have been specifically targeted. A variety of security measures and protocols, available and under development, can mitigate the risk of theft and assist in source recovery.
On Wednesday, March 8, and Thursday, March 9, 2023, the University of Texas at Austin hosted Sandia National Laboratories (Sandia) for “Sandia Day 2023 at UT Austin,” with the intention of reviewing, planning, and shaping ongoing and future collaborations in key areas that reflect each organization’s priorities and strengths. The event brought together nearly 100 UT and Sandia participants, including executive leadership, researchers, faculty, staff, and students. The primary sessions of Sandia Day consisted of a half-day tour of select J.J. Pickle Research Campus facilities, a networking happy hour, leadership meetings, presentations by both Sandia and UT Austin representatives in areas of strategic research priority (Grid Resiliency, Examining Climate Change, and Microelectronics), and a research poster session with lunch. The group also discussed growth opportunities in the following research areas: nuclear and radiation engineering, pulsed power and fusion physics, and digital engineering, specifically as it relates to materials discovery and advanced manufacturing. Appendix A contains the full Sandia Day agenda.