Binary Code Similarity Analysis (BCSA) has a wide spectrum of applications, including plagiarism detection, vulnerability discovery, and malware analysis, and has thus drawn significant attention from the security community. However, conventional techniques often struggle to achieve accuracy and scalability simultaneously. To overcome these problems, a surge of deep learning-based work has recently been proposed. Unfortunately, many researchers still find it extremely difficult to conduct relevant studies or extend existing approaches. First, prior work typically relies on proprietary benchmarks without making the entire dataset publicly accessible; consequently, large-scale, well-labeled datasets for binary code similarity analysis remain scarce. Moreover, previous work has primarily focused on comparison at the function level rather than exploring finer granularities. We therefore argue that the lack of a fine-grained dataset for BCSA leaves a critical gap in current research. To address these challenges, we construct BinSimDB, a benchmark dataset for fine-grained binary code similarity analysis that contains equivalent pairs of smaller binary code snippets, such as basic blocks. Specifically, we propose the BMerge and BPair algorithms to bridge the discrepancies between two binary code snippets caused by different optimization levels or platforms. Furthermore, we empirically study the properties of our dataset and evaluate its effectiveness for BCSA research. The experimental results demonstrate that BinSimDB significantly improves the performance of binary code similarity comparison.
Ship tracks, long thin artificial cloud features formed from the pollutants in ship exhaust, are satellite-observable examples of aerosol-cloud interactions (ACI) that can increase cloud albedo and thus solar reflectivity, phenomena of interest in solar radiation management. Beyond their interest to meteorologists and policy makers, the cloud perturbations observed in ship tracks provide benchmark evidence of ACI that remains poorly captured by climate models. Broadly analyzing the effects of ship tracks requires high-resolution satellite imagery highlighting their presence. To support this, we provide a hand-labelled dataset to serve as a benchmark for a variety of subsequent analyses. Built from a previous dataset that identified ship track presence using NASA's MODIS Aqua satellite imager, our first-of-its-kind dataset comprises image masks capturing full ship track regions, including their contours, emission points, and dispersive patterns. In total, 300 images, or around 2,500 masked ship tracks, observed under varying conditions are provided, which may facilitate training of machine learning algorithms to automate extraction.
The trusted inertial terrain-aided navigation (TITAN) algorithm leverages an airborne vertical synthetic aperture radar to measure the range to the closest ground points along several prescribed iso-Doppler contours. These TITAN minimum-range, prescribed-Doppler measurements are the result of a constrained nonlinear optimization problem whose optimization function and constraints both depend on the radar position and velocity. Owing to the complexity of this measurement definition, analysis of the TITAN algorithm is lacking in prior work. This publication offers such an analysis, making the following three contributions: (1) an analytical solution to the TITAN constrained optimization measurement problem, (2) a derivation of the TITAN measurement function Jacobian, and (3) a derivation of the Cramér–Rao lower bound on the estimated position and velocity error covariance. These three contributions are verified via Monte Carlo simulations over synthetic terrain, which further reveal two remarkable properties of the TITAN algorithm: (1) the along-track positioning errors tend to be smaller than the cross-track positioning errors, and (2) the cross-track positioning errors are independent of the terrain roughness.
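For reference, the bound in contribution (3) follows the standard Gaussian-noise form of the Fisher information; the notation below is generic rather than taken from the paper, with $\mathbf H_i = \partial h_i/\partial\mathbf x$ the TITAN measurement Jacobian of contribution (2), $\mathbf R_i$ the measurement noise covariance, and $\mathbf x$ the stacked radar position and velocity:

\[
\mathbf J \;=\; \sum_i \mathbf H_i^\top \mathbf R_i^{-1} \mathbf H_i,
\qquad
\operatorname{Cov}(\hat{\mathbf x}) \;\succeq\; \mathbf J^{-1}.
\]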
The June 1991 Mt. Pinatubo eruption resulted in a massive increase of sulfate aerosols in the atmosphere, absorbing radiation and leading to global changes in surface and stratospheric temperatures. A volcanic eruption of this magnitude serves as a natural analog for stratospheric aerosol injection, a proposed solar radiation modification method to combat a warming climate. The impacts of such an event are multifaceted and region-specific. Our goal is to characterize the multivariate and dynamic nature of the atmospheric impacts following the Mt. Pinatubo eruption. We developed a multivariate space-time dynamic linear model (DLM) to understand the full extent of the spatially and temporally varying impacts. Specifically, spatial variation is modeled using a flexible set of basis functions whose coefficients are allowed to vary in time through a vector autoregressive (VAR) structure. This novel model is estimated via a customized MCMC approach. We demonstrate how the model quantifies the relationships between key atmospheric parameters prior to and following the Mt. Pinatubo eruption using reanalysis data from MERRA-2 and highlight when such a model is advantageous over univariate models.
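One standard way to cast such a model, with all symbols illustrative rather than drawn from the paper: stacking the atmospheric variables observed at time $t$ into $\mathbf y_t$, with $\boldsymbol\Phi$ holding the spatial basis functions and $\mathbf A$ a VAR(1) transition matrix coupling basis coefficients across variables,

\[
\mathbf y_t = \boldsymbol\Phi\,\boldsymbol\beta_t + \boldsymbol\varepsilon_t,\quad \boldsymbol\varepsilon_t \sim \mathcal N(\mathbf 0,\boldsymbol\Sigma_\varepsilon),
\qquad
\boldsymbol\beta_t = \mathbf A\,\boldsymbol\beta_{t-1} + \boldsymbol\eta_t,\quad \boldsymbol\eta_t \sim \mathcal N(\mathbf 0,\boldsymbol\Sigma_\eta),
\]

where the MCMC targets the joint posterior of $\mathbf A$, the noise covariances, and the coefficient paths $\boldsymbol\beta_{1:T}$.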
New concepts of symmetry related to topological order emerged from the discovery of the fractional quantum Hall effect and high-temperature superconductivity in strongly correlated electron systems. This led to the study of quantum materials: materials exhibiting emergent quantum phenomena with no classical analogues. While these materials have engendered exciting basic materials science and physics, realizing novel devices is a key challenge in the field. The goal of this proposal is to harness these emergent quantum phenomena to realize novel devices.
Coulomb drag is a powerful tool to study interactions in coupled low-dimensional systems. Historically, Coulomb drag has been attributed to a frictional force arising from momentum transfer whose direction is dictated by the current flow. In the absence of electron-electron correlations, treating the Coulomb drag circuit as a rectifier of noise fluctuations yields similar conclusions about the reciprocal nature of Coulomb drag. In contrast, recent findings in one-dimensional systems have identified a nonreciprocal contribution to Coulomb drag that is independent of the current flow direction. In this work, we present Coulomb drag measurements between vertically coupled GaAs/AlGaAs quantum wires separated by a hard barrier only 15 nm wide, in which both reciprocal and nonreciprocal contributions to the drag signal are observed simultaneously and their relative magnitudes are temperature and gate tunable. Our study opens up the possibility of studying the physical mechanisms behind the onset of both Coulomb drag contributions simultaneously in a single device, ultimately leading to a better understanding of Luttinger liquids in multi-channel wires and paving the way for the creation of energy harvesting devices.
Characterization of induced microseismicity at a carbon dioxide (CO2) storage site is critical for preserving reservoir integrity and mitigating seismic hazards. We apply a multilevel machine learning (ML) approach that combines the nonnegative matrix factorization and hidden Markov model to extract spectral representations of microseismic events and cluster them to identify seismic patterns at the Illinois Basin-Decatur Project. Unlike traditional waveform correlation methods, this approach leverages spectral characteristics of first arrivals to improve event classification and detect previously undetected planes of weakness. By integrating ML-based clustering with focal mechanism analysis, we resolve small-scale fault structures that are below the detection limits of conventional seismic imaging. Our findings reveal temporal bursts of microseismicity associated with brittle failure, providing insights into the spatio-temporal evolution of fault reactivation during CO2 injection. This approach enhances seismic monitoring capabilities at CO2 injection sites by improving fault characterization beyond the resolution of standard geophysical surveys.
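A minimal sketch of such a two-stage pipeline, assuming off-the-shelf scikit-learn and hmmlearn components; all function names, feature choices, and model orders here are illustrative stand-ins for the study's actual method:

```python
# Two-stage NMF + HMM clustering of event spectrograms (illustrative sketch).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from hmmlearn import hmm

def cluster_events(spectrograms, n_basis=8, n_states=4, n_clusters=3):
    """spectrograms: list of (n_time, n_freq) nonnegative arrays, one per event."""
    # Stage 1: learn a shared nonnegative spectral basis; rows are time frames.
    X = np.vstack(spectrograms)
    nmf = NMF(n_components=n_basis, init="nndsvd", max_iter=500).fit(X)
    acts = [nmf.transform(S) for S in spectrograms]   # per-event activation sequences
    # Stage 2: fit one HMM over all activation sequences.
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(np.vstack(acts), lengths=[len(a) for a in acts])
    # Fingerprint each event by its HMM state-occupancy histogram, then cluster.
    feats = []
    for a in acts:
        states = model.predict(a)
        feats.append(np.bincount(states, minlength=n_states) / len(states))
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(feats))
```

The state-occupancy fingerprint is one simple choice; richer sequence statistics could be clustered the same way.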
To date, careful data treatment workflows and statistical detectors are used to perform hyperspectral image (HSI) detection of any gas contained in a spectral library, which is often expanded with physics models to incorporate different spectral characteristics. In general, surrounding evidence or known gas-release parameters are used to provide confidence in, or to confirm, detection capability. This makes quantifying detection performance difficult, as it is nearly impossible to develop an absolute ground truth for gas target pixel presence in collected HSI. Consequently, the development and comparison of new detection methods, especially machine learning (ML)-based methods, is susceptible to subjectivity in judging derived detection map quality. In this work, we demonstrate the first use of transformer-based paired neural networks (PNNs) for one-shot detection of multiple gas targets while providing quantitative classification and detection metrics for their use on labeled data. Terabytes of training data are generated from a database of long-wave infrared HSI obtained from historical Mako sensor campaigns over Los Angeles. By incorporating labels, singular signature representations, and a model development pipeline, we can quantitatively tune and select PNNs to detect multiple gas targets not seen in training. We additionally assess our test-set detections using interpretability techniques widely employed with ML-based predictors but less common with detection methods relying on learned latent spaces.
In this paper, we present a method for estimating the infection-rate of a disease as a spatial-temporal field. Our data comprise time-series case-counts of symptomatic patients in various areal units of a region. We extend an epidemiological model, originally designed for a single areal unit, to accommodate multiple units. The field estimation is framed within a Bayesian context, utilizing a parameterized Gaussian random field as a spatial prior. We apply an adaptive Markov chain Monte Carlo method to sample the posterior distribution of the model parameters conditioned on COVID-19 case-count data from three adjacent counties in New Mexico, USA. Our results suggest that the correlation between epidemiological dynamics in neighboring regions helps regularize estimates in areas with high-variance (i.e., poor quality) data. Using the calibrated epidemic model, we forecast the infection-rate over each areal unit and develop a simple anomaly detector to signal new epidemic waves. Our findings show that an anomaly detector based on estimated infection-rates outperforms a conventional algorithm that relies solely on case-counts.
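A minimal sketch of one common adaptive sampler (Haario-style covariance adaptation); `log_post`, the tuning constants, and the parameter layout are illustrative stand-ins, not the paper's exact algorithm:

```python
# Adaptive random-walk Metropolis sketch: the proposal covariance is tuned
# to the empirical covariance of the chain after a warm-up period.
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=50_000, adapt_start=1_000):
    d = len(theta0)
    cov = 0.1 * np.eye(d)                       # initial proposal covariance
    chain = np.empty((n_iter, d))
    theta, lp = np.asarray(theta0, float), log_post(theta0)
    for k in range(n_iter):
        prop = np.random.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[k] = theta
        if k >= adapt_start:                    # scale rule of Haario et al.
            cov = 2.38**2 / d * np.cov(chain[:k + 1].T) + 1e-8 * np.eye(d)
    return chain
```

Here `log_post` would evaluate the epidemic model likelihood plus the Gaussian random field prior over the areal units.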
This study investigates the fatigue crack growth rate (FCGR) behavior of pipeline and low-alloy pressure vessel steels in high-pressure gaseous hydrogen. Despite a broad range of yield strengths and microstructures spanning ferrite/pearlite, acicular ferrite, bainite, and martensite, the FCGR in gaseous hydrogen remained consistent, falling within a factor of 2-3. Steels with higher fractions of pearlite, typical of older vintage pipeline steels, exhibited modestly lower crack growth rates in gaseous hydrogen than steels with lower fractions of pearlite. Crack growth rates in these materials exhibit a systematic dependence on stress ratio and hydrogen partial pressure, as captured in the recently published fatigue design curves in ASME B31 code case 220 for pipeline steels and ASME BPVC code case 2938 for pressure vessel steels.
Garner, Sean; Silling, Stewart; Ketterhagen, William; Strong, John
The pharmaceutical drug product development process can be greatly accelerated through the use of modeling and simulation techniques to predict the manufacturability and performance of a given formulation. The anticipation and possible mitigation of tablet damage due to manufacturing stresses represents a specific area of interest in the pharmaceutical industry for predicting formulation and tableting performance. While the finite element method (FEM) has been used extensively to predict the mechanical behavior of powder material in compaction processes, a shortcoming of the approach is the inherent difficulty of predicting discontinuities (e.g., damage or cracking) within a tablet, as FEM is a continuum-based approach. In this work, we propose a novel method utilizing peridynamics (PD), a numerical method that can capture discontinuities such as tablet fracture, to predict the evolution of damage and breakage in pharmaceutical tablets. The approach links (1) the finite element method, used to elucidate the behavior of powders during die compaction, with (2) the peridynamics modeling technique, used to model the discontinuous nature of damage and predict tablet breakage during the critical stages of unloading and ejection from the compression die. This short communication presents a proof of concept, including a workflow to calibrate the linked FEM-PD simulation models, and demonstrates promising results from a preliminary experimental validation of the approach. Following further development, this approach could be used to guide the optimization of compression processes through targeted changes to formulation material properties, compression process conditions, and/or tooling geometries to deliver improved process efficiency and tablet robustness.
Background/Objectives: Children's biological age does not always correspond to their chronological age. In the case of BMI trajectories, this can appear as phase variation, which manifests as shifts, stretches, or shrinkage between trajectories. With maturation thought of as a process moving towards the final state, adult BMI, we assessed whether children can be divided into latent groups reflecting similar maturational age of BMI. The groups were characterised by early factors and time-related features of the trajectories. Subjects/Methods: We used data from two general population birth cohort studies, the Northern Finland Birth Cohorts 1966 and 1986 (NFBC1966 and NFBC1986). Height (n = 6329) and weight (n = 6568) measurements were interpolated at 34 shared time points using B-splines, and BMI values were calculated from 3 months to 16 years. Pairwise phase distances of 2999 females and 3163 males were used as a similarity measure in k-medoids clustering. Results: We identified three clusters of trajectories in females and males (Type 1: females, n = 1566, males, n = 1669; Type 2: females, n = 1028, males, n = 973; Type 3: females, n = 405, males, n = 521). Similar distinct timing patterns were identified in males and females. The clusters did not differ by sex or by the early growth determinants studied. Conclusions: Trajectory cluster Type 1 reflected the shape typically illustrated as the childhood BMI trajectory in the literature. The other two types, however, have not been identified previously. The Type 2 pattern was more common in the NFBC1966, suggesting a generational shift in BMI maturational patterns.
This work presents a data-driven method for learning low-dimensional time-dependent physics-based surrogate models whose predictions are endowed with uncertainty estimates. We use the operator inference approach to model reduction that poses the problem of learning low-dimensional model terms as a regression of state space data and corresponding time derivatives by minimizing the residual of reduced system equations. Standard operator inference models perform well with accurate training data that are dense in time, but producing stable and accurate models when the state data are noisy and/or sparse in time remains a challenge. Another challenge is the lack of uncertainty estimation for the predictions from the operator inference models. Our approach addresses these challenges by incorporating Gaussian process surrogates into the operator inference framework to (1) probabilistically describe uncertainties in the state predictions and (2) procure analytical time derivative estimates with quantified uncertainties. The formulation leads to a generalized least-squares regression and, ultimately, reduced-order models that are described probabilistically with a closed-form expression for the posterior distribution of the operators. The resulting probabilistic surrogate model propagates uncertainties from the observed state data to reduced-order predictions. We demonstrate the method is effective for constructing low-dimensional models of two nonlinear partial differential equations representing a compressible flow and a nonlinear diffusion–reaction process, as well as for estimating the parameters of a low-dimensional system of nonlinear ordinary differential equations representing compartmental models in epidemiology.
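Schematically (our notation; priors and regularization omitted): for each reduced-state component $i$, with data matrix $\mathbf D$ assembled from the reduced states, GP-estimated time derivatives $\hat{\mathbf r}_i$, and GP-derived covariance $\boldsymbol\Sigma_i$, the operator coefficients solve the generalized least-squares problem

\[
\hat{\mathbf o}_i=\arg\min_{\mathbf o}\,(\mathbf D\mathbf o-\hat{\mathbf r}_i)^\top\boldsymbol\Sigma_i^{-1}(\mathbf D\mathbf o-\hat{\mathbf r}_i)
=(\mathbf D^\top\boldsymbol\Sigma_i^{-1}\mathbf D)^{-1}\mathbf D^\top\boldsymbol\Sigma_i^{-1}\hat{\mathbf r}_i,
\]

with Gaussian posterior covariance $(\mathbf D^\top\boldsymbol\Sigma_i^{-1}\mathbf D)^{-1}$, which is what propagates data uncertainty into the reduced-order predictions.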
Miniature atomic clocks based on interrogation of the ground-state hyperfine splitting of buffer-gas-cooled ions confined in radio frequency Paul traps have shown great promise as high-precision prototype clocks. We report on the performance of two miniature ion trap vacuum packages after being sealed for as long as 10 years. We find that the lifetime of the ions within the trap has increased over time for both traps and can be as long as 50 days. We form two clocks using the two traps and compare their relative frequency instability, demonstrating a short-term instability of $5\times10^{-13}\,\tau^{-1/2}$ that integrates down to $1\times10^{-14}$ after 2 ks. The trapped-ion lifetime and clock instability demonstrated by these miniature devices, despite being only passively pumped for many years, represent a critical advance toward their proliferation in the clock community.
Chapare virus (CHAPV) is an emerging New World arenavirus and the causative agent of Chapare hemorrhagic fever (CHHF), responsible for recent outbreaks with alarmingly high case fatality rates in Bolivia near the Brazilian border. Here, we describe a nonhuman primate (NHP) model of CHHF infection, an essential tool for understanding this emerging biological threat agent. Cynomolgus macaques challenged intravenously with CHAPV develop clinical disease that recapitulates several key features of human CHHF. All subjects lost weight and accrued clinical scores following CHAPV challenge. Notably, one of four NHPs developed lethal disease with viral hepatitis and hemorrhagic features. Clinical chemistry and hematology revealed leukopenia, anemia, thrombocytopenia, and increased transaminase levels. In all four subjects, viremia was detectable for the first week following challenge, and viral RNA was detectable in serum and many tissues, persisting to 35 days post-challenge. Several medical countermeasures (MCMs) have efficacy against CHAPV infection in vitro, but the current pathway for MCM testing and approval of new drugs relies on the availability of animal models. This work lays the foundation for future CHHF MCM development.
Development of a defensible source-term model (STM), usually a thermodynamic model for radionuclide solubility calculations, is critical to a performance assessment (PA) of a geologic repository for nuclear waste disposal. Such a model is generally subjected to rigorous regulatory scrutiny. In this article, we highlight key guiding principles for STM development and validation in nuclear waste management. We illustrate these principles by closely examining three recently developed thermodynamic models with the Pitzer formalism for aqueous H+–Nd3+–NO3−(–oxalate) systems, listed here in reverse alphabetical order of the authors: the XW model developed by Xiong and Wang, the OWC model developed by Oakes et al., and the GLC model developed by Guignot et al. Among these, the XW model deals with trace activity coefficients for Nd(III), while the OWC and GLC models address concentrated Nd(NO3)3 electrolyte solutions. The principles highlighted are as follows. (1) Validation against independent experimental data: a model should be validated against experimental data or field observations that were not used in the original model parameterization. We tested the XW model against multiple independent experimental data sets, including electromotive force (EMF), solubility, water vapor, and water activity measurements. The results show that the XW model is accurate and valid for its intended use of predicting trace activity coefficients, and therefore Nd solubility, in repository environments. (2) Testing for relevant and sensitive variables: solution pH is such a variable for an STM and is easily acquirable. All three models were checked for their ability to predict pH conditions in Nd(NO3)3 electrolyte solutions. The OWC model fails to provide a reasonable estimate of solution pH, casting serious doubt on its validity for a source-term calculation. In contrast, both the XW and GLC models predict close-to-neutral pH values, in agreement with experimental measurements. (3) Honoring physical constraints: upon close examination, the Nd(III)-NO3 association scheme in the OWC model suffers from two shortcomings. First, its second stepwise stability constant for Nd(NO3)2+ (log K2) is much higher than the first stepwise stability constant for NdNO32+ (log K1), violating the general rule that (log K2 − log K1) < 0, i.e., K2 < K1. Second, the OWC model predicts abnormally high activity coefficients for Nd(NO3)2+ (up to ~900) as the concentration increases. (4) Minimizing degrees of freedom in model fitting: the OWC model, with nine fitted parameters, is compared with the GLC model, with five fitted parameters, as both models apply to the concentrated region for Nd(NO3)3 electrolyte solutions. The latter appears superior because it fits the osmotic coefficient data equally well with fewer model parameters. The work presented here thus illustrates the salient points of geochemical model development, selection, and validation in nuclear waste management.
Autonomous manipulation is a challenging problem in field robotics due to uncertainty in object properties, constraints, and coupling phenomena with robot control systems. Humans learn motion primitives over time to interact effectively with the environment. We postulate that autonomous manipulation can likewise be enabled by basic sets of motion primitives, which need not mimic human motion primitives. This work presents an approach to generalized optimal motion primitives using physics-informed neural networks. Our simulated and experimental results demonstrate that optimality is notionally maintained: the mean maximum observed final-position percent error was 0.564%, and the average mean error across all trajectories was 1.53%. These results indicate that notional generalization is attained using a physics-informed neural network approach that enables near-optimal real-time adaptation of primitive motion profiles.
Using a belt in place of a rope in rotary power take-off (PTO) systems has become more common for wave energy converters, as the smaller bending thickness of belts improves cyclic bend-over-sheave performance. However, service life prediction for PTOs is a major design concern because belt performance in harsh underwater environments is far less studied. In this work, the effect of fleet and twist angles on wear life is investigated both experimentally and numerically. Two three-dimensional equivalent static finite element models are constructed to evaluate the complex stress state of polyurethane-steel belts around steel drums: the first captures the response of the experimental wear-life investigation, and the second predicts the wear life of an existing functional PTO. The results show a significant effect of fleet and twist angles on stress concentrations and estimated service life.
Tamper-indicating devices (TIDs), also known as seals, play a crucial role in sectors including international nuclear safeguards, arms control, domestic security, and commercial products by ensuring that monitored or high-value items are not accessed undetected. These devices do not block access but indicate unauthorized tampering. With adversaries' capabilities evolving, there is a pressing need for seals to advance in effectiveness (e.g., better tamper indication and unique identification), and new technology can improve the efficiency of installation and verification. Passive loop seals, widely used in international nuclear safeguards to maintain continuity of knowledge on declared items, face stringent International Atomic Energy Agency (IAEA) requirements that surpass those met by commercial products. The metal cup seal (Figure 1, left), a staple IAEA seal, is robust but requires significant resources for post-use verification: the seal's unique identity can only be verified at IAEA headquarters after removal from facilities. Further, the seal has been in use for decades, and seal types should periodically be replaced to counter adversarial efforts to defeat seals. In 2020, the IAEA outlined about 40 requirements for a new passive loop seal, aiming for in-situ verification, minimal external tool use, unique identification (UID), and clear tamper indication. In response, research and development efforts focused on creating a new passive loop seal that meets these criteria, and in 2022 the IAEA announced the completion of the Field Verifiable Passive Loop Seal (FVPS) (Figure 1, right). Concurrently with the IAEA's efforts, Sandia National Laboratories (SNL) and Oak Ridge National Laboratory (ORNL) designed, developed, and tested two seal versions, Puck and Puck/SAW: Puck is based on the IAEA's requirements and includes a novel, visually obvious tamper response, while Puck/SAW adds further capabilities such as receiving a unique identifier from a standoff distance and monitoring wire integrity. Puck/SAW was specifically designed to address sealing applications in dry spent fuel storage facilities, where the number of sealed spent fuel containers results in a heavy verification burden and inspector safety issues related to radiation exposure. These efforts are described in this Executive Summary.
Public-facing solar hosting capacity (HC) maps, which show the maximum amount of solar generation that can be installed at a location without adverse effects, have proven to be a key driver of solar soft-cost reductions through a variety of pathways (e.g., streamlining interconnection, siting, and customer acquisition processes). However, current methods for generating HC maps require detailed grid models and time-consuming simulations that limit both their accuracy and scalability; today, only a handful of the almost 2,000 utilities provide these maps. This project developed and validated data-driven algorithms for calculating solar HC from advanced metering infrastructure (AMI) data without the need for detailed grid models or simulations. The algorithms were validated on utility datasets and incorporated as an application into NRECA's Open Modeling Framework (OMF.coop) for the more than 260 cooperatives and vendors throughout the US to use. The OMF is free and open source for everyone.
Brazing and soldering are metallurgical joining techniques that use a wetting molten metal to create a joint between two faying surfaces. The quality of the brazing process depends strongly on the wetting properties of the molten filler metal, namely the surface tension and contact angle, and the resulting joint can be susceptible to various defects, such as run-out and underfill, if the material properties or joining conditions are not suitable. In this work, we implement a finite element simulation to predict the formation of such defects in braze processes. This model incorporates both fluid-structure interaction through an arbitrary Lagrangian-Eulerian technique and free-surface wetting through conformal decomposition finite element modeling. Upon validating our numerical simulations against experimental run-out studies on a silver-Kovar system, we use the model to predict run-out and underfill in systems with variable surface tension, contact angles, and applied pressure. Finally, we consider variable joint/surface geometries and show how different geometrical configurations can help to mitigate run-out. This work aims to understand how brazing defects arise and to validate a coupled wetting and fluid-structure interaction simulation that can be used for other industrial problems.
A striking example of frustration in physics is Hofstadter's butterfly, a fractal structure that emerges from the competition between a crystal's lattice periodicity and the magnetic length of an applied field. Current methods for predicting the topological invariants associated with Hofstadter's butterfly are challenging or impossible to apply to a range of materials, including those that are disordered or lack a bulk spectral gap. Here, we demonstrate a framework for predicting a material's local Chern markers using its position-space description and validate it against experimental observations of quantum transport in artificial graphene in a semiconductor heterostructure, inherently accounting for fabrication disorder strong enough to close the bulk spectral gap. By resolving local changes in the system's topology, we reveal the topological origins of antidot-localized states that appear in artificial graphene in the presence of a magnetic field. Moreover, we show the breadth of this framework by simulating how Hofstadter's butterfly emerges from an initially unpatterned 2D electron gas as the system's potential strength is increased and predict that artificial graphene becomes a topological insulator at the critical magnetic field. Overall, we anticipate that a position-space approach to determine a material's Chern invariant without requiring prior knowledge of its occupied states or bulk spectral gaps will enable a broad array of fundamental inquiries and provide a novel route to material discovery, especially in metallic, aperiodic, and disordered systems.
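For orientation, the classic position-space diagnostic in this family is the Bianco-Resta local Chern marker, written here only schematically since sign and normalization conventions vary in the literature:

\[
\mathfrak c(\mathbf r)\;\propto\;\operatorname{Im}\big\langle\mathbf r\big|\,\hat P\,\hat x\,\hat P\,\hat y\,\hat P\,\big|\mathbf r\big\rangle,
\]

with $\hat P$ the projector onto occupied states; a key point of the framework above is precisely to obtain local markers without prior knowledge of the occupied states or a bulk spectral gap.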
This paper develops a novel method for reconstructing the full-field response of structural dynamic systems using sparse measurements. The singular value decomposition is applied to a frequency response matrix relating the structural response to physical loads, base motion, or modal loads. The left singular vectors form a non-physical reduced basis that can be used for response reconstruction with far fewer sensors than existing methods. The contributions of the singular vectors to measured response are termed singular-vector loads (SVLs) and are used in a regularized Bayesian framework to generate full-field response estimates and confidence intervals. The reconstruction framework is applicable to the estimation of single data records and power spectral densities from multiple records. Reconstruction is successfully performed in configurations where the number of SVLs to identify is less than, equal to, and greater than the number of sensors used for reconstruction. In a simulation featuring a seismically excited shear structure, SVL reconstruction significantly outperforms modal FRF-based reconstruction and successfully estimates full-field responses with as few as two uniaxial accelerometers. SVL reconstruction is further verified in a simulation featuring an acoustically excited cylinder. Finally, response reconstruction and uncertainty quantification are performed on an experimental structure with three shaker inputs and 27 triaxial accelerometer outputs.
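A minimal single-frequency sketch of the reconstruction step, with a simple ridge regularizer standing in for the paper's Bayesian treatment; all names are illustrative:

```python
# SVD-based full-field response reconstruction at one frequency line.
import numpy as np

def reconstruct(H, y_meas, sensor_idx, n_svl, lam=1e-3):
    """H: (n_dof, n_load) frequency response matrix; y_meas: measured
    responses at the DOFs in sensor_idx; n_svl: singular-vector loads kept."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :n_svl]                    # non-physical reduced basis
    A = Us[sensor_idx, :]                # basis sampled at the sensor DOFs
    # Regularized least squares for the SVL participation factors q.
    q = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_svl),
                        A.conj().T @ y_meas)
    return Us @ q                        # full-field response estimate
```

Because the basis comes from the left singular vectors rather than mode shapes, the number of retained SVLs, not the number of physical loads, sets the sensor requirement.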
Hydrogen geo-storage is attracting substantial interdisciplinary interest as a cost-effective and sustainable option for medium- and long-term energy storage. Hydrogen can be stored underground in diverse formations, including aquifers, salt caverns, and depleted oil and gas reservoirs. The wetting dynamics of the hydrogen-brine-rock system are critical for assessing both structural and residual storage capacities and for ensuring containment safety. Through molecular dynamics simulations, we explore how varying concentrations of cushion gases (CO2 or CH4) influence the wetting properties of hydrogen-brine-clay systems under geological conditions (15 MPa and 333 K). We employed models of talc and the hydroxylated basal face of kaolinite (kaoOH) as clay substrates. Our findings reveal that the effect of cushion gases on hydrogen-brine-clay wettability is strongly dependent on the clay-brine interactions. Notably, CO2 and CH4 reduce the water wettability of talc in hydrogen-brine-talc systems, while exerting no influence on the wettability of hydrogen-brine-kaoOH systems. Detailed analyses of the free energy of cavity formation near clay surfaces, clay-brine interfacial tensions, and the Willard-Chandler surface for gas-brine interfaces elucidate the molecular mechanisms underlying the wettability changes. Our simulations identify empirical correlations between wetting properties and the average free energy required to perturb a flat interface when clay-brine interactions are less dominant. Our thorough thermodynamic analysis of rock-fluid and fluid-fluid interactions, which aligns with key experimental observations, underscores the utility of simulated interfacial properties in refining contact angle measurements and predicting experimentally relevant properties. These insights significantly enhance the assessment of gas geo-storage potential. Prospectively, the approaches and findings of this study could form a basis for more advanced multiscale simulations that consider a range of geological and operational variables, potentially guiding the development and improvement of geo-storage systems in general, with a particular focus on hydrogen storage.
We consider numerical approaches for deterministic, finite-dimensional optimal control problems whose dynamics depend on unknown or uncertain parameters. We seek to amortize the solution over a set of relevant parameters in an offline stage to enable rapid decision-making and to react to changes in the parameter in the online stage. To tackle the curse of dimensionality arising when the state and/or parameter are high-dimensional, we represent the policy using neural networks. We compare two training paradigms. First, our model-based approach leverages the dynamics and the definition of the objective function to learn the value function of the parameterized optimal control problem and obtains the policy via a feedback form. Second, we use actor-critic reinforcement learning to approximate the policy in a data-driven way. Using an example involving a two-dimensional convection-diffusion equation, which features high-dimensional state and parameter spaces, we investigate the accuracy and efficiency of both training paradigms. While both lead to reasonable approximations of the policy, the model-based approach is more accurate and considerably reduces the number of PDE solves.
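As one concrete instance of the feedback form (under illustrative structural assumptions of control-affine dynamics $\dot x=f(x,p)+g(x,p)u$ and quadratic control cost $\tfrac12 u^\top R u$; the paper's exact formulation may differ), a learned value function $V_\theta$ yields the policy

\[
u^*(x,p) = -R^{-1}\, g(x,p)^\top \nabla_x V_\theta(x,p),
\]

so that once $V_\theta$ is trained offline over the parameter set, evaluating the policy online requires only one gradient of the network.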
We introduce physics-informed multimodal autoencoders (PIMA), a variational inference framework for discovering shared information in multimodal datasets. Individual modalities are embedded into a shared latent space and fused through a product-of-experts formulation, enabling a Gaussian mixture prior to identify shared features. Sampling from clusters allows cross-modal generative modeling, with a mixture-of-experts decoder that imposes inductive biases from prior scientific knowledge and thereby imparts structured disentanglement of the latent space. This approach enables cross-modal inference and the discovery of features in high-dimensional heterogeneous datasets, providing a means to discover fingerprints in multimodal scientific datasets and to avoid traditional bottlenecks related to high-fidelity measurement and characterization.
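For reference, Gaussian product-of-experts fusion has a closed form: if modality $m$ contributes $q_m(z\mid x_m)=\mathcal N(\mu_m,\Sigma_m)$, the fused posterior is Gaussian with

\[
\Sigma^{-1}=\sum_m \Sigma_m^{-1}, \qquad \mu=\Sigma\sum_m \Sigma_m^{-1}\mu_m,
\]

with any prior expert entering as one additional term in the sums (the Gaussian mixture prior adds structure beyond this single-Gaussian sketch).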
Yu, Xi; Wilhelm, Benjamin; Holmes, Danielle; Vaartjes, Arjen; Schwienbacher, Daniel; Nurizzo, Martin; Kringhoj, Anders; Van Blankenstein, Mark R.; Jakob, Alexander M.; Gupta, Pragati; Hudson, Fay E.; Itoh, Kohei M.; Murray, Riley J.; Blume-Kohout, Robin; Ladd, Thaddeus D.; Dzurak, Andrew S.; Sanders, Barry C.; Jamieson, David N.; Morello, Andrea
High-dimensional quantum systems are a valuable resource for quantum information processing. They can be used to encode error-correctable logical qubits, which has been demonstrated using continuous-variable states in microwave cavities or the motional modes of trapped ions. For example, high-dimensional systems can be used to realize 'Schrödinger cat' states, superpositions of widely displaced coherent states that can be used to illustrate quantum effects at large scales. Recent proposals have suggested encoding qubits in high-spin atomic nuclei, which are finite-dimensional systems that can host hardware-efficient versions of continuous-variable codes. Here we demonstrate the creation and manipulation of Schrödinger cat states using the spin-7/2 nucleus of an antimony atom embedded in a silicon nanoelectronic device. We use a multi-frequency control scheme to produce spin rotations that preserve the symmetry of the qudit and construct logical Pauli operations for qubits encoded in the Schrödinger cat states. Our work demonstrates the ability to prepare and control non-classical resource states, a prerequisite for applications in quantum information processing and quantum error correction, using a scalable, manufacturable semiconductor platform.
The purpose of this protocol is to define procedures and practices to be used by the PACT center for field testing of metal halide perovskite (MHP) photovoltaic (PV) modules. The protocol defines the physical, electrical, and analytical configuration of the tests and applies equally to mounting systems at a fixed orientation and to sun tracking systems. While standards exist for outdoor testing of conventional PV modules, they do not anticipate the unique electrical behavior of perovskite cells. Further, the existing standards are oriented toward mature, relatively stable products with lifetimes measured on the scale of years to decades. The state of the art for MHP modules is still immature, with considerable sample-to-sample variation among nominally identical modules. Version 0.0 of this protocol does not define a minimum test duration, although the intent is for modules to be fielded for periods ranging from weeks to months. This protocol draws from relevant parts of existing standards and, where necessary, includes modifications specific to the behavior of perovskites.
The objective of this work was to develop a machine learning (ML) ensemble that could assist pebble bed reactor (PBR) verification by evaluating whether a given pebble circulating through a PBR was normal or anomalous, using gamma spectroscopy measurements from a notional PBR burnup measurement system (BUMS). Using a PBR reference design, data sets of synthetic gamma spectra representative of BUMS measurements of normal and anomalous pebbles that could be used to produce special fissile material were generated to train and test an ML anomaly detection ensemble on two reference scenarios: substitution of normal pebbles with target pebbles for production of Pu or 233U. The ML ensemble correctly identified all anomalous pebbles in the testing data set; while perfect ensemble performance is normally indicative of overfitting, the significantly lower photon intensity of the target pebbles produced distinctly less intense spectra, such that perfect ensemble performance was expected.
Underground caverns in salt domes are promising geologic features for storing hydrogen because of salt's extremely low permeability and self-healing behavior. However, the salt cavern storage community does not yet fully understand the geomechanical behavior of salt rock driven by rapid injection-production cycles, which may significantly impact the cost-effective storage-recovery performance of multiple caverns. Our field-scale generic model captures the impact of cyclic loading-unloading on salt creep behavior and deformation under different cycle frequencies, operating pressures, and spatial orders of operating cavern(s). This systematic simulation study indicates that the initial operation cycle and the arrangement of multiple caverns play a significant role in the creep-driven loss of cavern volume and in cavern deformation. Our future work will develop a new salt constitutive model based on geomechanical tests of site-specific salt rock to precisely probe the cyclic behaviors of salt both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and damage-healing mechanisms.
Low-velocity impact of 2D woven glass fiber reinforced polymer (GFRP) and carbon fiber reinforced polymer (CFRP) composite laminates was studied experimentally and numerically. Hybrid laminates containing blocked layers of GFRP/CFRP/GFRP with all plies oriented at 0° were investigated. Relatively high impact energies were used to obtain full perforation of the laminate in a low-velocity impact setup. Numerical simulations were carried out using the in-house transient dynamics finite element code, Sierra/SM, developed at Sandia National Laboratories. A three-dimensional continuum damage model was used to describe the response of a woven composite ply. Two methods for handling delamination were considered and compared: (1) cohesive zone modeling and (2) continuum damage mechanics. The reduced model size achieved by omission of the cohesive zone elements produced acceptable results at reduced computational cost. The comparison between different modeling techniques can be used to inform modeling decisions relevant to low velocity impact scenarios. The modeling was validated by comparing with the experimental results and showed good agreement in terms of predicted damage mechanisms and impactor velocity and force histories.
This article describes the theory, analysis, and initial bench-top testing of a minimally invasive rotational resonator designed to produce small amounts of electrical energy for use in oceanic observation buoys. This work details the system of equations that governs such a resonator, its potential power production, and its predicted effects on the modified motion of the buoy. Finally, a bench-top test apparatus is designed and tested to identify the system and empirically verify the governing equations.
⟨a⟩-type screw dislocations are known to be significant mediators of plasticity in hexagonal close-packed (HCP) metals. These dislocations have polymorphic core structures, and subtle changes in the relative energies of these core structures are known to have a large impact on the dynamics of the dislocations. This work identifies a previously neglected long-range elastic interstitial-solute/dislocation interaction that influences the core structures. Essentially, interstitial solutes induce a change in the dislocation core structure that minimizes the energy of interaction between the solutes and the dislocation. Molecular dynamics simulations, continuum linear elasticity, and statistical analysis show that this long-range interaction can locally alter the dislocation cores, so that many different polymorphs appear along a single dislocation not only because of direct contact between interstitials and the dislocation core but also because of this long-range elastic interaction.
Epitaxial regrowth processes are presented for achieving Al-rich aluminum gallium nitride (AlGaN) high electron mobility transistors (HEMTs) with p-type gates exhibiting large, positive threshold voltages for enhancement-mode operation and with low-resistance Ohmic contacts. Utilizing a deep gate recess etch into the channel and an epitaxially regrown p-AlGaN gate structure, an Al0.85Ga0.15N barrier/Al0.50Ga0.50N channel HEMT with a large positive threshold voltage (VTH = +3.5 V) and negligible gate leakage is demonstrated. Epitaxial regrowth of AlGaN avoids the use of gate insulators, which can suffer from the charge trapping effects observed in typical dielectric layers deposited on AlGaN. Low-resistance Ohmic contacts (minimum specific contact resistance = 4 × 10−6 Ω cm2, average = 1.8 × 10−4 Ω cm2) are demonstrated in an Al0.85Ga0.15N barrier/Al0.68Ga0.32N channel HEMT by employing epitaxial regrowth of a heavily doped, n-type, reverse compositionally graded epitaxial structure. The combination of low-leakage, large-positive-threshold p-gates and low-resistance Ohmic contacts enabled by the described regrowth processes provides a pathway to realizing high-current, enhancement-mode, Al-rich AlGaN-based ultra-wide-bandgap transistors.
Here we examine various forms of spectrum and associated pseudospectrum that can be defined for noncommuting d-tuples of Hermitian elements of a C*-algebra. In particular, we focus on the forms of multivariable pseudospectra that are finding applications in physics. The emphasis is on theoretical calculations of examples, especially for noncommuting pairs and triples of operators on infinite-dimensional Hilbert space. Specifically, we look at the universal pair of projections in a C*-algebra, the usual position and momentum operators, and triples of tridiagonal operators. We prove a relation between the quadratic pseudospectrum and Clifford pseudospectra, as well as results about how symmetries in a tuple of operators can lead to symmetries in the various pseudospectra.
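For reference, the two objects related in the main result can be defined as follows for a d-tuple $A=(A_1,\dots,A_d)$ of Hermitian operators and $\lambda\in\mathbb R^d$ (our notation; conventions vary in the literature). The quadratic pseudospectrum is built from the quadratic gap,

\[
\mu^{\mathrm q}_\lambda(A)=\sqrt{\lambda_{\min}\Big(\sum_{j=1}^d (A_j-\lambda_j I)^2\Big)},
\qquad
\Lambda^{\mathrm q}_\varepsilon(A)=\{\lambda:\ \mu^{\mathrm q}_\lambda(A)\le\varepsilon\},
\]

while the Clifford pseudospectrum is defined analogously from the spectral localizer $L_\lambda(A)=\sum_j (A_j-\lambda_j I)\otimes\Gamma_j$, where the $\Gamma_j$ are Hermitian Clifford generators.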
A new particle-based reweighting method is developed and demonstrated in the Aleph Particle-in-Cell with Direct Simulation Monte Carlo (PIC-DSMC) program. Novel splitting and merging algorithms ensure that modified particles maintain physically consistent positions and velocities. This method allows a single reweighting simulation to efficiently model plasma evolution over orders of magnitude variation in density, while accurately preserving energy distribution functions (EDFs). Demonstrations on electrostatic sheath and collisional rate dynamics show that reweighting simulations achieve accuracy comparable to fixed weight simulations with substantial computational time savings. This highly performant reweighting method is recommended for modeling plasma applications that require accurate resolution of EDFs or exhibit significant density variations in time or space.
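A generic sketch of conservative splitting and merging (not the Aleph algorithm): splitting is exactly conservative, and the group merge shown replaces n particles with two, conserving total weight, momentum, and kinetic energy; position handling is simplified here, whereas the method above additionally keeps positions physically consistent:

```python
# Illustrative split/merge for particle reweighting.
import numpy as np

def split(w, x, v):
    """One particle -> two half-weight copies (conserves all moments exactly)."""
    return [(0.5 * w, x.copy(), v.copy()), (0.5 * w, x.copy(), v.copy())]

def merge_group(ws, xs, vs, rng=None):
    """Replace n particles with 2, conserving weight, momentum, and kinetic
    energy. ws: (n,) weights; xs: (n, 3) positions; vs: (n, 3) velocities."""
    rng = rng or np.random.default_rng()
    W = ws.sum()
    x_cm = (ws[:, None] * xs).sum(axis=0) / W    # weight-averaged position
    v_cm = (ws[:, None] * vs).sum(axis=0) / W    # preserves total momentum
    E = 0.5 * (ws * (vs**2).sum(axis=1)).sum()   # total kinetic energy
    u2 = 2.0 * E / W - v_cm @ v_cm               # >= 0 by convexity
    u = rng.standard_normal(3)
    u *= np.sqrt(max(u2, 0.0)) / np.linalg.norm(u)  # random thermal direction
    return (np.array([W / 2, W / 2]),
            np.stack([x_cm, x_cm]),
            np.stack([v_cm + u, v_cm - u]))      # pair straddles the mean
```

Merging to two particles rather than one is what allows both momentum and energy, and hence the tails of the EDF, to be respected.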
Simulating subsurface contaminant transport at the kilometer scale often entails modeling reactive flow and transport within and through complex geologic structures. These structures are typically meshed by hand, and as a result geologic structure is usually represented by one or a few deterministically generated geological models in uncertainty studies of subsurface flow and transport. Uncertainty in geologic structure can have a significant impact on contaminant transport. In this study, the impact of geologic structure on contaminant tracer transport in a shale formation is investigated for a simplified generic deep geologic repository for permanent disposal of spent nuclear fuel. An open-source modeling framework is used to perform a sensitivity analysis of the transport of two tracers from a generic spent nuclear fuel repository with uncertain locations of the interfaces between the strata of the geologic structure. The automated workflow uses sampled realizations of the geological structural model, in addition to uncertain flow parameters, in a nested sensitivity analysis. Concentrations of the tracers at observation points within, in line with, and downstream of the repository are used as the quantities of interest for determining model sensitivity to input parameters and geological realization. The results indicate that the locations of strata interfaces in the geological structure have a first-order impact on tracer transport in the example shale formation, and that this impact may be greater than that of the uncertain flow parameters.
We introduce a new training algorithm for deep neural networks that utilize random complex exponential activation functions. Our approach employs a Markov chain Monte Carlo sampling procedure to iteratively train network layers, avoiding global and gradient-based optimization while maintaining error control. It consistently attains the theoretical approximation rate for residual networks with complex exponential activation functions, determined by network complexity. Additionally, it enables efficient learning of multiscale and high-frequency features, producing interpretable parameter distributions. Despite using sinusoidal basis functions, we do not observe Gibbs phenomena in approximating discontinuous target functions.
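A toy stand-in for layer-wise MCMC training with complex exponential features; the proposal, acceptance rule, and least-squares coupling below are our illustrative choices, not the paper's algorithm:

```python
# Metropolis sampling over the frequencies of random Fourier features
# exp(i * w @ x), with output coefficients refit by regularized least squares.
import numpy as np

def fit_layer(X, y, n_feat=64, n_iter=2000, step=0.5, lam=1e-8):
    """X: (n, d) inputs; y: (n,) targets. Returns frequencies W, coefficients c."""
    n, d = X.shape
    W = np.random.randn(n_feat, d)

    def solve(W):
        Phi = np.exp(1j * (X @ W.T))              # complex exponential features
        A = Phi.conj().T @ Phi + lam * np.eye(n_feat)
        c = np.linalg.solve(A, Phi.conj().T @ y)  # regularized least squares
        return c, np.linalg.norm(Phi @ c - y)

    c, r = solve(W)
    for _ in range(n_iter):                       # propose one frequency at a time
        j = np.random.randint(n_feat)
        W_try = W.copy()
        W_try[j] += step * np.random.randn(d)
        c_try, r_try = solve(W_try)
        # Metropolis rule on the (scaled) residual as an energy.
        if np.random.rand() < np.exp(min(0.0, (r - r_try) / max(r, 1e-12))):
            W, c, r = W_try, c_try, r_try
    return W, c
```

Because every update is a local sample-and-refit, no gradients through the network are ever formed, and the accepted frequencies form an interpretable distribution over scales.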
We demonstrate magnetic anomaly detection (MAD) using an array of 24 commercial induction coil magnetometers with stand-off distances of 260-1200 m from a pulsed 99.8(3) kA·m² magnetic dipole source. The sparse array is used to estimate the magnetic dipole location, magnitude, and orientation. We demonstrate how independent component analysis (ICA) improves the accuracy and precision of the magnetometer array when estimating the dipole parameters. Using sensor responses recorded from individual source pulses, we estimate the dipole location to within 29 ± 2 m, the magnitude to within 3 ± kA·m², and the dipole orientation to within 19 ± 0.6°.
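A sketch of how ICA can isolate the pulsed-dipole signature from background interference before dipole fitting; the template-correlation selection rule and all names are our illustrative choices, not the paper's processing chain:

```python
# ICA denoising of a simultaneous multi-sensor record prior to dipole fitting.
import numpy as np
from sklearn.decomposition import FastICA

def denoise_array(records, template, n_components=8, keep_thresh=0.3):
    """records: (n_sensors, n_samples) simultaneous time series;
    template: (n_samples,) expected pulse shape."""
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(records.T)                 # (n_samples, n_components)
    t = (template - template.mean()) / template.std()
    # Keep components correlated with the pulse; zero out interference.
    corr = np.array([abs(np.corrcoef(t, s)[0, 1]) for s in S.T])
    S_clean = np.where(corr >= keep_thresh, S, 0.0)
    return ica.inverse_transform(S_clean).T          # back to sensor space
```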
Legacy and modern-day ablation codes typically assume equilibrium pyrolysis gas chemistry. Yet, experimental data suggest that speciation from resin decomposition is far from equilibrium. A thermal and chemical kinetic study was performed on pyrolysis gas advection through a porous char, using the Theoretical Ablative Composite for Open Testing (TACOT) as a demonstrator material. The finite-element tool SIERRA/Aria simulated the ablation of TACOT under various conditions. Temperature and phenolic decomposition rates generated from Aria were applied as inputs to a simulated network of perfectly stirred reactors (PSRs) in the chemical solver Cantera. A high-fidelity combustion mechanism computed the gas composition and thermal properties of the advecting pyrolyzate. The results indicate that pyrolysis gases do not rapidly achieve chemical equilibrium while traveling through the simulated material. Instead, a highly chemically reactive zone exists in the ablator between 1400 and 2500 K, wherein the modeled pyrolysis gases transition from a chemically frozen state to chemical equilibrium. These finite-rate results demonstrate a significant departure in computed pyrolysis gas properties from those derived from equilibrium solvers. Under the same conditions, finite-rate-derived gas is estimated to provide up to 50% less heat absorption than equilibrium-derived gas. This discrepancy suggests that nonequilibrium pyrolysis gas chemistry could substantially impact ablator material response models.
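A toy version of the finite-rate-versus-equilibrium comparison in Cantera; GRI-Mech 3.0 and the surrogate pyrolyzate composition stand in for the study's high-fidelity mechanism and Aria-derived inputs:

```python
# Compare finite-rate relaxation against the equilibrium composition for a
# surrogate pyrolysis gas at a fixed in-char temperature.
import cantera as ct
import numpy as np

gas = ct.Solution("gri30.yaml")
gas.TPX = 1800.0, ct.one_atm, "CH4:1, H2:2, CO:1"   # illustrative pyrolyzate

# Equilibrium reference at fixed temperature and pressure.
eq = ct.Solution("gri30.yaml")
eq.TPX = gas.TPX
eq.equilibrate("TP")

# Finite-rate relaxation in an isothermal constant-pressure reactor.
reactor = ct.IdealGasConstPressureReactor(gas, energy="off")
sim = ct.ReactorNet([reactor])
for t in np.linspace(1e-5, 1e-2, 20):
    sim.advance(t)
    print(f"t={t:.2e} s  X_H2={reactor.thermo['H2'].X[0]:.4f}  "
          f"(equilibrium {eq['H2'].X[0]:.4f})")
```

Sweeping the temperature across the 1400-2500 K window and comparing residence time to the chemical relaxation time is the essence of the frozen-to-equilibrium transition described above.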
Tank farm workers involved in nuclear cleanup activities perform physically demanding tasks, typically while wearing heavy personal protective equipment (PPE). Exoskeleton devices have the potential to bring considerable benefit to this industry but have not been thoroughly studied in the context of nuclear cleanup. In this paper, we examine the performance of exoskeletons during a series of tasks emulating jobs performed on tank farms while participants wore PPE commonly deployed by tank farm workers. The goal of this study was to evaluate the effects of commercially available lower-body exoskeletons on a user's gait kinematics and user perceptions. Three participants each tested three lower-body exoskeletons in a 70-min protocol consisting of level treadmill walking, incline treadmill walking, weighted treadmill walking, a weight lifting session, and a hand tool dexterity task. Results were compared to a no-exoskeleton baseline condition and evaluated as individual case studies. The three participants showed a wide spectrum of user preferences and adaptations toward the devices. Individual case studies revealed that some users quickly adapted to select devices for certain tasks while others remained hesitant to use the devices. Temporal effects on gait and perception were also observed for select participants over the course of the device sessions. Device benefit varied between tasks, but no conclusive aggregate trends were observed across devices for all tasks. Evidence suggests that device benefits observed for specific tasks may have been overshadowed by the wide array of tasks used in the protocol.
Shands, Emerson W.; Morel, Jim E.; Ahrens, Cory D.; Franke, Brian C.
We derive a new Galerkin quadrature (GQ) method for S_N calculations that differs from the two methods preceding it in that a matrix inverse of an N × N matrix, where N is the number of directions in the quadrature set, is no longer required. Galerkin quadrature methods are designed for calculations with highly anisotropic scattering. Such methods are not simply special angular quadratures but also methods for representing the S_N scattering source that offer several advantages over the standard scattering source representation when highly truncated Legendre cross-section expansions must be used. Galerkin quadrature methods are also useful when the scattering is moderately anisotropic but the quadrature being used is not sufficiently accurate for the required order of the scattering source expansion. We derive the new method and present computational results showing that its performance on two challenging problems is comparable to those of the two GQ methods that preceded it.
Krack, Malte; Brake, Matthew R.W.; Schwingshackl, Christoph; Gross, Johann; Hippold, Patrick; Lasen, Matias; Dini, Daniele; Salles, Loic; Allen, Matthew S.; Shetty, Drithi; Payne, Courtney A.; Willner, Kai; Lengger, Michael; Khan, Moheimin Y.; Ortiz, Jonel; Najera-Flores, David A.; Kuether, Robert J.; Miles, Paul R.; Xu, Chao; Yang, Huiyi; Jalali, Hassan; Taghipour, Javad; Khodaparast, Hamed H.; Friswell, Michael I.; Tiso, Paolo; Morsy, Ahmed A.; Bhattu, Arati; Hermann, Svenja; Jamia, Nidhal; Ozguven, H.N.; Muller, Florian; Scheel, Maren
The present article summarizes the submissions to the Tribomechadynamics Research Challenge announced in 2021. The task was a blind prediction of the vibration behavior of a system comprising a thin plate clamped on two sides via bolted joints. Both geometric and frictional contact nonlinearities are expected to be relevant. Provided were the CAD models and technical drawings of all parts, as well as assembly instructions. The main objective was to predict the frequency and damping ratio of the lowest-frequency mode as a function of amplitude. Many different prediction approaches were pursued, ranging from well-known methods to very recently developed ones. After the submission deadline, the system was fabricated and tested. The aim of this article is to evaluate the current state of the art in modeling and vibration prediction and to provide directions for future methodological advancements.
Water security and climate change are important priorities for communities and regions worldwide. The intersections between water and climate change extend across many environmental and human activities. This Primer is intended as an introduction, grounded in examples, for students and others considering the interactions between climate, water, and society. In this Primer, we summarize key intersections between water and climate across four sectors: environment; drinking water, sanitation, and hygiene; food and agriculture; and energy. We begin with an overview of the fundamental water dynamics within each of these four sectors, and then discuss how climate change is impacting water and society within and across these sectors. Emphasizing the relationships and interconnectedness between water and climate change can encourage systems thinking, which can show how activities in one sector may influence activities or outcomes in other sectors. We argue that to achieve a resilient and sustainable water future under climate change, proposed solutions must consider the water–climate nexus to ensure the interconnected roles of water across sectors are not overlooked. Toward that end, we offer an initial set of guiding questions that can be used to inform the development of more holistic climate solutions. This article is categorized under: Science of Water > Water and Environmental Change; Engineering Water > Water, Health, and Sanitation; Human Water > Value of Water.
This work introduces a comprehensive simulation tool that provides a robust 1D Schrödinger–Poisson solver for modeling the electrostatics of heterostructures with an arbitrary number of layers and non-uniform doping profiles, along with the treatment of partial ionization of dopants at low temperatures. The effective masses are derived from first-principles calculations. The solver is used to characterize three Ge1-xSnx/Ge heterostructures with non-uniform doping profiles and determine the subband structure at various temperatures. The simulated sheet carrier densities show excellent agreement with the experimentally extracted data, demonstrating the capabilities of the solver.
Traditional Monte Carlo methods for particle transport utilize source iteration to express the solution of the transport equation, the flux density, as a Neumann series. Our contribution is to show that the particle paths simulated within source iteration are associated with the adjoint flux density and that the adjoint particle paths are associated with the flux density. We make our assertion rigorous through the use of stochastic calculus by representing the particle path used in source iteration as a solution to a stochastic differential equation (SDE). The solution to the adjoint Boltzmann equation is then expressed in terms of the same SDE, and the solution to the Boltzmann equation is expressed in terms of the SDE associated with the adjoint particle process. An important consequence is that the particle paths used within source iteration simultaneously provide Monte Carlo samples of the flux density and adjoint flux density in the detector and source regions, respectively. The significant practical implication is that particle trajectories can be reused to obtain both forward and adjoint quantities of interest. To the best of our knowledge, the reuse of entire particle paths has not appeared in the literature. Monte Carlo simulations are presented to support the reuse of the particle paths.
The Direct Simulation Monte Carlo (DSMC) method is used to numerically simulate test conditions in the Sandia Hypersonic Shock Tunnel (HST) facility. The setup consists of a hypersonic flow over a cylinder, with the freestream at speeds of 4-5 km/s in a state of thermal non-equilibrium. We present comparisons of temperatures derived from spectrographic measurements of nitric oxide (NO) emission in the ultraviolet (UV) region with predictions from the DSMC solver. Furthermore, we present differences between spectrally banded imaging measurements taken during experiments in the infrared (IR) and UV regions and those obtained from numerical simulations.
Legacy and modern-day ablation codes typically assume equilibrium pyrolysis gas chemistry. Yet experimental data suggest that speciation from resin decomposition is far from equilibrium. A thermal and chemical kinetic study was performed on pyrolysis gas advection through a porous char, using the Theoretical Ablative Composite for Open Testing (TACOT) as a demonstrator material. The finite-element tool SIERRA/Aria simulated the ablation of TACOT under various conditions. Temperatures and phenolic decomposition rates generated by Aria were applied as inputs to a simulated network of perfectly stirred reactors (PSRs) in the chemical solver Cantera. A high-fidelity combustion mechanism computed the gas composition and thermal properties of the advecting pyrolyzate. The results indicate that pyrolysis gases do not rapidly achieve chemical equilibrium while traveling through the simulated material. Instead, a highly chemically reactive zone exists in the ablator between 1400 and 2500 K, wherein the modeled pyrolysis gases transition from a chemically frozen state to chemical equilibrium. These finite-rate results demonstrate a significant departure in computed pyrolysis gas properties from those derived from equilibrium solvers. Under the same conditions, finite-rate-derived gas is estimated to provide up to 50% less heat absorption than equilibrium-derived gas. This discrepancy suggests that nonequilibrium pyrolysis gas chemistry could substantially impact ablator material response models.
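As a rough illustration of the PSR-relaxation idea (a minimal sketch, not the authors' SIERRA/Aria-Cantera pipeline), the following uses Cantera to advance a fixed-temperature stirred reactor holding a hypothetical surrogate pyrolyzate and compares it against the equilibrium composition; GRI-3.0 stands in for the high-fidelity combustion mechanism, and the initial composition and temperature are illustrative:

    import cantera as ct

    # Hypothetical surrogate pyrolyzate; GRI-3.0 is a stand-in for the
    # high-fidelity combustion mechanism used in the study.
    gas = ct.Solution("gri30.yaml")
    gas.TPX = 1800.0, ct.one_atm, "CH4:0.3, CO:0.3, H2:0.2, H2O:0.2"

    # energy="off" holds temperature fixed, mimicking a prescribed-T PSR;
    # a chain of such reactors with residence times would mimic advection
    # through the char.
    reactor = ct.IdealGasConstPressureReactor(gas, energy="off")
    net = ct.ReactorNet([reactor])
    for t in (1e-6, 1e-4, 1e-2):
        net.advance(t)
        print(f"t = {t:.0e} s, X(CO) = {reactor.thermo['CO'].X[0]:.4f}")

    # Equilibrium composition at the same T and P, for comparison.
    gas.TPX = 1800.0, ct.one_atm, "CH4:0.3, CO:0.3, H2:0.2, H2O:0.2"
    gas.equilibrate("TP")
    print(f"equilibrium X(CO) = {gas['CO'].X[0]:.4f}")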
In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition—a bound that ensures the trial iterate produces sufficient decrease of the subproblem model. In this paper, we expound on various proximal trust-region subproblem solvers that generalize traditional trust-region methods for smooth unconstrained and convex-constrained problems. We introduce a simplified spectral proximal gradient solver, a truncated nonlinear conjugate gradient solver, and a dogleg method, and we compare algorithm performance on examples from data science and PDE-constrained optimization.
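Since the abstract names a spectral proximal gradient solver among the subproblem methods, here is a minimal, hedged sketch of that idea in isolation: a Barzilai-Borwein (spectral) step combined with the soft-thresholding proximal operator for an l1 term. It omits the trust-region constraint and the fraction-of-Cauchy-decrease check that the paper's solvers enforce.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def spectral_prox_grad_step(x, x_prev, g, g_prev, lam):
        """One Barzilai-Borwein (spectral) proximal gradient step for
        min f(x) + lam * ||x||_1. Illustrative sketch only."""
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / max(s @ y, 1e-12)   # BB1 step length
        return soft_threshold(x - alpha * g, alpha * lam)

    # Tiny smooth term: f(x) = 0.5 * ||A x - b||^2
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
    grad_f = lambda x: A.T @ (A @ x - b)

    x_prev, x = np.zeros(5), 0.1 * np.ones(5)
    g_prev, g = grad_f(x_prev), grad_f(x)
    for _ in range(50):
        x_new = spectral_prox_grad_step(x, x_prev, g, g_prev, lam=0.1)
        x_prev, x, g_prev, g = x, x_new, g, grad_f(x_new)
    print(x)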
The impact of high-altitude electromagnetic pulse events on the electric grid is not fully understood, and validated modeling of mitigations, such as lightning surge arresters (LSAs), is necessary to predict the propagation of very fast transients on the grid. Experimental validation of high-frequency models for surge arresters is an active area of research. This article experimentally validates a previously defined ZnO LSA model using four metal-oxide varistor pucks and nanosecond-scale pulses to measure voltage and current responses. The SPICE circuit models of the pucks showed good agreement with the measured arrester response once a testbed inductance of approximately 100 nH was accounted for. Additionally, the comparatively high capacitance of low-profile arresters shows a favorable response to high-speed transients, indicating the potential for effective electromagnetic pulse mitigation with future materials design.
Engineering and applied science rely on computational experiments to rigorously study physical systems. The mathematical models used to probe these systems are highly complex, and sampling-intensive studies often require prohibitively many simulations for acceptable accuracy. Surrogate models provide a means of circumventing the high computational expense of sampling such complex models. In particular, polynomial chaos expansions (PCEs) have been successfully used for uncertainty quantification studies of deterministic models where the dominant source of uncertainty is parametric. We discuss an extension to conventional PCE surrogate modeling to enable surrogate construction for stochastic computational models that have intrinsic noise in addition to parametric uncertainty. We develop a PCE surrogate on a joint space of intrinsic and parametric uncertainty, enabled by Rosenblatt transformations, which are evaluated via kernel density estimation of the associated conditional cumulative distributions. Furthermore, we extend the construction to random field data via the Karhunen-Loève expansion. We then take advantage of closed-form solutions for computing PCE Sobol indices to perform a global sensitivity analysis of the model which quantifies the intrinsic noise contribution to the overall model output variance. Additionally, the resulting joint PCE is generative in the sense that it allows generating random realizations at any input parameter setting that are statistically approximately equivalent to realizations from the underlying stochastic model. The method is demonstrated on a chemical catalysis example model and a synthetic example controlled by a parameter that enables a switch from unimodal to bimodal response distributions.
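To illustrate the kernel-density-estimated Rosenblatt map in the simplest setting (a sketch with a single conditioning variable and illustrative bandwidths, not the paper's full joint construction): the conditional CDF is estimated with Gaussian kernels and used to push replicate outputs to approximately uniform germs.

    import numpy as np
    from scipy.stats import norm

    def conditional_cdf(y, x, Y, X, hx=0.2, hy=0.2):
        """Kernel estimate of F(y | x) from paired samples (X, Y): a
        kernel-weighted average of smoothed indicators (illustrative
        bandwidths, no attempt at optimal selection)."""
        w = norm.pdf((x - X) / hx)
        return np.sum(w * norm.cdf((y - Y) / hy)) / np.sum(w)

    # Synthetic stochastic model: y = sin(x) plus x-dependent noise.
    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=2000)
    Y = np.sin(X) + (0.1 + 0.2 * np.abs(X)) * rng.normal(size=2000)

    # Rosenblatt-style push-forward: each (x, y) pair maps to a germ u in
    # (0, 1) that should be approximately uniform if the estimate is good.
    U = np.array([conditional_cdf(y, x, Y, X)
                  for x, y in zip(X[:200], Y[:200])])
    print(U.min(), U.mean(), U.max())   # mean should be near 0.5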
Carbon capture, utilization, and storage (CCUS) is an important pathway for meeting climate mitigation goals. While the economic viability of CCUS is well understood, previous studies do not evaluate the economic feasibility of carbon capture and storage (CCS) in the Permian Basin, specifically with regard to the new Section 45Q tax credits. We developed a technoeconomic analysis method, evaluated the economic feasibility of CCS at acid gas injection (AGI) wells, and assessed the implications of Section 45Q tax credits for CCS at the AGIs. We find that the compressors, well depth, and permit and monitoring costs drive the facility costs. Compressors are the predominant contributors to the capital and operating expenditures driving the levelized cost of CO2 storage. Strategic cost-reduction measures identified include 1) sourcing low-cost electricity and 2) optimizing operational efficiency in well operations. In evaluating the impact of the tax credits on CCS projects, facility scale proved decisive. We found that facilities with an annual injection rate exceeding 10,000 MT demonstrate economic viability contingent upon procuring inputs at the least cost. New construction of AGI wells was found to be economically viable at a storage capacity of 100,000 MT. The basin is heavily focused on CCUS (tax credit of $65/MT CO2), which overshadows CCS ($85/MT CO2) opportunities. Balancing the dual objectives of CCS and CCUS requires planning and coordination for optimal resource and pore-space utilization to attain the basin's decarbonization potential. We also found that CCS at AGI wells is a lower-cost option compared with CCS in other industries.
There is growing interest in extending low-rank matrix decompositions to multi-way arrays, or tensors. One fundamental low-rank tensor decomposition is the canonical polyadic decomposition (CPD). The challenge of fitting a low-rank, nonnegative CPD model to Poisson-distributed count data is of particular interest. Several popular algorithms use local search methods to approximate the maximum likelihood estimator (MLE) of the Poisson CPD model. This work presents two new algorithms that extend state-of-the-art local methods for Poisson CPD. Hybrid GCP-CPAPR combines Generalized Canonical Decomposition (GCP) with stochastic optimization and CP Alternating Poisson Regression (CPAPR), a deterministic algorithm, to increase the probability of converging to the MLE over either method used alone. Restarted CPAPR with SVDrop uses a heuristic based on the singular values of the CPD model unfoldings to identify convergence toward optimizers that are not the MLE, and it restarts within the feasible domain of the optimization problem, thus reducing overall computational cost when using a multi-start strategy. We provide empirical evidence indicating that our approaches outperform existing methods with respect to converging to the Poisson CPD MLE.
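For intuition, the Poisson MLE objective that CPAPR targets reduces, in the matrix special case, to KL-divergence nonnegative matrix factorization. The following hedged sketch uses classical Lee-Seung multiplicative updates rather than the paper's CPAPR, GCP, or SVDrop machinery:

    import numpy as np

    def poisson_nmf(M, rank, iters=200, eps=1e-10):
        """Lee-Seung multiplicative updates for the Poisson (KL) MLE of
        M ~ W @ H: the matrix special case of nonnegative Poisson CPD."""
        rng = np.random.default_rng(0)
        W = rng.uniform(size=(M.shape[0], rank))
        H = rng.uniform(size=(rank, M.shape[1]))
        for _ in range(iters):
            W *= ((M / (W @ H + eps)) @ H.T) / (H.sum(axis=1) + eps)
            H *= (W.T @ (M / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        return W, H

    # Low-rank Poisson counts: draw M with mean W_true @ H_true, then fit.
    rng = np.random.default_rng(2)
    W_true = rng.uniform(1, 3, size=(30, 2))
    H_true = rng.uniform(1, 3, size=(2, 20))
    M = rng.poisson(W_true @ H_true).astype(float)
    W, H = poisson_nmf(M, rank=2)
    print(np.linalg.norm(M - W @ H) / np.linalg.norm(M))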
A variational phase field model for dynamic ductile fracture is presented. The model is designed for elasto-viscoplastic materials subjected to rapid deformations in which the effects of heat generation and material softening are dominant. The variational framework allows for the consistent inclusion of plastic dissipation in the heat equation as well as thermal softening. It employs a coalescence function to degrade fracture energy during regimes of high plastic flow. A variationally consistent form of the Johnson–Cook model is developed for use with the framework. Results from various benchmark problems in dynamic ductile fracture are presented to demonstrate capabilities. In particular, the ability of the model to regularize shear band formation and subsequent damage evolution in two- and three-dimensional problems is demonstrated. Importantly, these phenomena are naturally captured through the underlying physics without the need for phenomenological criteria such as stability thresholds for the onset of shear band formation.
The sensitivity analysis algorithms developed by the radiation transport community in neutron transport codes such as MCNP and SCALE are used extensively in fields such as nuclear criticality safety. However, these techniques have seldom been considered for electron transport applications. In the past, the differential-operator method with the single-scatter capability was implemented in Sandia National Laboratories' Integrated TIGER Series (ITS) coupled electron-photon transport code. This work extends the available sensitivity estimation techniques in ITS by implementing an adjoint-based sensitivity method, GEAR-MC, to strengthen its sensitivity analysis capabilities. To verify the accuracy of this method as extended to coupled electron-photon transport, it is compared against the central-difference and differential-operator methodologies for estimating sensitivity coefficients for an experiment performed by McLaughlin and Hussman. Energy deposition sensitivities were calculated using all three methods, and the comparison between them provides confidence in the accuracy of the newly implemented method. Unlike the current implementation of the differential-operator method in ITS, the GEAR-MC method was implemented with the option to calculate energy-dependent energy deposition sensitivities, i.e., the sensitivity coefficients of energy deposition tallies to energy-dependent cross sections. The energy-dependent cross sections may be those of the material, of elements in the material, or of reactions of interest for an element. These sensitivities were compared to the energy-integrated sensitivity coefficients and exhibited a maximum percentage difference of 2.15%.
Vertical-axis wind turbines (VAWTs) have been the subject of research and development for nearly a century. However, this turbine architecture has fallen in and out of favor on multiple occasions. Beginning in the late 1970s, the U.S. Department of Energy sponsored an extensive experimental program through Sandia National Laboratories that produced a wealth of experimental data from several highly instrumented turbines. The turbines designed, built, and tested include the 2-meter, 5-meter, 17-meter, and 34-meter machines in their respective configurations. This program spurred a commercial collaboration that resulted in the FloWind turbines. The FloWind turbines had several notable design changes from the experimental turbines that, in conjunction with a general lack of understanding of fatigue prediction at the time, led to the majority of the turbines failing prematurely during the late 1980s.
As quantum computing hardware becomes more complex with ongoing design innovations and growing capabilities, the quantum computing community needs increasingly powerful techniques for fabrication failure root-cause analysis. This is especially true for trapped-ion quantum computing. As trapped-ion quantum computing aims to scale to thousands of ions, the electrode numbers are growing to several hundred, with likely integrated photonic components also adding to the electrical and fabrication complexity, making faults even harder to locate. In this work, we used a high-resolution quantum magnetic imaging technique, based on nitrogen-vacancy centers in diamond, to investigate short-circuit faults in an ion trap chip. We imaged currents from these short-circuit faults to ground and compared them to intentionally created faults, finding that the root cause of the faults was failures in the on-chip trench capacitors. This work, where we exploited the performance advantages of a quantum magnetic sensing technique to troubleshoot a piece of quantum computing hardware, is a unique example of the evolving synergy between emerging quantum technologies to achieve capabilities that were previously inaccessible.
Stochastic collocation (SC) is a well-known non-intrusive method of constructing surrogate models for uncertainty quantification. In dynamical systems, SC is especially suited for full-field uncertainty propagation that characterizes the distributions of the high-dimensional solution fields of a model with stochastic input parameters. However, due to the highly nonlinear nature of the parameter-to-solution map in even the simplest dynamical systems, the constructed SC surrogates are often inaccurate. This work presents an alternative approach, where we apply the SC approximation over the dynamics of the model, rather than the solution. By combining the data-driven sparse identification of nonlinear dynamics framework with SC, we construct dynamics surrogates and integrate them through time to construct the surrogate solutions. We demonstrate that the SC-over-dynamics framework leads to smaller errors, both in terms of the approximated system trajectories as well as the model state distributions, when compared against full-field SC applied to the solutions directly. We present numerical evidence of this improvement using three test problems: a chaotic ordinary differential equation, and two partial differential equations from solid mechanics.
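The core regression behind the sparse identification of nonlinear dynamics framework is sequentially thresholded least squares; a minimal sketch follows, recovering a one-dimensional polynomial right-hand side (coupling this with stochastic collocation over parameters, as the paper does, is not shown):

    import numpy as np

    def stlsq(Theta, dXdt, threshold=0.1, iters=10):
        """Sequentially thresholded least squares: sparse coefficients Xi
        with dXdt ~ Theta @ Xi (the core SINDy regression)."""
        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(Xi) < threshold
            Xi[small] = 0.0
            for k in range(dXdt.shape[1]):       # refit the active terms
                big = ~small[:, k]
                if big.any():
                    Xi[big, k] = np.linalg.lstsq(
                        Theta[:, big], dXdt[:, k], rcond=None)[0]
        return Xi

    # Recover dx/dt = -2x + x^2 from noisy derivative samples.
    rng = np.random.default_rng(3)
    x = rng.uniform(-1, 1, size=(400, 1))
    dxdt = -2.0 * x + x**2 + 0.01 * rng.normal(size=x.shape)
    Theta = np.hstack([np.ones_like(x), x, x**2, x**3])  # candidate library
    print(stlsq(Theta, dxdt).ravel())   # expect approximately [0, -2, 1, 0]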
Statistical analysis of tensor-valued data has largely relied on the tensor-variate normal (TVN) distribution, which may be inadequate for data arising from distributions with heavier or lighter tails. We study a general family of elliptically contoured (EC) tensor-variate distributions and derive its characterizations, moments, and marginal and conditional distributions. We describe procedures for maximum likelihood estimation from data that are (1) uncorrelated draws from an EC distribution, (2) from a scale mixture of the TVN distribution, and (3) from an underlying but unknown EC distribution, for which we extend Tyler's robust estimator. A detailed simulation study highlights the benefits of choosing an EC distribution over the TVN for heavier-tailed data. We develop tensor-variate classification rules using discriminant analysis and EC errors and show that they better predict cats and dogs from images in the Animal Faces-HQ dataset than the TVN-based rules. A novel tensor-on-tensor regression and tensor-variate analysis of variance (TANOVA) framework under EC errors is also demonstrated to better characterize gender, age, and ethnic origin than the usual TVN-based TANOVA in the celebrated Labeled Faces in the Wild dataset.
Entropy is a state variable that may be obtained from any thermodynamically complete equation of state (EOS). However, hydrocode calculations that output the entropy often contain numerical errors; this is not because of the EOS, but rather the solution techniques that are used in hydrocodes (especially Eulerian) such as convection, remapping, and artificial viscosity. In this work, empirical correlations are investigated to reduce the errors in entropy without altering the solution techniques for the conservation of mass, momentum, and energy. Specifically, these correlations are developed for the function of entropy ZS, and they depend upon the net artificial viscous work, as determined via Sandia National Laboratories’ shock physics hydrocode CTH. These results are a continuation of a prior effort to implement the entropy-based CREST reactive burn model in CTH, and they are presented here to stimulate further interest from the shock physics community. Future work is planned to study higher-dimensional shock waves, shock wave interactions, and possible ties between the empirical correlations and a physical law.
Ab initio molecular dynamics (AIMD) simulations were carried out to investigate the equation of state of Nb2O5 and its pressure-density relationship under shock conditions. The focus of this study is on the monoclinic B−Nb2O5 (C2/c) polymorph. Enthalpy calculations from AIMD trajectories at 300 K show that the pressure-induced transformation between the thermodynamically most stable crystalline monoclinic parent phase H−Nb2O5 (P2/m) and B−Nb2O5 occurs at ∼1.9 GPa. This H→B transition is energetically more favorable than the H→L (Pmm2) pressure-induced transition recently observed at ∼5.9−9.0 GPa. The predicted shock properties of Nb2O5 polymorphs are also compared to their Nb and NbO2 counterparts to assess the impact of niobium oxidation on shock response.
Laser powder bed fusion (LPBF) additive manufacturing makes near-net-shape parts with reduced material cost and time, rising as a promising technology for fabricating Ti-6Al-4V, a widely used titanium alloy in the aerospace and medical industries. However, LPBF Ti-6Al-4V parts produced with 67° rotation between layers, a scan strategy commonly used to reduce microstructure and property inhomogeneity, have varying grain morphologies and weak crystallographic textures that change with processing parameters. This study predicts LPBF Ti-6Al-4V solidification at three energy levels using a finite difference-Monte Carlo method and validates the simulations with large-area electron backscatter diffraction (EBSD) scans. The developed model correctly predicts that a 〈001〉 texture forms at low energy and a 〈111〉 texture at higher energies parallel to the build direction, although with lower strength than the textures observed by EBSD. A method combining spatial correlation with a well-established generalized spherical harmonics representation of texture is developed to calculate a difference score between simulations and experiments. The quantitative comparison enables effective fine-tuning of the nucleation density (N0) input, which shows a nonlinear relationship with increasing energy level. Future improvements in the texture prediction code and a more comprehensive study of N0 at different energy levels will further advance the optimization of LPBF Ti-6Al-4V components. These developments contribute a novel understanding of crystallographic texture formation in LPBF Ti-6Al-4V, establish robust model validation and calibration pipelines, and provide a platform for mechanical property prediction and process parameter optimization.
Barium titanate (BTO) is a ferroelectric perovskite used in electronics and energy storage systems because of its high dielectric constant. Decreasing the BTO particle size was shown to increase the dielectric constant of the perovskite, which is an intriguing but contested result. We investigated this result by fabricating silicone-matrix nanocomposite specimens containing BTO particles of decreasing diameter. Furthermore, density functional theory modeling was used to understand the interactions at the BTO particle surface. Combining results from experiments and modeling indicated that polymer type, particle surface interactions, and particle surface structure can influence the dielectric properties of polymer-matrix nanocomposites containing BTO.
The spatial distribution of electric field due to an imposed electric charge density profile in an infinite slab of dielectric material is derived analytically by integrating Gauss's law. Various charge density distributions are considered, including exponential and power-law forms. The Maxwell stress tensor is used to compute a notional static stress in the material due to the charge density and its electric field. Characteristics of the electric field and stress distributions are computed for example cases in polyethylene, showing that field magnitudes exceeding the dielectric strength would be required in order to achieve a stress exceeding the ultimate tensile strength.
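As a worked illustration of the derivation (the profile and constants here are chosen for illustration, not taken from the paper): for an exponential charge density ρ(x) = ρ0 e^(−x/λ) in the slab, integrating Gauss's law dE/dx = ρ(x)/ε gives

    E(x) = E(0) + (ρ0 λ / ε)(1 − e^(−x/λ)),

and the normal component of the Maxwell stress tensor yields the notional static stress T(x) = ε E(x)² / 2. With ε ≈ 2.3 ε0 for polyethylene and a tensile strength of order tens of MPa (illustrative values), solving T = ε E² / 2 for E gives fields of order 1 GV/m, consistent with the conclusion that the required field magnitudes exceed the dielectric strength.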
X-rays can provide images when an object is visibly obstructed, allowing for motion measurements via x-ray digital image correlation (DIC). However, x-ray images are path-integrated and contain data for all objects between the source and detector. If multiple objects are present in the x-ray path, conventional DIC algorithms may fail to correlate the x-ray images. A new DIC algorithm called path-integrated DIC (PI-DIC) addresses this issue by reformulating the matching criterion for DIC to account for multiple, independently moving objects. PI-DIC requires a set of reference x-ray images of each independent object. However, due to experimental constraints, such reference images might not be obtainable from the experiment. This work focuses on the reliability of synthetically generated reference images in such cases. A simplified exemplar is used for demonstration purposes, consisting of two aluminum plates with tantalum x-ray DIC patterns undergoing independent rigid translations. Synthetic reference images based on the “as-designed” DIC patterns were generated. However, PI-DIC with the synthetic images suffered some biases due to manufacturing defects in the patterns. A systematic study of seven identified defect types found that an incorrect feature diameter was the most influential defect. Synthetic images were regenerated with the corrected feature diameter, and PI-DIC errors improved by a factor of 3 to 4. Final biases ranged from 0.00 to 0.04 px, and standard uncertainties ranged from 0.06 to 0.11 px. In conclusion, PI-DIC accurately measured the independent displacement of two plates from a single series of path-integrated x-ray images using synthetically generated reference images, and the methods and conclusions derived here can be extended to more general cases involving stereo PI-DIC for arbitrary specimen geometry and motion. This work thus extends the application space of x-ray imaging for full-field DIC measurements of multiple surfaces or objects in extreme environments where optical DIC is not possible.
We present a machine-learning strategy for finite element analysis of solid mechanics wherein we replace complex portions of a computational domain with a data-driven surrogate. In the proposed strategy, we decompose a computational domain into an “outer” coarse-scale domain that we resolve using a finite element method (FEM) and an “inner” fine-scale domain. We then develop a machine-learned (ML) model for the impact of the inner domain on the outer domain. In essence, for solid mechanics, our machine-learned surrogate performs static condensation of the inner domain degrees of freedom. This is achieved by learning the map from displacements on the inner-outer domain interface boundary to forces contributed by the inner domain to the outer domain on the same interface boundary. We consider two such mappings, one that directly maps from displacements to forces without constraints, and one that maps from displacements to forces by virtue of learning a symmetric positive semi-definite (SPSD) stiffness matrix. We demonstrate, in a simplified setting, that learning an SPSD stiffness matrix results in a coarse-scale problem that is well-posed with a unique solution. We present numerical experiments on several exemplars, ranging from finite deformations of a cube to finite deformations with contact of a fastener-bushing geometry. We demonstrate that enforcing an SPSD stiffness matrix drastically improves the robustness and accuracy of FEM–ML coupled simulations, and that the resulting methods can accurately characterize out-of-sample loading configurations with significant speedups over the standard FEM simulations.
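A minimal sketch of the SPSD construction, assuming a hypothetical network architecture and PyTorch as the framework: a network outputs the entries of a lower-triangular factor L with a positive diagonal, so that K = L L^T is symmetric positive semi-definite by construction and interface forces follow as f = K u.

    import torch
    import torch.nn as nn

    class SPSDSurrogate(nn.Module):
        """Maps interface displacements u to forces f = K(u) u, with
        K = L L^T guaranteed symmetric positive semi-definite.
        Architecture and sizes are hypothetical."""
        def __init__(self, n_dof, hidden=64):
            super().__init__()
            self.n = n_dof
            n_tril = n_dof * (n_dof + 1) // 2
            self.net = nn.Sequential(nn.Linear(n_dof, hidden), nn.Tanh(),
                                     nn.Linear(hidden, n_tril))
            self.idx = torch.tril_indices(n_dof, n_dof)
            self.is_diag = self.idx[0] == self.idx[1]

        def forward(self, u):
            vals = self.net(u)
            # softplus keeps the diagonal of L positive, so K is SPSD
            vals = torch.where(self.is_diag,
                               nn.functional.softplus(vals), vals)
            L = torch.zeros(u.shape[0], self.n, self.n)
            L[:, self.idx[0], self.idx[1]] = vals
            K = L @ L.transpose(1, 2)
            return (K @ u.unsqueeze(-1)).squeeze(-1)

    model = SPSDSurrogate(n_dof=6)
    u = torch.randn(4, 6)      # batch of interface displacement vectors
    print(model(u).shape)      # interface forces, shape (4, 6)

Note that K here varies with u, which is consistent with the finite-deformation setting; for a strictly linear map one would instead learn constant factor entries.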
Rimsza, Jessica; Maksimov, Vasilii; Welch, Rebecca S.; Potter, Arron R.; Mauro, John C.; Wilkinson, Collin J.
Decarbonizing the glass industry requires alternative melting technology, as current industrial melting practices rely heavily on fossil fuels. Hydrogen has been proposed as an alternative to carbon-based fuels, but the ensuing consequences on the mechanical behavior of the glass remain to be clarified. A critical distinction between hydrogen and carbon-based fuels is the increased generation of water during combustion, which raises the equilibrium solubility of water in the melt and alters the behavior of the resulting glass. A series of five silicate glasses with 80% silica and variable [Na2O]/([H2O] + [Na2O]) ratios were simulated using molecular dynamics to elucidate the effects of water on fracture. Several fracture toughness calculation methods were used in combination with atomistic fracture simulations to examine the effects of hydroxyl content on fracture behavior. This study reveals that the crack propagation pathway is a key metric to understanding fracture toughness. Notably, the fracture propagation path favors hydrogen sites over sodium sites, offering a possible explanation of the experimentally observed effects of water on fracture properties.
Cyber-physical systems have behaviour that crosses domain boundaries during events such as planned operational changes and malicious disturbances. Traditionally, the cyber and physical systems are monitored separately and use very different toolsets and analysis paradigms. The security and privacy of these cyber-physical systems requires improved understanding of the combined cyber-physical system behaviour and methods for holistic analysis. Therefore, the authors propose leveraging clustering techniques on cyber-physical data from smart grid systems to analyse differences and similarities in behaviour during cyber-, physical-, and cyber-physical disturbances. Since clustering methods are commonly used in data science to examine statistical similarities in order to sort large datasets, these algorithms can assist in identifying useful relationships in cyber-physical systems. Through this analysis, deeper insights can be shared with decision-makers on what cyber and physical components are strongly or weakly linked, what cyber-physical pathways are most traversed, and the criticality of certain cyber-physical nodes or edges. This paper presents several types of clustering methods for cyber-physical graphs of smart grid systems and their application in assessing different types of disturbances for informing cyber-physical situational awareness. The collection of these clustering techniques provide a foundational basis for cyber-physical graph interdependency analysis.
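As one concrete instance of the surveyed techniques (a toy sketch; the graph and its interpretation are hypothetical stand-ins for an actual smart grid model), spectral clustering can be applied directly to a cyber-physical graph's adjacency matrix:

    import networkx as nx
    from sklearn.cluster import SpectralClustering

    # Toy stand-in for a cyber-physical graph: two dense communities
    # (e.g., cyber and physical subsystems) joined by a narrow bridge.
    G = nx.barbell_graph(6, 1)
    A = nx.to_numpy_array(G)

    sc = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0)
    labels = sc.fit_predict(A)
    for node, lab in zip(G.nodes, labels):
        print(f"node {node} -> cluster {lab}")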
Ilgen, Anastasia G.; Borguet, Eric; Geiger, Franz M.; Gibbs, Julianne M.; Grassian, Vicki H.; Jun, Young S.; Kabengi, Nadine; Kubicki, James D.
Solid–water interfaces are crucial for clean water, conventional and renewable energy, and effective nuclear waste management. However, reflecting the complexity of reactive interfaces in continuum-scale models is a challenge, leading to oversimplified representations that often fail to predict real-world behavior. This is because these models use fixed parameters derived by averaging across a wide physicochemical range observed at the molecular scale. Recent studies have revealed the stochastic nature of molecular-level surface sites that define a variety of reaction mechanisms, rates, and products even across a single surface. To bridge the molecular knowledge and predictive continuum-scale models, we propose to represent surface properties with probability distributions rather than with discrete constant values derived by averaging across a heterogeneous surface. This conceptual shift in continuum-scale modeling requires exponentially rising computational power. By incorporating our molecular-scale understanding of solid–water interfaces into continuum-scale models we can pave the way for next generation critical technologies and novel environmental solutions.
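A toy numerical sketch of the proposed conceptual shift (illustrative rate law and distribution, not from the paper): drawing a first-order rate constant from a lognormal distribution over surface sites and averaging the responses gives a different prediction than running the model once with the averaged constant, because averaging does not commute with the nonlinear response.

    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 10.0, 200)

    # Site-specific first-order rate constants drawn from a lognormal
    # distribution across a heterogeneous surface (illustrative values).
    k_sites = rng.lognormal(mean=np.log(0.5), sigma=1.0, size=10_000)

    # Conventional continuum model: a single averaged rate constant.
    response_mean_k = np.exp(-k_sites.mean() * t)

    # Distribution-aware model: average the responses, not the parameter.
    response_dist = np.exp(-np.outer(k_sites, t)).mean(axis=0)

    # The two disagree, increasingly so at late times, because averaging
    # does not commute with the nonlinear exponential response.
    print(response_mean_k[-1], response_dist[-1])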
Additive manufacturing (AM) technology, specifically 3D printing, holds great promise for in-orbit manufacturing. In-space printing can significantly reduce the mass, cost, and risk of long-term space exploration by enabling replacement parts to be made as needed and reducing dependence on Earth. However, printing in a zero-gravity environment poses challenges due to the absence of a rigid ground for the print platform, which can result in vibrational and rotational forces that may impact printing integrity. To address this issue, this paper proposes a novel linear magnetic position tracking algorithm, named Navigation Integrating Magnets By Linear Estimation (NIMBLE), for dynamic vibration compensation during 3D printing of truss structures in space. Compared to the most commonly used nonlinear optimization method, the NIMBLE algorithm is more than two orders of magnitude faster. With only a single 3-axis magnet sensor and a small NdFeB magnet, the NIMBLE algorithm provides a simple and easily implemented tracking solution for in-orbit 3D printing.
Information security and computing, two critical technological challenges for post-digital computation, pose opposing requirements – security (encryption) requires a source of unpredictability, while computing generally requires predictability. Each of these contrasting requirements presently necessitates distinct conventional Si-based hardware units with power-hungry overheads. This work demonstrates Cu0.3Te0.7/HfO2 (‘CuTeHO’) ion-migration-driven memristors that satisfy the contrasting requirements. Under specific operating biases, CuTeHO memristors generate truly random and physically unclonable functions, while under other biases, they perform universal Boolean logic. Using these computing primitives, this work experimentally demonstrates a single system that performs cryptographic key generation, universal Boolean logic operations, and encryption/decryption. Circuit-based calculations reveal the energy and latency advantages of the CuTeHO memristors in these operations. This work illustrates the functional flexibility of memristors in implementing operations with varying component-level requirements.
Material Testing 2.0 (MT2.0) is a paradigm that advocates for the use of rich, full-field data, such as from digital image correlation and infrared thermography, for material identification. By employing heterogeneous, multi-axial data in conjunction with sophisticated inverse calibration techniques such as finite element model updating and the virtual fields method, MT2.0 aims to reduce the number of specimens needed for material identification and to increase confidence in the calibration results. To support continued development, improvement, and validation of such inverse methods—specifically for rate-dependent, temperature-dependent, and anisotropic metal plasticity models—we provide here a thorough experimental data set for 304L stainless steel sheet metal. The data set includes full-field displacement, strain, and temperature data for seven unique specimen geometries tested at different strain rates and in different material orientations. Commensurate extensometer strain data from tensile dog bones is provided as well for comparison. We believe this complete data set will be a valuable contribution to the experimental and computational mechanics communities, supporting continued advances in material identification methods.
A combined Mode I-II cohesive zone (CZ) elasto-plastic constitutive model and a two-dimensional (2D) cohesive interface element (CIE) are formulated and implemented at small strain within an ABAQUS User Element (UEL) for simulating 2D crack nucleation and propagation in fluid-saturated porous media. The CZ model mitigates problems of convergence for the global Newton-Raphson solver within ABAQUS, which, when combined with a viscous stabilization procedure, allows for simulation of post-peak response under load control for coupled poromechanical finite element analysis, such as concrete gravity dam stability analysis. Verification examples are presented, along with a more complex ambient limestone-concrete wedge fracture experiment, a water-pressurized concrete wedge experiment, and concrete gravity dam stability analyses. A calibration procedure for estimating the CZ parameters is demonstrated with the limestone-concrete wedge fracture process. For the water-pressurized concrete wedge fracture experiment, it is shown that the inherent time-dependence of the poromechanical CIE analysis provides a good match with experimental force-versus-displacement results at various crack mouth opening rates, yet misses the pore water pressure evolution ahead of the propagating crack tip. This is likely a result of the concrete being partially saturated in the experiment, whereas the finite element analysis assumes fully water-saturated concrete. For the concrete gravity dam analysis, it is shown that base crack opening and the associated water uplift pressure lead to a reduced Factor of Safety, which is confirmed by separate analytical calculations.
Searfus, O.; Meert, C.; Clarke, S.; Pozzi, S.; Jovanovic, I.
The use of photon active interrogation to detect special nuclear material has held significant theoretical promise, as the interrogating source particles, photons, are fundamentally different from one of the main signatures of special nuclear material: neutrons produced in nuclear fission. However, neutrons produced by photonuclear reactions in the accelerator target, collimator, and environment can obscure the fission neutron signal. These (γ,n) neutrons could be discriminated from fission neutrons by their energy spectrum, but common detectors sensitive to the neutron spectrum, like organic scintillators, are typically hampered by the intense photon background characteristic of photon-based active interrogation. In contrast, high-pressure 4He-based scintillation detectors are well-suited to photon active interrogation, as they are similarly sensitive to fast neutrons and can measure their spectrum, but show little response to gamma rays. In this work, a photon active interrogation system utilizing a 4He scintillation detector and a 9 MeV linac-bremsstrahlung x-ray source was experimentally evaluated. The detector was shown to be capable of operating in intense gamma-ray environments and detecting photofission neutrons from 238U when interrogated by this x-ray source. The photofission neutrons show clear spectral separation from (γ,n) neutrons produced in lead, a common shielding material.
The bulk-boundary correspondence in topological crystalline insulators (TCIs) links the topological properties of the bulk to robust observables on the edges, e.g., the existence of robust edge modes or fractional charge. In one dimension, TCIs protected by reflection symmetry have been realized in a variety of systems in which each unit cell has spatially distributed degrees of freedom (SDOF). However, these realizations exhibit sensitivity of the resulting edge modes to variations in edge termination and to the local breaking of the protective spatial symmetries by inhomogeneity. Here we demonstrate topologically protected edge states in a monoatomic, orbital-based TCI that mitigates both of these issues. By collapsing all SDOF within the unit cell to a singular point in space, we eliminate the ambiguity in unit-cell definition and hence remove a prominent source of boundary termination variability. The topological observables are also more tolerant to disorder in the orbital energies. To validate this concept, we experimentally realize a lattice of mechanical resonators where each resonator acts as an "atom" that harbors two key orbital degrees of freedom having opposite reflection parity. Our measurements of this system provide direct visualization of the sp-hybridization between orbital modes that leads to a nontrivial band inversion in the bulk.
In this study, we present a replication method to determine surface roughness and to identify surface features when a sample cannot be directly analyzed by conventional techniques. As a demonstration, the method was applied to an unused spent nuclear fuel dry storage canister to determine variation across different surface features. An initial material down-selection was performed to find the best molding agent, and non-modified Polytek PlatSil23-75 was determined to provide the most accurate representation of the surface while offering good usability. Other materials considered include Polygel Brush-On 35 polyurethane rubber (with and without Pol-ease 2300 release agent), Polytek PlatSil73-25 silicone rubber (with and without PlatThix thickening agent and Pol-ease 2300 release agent), and Express STD vinylpolysiloxane impression putty. The ability of PlatSil73-25 to create an accurate surface replica was evaluated by creating surface molds of several locations on surface roughness standards representing ISO grade surfaces N3, N5, N7, and N8. Overall, the molds were able to accurately reproduce the expected roughness average (Ra) values but systematically overestimated the peak-valley maximum roughness (Rz) values. Using a 3D-printed sample cell, several locations across the stainless steel spent nuclear fuel canister were sampled to determine the surface roughness. These measurements provided information on the variability in normal surface roughness across the canister as well as a detailed evaluation of specific surface features (e.g., welds, grind marks, etc.). The results of these measurements can support the development of dry storage canister ageing management programs, as surface roughness is an important factor for surface dust deposition and accumulation. This method can be applied more broadly to surfaces beyond stainless steel to provide rapid, accurate surface replications for analytical evaluation by profilometry.
Ostrove, Corey I.; Rudinger, Kenneth M.; Blume-Kohout, Robin; Young, Kevin; Stemp, Holly G.; Asaad, Serwan; Van Blankenstein, Mark R.; Vaartjes, Arjen; Johnson, Mark A.I.; Madzik, Mateusz T.; Heskes, Amber J.A.; Firgau, Hannes R.; Su, Rocky Y.; Yang, Chih H.; Laucht, Arne; Hudson, Fay E.; Dzurak, Andrew S.; Itoh, Kohei M.; Jakob, Alexander M.; Johnson, Brett C.; Jamieson, David N.; Morello, Andrea
Scalable quantum processors require high-fidelity universal quantum logic operations in a manufacturable physical platform. Donors in silicon provide atomic size, excellent quantum coherence, and compatibility with standard semiconductor processing, but no entanglement between donor-bound electron spins has been demonstrated to date. Here we present the experimental demonstration and tomography of universal one- and two-qubit gates in a system of two weakly exchange-coupled electrons, bound to single phosphorus donors introduced into silicon by ion implantation. We observe that the exchange interaction has no effect on the qubit coherence. We quantify the fidelity of the quantum operations using gate set tomography (GST), and we use the universal gate set to create entangled Bell states of the electron spins, with a fidelity of 91.3 ± 3.0% and a concurrence of 0.87 ± 0.05. These results form the necessary basis for scaling up donor-based quantum computers.
In the machine learning problem of multilabel classification, the objective is to determine for each test instance which classes the instance belongs to. In this work, we consider an extension of multilabel classification, called multilabel proportion prediction, in the context of radioisotope identification (RIID) using gamma spectra data. We aim to not only predict radioisotope proportions, but also identify out-of-distribution (OOD) spectra. We achieve this goal by viewing gamma spectra as discrete probability distributions, and based on this perspective, we develop a custom semi-supervised loss function that combines a traditional supervised loss with an unsupervised reconstruction error function. Our approach was motivated by its application to the analysis of short-lived fission products from spent nuclear fuel. In particular, we demonstrate that a neural network model trained with our loss function can successfully predict the relative proportions of 37 radioisotopes simultaneously. The model trained with synthetic data was then applied to measurements taken by Pacific Northwest National Laboratory (PNNL) to conduct analysis typically done by subject-matter experts. We also extend our approach to successfully identify when measurements are OOD, and thus should not be trusted, whether due to the presence of a novel source or novel proportions.
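A hedged sketch of the loss construction described (hypothetical weighting and shapes; the paper's actual architecture is not specified here): both the labeled proportions and the reconstructed spectrum are treated as discrete probability distributions and compared with KL divergences.

    import torch
    import torch.nn.functional as F

    def semi_supervised_loss(pred_props, true_props, spectrum, recon,
                             weight=1.0):
        """Supervised KL between predicted and labeled isotope proportions,
        plus an unsupervised reconstruction error on the spectrum itself.
        All inputs are treated as discrete probability distributions."""
        supervised = F.kl_div(torch.log(pred_props + 1e-12), true_props,
                              reduction="batchmean")
        reconstruction = F.kl_div(torch.log(recon + 1e-12), spectrum,
                                  reduction="batchmean")
        return supervised + weight * reconstruction

    # Example shapes: a batch of 8 spectra with 128 channels, 37 isotopes.
    pred = torch.softmax(torch.randn(8, 37), dim=1)
    true = torch.softmax(torch.randn(8, 37), dim=1)
    spec = torch.softmax(torch.randn(8, 128), dim=1)
    rec = torch.softmax(torch.randn(8, 128), dim=1)
    print(semi_supervised_loss(pred, true, spec, rec))

At test time, a large reconstruction term on its own can flag a spectrum as out-of-distribution even when no proportion labels are available.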
Modern lens designs are capable of resolving greater than 10 gigapixels, while advances in camera frame-rate and hyperspectral imaging have made data acquisition rates of Terapixel/second a real possibility. The main bottlenecks preventing such high data-rate systems are power consumption and data storage. In this work, we show that analog photonic encoders could address this challenge, enabling high-speed image compression using orders-of-magnitude lower power than digital electronics. Our approach relies on a silicon-photonics front-end to compress raw image data, foregoing energy-intensive image conditioning and reducing data storage requirements. The compression scheme uses a passive disordered photonic structure to perform kernel-type random projections of the raw image data with minimal power consumption and low latency. A back-end neural network can then reconstruct the original images with structural similarity exceeding 90%. This scheme has the potential to process data streams exceeding Terapixel/second using less than 100 fJ/pixel, providing a path to ultra-high-resolution data and image acquisition systems.
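A minimal numerical sketch of the compression scheme's math (illustrative sizes; a fixed Gaussian random matrix stands in for the disordered photonic structure, and a linear pseudoinverse stands in for the back-end neural network):

    import numpy as np

    rng = np.random.default_rng(5)
    n_pixels, n_meas = 1024, 256       # 4x compression, illustrative sizes

    # Fixed Gaussian random projection standing in for the passive
    # disordered photonic structure.
    A = rng.normal(size=(n_meas, n_pixels)) / np.sqrt(n_meas)

    x = rng.random(n_pixels)           # flattened raw image
    y = A @ x                          # compressed analog measurements

    # Minimum-norm linear reconstruction; the paper trains a neural
    # network decoder instead. A structureless random x loses most of its
    # energy here; natural images fare far better because they are
    # compressible and the learned decoder exploits that structure.
    x_hat = np.linalg.pinv(A) @ y
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))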
Composite materials with different microstructural material symmetries are common in engineering applications where grain structure, alloying, and particle/fiber packing are optimized via controlled manufacturing. In fact, these microstructural tunings can be applied throughout a part to achieve functional gradation and optimization at the structural level. To predict the performance of a particular microstructural configuration, and thereby overall performance, constitutive models of materials with microstructure are needed. In this work we provide neural network architectures that serve as effective homogenization models of materials with anisotropic components. These models satisfy equivariance and material symmetry principles inherently through a combination of equivariant and tensor basis operations. We demonstrate them on datasets of stochastic volume elements with different textures and phases where the material undergoes elastic and plastic deformation, and show that these network architectures provide significant performance improvements.
The current present in a galvanic couple can define its resistance or susceptibility to corrosion. However, because the current depends on environmental, material, and geometrical parameters, it is experimentally costly to measure. To reduce these costs, finite element (FE) simulations can be used to assess the cathodic current, but these also require experimental inputs to define boundary conditions. Due to these challenges, it is crucial to accelerate predictions and accurately predict the current output for different environments and geometries representative of in-service conditions. Machine-learned surrogate models provide a means to accelerate corrosion predictions. However, a one-time cost is incurred in procuring the simulation and experimental dataset necessary to calibrate the surrogate model. Therefore, an active learning protocol is developed through calibration of a low-cost surrogate model for the cathodic current of an exemplar galvanic couple (AA7075-SS304) as a function of environmental and geometric parameters. The surrogate model is calibrated on a dataset of FE simulations and computes an acquisition function that identifies the specific additional inputs with the maximum potential to improve the current predictions. This is accomplished through a staggered workflow that not only improves and refines predictions but also identifies the points at which the most information is gained, thus enabling expansion to a larger parameter space. The protocols developed and demonstrated in this work provide a powerful tool for screening various forms of corrosion under in-service conditions.
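A hedged sketch of the staggered acquisition loop, with a hypothetical one-parameter toy function standing in for the expensive FE galvanic-couple simulation: fit a Gaussian process surrogate to the runs so far, then request the candidate input with the largest predictive uncertainty.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def toy_cathodic_current(x):
        # Hypothetical stand-in for an expensive FE galvanic-couple run.
        return np.sin(3 * x) + 0.5 * x

    rng = np.random.default_rng(6)
    X = rng.uniform(0, 2, size=(5, 1))          # initial FE "simulations"
    y = toy_cathodic_current(X).ravel()
    candidates = np.linspace(0, 2, 200).reshape(-1, 1)

    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        x_new = candidates[[np.argmax(std)]]    # max-uncertainty acquisition
        X = np.vstack([X, x_new])
        y = np.append(y, toy_cathodic_current(x_new).ravel())

    print(len(X), "total simulations requested")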
Using coarse graining, the upscaled mechanical properties of a solid with small scale heterogeneities are derived. The method maps internal forces at the small scale onto peridynamic bond forces in the coarse grained mesh. These upscaled bond forces are used to calibrate a peridynamic material model with position-dependent parameters. These parameters incorporate mesoscale variations in the statistics of the small scale system. The upscaled peridynamic model can have a much coarser discretization than the original small scale model, allowing larger scale simulations to be performed efficiently. The convergence properties of the method are investigated for representative random microstructures. A bond breakage criterion for the upscaled peridynamic material model is also demonstrated.
Downhole logging tools are commonly used to characterize multi-thousand-foot geothermal wells. The elevated temperatures, pressures, and harsh chemical environments present significant challenges for the long-term operation of these tools, especially when real-time data transmission to the surface is required via data cable lines. Teflon-based single or multi-conductor cables with grease-filled cable heads are typically used for downhole tools. However, over extended periods of operation, the grease used to seal the conductors can slowly dissolve into the well fluid, creating electrical shorts and disabling data transmission. Additionally, when temperatures exceed 260 °C, Teflon can soften, potentially allowing parallel conductors to make contact and cause shorts. Between 2009 and 2015, Draka Cableteq USA, now part of the Prysmian Group, developed a multi-conductor/fiber cable and a four-conductor cable capable of operating above 300 °C. While a full study was conducted on the conductor/fiber cable, the evaluation of the four-conductor cable remained incomplete. With the increasing need for long-term high-temperature (HT) operation of logging tools, Sandia National Laboratories is now completing the evaluation of the four-conductor cable. The four-conductor cable has two major novel aspects. Firstly, its glass braid insulation can operate above 300 °C, eliminating the potential for shorts. Secondly, the insulated conductors are encased in metal tubing along the full length of the cable, creating a high-pressure seal between the cable and the tool. This metal tubing eliminates the need for a grease seal, a major limiting factor in the operation time of common cable lines. Sandia National Laboratories will conduct multiple tests to characterize the cable at temperatures above 300 °C and pressures up to 5,000 psi. This cable would enable tools to operate continuously at elevated temperatures, pressures, and in harsh fluids for extended periods, potentially lasting months.
Herein, we report on the ultrafast photodissociation of nickel tetracarbonyl, a prototypical metal-ligand model system, at 197 nm. Using mid-infrared transient absorption spectroscopy to probe the bound C≡O stretching modes, we find evidence for the picosecond time scale production of highly vibronically excited nickel dicarbonyl and nickel monocarbonyl, in marked contrast with a prior investigation at 193 nm. Further spectral evolution with a 50 ps time constant suggests an additional dissociation step; the absence of any corresponding growth in signal strongly indicates the production of bare Ni, a heretofore unreported product from single-photon excitation of nickel tetracarbonyl. Thus, by probing the deep UV-induced photodynamics of a prototypical metal carbonyl, this Letter adds time-resolved spectroscopic signatures of these dynamics to the sparse literature at high excitation energies.
High-entropy ceramics have garnered interest due to their remarkable hardness, compressive strength, thermal stability, and fracture toughness; yet the discovery of new high-entropy ceramics (out of a tremendous number of possible elemental permutations) still largely requires costly, inefficient, trial-and-error experimental and computational approaches. The entropy forming ability (EFA) factor was recently proposed as a computational descriptor that positively correlates with the likelihood that a 5-metal high-entropy carbide (HEC) will form the desired single-phase, homogeneous solid solution; however, discovery of new compositions is computationally expensive. For 8 candidate metals, the EFA approach uses 49 optimizations for each of the 56 unique 5-metal carbides, requiring a total of 2,744 costly density functional theory calculations. Here, we describe an orders-of-magnitude more efficient active learning (AL) approach for identifying novel HECs. To begin, we compared numerous methods for generating composition-based feature vectors (e.g., magpie and mat2vec), deployed an ensemble of machine learning (ML) models to generate an average and a distribution of predictions, and then utilized the distribution as an uncertainty. We then deployed an AL approach to acquire new training data points where the ensemble of ML models predicted a high EFA value or was uncertain of the prediction. Our approach has the combined benefit of decreasing the amount of training data required to reach acceptable prediction quality and biasing the predictions toward identifying HECs with the desired high EFA values, which are tentatively correlated with the formation of single-phase HECs. Using this approach, we increased the number of 5-metal carbides screened from 56 to 15,504, revealing 4 compositions with record-high EFA values that were previously unreported in the literature. Our AL framework is also generalizable and could be modified to rationally predict optimized candidate materials/combinations with a wide range of desired properties (e.g., mechanical stability, thermal conductivity).
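A minimal sketch of the ensemble-based acquisition described (hypothetical features and a synthetic target stand in for composition vectors and EFA values computed by density functional theory): several regressors provide a mean prediction and a disagreement-based uncertainty, and candidates scoring high on either are promoted for costly evaluation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(7)
    X_train = rng.normal(size=(50, 8))   # hypothetical composition features
    y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=50)
    X_pool = rng.normal(size=(5000, 8))  # unevaluated candidate carbides

    models = [RandomForestRegressor(random_state=0), Ridge(),
              KNeighborsRegressor()]
    preds = np.stack([m.fit(X_train, y_train).predict(X_pool)
                      for m in models])
    mean, std = preds.mean(axis=0), preds.std(axis=0)

    # Promote candidates predicted to have high EFA *or* for which the
    # ensemble disagrees (UCB-style acquisition); these would be sent to
    # density functional theory and added to the training set.
    score = mean + 1.0 * std
    print(np.argsort(score)[-10:])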
Carbon dots have attracted widespread interest for sensing applications based on their low cost, ease of synthesis, and robust optical properties. We investigate structure-function evolution on multiemitter fluorescence patterns for model carbon-nitride dots (CNDs) and their implications on trace-level sensing. Hydrothermally synthesized CNDs with different reaction times were used to determine how specific functionalities and their corresponding fluorescence signatures respond upon the addition of trace-level analytes. Archetype explosives molecules were chosen as a testbed due to similarities in substituent groups or inductive properties (i.e., electron withdrawing), and solution-based assays were performed using ratiometric fluorescence excitation-emission mapping (EEM). Analyte-specific quenching and enhancement responses were observed in EEM landscapes that varied with the CND reaction time. We then used self-organizing map models to examine EEM feature clustering with specific analytes. The results reveal that interactions between carbon-nitride frameworks and molecular-like species dictate response characteristics that may be harnessed to tailor sensor development for specific applications.
There is growing interest in material candidates with properties that can be engineered beyond traditional design limits. Compositionally complex oxides (CCOs), often called high-entropy oxides, are excellent candidates, wherein a lattice site is shared by more than four cations, forming single-phase solid solutions with unique properties. However, the role of compositional complexity in dictating properties remains unclear, with characteristics that are difficult to calculate from first principles. Here, compositional complexity is demonstrated as a tunable parameter in a spin-transition oxide semiconductor La1-x(Nd, Sm, Gd, Y)x/4CoO3 by varying the population x of rare-earth cations over 0.00 ≤ x ≤ 0.80. Across the series, increasing complexity is revealed to systematically improve crystallinity, increase the number of electron carriers relative to hole carriers, and tune the spin-transition temperature and on-off ratio. At a high population (x = 0.8), Seebeck measurements indicate a crossover from hole-majority to electron-majority conduction without the introduction of conventional electron donors, and tunable complexity is proposed as a new method to dope semiconductors. First-principles calculations combined with angle-resolved photoemission reveal an unconventional doping mechanism of lattice distortions leading to asymmetric localization of holes relative to electrons. Thus, tunable complexity is demonstrated as a facile knob to improve crystallinity, tune electronic transitions, and dope semiconductors beyond traditional means.
Harmonic and subharmonic RF injection locking is demonstrated in a terahertz (THz) quantum-cascade vertical-external-cavity surface-emitting laser (QC-VECSEL). By tuning the RF injection frequency around integer multiples and submultiples of the cavity round-trip frequency, different harmonic and subharmonic orders can be excited in the same device. The modulation-dependent behavior of the device has been studied, with the lasing spectral broadening and locking bandwidth recorded in each case. In particular, harmonic injection locking results in the observation of harmonic spectra with bandwidths over 200 GHz. A semiclassical Maxwell-density matrix formalism has been applied to interpret the QC-VECSEL dynamics and aligns well with experimental observations.
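For concreteness, the snippet below works through the injection frequencies implied by the locking scheme: integer multiples and submultiples of the cavity round-trip frequency, taken here as f_rt = c/(2L) for an air-filled external cavity; the cavity length used is an assumed placeholder, not the device value from the paper.

```python
# Illustrative calculation of RF injection targets for harmonic and
# subharmonic locking: n * f_rt and f_rt / m, where f_rt = c / (2 L)
# is the round-trip frequency of the external cavity. L is an assumed
# placeholder value, not the actual QC-VECSEL cavity length.
c = 299_792_458.0          # speed of light, m/s
L = 0.015                  # assumed external cavity length, m (15 mm)

f_rt = c / (2 * L)         # round-trip frequency, Hz (~10 GHz here)
harmonics = [n * f_rt for n in range(1, 4)]       # n * f_rt
subharmonics = [f_rt / m for m in range(2, 4)]    # f_rt / m

print(f"f_rt = {f_rt / 1e9:.2f} GHz")
print("harmonic targets (GHz):", [f"{f / 1e9:.2f}" for f in harmonics])
print("subharmonic targets (GHz):", [f"{f / 1e9:.2f}" for f in subharmonics])
```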
Conceptual models of smectite hydration include planar (flat) clay layers that undergo stepwise expansion as successive monolayers of water molecules fill the interlayer regions. However, X-ray diffraction (XRD) studies indicate the presence of interstratified hydration states, suggesting non-uniform interlayer hydration in smectites. Additionally, recent theoretical studies have shown that clay layers can adopt bent configurations over nanometer-scale lateral dimensions with minimal effect on mechanical properties. Therefore, in this study we used molecular simulations to evaluate structural properties and water adsorption isotherms for montmorillonite models composed of bent clay layers in mixed hydration states. Results are compared with models consisting of planar clay layers with interstratified hydration states (e.g., 1W–2W). The small degree of bending in these models (up to 1.5 Å of vertical displacement over a 1.3 nm lateral dimension) had little or no effect on bond lengths and angle distributions within the clay layers. Except for models that included dry states, porosities and simulated water adsorption isotherms were nearly identical for bent or flat clay layers with the same averaged layer spacing. Similar agreement was seen with Na- and Ca-exchanged clays. In conclusion, while the small bent models did not retain their configurations during unconstrained molecular dynamics simulation with flexible clay layers, we show that bent structures are stable at much larger length scales by simulating a 41.6 × 7.1 nm² system that included dehydrated and hydrated regions in the same interlayer.
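As a quick sanity check on how gentle this imposed bending is, the snippet below converts the stated 1.5 Å of vertical displacement over a 1.3 nm lateral span into an effective tilt angle, treating the displacement as a linear ramp (a simplifying assumption).

```python
# Back-of-the-envelope estimate of the bend imposed on the clay layers:
# 1.5 Angstrom of vertical displacement over a 1.3 nm lateral dimension,
# treated as a linear tilt for simplicity.
import math

dz = 0.15   # vertical displacement, nm (1.5 Angstrom)
dx = 1.3    # lateral dimension, nm

angle_deg = math.degrees(math.atan2(dz, dx))
print(f"effective tilt ~ {angle_deg:.1f} degrees")   # ~6.6 degrees
```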
The stochastic weighted particle method (SWPM) is a generalization of the Direct Simulation Monte Carlo (DSMC) method where particle weights are variable and dynamic. SWPM is backed by a strong theoretical foundation but has not been critically evaluated for problems of practical interest. A thorough assessment of SWPM for boundary-driven flows reveals significant numerical artifacts near the boundary, notably a diverging heat flux. To correct the boundary heat flux, two modifications to SWPM are proposed: separated grouping and a spatially-dependent weight transfer function. To gauge the relative efficiency of SWPM in comparison to DSMC, a high-Mach-number wheel flow which forms a strong density gradient is also simulated.
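To illustrate the idea of a spatially dependent weight transfer function, the sketch below shows one possible form: a transfer fraction that vanishes at the wall and ramps up to a bulk value, suppressing weight transfer where the boundary artifacts arise. The ramp shape, length scale, and bulk value are assumptions for illustration and do not reproduce the paper's actual definition.

```python
# Illustrative spatially dependent weight-transfer function for SWPM:
# a parameter gamma in [0, 1] sets how much statistical weight is
# transferred between colliding particles; making gamma vanish near
# the wall suppresses transfer (and associated artifacts) there.
# The linear ramp and parameter values are illustrative assumptions.
import numpy as np

def gamma_of_x(x, wall_pos=0.0, ramp_length=0.1, gamma_bulk=0.5):
    """Weight-transfer fraction: 0 at the wall, gamma_bulk in the bulk."""
    s = np.clip((x - wall_pos) / ramp_length, 0.0, 1.0)
    return gamma_bulk * s

x = np.linspace(0.0, 0.5, 6)
print(gamma_of_x(x))   # ramps from 0 at the wall up to 0.5 in the bulk
```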
Researchers are exploring adding wave energy converters to existing oceanographic buoys to provide a predictable source of renewable power. A "pitch resonator" power take-off system has been developed that generates power using a geared flywheel system designed to resonate with the pitching motion of the buoy. However, the novelty of the concept leaves researchers uncertain about various design aspects of the system. This work presents a novel design study of a pitch resonator to inform design decisions for an upcoming deployment of the system. The assessment uses control co-design via WecOptTool to optimize control trajectories for maximal electrical power production while varying five design parameters of the pitch resonator. Given the large search space of the problem, the control trajectories are optimized within a Monte Carlo analysis to identify optimal designs, followed by parameter sweeps around the optimum to identify trends between the design parameters. The gear ratio between the pitch resonator spring and flywheel is found to be the design variable to which power performance is most sensitive. The assessment also finds similar power generation for various sizes of resonator components, suggesting that correctly designing for optimal control trajectories at resonance is more critical than component sizing.
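The overall search strategy can be summarized in a short skeleton: Monte Carlo sampling of the design space followed by one-at-a-time sweeps around the best design. The parameter names, bounds, and the surrogate objective below are hypothetical placeholders standing in for the WecOptTool control co-design solve.

```python
# Skeleton of the Monte Carlo design search plus local parameter sweeps.
# `evaluate_power` stands in for the control co-design solve (optimal
# control trajectories for a given design); all names and bounds here
# are hypothetical placeholders, not the study's actual values.
import numpy as np

rng = np.random.default_rng(1)
bounds = {                      # five design parameters (names assumed)
    "gear_ratio": (1.0, 50.0),
    "spring_k":   (1e3, 1e5),
    "flywheel_J": (0.1, 10.0),
    "damping":    (1.0, 100.0),
    "arm_length": (0.1, 2.0),
}

def evaluate_power(design):
    """Toy surrogate for mean electrical power; in the actual study this
    is a WecOptTool control co-design solve. Quadratic peak is arbitrary."""
    return -(design["gear_ratio"] - 20.0) ** 2 + 0.1 * design["arm_length"]

def sample_design():
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

# Monte Carlo stage: evaluate random designs and keep the best one.
best = max((sample_design() for _ in range(500)), key=evaluate_power)

# Sweep stage: vary one parameter at a time around `best` to expose trends.
sweeps = {k: [evaluate_power({**best, k: v})
              for v in np.linspace(lo, hi, 21)]
          for k, (lo, hi) in bounds.items()}
print(round(best["gear_ratio"], 2), round(max(sweeps["gear_ratio"]), 2))
```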