Leveraging Monte Carlo Tree Search to Improve Teaming Performance in Multi-Agent Adversarial Environments
Abstract not provided.
Advanced Materials Interfaces
Epitaxial regrowth processes are presented for achieving Al-rich aluminum gallium nitride (AlGaN) high electron mobility transistors (HEMTs) with p-type gates exhibiting large, positive threshold voltages for enhancement-mode operation and low-resistance Ohmic contacts. Utilizing a deep gate recess etch into the channel and an epitaxially regrown p-AlGaN gate structure, an Al0.85Ga0.15N barrier/Al0.50Ga0.50N channel HEMT with a large positive threshold voltage (VTH = +3.5 V) and negligible gate leakage is demonstrated. Epitaxial regrowth of AlGaN avoids the use of gate insulators, which can suffer from the charge-trapping effects observed in typical dielectric layers deposited on AlGaN. Low-resistance Ohmic contacts (minimum specific contact resistance = 4 × 10⁻⁶ Ω cm², average = 1.8 × 10⁻⁴ Ω cm²) are demonstrated in an Al0.85Ga0.15N barrier/Al0.68Ga0.32N channel HEMT by employing epitaxial regrowth of a heavily doped, n-type, reverse compositionally graded epitaxial structure. The combination of low-leakage, large-positive-threshold p-gates and low-resistance Ohmic contacts via the described regrowth processes provides a pathway to realizing high-current, enhancement-mode, Al-rich AlGaN-based ultra-wide bandgap transistors.
Journal of Thermophysics and Heat Transfer
Legacy and modern-day ablation codes typically assume equilibrium pyrolysis gas chemistry. Yet, experimental data suggest that speciation from resin decomposition is far from equilibrium. A thermal and chemical kinetic study was performed on pyrolysis gas advection through a porous char, using the Theoretical Ablative Composite for Open Testing (TACOT) as a demonstrator material. The finite-element tool SIERRA/Aria simulated the ablation of TACOT under various conditions. Temperature and phenolic decomposition rates generated from Aria were applied as inputs to a simulated network of perfectly stirred reactors (PSRs) in the chemical solver Cantera. A high-fidelity combustion mechanism computed the gas composition and thermal properties of the advecting pyrolyzate. The results indicate that pyrolysis gases do not rapidly achieve chemical equilibrium while traveling through the simulated material. Instead, a highly chemically reactive zone exists in the ablator between 1400 and 2500 K, wherein the modeled pyrolysis gases transition from a chemically frozen state to chemical equilibrium. These finite-rate results demonstrate a significant departure in computed pyrolysis gas properties from those derived from equilibrium solvers. Under the same conditions, finite-rate-derived gas is estimated to provide up to 50% less heat absorption than equilibrium-derived gas. This discrepancy suggests that nonequilibrium pyrolysis gas chemistry could substantially impact ablator material response models.
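As a schematic of the finite-rate-versus-equilibrium comparison, the sketch below uses Cantera with illustrative stand-ins: GRI-Mech 3.0, a made-up pyrolyzate mixture, and a single stirred reactor in place of the paper's PSR network and high-fidelity mechanism.

    import cantera as ct

    # Illustrative stand-ins: GRI-Mech 3.0 and a simple pyrolyzate mixture;
    # the study used a high-fidelity mechanism and TACOT-derived gas.
    gas = ct.Solution('gri30.yaml')
    gas.TPX = 1600.0, ct.one_atm, 'CH4:0.2, H2:0.3, CO:0.2, H2O:0.3'

    # Finite-rate path: advance a constant-pressure stirred reactor over a
    # residence time representative of advection through the char (assumed).
    reactor = ct.IdealGasConstPressureReactor(gas)
    net = ct.ReactorNet([reactor])
    net.advance(5e-3)  # residence time [s] (assumed)
    h_finite_rate = reactor.thermo.enthalpy_mass

    # Equilibrium path: same initial state, relaxed at constant T and P.
    gas.TPX = 1600.0, ct.one_atm, 'CH4:0.2, H2:0.3, CO:0.2, H2O:0.3'
    gas.equilibrate('TP')
    h_equilibrium = gas.enthalpy_mass

    print(h_finite_rate, h_equilibrium)  # diverging values signal nonequilibrium

A persistent gap between the two enthalpies over realistic residence times is the kind of signature that motivates the finite-rate treatment described above.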
AIP Conference Proceedings
Entropy is a state variable that may be obtained from any thermodynamically complete equation of state (EOS). However, hydrocode calculations that output the entropy often contain numerical errors; this is not because of the EOS, but rather the solution techniques that are used in hydrocodes (especially Eulerian) such as convection, remapping, and artificial viscosity. In this work, empirical correlations are investigated to reduce the errors in entropy without altering the solution techniques for the conservation of mass, momentum, and energy. Specifically, these correlations are developed for the function of entropy, Z_S, and they depend upon the net artificial viscous work, as determined via Sandia National Laboratories’ shock physics hydrocode CTH. These results are a continuation of a prior effort to implement the entropy-based CREST reactive burn model in CTH, and they are presented here to stimulate further interest from the shock physics community. Future work is planned to study higher-dimensional shock waves, shock wave interactions, and possible ties between the empirical correlations and a physical law.
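As a concrete illustration of entropy from a complete EOS (an ideal-gas example for orientation, not taken from the paper): for a perfect gas with ratio of specific heats γ,

    s - s_0 = c_v \ln\!\left(\frac{p}{\rho^{\gamma}}\right),

so any hydrocode state (ρ, p) maps directly to an entropy; the errors discussed above enter through the numerical solution techniques, not through this thermodynamic relation.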
Nature Communications
Information security and computing, two critical technological challenges for post-digital computation, pose opposing requirements – security (encryption) requires a source of unpredictability, while computing generally requires predictability. Each of these contrasting requirements presently necessitates distinct conventional Si-based hardware units with power-hungry overheads. This work demonstrates Cu0.3Te0.7/HfO2 (‘CuTeHO’) ion-migration-driven memristors that satisfy the contrasting requirements. Under specific operating biases, CuTeHO memristors generate truly random and physically unclonable functions, while under other biases, they perform universal Boolean logic. Using these computing primitives, this work experimentally demonstrates a single system that performs cryptographic key generation, universal Boolean logic operations, and encryption/decryption. Circuit-based calculations reveal the energy and latency advantages of the CuTeHO memristors in these operations. This work illustrates the functional flexibility of memristors in implementing operations with varying component-level requirements.
Nature Communications
Solid–water interfaces are crucial for clean water, conventional and renewable energy, and effective nuclear waste management. However, reflecting the complexity of reactive interfaces in continuum-scale models is a challenge, leading to oversimplified representations that often fail to predict real-world behavior. This is because these models use fixed parameters derived by averaging across a wide physicochemical range observed at the molecular scale. Recent studies have revealed the stochastic nature of molecular-level surface sites that define a variety of reaction mechanisms, rates, and products even across a single surface. To bridge the molecular knowledge and predictive continuum-scale models, we propose to represent surface properties with probability distributions rather than with discrete constant values derived by averaging across a heterogeneous surface. This conceptual shift in continuum-scale modeling requires exponentially rising computational power. By incorporating our molecular-scale understanding of solid–water interfaces into continuum-scale models, we can pave the way for next-generation critical technologies and novel environmental solutions.
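The conceptual shift can be illustrated with a toy first-order surface reaction (hypothetical lognormal rate constants, not data from the paper), where averaging the parameter is not the same as averaging the behavior:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical surface-site rate constants [1/s]: a lognormal spread
    # across sites, versus the single averaged value a conventional
    # continuum model would use.
    k_sites = rng.lognormal(mean=-2.0, sigma=1.0, size=100_000)
    k_avg = k_sites.mean()

    t = 10.0  # elapsed time [s]
    # First-order kinetics: fraction of surface reacted after time t.
    reacted_distributed = 1.0 - np.exp(-k_sites * t)  # per-site, then averaged
    reacted_averaged = 1.0 - np.exp(-k_avg * t)       # single fixed parameter

    print(reacted_distributed.mean(), reacted_averaged)
    # The two differ because exp(-kt) is nonlinear in k: averaging the
    # parameter is not the same as averaging the behavior.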
IET Cyber-Physical Systems: Theory and Applications
Cyber-physical systems have behaviour that crosses domain boundaries during events such as planned operational changes and malicious disturbances. Traditionally, the cyber and physical systems are monitored separately and use very different toolsets and analysis paradigms. The security and privacy of these cyber-physical systems requires improved understanding of the combined cyber-physical system behaviour and methods for holistic analysis. Therefore, the authors propose leveraging clustering techniques on cyber-physical data from smart grid systems to analyse differences and similarities in behaviour during cyber-, physical-, and cyber-physical disturbances. Since clustering methods are commonly used in data science to examine statistical similarities in order to sort large datasets, these algorithms can assist in identifying useful relationships in cyber-physical systems. Through this analysis, deeper insights can be shared with decision-makers on what cyber and physical components are strongly or weakly linked, what cyber-physical pathways are most traversed, and the criticality of certain cyber-physical nodes or edges. This paper presents several types of clustering methods for cyber-physical graphs of smart grid systems and their application in assessing different types of disturbances for informing cyber-physical situational awareness. The collection of these clustering techniques provides a foundational basis for cyber-physical graph interdependency analysis.
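As a minimal sketch of the idea (a hypothetical toy graph and one clustering algorithm; the paper evaluates several methods on smart grid data):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical toy cyber-physical graph: relay/breaker nodes (physical)
    # linked to the RTUs and switches (cyber) that monitor and control them.
    G = nx.Graph()
    G.add_edges_from([
        ('relay1', 'rtu1'), ('relay2', 'rtu1'), ('rtu1', 'switch1'),
        ('relay3', 'rtu2'), ('breaker1', 'rtu2'), ('rtu2', 'switch1'),
        ('switch1', 'scada'),
    ])

    # Modularity-based community detection groups nodes that are more
    # densely connected to each other than to the rest of the system,
    # hinting at strongly or weakly linked cyber-physical components.
    for community in greedy_modularity_communities(G):
        print(sorted(community))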
npj Materials Degradation
The current present in a galvanic couple can define its resistance or susceptibility to corrosion. However, as the current is dependent upon environmental, material, and geometrical parameters, it is experimentally costly to measure. To reduce these costs, Finite Element (FE) simulations can be used to assess the cathodic current, but these also require experimental inputs to define boundary conditions. Due to these challenges, it is crucial to accelerate predictions and accurately predict the current output for different environments and geometries representative of in-service conditions. Machine-learned surrogate models provide a means to accelerate corrosion predictions. However, a one-time cost is incurred in procuring the simulation and experimental dataset necessary to calibrate the surrogate model. Therefore, an active learning protocol is developed through calibration of a low-cost surrogate model for the cathodic current of an exemplar galvanic couple (AA7075-SS304) as a function of environmental and geometric parameters. The surrogate model is calibrated on a dataset of FE simulations and calculates an acquisition function that identifies the additional inputs with the greatest potential to improve the current predictions. This is accomplished through a staggered workflow that not only improves and refines the predictions but also identifies the points at which the most information is gained, thus enabling expansion to a larger parameter space. The protocols developed and demonstrated in this work provide a powerful tool for screening various forms of corrosion under in-service conditions.
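A minimal sketch of one plausible realization of such a protocol (a scikit-learn Gaussian process with a variance-based acquisition; the feature names and data below are hypothetical placeholders, not the paper's surrogate):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    # Hypothetical inputs: [electrolyte concentration, water-layer thickness]
    # scaled to [0, 1]; y is the FE-computed cathodic current for each case.
    X_train = rng.uniform(size=(20, 2))
    y_train = np.sin(3 * X_train[:, 0]) * X_train[:, 1]  # placeholder for FE data

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_train, y_train)

    # Variance-based acquisition: propose the candidate the surrogate is
    # least certain about, then run the FE model there and re-calibrate.
    candidates = rng.uniform(size=(500, 2))
    _, std = gp.predict(candidates, return_std=True)
    next_case = candidates[np.argmax(std)]
    print('next FE simulation at:', next_case)

Looping this propose/simulate/re-fit cycle is the staggered workflow pattern described above: each new FE run is placed where it adds the most information.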
Nature Communications
Modern lens designs are capable of resolving more than 10 gigapixels, while advances in camera frame-rate and hyperspectral imaging have made data acquisition rates of Terapixel/second a real possibility. The main bottlenecks preventing such high data-rate systems are power consumption and data storage. In this work, we show that analog photonic encoders could address this challenge, enabling high-speed image compression using orders-of-magnitude lower power than digital electronics. Our approach relies on a silicon-photonics front-end to compress raw image data, foregoing energy-intensive image conditioning and reducing data storage requirements. The compression scheme uses a passive disordered photonic structure to perform kernel-type random projections of the raw image data with minimal power consumption and low latency. A back-end neural network can then reconstruct the original images with structural similarity exceeding 90%. This scheme has the potential to process data streams exceeding Terapixel/second using less than 100 fJ/pixel, providing a path to ultra-high-resolution data and image acquisition systems.
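A toy digital stand-in for the scheme (a random projection plus a regularized linear inverse in place of the photonic front-end and neural back-end):

    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in for the disordered photonic structure: a fixed random
    # projection that compresses each 64-pixel patch to 16 measurements.
    n_pixels, n_measurements = 64, 16
    A = rng.normal(size=(n_measurements, n_pixels)) / np.sqrt(n_measurements)

    x = rng.uniform(size=n_pixels)   # raw image patch (placeholder data)
    y = A @ x                        # compressed analog measurements

    # The paper trains a neural network for reconstruction; a regularized
    # least-squares inverse is the simplest digital back-end stand-in.
    lam = 1e-2
    x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_measurements), y)
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))

The key design point carried over from the paper is that the projection matrix is fixed and passive, so the energy cost lands almost entirely on the (much smaller) measurement stream rather than on the raw pixels.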
EPJ Web of Conferences
The characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the University of Texas at Austin Nuclear Engineering Teaching Laboratory (NETL) TRIGA reactor for the beam port (BP) 1/5 free-field environment at the 128-inch location adjacent to the core centerline has been accomplished. NETL is being explored as an auxiliary neutron test facility for the Sandia National Laboratories radiation effects sciences research and development campaigns. The NETL reactor is a TRIGA Mark-II pulse and steady-state, above-ground pool-type reactor. NETL is a university research reactor typically used for irradiation experiments for students and customers, for radioisotope production, and for operator training. Initial criticality of the NETL TRIGA reactor was achieved on March 12, 1992, making it one of the newest test reactor facilities in the US. The neutron energy spectra, uncertainties, and covariance matrices are presented, as well as a neutron fluence map of the experiment area of the cavity. For an unmoderated condition, the neutron fluence at the center of BP 1/5, at the adjacent core axial centerline, is about 8.2×10¹² n/cm² per MJ of reactor energy. About 67% of the neutron fluence is below 1 keV and 22% above 100 keV. The 1-MeV Damage-Equivalent Silicon (DES) fluence is roughly 1.6×10¹² n/cm² per MJ of reactor energy.
Advanced Electronic Materials
Biaxial stress is identified to play an important role in the polar orthorhombic phase stability in hafnium oxide-based ferroelectric thin films. However, the stress state during various stages of wake-up has not yet been quantified. In this work, the stress evolution with field cycling in hafnium zirconium oxide capacitors is evaluated. The remanent polarization of a 20 nm thick hafnium zirconium oxide thin film increases from 9.80 to 15.0 µC cm⁻² following 10⁶ field cycles. This increase in remanent polarization is accompanied by a decrease in relative permittivity that indicates that a phase transformation has occurred. The presence of a phase transformation is supported by nano-Fourier transform infrared spectroscopy measurements and scanning transmission electron microscopy that show an increase in ferroelectric phase content following wake-up. The stress of individual devices field cycled between pristine and 10⁶ cycles is quantified using the sin²(ψ) technique, and the biaxial stress is observed to decrease from 4.3 ± 0.2 to 3.2 ± 0.3 GPa. The decrease in stress is attributed, in part, to a phase transformation from the antipolar Pbca phase to the ferroelectric Pca2₁ phase. This work provides new insight into the mechanisms controlling and/or accompanying polarization wake-up in hafnium oxide-based ferroelectrics.
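For context, the sin²(ψ) technique referenced above is the standard X-ray diffraction residual-stress method; the relation below is the textbook form with notation of my choosing, not taken from the paper. For an equi-biaxial film stress σ, the lattice strain measured at sample tilt ψ varies linearly in sin²ψ:

    \varepsilon_{\psi} = \frac{d_{\psi} - d_0}{d_0} = \frac{1+\nu}{E}\,\sigma\,\sin^2\psi - \frac{2\nu}{E}\,\sigma,

so σ follows from the slope of the measured d-spacing d_ψ versus sin²ψ, given the elastic constants E and ν of the film.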
Systems Engineering
The Human Readiness Level (HRL) scale is a simple nine-level scale that brings structure and consistency to the real-world application of user-centered design. It enables multidisciplinary consideration of human-focused elements during the system development process. Use of the standardized set of questions comprising the HRL scale results in a single human readiness number that communicates system readiness for human use. The Human Views (HVs) are part of an architecture framework that provides a repository for human-focused system information that can be used during system development to support the evaluation of HRLs. This paper illustrates how HRLs and HVs can be used in combination to support user-centered design processes. A real-world example from a U.S. Army software modernization program is described to demonstrate application of HRLs and HVs in the context of user-centered design.
Journal of Nuclear Engineering and Radiation Science
Accident analysis and ensuring power plant safety are pivotal in the nuclear energy sector. Significant strides have been achieved over the past few decades regarding fire protection and safety, primarily centered on design and regulatory compliance. Yet, after the Fukushima accident a decade ago, the imperative to enhance measures against fire, internal flooding, and power loss has intensified. Hence, a comprehensive, multilayered protection strategy against severe accidents is needed. Consequently, gaining a deeper insight into pool fires and their behavior through extensive validated data can greatly aid in improving these measures using advanced validation techniques. A model validation study was performed at Sandia National Laboratories (SNL) in which a 30-cm diameter methanol pool fire was modeled using the SIERRA/Fuego turbulent reacting flow code. This validation study compared model results against a standard validation experiment, and its conclusions have been published. The fire was modeled with a large eddy simulation (LES) turbulence model with subgrid turbulent kinetic energy closure. Combustion was modeled using a strained laminar flamelet library approach. Radiative heat transfer was accounted for with a model utilizing the gray-gas approximation. In this study, additional validation analysis is performed using the area validation metric (AVM). These activities are performed on multiple datasets involving different variables and temporal/spatial ranges and intervals. The results provide insight into the use of the area validation metric on such temporally varying datasets and the importance of physics-aware use of the metric for proper analysis.
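For reference, the AVM is the area between the empirical CDFs of the simulated and measured quantities; the helper below is a straightforward implementation of that definition (placeholder data, not the study's datasets):

    import numpy as np

    def area_validation_metric(sim, exp):
        """Area between the empirical CDFs of simulation and experiment."""
        sim, exp = np.sort(sim), np.sort(exp)
        grid = np.union1d(sim, exp)
        # Empirical CDF values of each sample set on the merged grid.
        F_sim = np.searchsorted(sim, grid, side='right') / sim.size
        F_exp = np.searchsorted(exp, grid, side='right') / exp.size
        # Integrate |F_sim - F_exp| over the grid (exact for step CDFs).
        return np.sum(np.abs(F_sim - F_exp)[:-1] * np.diff(grid))

    # Placeholder data standing in for predicted vs. measured temperatures.
    rng = np.random.default_rng(3)
    print(area_validation_metric(rng.normal(900, 50, 200),
                                 rng.normal(880, 60, 150)))

Because the metric carries the units of the compared variable, applying it across different variables and temporal/spatial intervals requires the physics-aware interpretation the study emphasizes.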
Journal of Thermal Analysis and Calorimetry
Exploding bridgewire detonators (EBWs) containing pentaerythritol tetranitrate (PETN) exposed to high temperatures may not function following discharge of the design electrical firing signal from a charged capacitor. Knowing the functionality of these arbitrarily oriented EBWs is crucial when making safety assessments of detonators in accidental fires. Orientation effects are only significant when the PETN is partially melted. The melting temperature can be measured with a differential scanning calorimeter. Nonmelting EBWs will be fully functional provided the detonator never exceeds 406 K (133 °C) for at least 1 h. Conversely, EBWs will not be functional once the average input pellet temperature exceeds 414 K (141 °C) for at least 1 min, which is long enough to cause the PETN input pellet to completely melt. Functionality of the EBWs at temperatures between 406 and 414 K will depend on orientation and can be predicted using a stratification model for downward-facing detonators, but is more complex for arbitrary orientations. A conservative rule of thumb would be to assume that the EBWs are fully functional unless the PETN input pellet has completely melted.
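The stated thresholds reduce to a simple screening rule, sketched below (a direct transcription of the abstract's criteria; the function name and interface are mine):

    def ebw_functionality(peak_temp_K, duration_min):
        """Screen EBW functionality from PETN input-pellet temperature history.

        Thresholds follow the abstract: below 406 K the pellet does not melt
        and the EBW stays functional; above 414 K for >= 1 min the pellet
        melts completely and the EBW will not function. In between, the
        outcome depends on detonator orientation.
        """
        if peak_temp_K <= 406.0:
            return 'functional'
        if peak_temp_K >= 414.0 and duration_min >= 1.0:
            return 'not functional'
        return 'orientation-dependent (conservatively assume functional)'

    print(ebw_functionality(410.0, 30.0))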
Journal of Computational Physics
A novel algorithm for explicit temporal discretization of the variable-density, low-Mach Navier-Stokes equations is presented here. Recognizing that there is a redundancy between the mass conservation equation, the equation of state, and the transport equation(s) for the scalar(s) that characterize the thermochemical state, and that this redundancy destabilizes explicit methods, we demonstrate how to eliminate it analytically and propose an iterative scheme to solve the resulting transformed scalar equations. The method obtains second-order accuracy in time regardless of the number of iterations, so the subproblem can be terminated once stability is achieved. Hence, flows with larger density ratios can be simulated while still retaining the efficiency, low cost, and parallelizability of an explicit scheme. The temporal discretization algorithm is used within a pseudospectral direct numerical simulation that extends the method of Kim, Moin, and Moser for incompressible flow [17] to the variable-density, low-Mach setting, where we demonstrate stability for density ratios up to ∼25.7.
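A minimal single-scalar sketch of the redundancy (standard low-Mach relations, with notation of my choosing, not the paper's formulation): mass conservation, scalar transport, and the equation of state

    \frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0, \qquad
    \rho\,\frac{DZ}{Dt} = \nabla\cdot(\rho\mathcal{D}\nabla Z), \qquad
    \rho = \rho(Z)

overdetermine the density: any two of these relations imply a constraint on the third. Eliminating the redundancy yields the familiar divergence constraint

    \nabla\cdot\mathbf{u} = -\frac{1}{\rho}\frac{D\rho}{Dt}
    = -\frac{\rho'(Z)}{\rho^{2}}\,\nabla\cdot(\rho\mathcal{D}\nabla Z),

which is the consistency condition an explicit discretization must respect to avoid the instability noted above.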
Polymer Degradation and Stability
Bulletin of the Seismological Society of America
Determining the depths of small crustal earthquakes is challenging in many regions of the world, because most seismic networks are too sparse to resolve trade-offs between depth and origin time with conventional arrival-time methods. Precise and accurate depth estimation is important, because it can help seismologists discriminate between earthquakes and explosions, which is relevant to monitoring nuclear test ban treaties and producing earthquake catalogs that are uncontaminated by mining blasts. Here, we examine the depth sensitivity of several physics-based waveform features for ∼8000 earthquakes in southern California that have well-resolved depths from arrival-time inversion. We focus on small earthquakes (2 < ML < 4) recorded at local distances (<150 km), for which depth estimation is especially challenging. We find that differential magnitudes (ML − Mc) are positively correlated with focal depth, implying that coda wave excitation decreases with focal depth. We analyze a simple proxy for relative frequency content, Φ ≡ log10(M0) + 3 log10(fc), and find that source spectra are preferentially enriched in high frequencies, or “blue-shifted,” as focal depth increases. We also find that two spectral amplitude ratios, Rg (0.5–2 Hz)/Sg (0.5–8 Hz) and Pg/Sg (3–8 Hz), decrease as focal depth increases. Using multilinear regression with these features as predictor variables, we develop models that can explain 11%–59% of the variance in depths within 10 subregions and 25% of the depth variance across southern California as a whole. We suggest that incorporating these features into a machine learning workflow could help resolve focal depths in regions that are poorly instrumented and lack large databases of well-located events. Some of the waveform features we evaluate in this study have previously been used as source discriminants, and our results imply that their effectiveness in discrimination is partially because explosions generally occur at shallower depths than earthquakes.
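A sketch of the regression step (scikit-learn, with synthetic placeholder columns standing in for the measured waveform features; not the study's data):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n = 1000

    # Placeholder columns standing in for the features named above:
    # differential magnitude, spectral proxy Phi, and the two amplitude ratios.
    X = np.column_stack([
        rng.normal(size=n),   # ML - Mc
        rng.normal(size=n),   # Phi = log10(M0) + 3*log10(fc)
        rng.normal(size=n),   # Rg/Sg ratio
        rng.normal(size=n),   # Pg/Sg ratio
    ])
    depth_km = 5.0 + 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=2.0, size=n)

    model = LinearRegression().fit(X, depth_km)
    print('R^2 =', model.score(X, depth_km))  # fraction of depth variance explained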
International Journal of Plasticity
Hydrogen is known to embrittle austenitic stainless steels, which are widely used in high-pressure hydrogen storage and delivery systems, but the mechanisms that lead to such material degradation are still being elucidated. The current work investigates the deformation behavior of single crystal austenitic stainless steel 316L through combined uniaxial tensile testing, characterization and atomistic simulations. Thermally precharged hydrogen is shown to increase the critical resolved shear stress (CRSS) without previously reported deviations from Schmid's law. Molecular dynamics simulations further expose the statistical nature of the hydrogen and vacancy contributions to the CRSS in the presence of alloying. Slip distribution quantification over large in-plane distances (>1 mm), achieved via atomic force microscopy (AFM), highlights the role of hydrogen increasing the degree of slip localization in both single and multiple slip configurations. The most active slip bands accumulate significantly more deformation in hydrogen precharged specimens, with potential implications for damage nucleation. For 〈110〉 tensile loading, slip localization further enhances the activity of secondary slip, increases the density of geometrically necessary dislocations and leads to a distinct lattice rotation behavior compared to hydrogen-free specimens, as evidenced by electron backscatter diffraction (EBSD) maps. The results of this study provide a more comprehensive picture of the deformation aspect of hydrogen embrittlement in austenitic stainless steels.
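For reference, Schmid's law (the standard relation, not specific to this study) states that slip initiates on a system when the resolved shear stress reaches the critical value:

    \tau_{\mathrm{RSS}} = \sigma \cos\phi \cos\lambda \;\ge\; \tau_{\mathrm{CRSS}},

where σ is the applied tensile stress, φ is the angle between the loading axis and the slip-plane normal, and λ is the angle between the loading axis and the slip direction. The hydrogen-induced CRSS increase reported above raises the threshold on the right-hand side without, per these measurements, introducing deviations from this orientation dependence.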
Computer Methods in Applied Mechanics and Engineering
We study the problem of multifidelity uncertainty propagation for computationally expensive models. In particular, we consider the general setting where the high-fidelity and low-fidelity models have a dissimilar parameterization both in terms of number of random inputs and their probability distributions, which can be either known in closed form or provided through samples. We derive novel multifidelity Monte Carlo estimators which rely on a shared subspace between the high-fidelity and low-fidelity models where the parameters follow the same probability distribution, i.e., a standard Gaussian. We build the shared space employing normalizing flows to map different probability distributions into a common one, together with linear and nonlinear dimensionality reduction techniques, active subspaces and autoencoders, respectively, which capture the subspaces where the models vary the most. We then compose the existing low-fidelity model with these transformations and construct modified models with an increased correlation with the high-fidelity model, which therefore yield multifidelity estimators with reduced variance. A series of numerical experiments illustrate the properties and advantages of our approaches.
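For orientation, the estimators described above build on the standard two-model control-variate form (notation mine, not the paper's):

    \hat{Q} = \frac{1}{N}\sum_{i=1}^{N} f_{\mathrm{HF}}(z_i)
    + \alpha\left(\frac{1}{M}\sum_{j=1}^{M}\tilde{f}_{\mathrm{LF}}(\tilde{z}_j)
    - \frac{1}{N}\sum_{i=1}^{N}\tilde{f}_{\mathrm{LF}}(z_i)\right), \qquad M \gg N,

where f̃_LF denotes the low-fidelity model composed with the shared-space transformations. With the optimal coefficient α = ρ σ_HF/σ_LF, the estimator variance decreases as the correlation ρ between f_HF and f̃_LF increases, which is exactly what the normalizing-flow and dimension-reduction maps are constructed to achieve.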
ChemSusChem
The valorization of lignin, a currently underutilized component of lignocellulosic biomass, has attracted attention to promote a stable and circular bioeconomy. Successful approaches including thermochemical, biological, and catalytic lignin depolymerization have been demonstrated, enabling opportunities for lignino-refineries and lignocellulosic biorefineries. Although significant progress in lignin valorization has been made, this review describes unexplored opportunities in chemical and biological routes for lignin depolymerization and thereby contributes to economically and environmentally sustainable lignin-utilizing biorefineries. This review also highlights the integration of chemical and biological lignin depolymerization and identifies research gaps while also recommending future directions for scaling processes to establish a lignino-chemical industry.
International Journal for Numerical and Analytical Methods in Geomechanics
A technique is proposed for reproducing particle size distributions in three-dimensional simulations of the crushing and comminution of solid materials. The method is designed to produce realistic distributions over a wide range of loading conditions, especially for small fragments. In contrast to most existing methods, the new model does not explicitly treat the small-scale process of fracture. Instead, it uses measured fragment distributions from laboratory tests as the basic material property that is incorporated into the algorithm, providing a data-driven approach. The algorithm is implemented within a nonlocal peridynamic solver, which simulates the underlying continuum mechanics and contact interactions between fragments after they are formed. The technique is illustrated by reproducing fragmentation data from drop-weight testing on sandstone samples.
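A minimal sketch of the data-driven idea (hypothetical lab data; inverse-transform sampling from a measured fragment-size CDF used as the material property):

    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical measured fragment-size data from a lab crushing test [mm].
    measured_sizes = np.sort(rng.weibull(1.8, size=500) * 2.0)
    cdf = np.arange(1, measured_sizes.size + 1) / measured_sizes.size

    def sample_fragment_sizes(n):
        """Inverse-transform sampling from the measured size distribution,
        used as the material property when a simulated region fragments."""
        u = rng.uniform(size=n)
        return np.interp(u, cdf, measured_sizes)

    print(sample_fragment_sizes(5))

The design choice mirrors the abstract: rather than resolving each fracture event, the solver draws fragment statistics directly from what the laboratory actually measured.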
Nano Letters
We present large-scale atomistic simulations that reveal triple junction (TJ) segregation in Pt-Au nanocrystalline alloys in agreement with experimental observations. While existing studies suggest grain boundary solute segregation as a route to thermally stabilize nanocrystalline materials with respect to grain coarsening, here we quantitatively show that it is specifically the segregation to TJs that dominates the observed stability of these alloys. Our results reveal that doping the TJs renders them immobile, thereby locking the grain boundary network and hindering its evolution. In dilute alloys, it is shown that grain boundary and TJ segregation are not as effective in mitigating grain coarsening, as the solute content is not sufficient to dope and pin all grain boundaries and TJs. Our work highlights the need to account for TJ segregation effects in order to understand and predict the evolution of nanocrystalline alloys under extreme environments.
Journal of Applied Physics
Granular metals (GMs), consisting of metal nanoparticles separated by an insulating matrix, frequently serve as a platform for fundamental electron transport studies. However, few technologically mature devices incorporating GMs have been realized, in large part because intrinsic defects (e.g., electron trapping sites and metal/insulator interfacial defects) frequently impede electron transport, particularly in GMs that do not contain noble metals. Here, we demonstrate that such defects can be minimized in molybdenum-silicon nitride (Mo-SiNx) GMs via optimization of the sputter deposition atmosphere. For Mo-SiNx GMs deposited in a mixed Ar/N2 environment, x-ray photoemission spectroscopy shows a 40%-60% reduction of interfacial Mo-silicide defects compared to Mo-SiNx GMs sputtered in a pure Ar environment. Electron transport measurements confirm the reduced defect density; the dc conductivity improved (decreased) by a factor of 10⁴-10⁵, and the activation energy for variable-range hopping increased 10×. Since GMs are disordered materials, the GM nanostructure should, theoretically, support a universal power law (UPL) response; in practice, that response is generally overwhelmed by resistive (defective) transport. Here, the defect-minimized Mo-SiNx GMs display a superlinear UPL response, which we quantify as the ratio of the conductivity at 1 MHz to that at dc, Δσ_ω. Remarkably, these GMs display a Δσ_ω up to 10⁷, three orders of magnitude higher than previously reported for GMs. By enabling high-performance electric transport with a non-noble metal GM, this work represents an important step toward both new fundamental UPL research and scalable, mature GM device applications.
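For context, the UPL response discussed above is conventionally written (Jonscher's standard form, notation mine) as

    \sigma(\omega) = \sigma(0) + A\,\omega^{s},

with s ≲ 1 in the classic universal dielectric response; a superlinear response corresponds to s > 1. The figure of merit Δσ_ω = σ(1 MHz)/σ(0) therefore becomes large only when the power-law term is not masked by defect-dominated dc transport, consistent with the defect-minimization strategy described above.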
Optica
Frequency-modulated (FM) combs based on active cavities like quantum cascade lasers have recently emerged as promising light sources in many spectral regions. Unlike passive modelocking, which generates amplitude modulation using the field’s amplitude, FM comb formation relies on the generation of phase modulation from the field’s phase. They can therefore be regarded as a phase-domain version of passive modelocking. However, while the ultimate scaling laws of passive modelocking have long been known—Haus showed in 1975 that pulses modelocked by a fast saturable absorber have a bandwidth proportional to the effective gain bandwidth—the limits of FM combs have been much less clear. Here, we show that FM combs based on fast gain media are governed by the same fundamental limits, producing combs whose bandwidths are linear in the effective gain bandwidth. Not only do we show theoretically that the diffusive effect of gain curvature limits comb bandwidth, but we also show experimentally how this limit can be increased. By adding carefully designed resonant-loss structures that are evanescently coupled to the cavity of a terahertz laser, we reduce the curvature and increase the effective gain bandwidth of the laser, demonstrating bandwidth enhancement. Our results can better enable the creation of active chip-scale combs and be applied to a wide array of cavity geometries.