Epitaxial regrowth processes are presented for achieving Al-rich aluminum gallium nitride (AlGaN) high electron mobility transistors (HEMTs) with p-type gates with large, positive threshold voltages for enhancement mode operation and low resistance Ohmic contacts. Utilizing a deep gate recess etch into the channel and an epitaxially regrown p-AlGaN gate structure, an Al0.85Ga0.15N barrier/Al0.50Ga0.50N channel HEMT with a large positive threshold voltage (VTH = +3.5 V) and negligible gate leakage is demonstrated. Epitaxial regrowth of AlGaN avoids the use of gate insulators, which can suffer from the charge trapping effects observed in typical dielectric layers deposited on AlGaN. Low resistance Ohmic contacts (minimum specific contact resistance = 4 × 10⁻⁶ Ω cm², average = 1.8 × 10⁻⁴ Ω cm²) are demonstrated in an Al0.85Ga0.15N barrier/Al0.68Ga0.32N channel HEMT by employing epitaxial regrowth of a heavily doped, n-type, reverse compositionally graded epitaxial structure. The combination of low-leakage, large positive threshold p-gates and low resistance Ohmic contacts by the described regrowth processes provides a pathway to realizing high-current, enhancement-mode, Al-rich AlGaN-based ultra-wide bandgap transistors.
Legacy and modern-day ablation codes typically assume equilibrium pyrolysis gas chemistry. Yet, experimental data suggest that speciation from resin decomposition is far from equilibrium. A thermal and chemical kinetic study was performed on pyrolysis gas advection through a porous char, using the Theoretical Ablative Composite for Open Testing (TACOT) as a demonstrator material. The finite-element tool SIERRA/Aria simulated the ablation of TACOT under various conditions. Temperature and phenolic decomposition rates generated from Aria were applied as inputs to a simulated network of perfectly stirred reactors (PSRs) in the chemical solver Cantera. A high-fidelity combustion mechanism computed the gas composition and thermal properties of the advecting pyrolyzate. The results indicate that pyrolysis gases do not rapidly achieve chemical equilibrium while traveling through the simulated material. Instead, a highly chemically reactive zone exists in the ablator between 1400 and 2500 K, wherein the modeled pyrolysis gases transition from a chemically frozen state to chemical equilibrium. These finite-rate results demonstrate a significant departure in computed pyrolysis gas properties from those derived from equilibrium solvers. Under the same conditions, finite-rate-derived gas is estimated to provide up to 50% less heat absorption than equilibrium-derived gas. This discrepancy suggests that nonequilibrium pyrolysis gas chemistry could substantially impact ablator material response models.
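The PSR-network idea lends itself to a compact sketch in Cantera, the open-source kinetics solver the abstract names. The snippet below is a minimal illustration only: it substitutes GRI-Mech 3.0 for the paper's high-fidelity mechanism, assumes a hypothetical pyrolyzate feed, an imposed 1400–2500 K temperature profile, and a 1 ms residence time, and approximates the PSR chain with fixed-temperature reactors advanced in series.

```python
import numpy as np
import cantera as ct

mech = "gri30.yaml"                        # stand-in for the paper's high-fidelity mechanism
feed = "CH4:0.4, H2O:0.3, CO:0.2, H2:0.1"  # hypothetical pyrolyzate composition
temps = np.linspace(1400.0, 2500.0, 12)    # imposed char temperature profile (K)
tau = 1.0e-3                               # assumed residence time per reactor (s)

gas = ct.Solution(mech)
gas.TPX = temps[0], ct.one_atm, feed

for T in temps:
    gas.TP = T, ct.one_atm                              # temperature comes from the material solver
    r = ct.IdealGasConstPressureReactor(gas, energy="off")
    ct.ReactorNet([r]).advance(tau)                     # react for one residence time
    gas.TPX = T, ct.one_atm, r.thermo.X                 # pass the outlet downstream

print("outlet composition:", gas.mole_fraction_dict(1e-3))
```

Comparing the outlet composition at each station against a chemical-equilibrium calculation (gas.equilibrate('TP')) would reproduce the frozen-to-equilibrium transition the study describes.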
Entropy is a state variable that may be obtained from any thermodynamically complete equation of state (EOS). However, hydrocode calculations that output the entropy often contain numerical errors; this is not because of the EOS, but rather the solution techniques that are used in hydrocodes (especially Eulerian) such as convection, remapping, and artificial viscosity. In this work, empirical correlations are investigated to reduce the errors in entropy without altering the solution techniques for the conservation of mass, momentum, and energy. Specifically, these correlations are developed for the function of entropy ZS, and they depend upon the net artificial viscous work, as determined via Sandia National Laboratories’ shock physics hydrocode CTH. These results are a continuation of a prior effort to implement the entropy-based CREST reactive burn model in CTH, and they are presented here to stimulate further interest from the shock physics community. Future work is planned to study higher-dimensional shock waves, shock wave interactions, and possible ties between the empirical correlations and a physical law.
The current present in a galvanic couple can define its resistance or susceptibility to corrosion. However, because the current depends upon environmental, material, and geometrical parameters, it is experimentally costly to measure. To reduce these costs, Finite Element (FE) simulations can be used to assess the cathodic current, but these simulations also require experimental inputs to define boundary conditions. Given these challenges, it is crucial to accelerate predictions and accurately predict the current output for different environments and geometries representative of in-service conditions. Machine-learned surrogate models provide a means to accelerate corrosion predictions. However, a one-time cost is incurred in procuring the simulation and experimental dataset necessary to calibrate the surrogate model. Therefore, an active learning protocol is developed through calibration of a low-cost surrogate model for the cathodic current of an exemplar galvanic couple (AA7075-SS304) as a function of environmental and geometric parameters. The surrogate model is calibrated on a dataset of FE simulations and calculates an acquisition function that identifies specific additional inputs with the maximum potential to improve the current predictions. This is accomplished through a staggered workflow that not only improves and refines predictions but also identifies the points at which the most information is gained, thus enabling expansion to a larger parameter space. The protocols developed and demonstrated in this work provide a powerful tool for screening various forms of corrosion under in-service conditions.
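As a concrete illustration of this kind of active learning loop, the sketch below pairs a Gaussian-process surrogate with a maximum-variance acquisition function. The parameter names, ranges, and placeholder current model are hypothetical stand-ins, not the paper's FE dataset or its specific acquisition function.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
# Hypothetical parameter bounds: [NaCl molarity, film thickness (um), cathode/anode area ratio]
lo, hi = [0.01, 10.0, 0.1], [0.6, 100.0, 10.0]

X_train = rng.uniform(lo, hi, size=(40, 3))
y_train = np.log10(1e-6 + 1e-5 * X_train[:, 0] * X_train[:, 2])  # placeholder for FE currents

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, y_train)

# Acquisition: choose the candidate input with the largest predictive uncertainty.
X_cand = rng.uniform(lo, hi, size=(2000, 3))
_, std = gp.predict(X_cand, return_std=True)
print("next FE simulation to run at:", X_cand[np.argmax(std)])
```

In a staggered workflow like the one described, this selection step alternates with retraining the surrogate on the newly acquired FE result.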
Modern lens designs are capable of resolving greater than 10 gigapixels, while advances in camera frame-rate and hyperspectral imaging have made data acquisition rates of Terapixel/second a real possibility. The main bottlenecks preventing such high data-rate systems are power consumption and data storage. In this work, we show that analog photonic encoders could address this challenge, enabling high-speed image compression using orders-of-magnitude lower power than digital electronics. Our approach relies on a silicon-photonics front-end to compress raw image data, foregoing energy-intensive image conditioning and reducing data storage requirements. The compression scheme uses a passive disordered photonic structure to perform kernel-type random projections of the raw image data with minimal power consumption and low latency. A back-end neural network can then reconstruct the original images with structural similarity exceeding 90%. This scheme has the potential to process data streams exceeding Terapixel/second using less than 100 fJ/pixel, providing a path to ultra-high-resolution data and image acquisition systems.
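A minimal numerical sketch of the compression scheme follows, under the assumption that the disordered photonic structure acts as a fixed random projection matrix; the paper's neural-network decoder is replaced here by a simple least-squares reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_meas = 1024, 256                    # 4x compression, an illustrative ratio

# A fixed random matrix stands in for the disordered structure's transfer function.
A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)

x = rng.random(n_pix)                        # flattened raw image patch
y = A @ x                                    # compressed measurement, formed in the analog domain

# The paper trains a neural-network decoder; least squares is the simplest stand-in
# and recovers only the component of x lying in the row space of A.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```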
Information security and computing, two critical technological challenges for post-digital computation, pose opposing requirements – security (encryption) requires a source of unpredictability, while computing generally requires predictability. Each of these contrasting requirements presently necessitates distinct conventional Si-based hardware units with power-hungry overheads. This work demonstrates Cu0.3Te0.7/HfO2 (‘CuTeHO’) ion-migration-driven memristors that satisfy the contrasting requirements. Under specific operating biases, CuTeHO memristors generate truly random and physically unclonable functions, while under other biases, they perform universal Boolean logic. Using these computing primitives, this work experimentally demonstrates a single system that performs cryptographic key generation, universal Boolean logic operations, and encryption/decryption. Circuit-based calculations reveal the energy and latency advantages of the CuTeHO memristors in these operations. This work illustrates the functional flexibility of memristors in implementing operations with varying component-level requirements.
Ilgen, Anastasia G.; Borguet, Eric; Geiger, Franz M.; Gibbs, Julianne M.; Grassian, Vicki H.; Jun, Young S.; Kabengi, Nadine; Kubicki, James D.
Solid–water interfaces are crucial for clean water, conventional and renewable energy, and effective nuclear waste management. However, reflecting the complexity of reactive interfaces in continuum-scale models is a challenge, leading to oversimplified representations that often fail to predict real-world behavior. This is because these models use fixed parameters derived by averaging across a wide physicochemical range observed at the molecular scale. Recent studies have revealed the stochastic nature of molecular-level surface sites that define a variety of reaction mechanisms, rates, and products even across a single surface. To bridge the molecular knowledge and predictive continuum-scale models, we propose to represent surface properties with probability distributions rather than with discrete constant values derived by averaging across a heterogeneous surface. This conceptual shift in continuum-scale modeling requires exponentially rising computational power. By incorporating our molecular-scale understanding of solid–water interfaces into continuum-scale models, we can pave the way for next-generation critical technologies and novel environmental solutions.
Cyber-physical systems have behaviour that crosses domain boundaries during events such as planned operational changes and malicious disturbances. Traditionally, the cyber and physical systems are monitored separately and use very different toolsets and analysis paradigms. The security and privacy of these cyber-physical systems requires improved understanding of the combined cyber-physical system behaviour and methods for holistic analysis. Therefore, the authors propose leveraging clustering techniques on cyber-physical data from smart grid systems to analyse differences and similarities in behaviour during cyber-, physical-, and cyber-physical disturbances. Since clustering methods are commonly used in data science to examine statistical similarities in order to sort large datasets, these algorithms can assist in identifying useful relationships in cyber-physical systems. Through this analysis, deeper insights can be shared with decision-makers on what cyber and physical components are strongly or weakly linked, what cyber-physical pathways are most traversed, and the criticality of certain cyber-physical nodes or edges. This paper presents several types of clustering methods for cyber-physical graphs of smart grid systems and their application in assessing different types of disturbances for informing cyber-physical situational awareness. The collection of these clustering techniques provides a foundational basis for cyber-physical graph interdependency analysis.
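As one concrete example of the clustering approaches discussed, the sketch below applies spectral clustering to a small, entirely hypothetical cyber-physical graph; node names and edge weights are illustrative only and are not drawn from the paper.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical cyber-physical graph: grid devices plus the network assets that
# control them, with edge weights encoding observed interaction strength.
G = nx.Graph()
G.add_weighted_edges_from([
    ("hmi", "rtu1", 0.7), ("hmi", "rtu2", 0.7),
    ("rtu1", "breaker1", 0.9), ("rtu1", "switch1", 0.4),
    ("rtu2", "breaker2", 0.8), ("rtu2", "switch1", 0.3),
])

A = nx.to_numpy_array(G)                      # weighted adjacency as the affinity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
for node, lab in zip(G.nodes, labels):
    print(f"{node}: cluster {lab}")
```

Cluster membership then indicates which cyber and physical components behave as strongly linked groups during a disturbance.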
The characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the University of Texas at Austin Nuclear Engineering Teaching Laboratory (NETL) TRIGA reactor for the beam port (BP) 1/5 free-field environment at the 128-inch location adjacent to the core centerline has been accomplished. NETL is being explored as an auxiliary neutron test facility for the Sandia National Laboratories radiation effects sciences research and development campaigns. The NETL reactor is a TRIGA Mark-II pulse and steady-state, above-ground pool-type reactor. NETL is a university research reactor typically used to perform irradiation experiments for students and customers and to produce radioisotopes, and it also serves as a training reactor. Initial criticality of the NETL TRIGA reactor was achieved on March 12, 1992, making it one of the newest test reactor facilities in the US. The neutron energy spectra, uncertainties, and covariance matrices are presented as well as a neutron fluence map of the experiment area of the cavity. For an unmoderated condition, the neutron fluence at the center of BP 1/5, at the adjacent core axial centerline, is about 8.2×10¹² n/cm² per MJ of reactor energy. About 67% of the neutron fluence is below 1 keV and 22% above 100 keV. The 1-MeV Damage-Equivalent Silicon (DES) fluence is roughly 1.6×10¹² n/cm² per MJ of reactor energy.
See, Judi E.; Handley, Holly A.H.; Savage-Knepshield, Pamela A.
The Human Readiness Level (HRL) scale is a simple nine-level scale that brings structure and consistency to the real-world application of user-centered design. It enables multidisciplinary consideration of human-focused elements during the system development process. Use of the standardized set of questions comprising the HRL scale results in a single human readiness number that communicates system readiness for human use. The Human Views (HVs) are part of an architecture framework that provides a repository for human-focused system information that can be used during system development to support the evaluation of HRLs. This paper illustrates how HRLs and HVs can be used in combination to support user-centered design processes. A real-world example for a U.S. Army software modernization program is described to demonstrate application of HRLs and HVs in the context of user-centered design.
Biaxial stress is identified to play an important role in the polar orthorhombic phase stability in hafnium oxide-based ferroelectric thin films. However, the stress state during various stages of wake-up has not yet been quantified. In this work, the stress evolution with field cycling in hafnium zirconium oxide capacitors is evaluated. The remanent polarization of a 20 nm thick hafnium zirconium oxide thin film increases from 9.80 to 15.0 µC cm⁻² following 10⁶ field cycles. This increase in remanent polarization is accompanied by a decrease in relative permittivity that indicates that a phase transformation has occurred. The presence of a phase transformation is supported by nano-Fourier transform infrared spectroscopy measurements and scanning transmission electron microscopy that show an increase in ferroelectric phase content following wake-up. The stress of individual devices field cycled between pristine and 10⁶ cycles is quantified using the sin²(ψ) technique, and the biaxial stress is observed to decrease from 4.3 ± 0.2 to 3.2 ± 0.3 GPa. The decrease in stress is attributed, in part, to a phase transformation from the antipolar Pbca phase to the ferroelectric Pca21 phase. This work provides new insight into the mechanisms controlling and/or accompanying polarization wake-up in hafnium oxide-based ferroelectrics.
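The sin²(ψ) analysis reduces to a linear fit of lattice spacing against sin²(ψ). A minimal sketch is shown below with invented d-spacings and assumed elastic constants, so the output stress is illustrative rather than the paper's measurement.

```python
import numpy as np

# Invented example data: lattice spacings d (Angstroms) at several tilt angles psi.
psi_deg = np.array([0.0, 15.0, 25.0, 35.0, 45.0])
d = np.array([2.950, 2.953, 2.957, 2.962, 2.968])
E, nu = 300e9, 0.30                       # assumed film elastic constants (Pa, unitless)

s2 = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(s2, d, 1)   # d varies linearly with sin^2(psi)
d0 = intercept                            # approximate unstressed spacing
sigma = (slope / d0) * E / (1.0 + nu)     # equibiaxial stress from the sin^2(psi) slope
print(f"biaxial stress ~ {sigma / 1e9:.1f} GPa")
```

A positive slope (spacing increasing with tilt) indicates tensile biaxial stress, consistent with the GPa-scale tensile values reported in the abstract.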
Koper, Keith D.; Burlacu, Relu; Murray, Riley; Baker, Ben; Tibi, Rigobert T.; Mueen, Abdullah
Determining the depths of small crustal earthquakes is challenging in many regions of the world, because most seismic networks are too sparse to resolve trade-offs between depth and origin time with conventional arrival-time methods. Precise and accurate depth estimation is important, because it can help seismologists discriminate between earthquakes and explosions, which is relevant to monitoring nuclear test ban treaties and producing earthquake catalogs that are uncontaminated by mining blasts. Here, we examine the depth sensitivity of several physics-based waveform features for ∼8000 earthquakes in southern California that have well-resolved depths from arrival-time inversion. We focus on small earthquakes (2 < ML < 4) recorded at local distances (<150 km), for which depth estimation is especially challenging. We find that differential magnitudes (ΔM = ML − Mc) are positively correlated with focal depth, implying that coda wave excitation decreases with focal depth. We analyze a simple proxy for relative frequency content, Φ ≡ log10(M0) + 3log10(fc), and find that source spectra are preferentially enriched in high frequencies, or “blue-shifted,” as focal depth increases. We also find that two spectral amplitude ratios, Rg(0.5–2 Hz)/Sg(0.5–8 Hz) and Pg/Sg at 3–8 Hz, decrease as focal depth increases. Using multilinear regression with these features as predictor variables, we develop models that can explain 11%–59% of the variance in depths within 10 subregions and 25% of the depth variance across southern California as a whole. We suggest that incorporating these features into a machine learning workflow could help resolve focal depths in regions that are poorly instrumented and lack large databases of well-located events. Some of the waveform features we evaluate in this study have previously been used as source discriminants, and our results imply that their effectiveness in discrimination is partially because explosions generally occur at shallower depths than earthquakes.
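The multilinear regression step can be sketched in a few lines; the feature values and coefficients below are synthetic placeholders rather than the study's southern California measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic placeholders for the four waveform features, one row per event:
# [dM = ML - Mc, phi = log10(M0) + 3*log10(fc), log10 Rg/Sg, log10 Pg/Sg]
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.normal(0.0, 0.3, n),
    rng.normal(16.0, 0.5, n),
    rng.normal(0.0, 0.4, n),
    rng.normal(0.0, 0.4, n),
])
# Invented linear relationship plus noise, standing in for catalog depths (km).
depth_km = 8.0 + X @ np.array([3.0, 1.5, -2.0, -1.0]) + rng.normal(0.0, 2.0, n)

model = LinearRegression().fit(X, depth_km)
print("fraction of depth variance explained:", model.score(X, depth_km))
```

The score reported by model.score is the R² statistic, the same "variance explained" quantity the abstract cites per subregion.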
A novel algorithm for explicit temporal discretization of the variable-density, low-Mach Navier-Stokes equations is presented here. Recognizing that there is a redundancy among the mass conservation equation, the equation of state, and the transport equation(s) for the scalar(s) that characterize the thermochemical state, and that this redundancy destabilizes explicit methods, we demonstrate how to analytically eliminate the redundancy and propose an iterative scheme to solve the resulting transformed scalar equations. The method obtains second-order accuracy in time regardless of the number of iterations, so one can terminate this subproblem once stability is achieved. Hence, flows with larger density ratios can be simulated while still retaining the efficiency, low cost, and parallelizability of an explicit scheme. The temporal discretization algorithm is used within a pseudospectral direct numerical simulation which extends the method of Kim, Moin, and Moser for incompressible flow [17] to the variable-density, low-Mach setting, where we demonstrate stability for density ratios up to ∼25.7.
Accident analysis and ensuring power plant safety are pivotal in the nuclear energy sector. Significant strides have been achieved over the past few decades regarding fire protection and safety, primarily centered on design and regulatory compliance. Yet, after the Fukushima accident a decade ago, the imperative to enhance measures against fire, internal flooding, and power loss has intensified. Hence, a comprehensive, multilayered protection strategy against severe accidents is needed. Consequently, gaining a deeper insight into pool fires and their behavior through extensive validated data can greatly aid in improving these measures using advanced validation techniques. A model validation study was performed at Sandia National Laboratories (SNL) in which a 30-cm diameter methanol pool fire was modeled using the SIERRA/Fuego turbulent reacting flow code. This validation study compared model results against a standard validation experiment, and its conclusions have been published. The fire was modeled with a large eddy simulation (LES) turbulence model with subgrid turbulent kinetic energy closure. Combustion was modeled using a strained laminar flamelet library approach. Radiative heat transfer was accounted for with a model utilizing the gray-gas approximation. In this study, additional validation analysis is performed using the area validation metric (AVM). These activities are performed on multiple datasets involving different variables and temporal/spatial ranges and intervals. The results provide insight into the use of the area validation metric on such temporally varying datasets and the importance of physics-aware use of the metric for proper analysis.
Exploding bridgewire detonators (EBWs) containing pentaerythritol tetranitrate (PETN) exposed to high temperatures may not function following discharge of the design electrical firing signal from a charged capacitor. Knowing the functionality of these arbitrarily oriented EBWs is crucial when making safety assessments of detonators in accidental fires. Orientation effects are only significant when the PETN is partially melted. The melting temperature can be measured with a differential scanning calorimeter. Nonmelting EBWs will be fully functional provided the detonator never exceeds 406 K (133 °C) for at least 1 h. Conversely, EBWs will not be functional once the average input pellet temperature exceeds 414 K (141 °C) for at least 1 min, which is long enough to cause the PETN input pellet to completely melt. Functionality of the EBWs at temperatures between 406 and 414 K will depend on orientation and can be predicted using a stratification model for downward facing detonators but is more complex for arbitrary orientations. A conservative rule of thumb would be to assume that the EBWs are fully functional unless the PETN input pellet has completely melted.
We study the problem of multifidelity uncertainty propagation for computationally expensive models. In particular, we consider the general setting where the high-fidelity and low-fidelity models have a dissimilar parameterization both in terms of number of random inputs and their probability distributions, which can be either known in closed form or provided through samples. We derive novel multifidelity Monte Carlo estimators which rely on a shared subspace between the high-fidelity and low-fidelity models where the parameters follow the same probability distribution, i.e., a standard Gaussian. We build the shared space employing normalizing flows to map different probability distributions into a common one, together with linear and nonlinear dimensionality reduction techniques, active subspaces and autoencoders, respectively, which capture the subspaces where the models vary the most. We then compose the existing low-fidelity model with these transformations and construct modified models with an increased correlation with the high-fidelity model, which therefore yield multifidelity estimators with reduced variance. A series of numerical experiments illustrate the properties and advantages of our approaches.
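A minimal sketch of the resulting control-variate-style multifidelity estimator follows, assuming the shared standard-Gaussian space has already been constructed; the normalizing-flow and dimension-reduction maps are folded into the toy models below, which are stand-ins rather than the paper's models.

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(z):                 # expensive high-fidelity model (toy stand-in)
    return np.sin(z) + 0.1 * z ** 2

def f_lo(z):                 # cheap, correlated low-fidelity model, assumed already
    return np.sin(z)         # composed with the shared-space transformations

n_hi, n_lo = 50, 5000
z = rng.standard_normal(n_lo)             # shared standard-Gaussian inputs
yh, yl = f_hi(z[:n_hi]), f_lo(z[:n_hi])   # paired evaluations on the few HF samples

alpha = np.cov(yh, yl)[0, 1] / np.var(yl, ddof=1)    # control-variate weight
estimate = yh.mean() + alpha * (f_lo(z).mean() - yl.mean())
print("multifidelity estimate of E[f_hi]:", estimate)
```

The variance reduction relative to plain Monte Carlo on f_hi alone grows with the correlation between the two models, which is exactly what the shared-space construction is designed to increase.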
Hydrogen is known to embrittle austenitic stainless steels, which are widely used in high-pressure hydrogen storage and delivery systems, but the mechanisms that lead to such material degradation are still being elucidated. The current work investigates the deformation behavior of single crystal austenitic stainless steel 316L through combined uniaxial tensile testing, characterization and atomistic simulations. Thermally precharged hydrogen is shown to increase the critical resolved shear stress (CRSS) without previously reported deviations from Schmid's law. Molecular dynamics simulations further expose the statistical nature of the hydrogen and vacancy contributions to the CRSS in the presence of alloying. Slip distribution quantification over large in-plane distances (>1 mm), achieved via atomic force microscopy (AFM), highlights the role of hydrogen increasing the degree of slip localization in both single and multiple slip configurations. The most active slip bands accumulate significantly more deformation in hydrogen precharged specimens, with potential implications for damage nucleation. For 〈110〉 tensile loading, slip localization further enhances the activity of secondary slip, increases the density of geometrically necessary dislocations and leads to a distinct lattice rotation behavior compared to hydrogen-free specimens, as evidenced by electron backscatter diffraction (EBSD) maps. The results of this study provide a more comprehensive picture of the deformation aspect of hydrogen embrittlement in austenitic stainless steels.
Shrestha, Shilva; Goswami, Shubhasish; Banerjee, Deepanwita; Garcia, Valentina; Zhou, Elizabeth; Olmsted, Charles N.; Majumder, Erica L.W.; Kumar, Deepak; Awasthi, Deepika; Mukhopadhyay, Aindrila; Singer, Steven W.; Gladden, John M.; Simmons, Blake A.; Choudhary, Hemant
The valorization of lignin, a currently underutilized component of lignocellulosic biomass, has attracted attention to promote a stable and circular bioeconomy. Successful approaches including thermochemical, biological, and catalytic lignin depolymerization have been demonstrated, enabling opportunities for lignino-refineries and lignocellulosic biorefineries. Although significant progress in lignin valorization has been made, this review describes unexplored opportunities in chemical and biological routes for lignin depolymerization and thereby contributes to economically and environmentally sustainable lignin-utilizing biorefineries. This review also highlights the integration of chemical and biological lignin depolymerization and identifies research gaps while also recommending future directions for scaling processes to establish a lignino-chemical industry.
A technique is proposed for reproducing particle size distributions in three-dimensional simulations of the crushing and comminution of solid materials. The method is designed to produce realistic distributions over a wide range of loading conditions, especially for small fragments. In contrast to most existing methods, the new model does not explicitly treat the small-scale process of fracture. Instead, it uses measured fragment distributions from laboratory tests as the basic material property that is incorporated into the algorithm, providing a data-driven approach. The algorithm is implemented within a nonlocal peridynamic solver, which simulates the underlying continuum mechanics and contact interactions between fragments after they are formed. The technique is illustrated in reproducing fragmentation data from drop weight testing on sandstone samples.
We present large-scale atomistic simulations that reveal triple junction (TJ) segregation in Pt-Au nanocrystalline alloys in agreement with experimental observations. While existing studies suggest grain boundary solute segregation as a route to thermally stabilize nanocrystalline materials with respect to grain coarsening, here we quantitatively show that it is specifically the segregation to TJs that dominates the observed stability of these alloys. Our results reveal that doping the TJs renders them immobile, thereby locking the grain boundary network and hindering its evolution. In dilute alloys, it is shown that grain boundary and TJ segregation are not as effective in mitigating grain coarsening, as the solute content is not sufficient to dope and pin all grain boundaries and TJs. Our work highlights the need to account for TJ segregation effects in order to understand and predict the evolution of nanocrystalline alloys under extreme environments.
Granular metals (GMs), consisting of metal nanoparticles separated by an insulating matrix, frequently serve as a platform for fundamental electron transport studies. However, few technologically mature devices incorporating GMs have been realized, in large part because intrinsic defects (e.g., electron trapping sites and metal/insulator interfacial defects) frequently impede electron transport, particularly in GMs that do not contain noble metals. Here, we demonstrate that such defects can be minimized in molybdenum-silicon nitride (Mo-SiNx) GMs via optimization of the sputter deposition atmosphere. For Mo-SiNx GMs deposited in a mixed Ar/N2 environment, x-ray photoemission spectroscopy shows a 40%-60% reduction of interfacial Mo-silicide defects compared to Mo-SiNx GMs sputtered in a pure Ar environment. Electron transport measurements confirm the reduced defect density; the dc conductivity improved (decreased) by a factor of 10⁴-10⁵ and the activation energy for variable-range hopping increased 10×. Since GMs are disordered materials, the GM nanostructure should, theoretically, support a universal power law (UPL) response; in practice, that response is generally overwhelmed by resistive (defective) transport. Here, the defect-minimized Mo-SiNx GMs display a superlinear UPL response, which we quantify as the ratio of the conductivity at 1 MHz to that at dc, Δσω. Remarkably, these GMs display a Δσω of up to 10⁷, a three-orders-of-magnitude improvement over previously reported GMs. By enabling high-performance electric transport with a non-noble metal GM, this work represents an important step toward both new fundamental UPL research and scalable, mature GM device applications.
Frequency-modulated (FM) combs based on active cavities like quantum cascade lasers have recently emerged as promising light sources in many spectral regions. Unlike passive modelocking, which generates amplitude modulation using the field’s amplitude, FM comb formation relies on the generation of phase modulation from the field’s phase. They can therefore be regarded as a phase-domain version of passive modelocking. However, while the ultimate scaling laws of passive modelocking have long been known—Haus showed in 1975 that pulses modelocked by a fast saturable absorber have a bandwidth proportional to the effective gain bandwidth—the limits of FM combs have been much less clear. Here, we show that FM combs based on fast gain media are governed by the same fundamental limits, producing combs whose bandwidths are linear in the effective gain bandwidth. Not only do we show theoretically that the diffusive effect of gain curvature limits comb bandwidth, but we also show experimentally how this limit can be increased. By adding carefully designed resonant-loss structures that are evanescently coupled to the cavity of a terahertz laser, we reduce the curvature and increase the effective gain bandwidth of the laser, demonstrating bandwidth enhancement. Our results can better enable the creation of active chip-scale combs and be applied to a wide array of cavity geometries.
The additive manufacture of compositionally graded Al/Cu parts by laser engineered net shaping (LENS) is demonstrated. The use of a blue light build laser enabled deposition on a Cu substrate. The thermal gradient and rapid solidification inherent to selective laser melting enabled mass transport of Cu up to 4 mm from a Cu substrate through a pure Al deposition, providing a means of producing gradients with finer step sizes than the printed layer thicknesses. Divorcing gradient continuity from layer or particle size makes LENS a potentially enabling technology for the manufacture of graded density impactors for ramp compression experiments. Printing graded structures with pure Al, however, was prevented by the growth of Al2Cu3 dendrites and acicular grains amid a matrix of Al2Cu. A combination of adding TiB2 grain refining powder and actively varying print layer composition suppressed the dendritic growth mode and produced an equiaxed microstructure in a compositionally graded part. Material phases were characterized for crystal structure and nanoindentation hardness to enable a discussion of phase evolution in the rapidly solidifying melt pool of a LENS print.
The use of radiation and radioactive substances results in the production of radioactive wastes, which require safe management and disposal to avoid risks to human health and the environment. To ensure permanent safe disposal, the performance of a deep geological repository for radioactive waste is assessed against internationally agreed risk-based standards. Assessing postclosure safety of the future system's evolution includes screening of features, events, and processes (FEPs) relevant to the situation, their subsequent development into scenarios, and finally the development and execution of safety assessment (SA) models. Global FEP catalogs describe important natural and man-made repository system features and identify events and processes that may affect these features into the future. By combining FEPs, many of which are uncertain, different possible future system evolution scenarios are derived. Repository licensing should consider both the reference or “base” evolution as well as alternative futures that may lead to radiation release, pollution, or exposures. Scenarios are used to derive and consider both base and alternative evolutions, often through production of scenario-specific SA models and the recombination of their results into an assessment of the risk of harm. While the FEP-based scenario development process outlined here has evolved somewhat since its development in the 1980s, the fundamental ideas remain unchanged. A spectrum of common approaches is given here (e.g., bottom–up vs. top–down scenario development, probabilistic vs. bounding handling of uncertainty), related to how individual numerical models for possible futures are converted into a determination as to whether the system is safe (i.e., how aleatoric uncertainty and scenarios are integrated through bounding or Monte Carlo approaches).
The rise of grid modernization has been prompted by the escalating demand for power, the deteriorating state of infrastructure, and the growing concern regarding the reliability of electric utilities. The smart grid encompasses recent advancements in electronics, technology, telecommunications, and computer capabilities. Smart grid telecommunication frameworks provide bidirectional communication to facilitate grid operations. Software-defined networking (SDN) is a proposed approach for monitoring and regulating telecommunication networks, which allows for enhanced visibility, control, and security in smart grid systems. Nevertheless, the integration of telecommunications infrastructure exposes smart grid networks to potential cyberattacks. Adversaries may gain unauthorized access to intercept communications, introduce fabricated data into system measurements, overwhelm communication channels with false data packets, or attack centralized controllers to disable network control. An ongoing, thorough examination of cyberattacks and protection strategies for smart grid networks is essential due to the ever-changing nature of these threats. Previous surveys on smart grid security lack modern methodologies and, to the best of our knowledge, most, if not all, focus on only one sort of attack or protection. This survey examines the most recent security techniques, simultaneous multi-pronged cyberattacks, and defense utilities in order to address the challenges of future SDN smart grid research. The objective is to identify future research requirements, describe the existing security challenges, and highlight emerging threats and their potential impact on the deployment of software-defined smart grid (SD-SG).
Atomic cluster expansion (ACE) methods provide a systematic way to describe particle local environments of arbitrary body order. For practical applications it is often required that the basis of cluster functions be symmetrized with respect to rotations and permutations. Existing methodologies yield sets of symmetrized functions that are over-complete. These methodologies thus require an additional numerical procedure, such as singular value decomposition (SVD), to eliminate redundant functions. In this work, it is shown that analytical linear relationships for subsets of cluster functions may be derived using recursion and permutation properties of generalized Wigner symbols. From these relationships, subsets (blocks) of cluster functions can be selected such that, within each block, functions are guaranteed to be linearly independent. It is conjectured that this block-wise independent set of permutation-adapted rotation and permutation invariant (PA-RPI) functions forms a complete, independent basis for ACE. Along with the first analytical proofs of block-wise linear dependence of ACE cluster functions and other theoretical arguments, numerical results are offered to demonstrate this. The utility of the method is demonstrated in the development of an ACE interatomic potential for tantalum. Using the new basis functions in combination with Bayesian compressive sensing sparse regression, some high degree descriptors are observed to persist and help achieve high-accuracy models.
The formation of magnesium chloride-hydroxide salts (magnesium hydroxychlorides) has implications for many geochemical processes and technical applications. For this reason, a thermodynamic database for evaluating the Mg(OH)2–MgCl2–H2O ternary system from 0 to 120 °C has been developed based on extensive experimental solubility data. Internally consistent sets of standard thermodynamic parameters (ΔGf°, ΔHf°, S°, and CP) were derived for several solid phases: 3 Mg(OH)2:MgCl2:8H2O, 9 Mg(OH)2:MgCl2:4H2O, 2 Mg(OH)2:MgCl2:4H2O, 2 Mg(OH)2:MgCl2:2H2O(s), brucite (Mg(OH)2), bischofite (MgCl2:6H2O), and MgCl2:4H2O. First, estimated values for the thermodynamic parameters were derived using a component addition method. These parameters were combined with standard thermodynamic data for Mg2+(aq) consistent with CODATA (Cox et al., 1989) to generate temperature-dependent Gibbs energies for the dissolution reactions of the solid phases. These data, in combination with values for MgOH+(aq) updated to be consistent with Mg2+-CODATA, were used to compute equilibrium constants and incorporated into a Pitzer thermodynamic database for concentrated electrolyte solutions. Phase solubility diagrams were constructed as a function of temperature and magnesium chloride concentration for comparisons with available experimental data. To improve the fits to the experimental data, reaction equilibrium constants for the Mg-bearing mineral phases, the binary Pitzer parameters for the MgOH+–Cl− interaction, and the temperature-dependent coefficients for those Pitzer parameters were constrained by experimental phase boundaries and to match phase solubilities. These parameter adjustments resulted in an updated set of standard thermodynamic data and associated temperature-dependent functions. The resulting database has direct applications to investigations of magnesia cement formation and leaching, chemical barrier interactions related to disposition of heat-generating nuclear waste, and evaluation of magnesium-rich salt and brine stabilities at elevated temperatures.
In magnetized liner inertial fusion (MagLIF), a cylindrical liner filled with fusion fuel is imploded with the goal of producing a one-dimensional plasma column at thermonuclear conditions. However, structures attributed to three-dimensional effects are observed in self-emission x-ray images. Despite this, the impact of many experimental inputs on the column morphology has not been characterized. We demonstrate the use of a linear regression analysis to explore correlations between morphology and a wide variety of experimental inputs across 57 MagLIF experiments. Results indicate the possibility of several unexplored effects. For example, we demonstrate that increasing the initial magnetic field correlates with improved stability. Although intuitively expected, this has never been quantitatively assessed in integrated MagLIF experiments. We also demonstrate that azimuthal drive asymmetries resulting from the geometry of the “current return can” appear to measurably impact the morphology. In conjunction with several counterintuitive null results, we expect the observed correlations will encourage further experimental, theoretical, and simulation-based studies. Finally, we note that the method used in this work is general and may be applied to explore not only correlations between input conditions and morphology but also with other experimentally measured quantities.
Laros, James H.; Davis, Jacob; Tom, Nathan; Thiagarajan, Krish
This study presents theoretical formulations to evaluate the fundamental parameters and performance characteristics of a bottom-raised oscillating surge wave energy converter (OSWEC) device. Employing a flat plate assumption and potential flow formulation in elliptical coordinates, closed-form equations for the added mass, radiation damping, and excitation forces/torques in the relevant pitch-pitch and surge-pitch directions of motion are developed and used to calculate the system's response amplitude operator and the forces and moments acting on the foundation. The model is benchmarked against numerical simulations using WAMIT and WEC-Sim, showcasing excellent agreement. The sensitivity of plate thickness on the analytical hydrodynamic solutions is investigated over several thickness-to-width ratios ranging from 1:80 to 1:10. The results show that as the thickness of the benchmark OSWEC increases, the deviation of the analytical hydrodynamic coefficients from the numerical solutions grows from 3 % to 25 %. Differences in the excitation forces and torques, however, are contained within 12 %. While the flat plate assumption is a limitation of the proposed analytical model, the error is within a reasonable margin for use in the design space exploration phase before a higher-fidelity (and thus more computationally expensive) model is employed. A parametric study demonstrates the ability of the analytical model to quickly sweep over a domain of OSWEC dimensions, illustrating the analytical model's utility in the early phases of design.
The critical stress for cutting of a void and He bubble (generically referred to as a cavity) by edge and screw dislocations has been determined for FCC Fe0.70Cr0.20Ni0.10—close to 300-series stainless steel—over a range of cavity spacings, diameters, pressures, and glide plane positions. The results exhibit anomalous trends with spacing, diameter, and pressure when compared with classical theories for obstacle hardening. These anomalies are attributed to elastic anisotropy and the wide extended dislocation core in low stacking fault energy metals, indicating that caution must be exercised when using perfect dislocations in isotropic solids to study void and bubble hardening. In many simulations with screw dislocations, cross-slip was observed at the void/bubble surface, leading to an additional contribution to strengthening. We refer to this phenomenon as cavity cross-slip locking, and argue that it may be an important contributor to void and bubble hardening.
The 2022 National Defense Strategy of the United States listed climate change as a serious threat to national security. Climate intervention methods, such as stratospheric aerosol injection, have been proposed as mitigation strategies, but the downstream effects of such actions on a complex climate system are not well understood. The development of algorithmic techniques for quantifying relationships between source and impact variables related to a climate event (i.e., a climate pathway) would help inform policy decisions. Data-driven deep learning models have become powerful tools for modeling highly nonlinear relationships and may provide a route to characterize climate variable relationships. In this paper, we explore the use of an echo state network (ESN) for characterizing climate pathways. ESNs are a computationally efficient neural network variation designed for temporal data, and recent work proposes ESNs as a useful tool for forecasting spatiotemporal climate data. However, like other neural networks, ESNs are noninterpretable black-box models. The lack of model transparency poses a hurdle for understanding variable relationships. We address this issue by developing feature importance methods for ESNs in the context of spatiotemporal data to quantify variable relationships captured by the model. We conduct a simulation study to assess and compare the feature importance techniques, and we demonstrate the approach on reanalysis climate data. In the climate application, we consider a time period that includes the 1991 volcanic eruption of Mount Pinatubo. This event was a significant stratospheric aerosol injection, which acts as a proxy for an anthropogenic stratospheric aerosol injection. We are able to use the proposed approach to characterize relationships between pathway variables associated with this event that agree with relationships previously identified by climate scientists.
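A minimal echo state network with a permutation-style feature importance check is sketched below; the reservoir size, spectral radius, and synthetic input/response series are all assumptions for illustration, not the paper's reanalysis data or its specific importance method.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_res, t_len = 3, 200, 1000

# Sparse random reservoir, rescaled to spectral radius 0.9 for the echo-state property.
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

U = rng.standard_normal((t_len, n_in))   # synthetic input channels
y = U[:, 0] - 0.5 * U[:, 1]              # synthetic response variable

X = np.zeros((t_len, n_res))
x = np.zeros(n_res)
for t in range(t_len):
    x = np.tanh(W @ x + W_in @ U[t])     # reservoir state update
    X[t] = x

lam = 1e-6                               # ridge-regression readout
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)

def run_mse(u):
    x = np.zeros(n_res)
    err = 0.0
    for t in range(t_len):
        x = np.tanh(W @ x + W_in @ u[t])
        err += (x @ W_out - y[t]) ** 2
    return err / t_len

base = run_mse(U)
for j in range(n_in):                    # permutation-style importance per input channel
    Up = U.copy()
    rng.shuffle(Up[:, j])
    print(f"input {j}: importance = {run_mse(Up) - base:.4f}")
```

Channels whose shuffling inflates the error the most are the ones the trained readout relies on, which is the basic intuition behind quantifying variable relationships in a fitted ESN.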
Nuclear power plant (NPP) risk assessment is broadly separated into disciplines of nuclear safety, security, and safeguards. Different analysis methods and computer models have been constructed to analyze each of these as separate disciplines. However, due to the complexity of NPP systems, there are risks that can span all these disciplines and require consideration of safety-security (2S) interactions which allows a more complete understanding of the relationship among these risks. A novel leading simulator/trailing simulator (LS/TS) method is introduced to integrate multiple generic safety and security computer models into a single, holistic 2S analysis. A case study is performed using this novel method to determine its effectiveness. The case study shows that the LS/TS method avoided introducing errors in simulation, compared to the same scenario performed without the LS/TS method. A second case study is then used to illustrate an integrated 2S analysis which shows that different levels of damage to vital equipment from sabotage at a NPP can affect accident evolution by several hours.
Seismic waveform data recorded at stations can be thought of as a superposition of the signal from a source of interest and noise from other sources. Frequency-based filtering methods for waveform denoising do not result in desired outcomes when the targeted signal and noise occupy similar frequency bands. Recently, denoising techniques based on deep-learning convolutional neural networks (CNNs), in which a recorded waveform is decomposed into signal and noise components, have led to improved results. These CNN methods, which use short-time Fourier transform representations of the time series, provide signal and noise masks for the input waveform. These masks are used to create denoised signal and designaled noise waveforms, respectively. However, advancements in the field of image denoising have shown the benefits of incorporating discrete wavelet transforms (DWTs) into CNN architectures to create multilevel wavelet CNN (MWCNN) models. The MWCNN model preserves the details of the input due to the good time–frequency localization of the DWT. Here, we use a data set of over 382,000 constructed seismograms recorded by the University of Utah Seismograph Stations network to compare the performance of CNN and MWCNN-based denoising models. Evaluation of both models on constructed test data shows that the MWCNN model outperforms the CNN model in the ability to recover the ground-truth signal component in terms of both waveform similarity and preservation of amplitude information. Model evaluation of real-world data shows that both the CNN and MWCNN models outperform standard band-pass filtering (BPF; average improvement in signal-to-noise ratio of 9.6 and 19.7 dB, respectively, with respect to BPF). Evaluation of continuous data suggests the MWCNN denoiser can improve both signal detection capabilities and phase arrival time estimates.
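The mask-based denoising step that both models share can be illustrated compactly; in the sketch below a simple magnitude threshold stands in for the trained CNN/MWCNN mask prediction, and the waveform is synthetic rather than drawn from the University of Utah dataset.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 100.0                                     # Hz, a typical regional sampling rate
t = np.arange(0.0, 60.0, 1.0 / fs)
event = np.sin(2 * np.pi * 5.0 * t) * np.exp(-0.2 * (t - 20.0) ** 2)   # toy signal
rng = np.random.default_rng(5)
waveform = event + 0.5 * rng.standard_normal(t.size)

f, tau, Z = stft(waveform, fs=fs, nperseg=256)

# A trained network would predict a soft mask; a magnitude threshold is a crude stand-in.
mask = (np.abs(Z) > np.percentile(np.abs(Z), 90)).astype(float)

_, denoised = istft(Z * mask, fs=fs, nperseg=256)
_, designaled = istft(Z * (1.0 - mask), fs=fs, nperseg=256)   # the removed-noise estimate
```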
Finding alloys with specific design properties is challenging due to the large number of possible compositions and the complex interactions between elements. This study introduces a multi-objective Bayesian optimization approach guiding molecular dynamics simulations for discovering high-performance refractory alloys with both targeted intrinsic static thermomechanical properties and desirable deformation mechanisms during dynamic loading. The objective functions aim for excellent thermomechanical stability, via a high bulk modulus, low thermal expansion, and high heat capacity, and for a resilient deformation mechanism that maximizes retention of the BCC phase after shock loading. Contrasting two optimization procedures, we show that the Pareto-optimal solutions are confined to a small performance space when the property objectives display a cooperative relationship. Conversely, the Pareto front is much broader in the performance space when these properties have antagonistic relationships. Density functional theory simulations validate these findings and unveil underlying atomic-bond changes driving property improvements.
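Extracting the Pareto-optimal set from a batch of evaluated candidates is a small, self-contained computation; the sketch below uses random scores as hypothetical alloy objectives, normalized so that larger is better.

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives treated as maximize)."""
    keep = []
    for i, fi in enumerate(F):
        # i is dominated if some row is >= fi everywhere and > fi somewhere.
        dominated = np.any(np.all(F >= fi, axis=1) & np.any(F > fi, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Hypothetical candidate scores: [bulk modulus, -thermal expansion, heat capacity,
# post-shock BCC retention], each normalized so that larger is better.
rng = np.random.default_rng(6)
F = rng.random((200, 4))
front = pareto_front(F)
print(f"{front.size} Pareto-optimal candidates out of {len(F)}")
```

Whether this front occupies a narrow or broad region of the performance space is exactly the cooperative-versus-antagonistic distinction the abstract draws.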
We present a comprehensive benchmarking framework for evaluating machine-learning approaches applied to phase-field problems. This framework focuses on four key analysis areas crucial for assessing the performance of such approaches in a systematic and structured way. Firstly, interpolation tasks are examined to identify trends in prediction accuracy and accumulation of error over simulation time. Secondly, extrapolation tasks are also evaluated according to the same metrics. Thirdly, the relationship between model performance and data requirements is investigated to understand the impact on predictions and robustness of these approaches. Finally, systematic errors are analyzed to identify specific events or inadvertent rare events triggering high errors. Quantitative metrics evaluating the local and global description of the microstructure evolution, along with other scalar metrics representative of phase-field problems, are used across these four analysis areas. This benchmarking framework provides a path to evaluate the effectiveness and limitations of machine-learning strategies applied to phase-field problems, ultimately facilitating their practical application.
Solvent expulsion away from an intervening region between two approaching particles plays important roles in particle aggregation yet remains poorly understood. In this work, we use metadynamics molecular simulations to study the free energy landscape of removing water molecules from gibbsite and pyrophyllite slit pores representing the confined spaces between two approaching particles. For gibbsite, removing water from the intervening region is both entropically and enthalpically unfavorable. The closer the particles approach each other, the harder it is to expel water molecules. For pyrophyllite, in contrast to the gibbsite system, water expulsion is spontaneous, and a smaller pore makes water removal more favorable. When water is being drained from the intervening region, single chains of water molecules are observed in the gibbsite pore, while water clusters are usually observed in the pyrophyllite pore. Water-gibbsite hydrogen bonds help stabilize the water chains, whereas water in the pyrophyllite pore forms clusters to maximize the number of hydrogen bonds among the water molecules themselves. This work provides the first assessment of the energetics and structure of water being drained from the intervening region between two approaching particles during oriented attachment and aggregation.
Helium-4-based scintillation detector technology is emerging as a strong alternative to pulse-shape discrimination-capable organic scintillators for fast neutron detection and spectroscopy, particularly in extreme gamma-ray environments. The 4He detector is intrinsically insensitive to gamma radiation, as it has a relatively low cross-section for gamma-ray interactions, and the stopping power of electrons in the 4He medium is low compared to that of 4He recoil nuclei. Consequently, gamma rays can be discriminated by simple energy deposition thresholding instead of the more complex pulse shape analysis. The energy resolution of 4He scintillation detectors has not yet been well-characterized over a broad range of energy depositions, which limits the ability to deconvolve the source spectra. In this work, an experiment was performed to characterize the response of an Arktis S670 4He detector to nuclear recoils up to 9 MeV. The 4He detector was positioned in the center of a semicircular array of organic scintillation detectors operated in coincidence. Deuterium–deuterium and deuterium–tritium neutron generators provided monoenergetic neutrons, yielding geometrically constrained nuclear recoils ranging from 0.0925 to 8.87 MeV. The detector response provides evidence for scintillation linearity beyond the previously reported energy range. Finally, the measured response was used to develop an energy resolution function applicable to this energy range for use in high-fidelity detector simulations needed by future applications.
Individual lanthanide elements have physical/electronic/magnetic properties that make each useful for specific applications. Several of the lanthanide cations (Ln3+) naturally occur together in the same ores. They are notoriously difficult to separate from each other due to their chemical similarity. Predicting the Ln3+ differential binding energies (ΔΔE) or free energies (ΔΔG) at different binding sites, which are key figures of merit for separation applications, will aid the design of materials with lanthanide selectivity. We apply ab initio molecular dynamics (AIMD) simulations and density functional theory (DFT) to calculate ΔΔG for Ln3+ coordinated to ligands in water and embedded in metal-organic frameworks (MOFs), and ΔΔE for Ln3+ bonded to functionalized silica surfaces, thus circumventing the need for the computationally costly absolute binding (free) energies ΔG and ΔE. Perturbative AIMD simulations of water-inundated simulation cells are applied to examine the selectivity of ligands towards Ln3+ that are adjacent in the periodic table. Static DFT calculations with a full Ln3+ first coordination shell, while less rigorous, show that all ligands examined with net negative charges are more selective towards the heavier lanthanides than a charge-neutral coordination shell made up of water molecules. Amine groups are predicted to be poor ligands for lanthanide binding. We also address cooperative ion binding, i.e., using different ligands in concert to enhance lanthanide selectivity.
Materials simulations based on direct numerical solvers (DNS) are accurate but computationally expensive for predicting materials evolution across length- and time-scales, due to the complexity of the underlying evolution equations, the nature of multiscale spatiotemporal interactions, and the need to reach long-time integration. We develop a method that blends direct numerical solvers with neural operators to accelerate such simulations. This methodology is based on the integration of a community numerical solver with a U-Net neural operator, enhanced by a temporal-conditioning mechanism to enable accurate extrapolation and efficient time-to-solution predictions of the dynamics. We demonstrate the effectiveness of this hybrid framework on simulations of microstructure evolution via the phase-field method. Such simulations exhibit high spatial gradients and the co-evolution of different material phases with simultaneous slow and fast materials dynamics. We establish accurate extrapolation of the coupled solver with large speed-up compared to DNS depending on the hybrid strategy utilized. This methodology is generalizable to a broad range of materials simulations, from solid mechanics to fluid dynamics, geophysics, climate, and more.
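The hybrid time-marching pattern, short bursts of the direct solver interleaved with large surrogate jumps, can be sketched as below; the diffusion stencil, the alternation schedule, and the mocked network call are illustrative assumptions rather than the paper's solver or U-Net.

```python
import numpy as np

def solver_steps(u, k):
    """Stand-in for the direct numerical solver: k explicit diffusion steps."""
    for _ in range(k):
        u = u + 0.1 * (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u)
    return u

def surrogate_jump(u, horizon):
    """Placeholder for the temporally conditioned neural operator; here it just
    mimics the solver, but in the hybrid framework it would be one network call
    conditioned on the requested time horizon."""
    return solver_steps(u, horizon)

u = np.random.default_rng(7).random(128)      # toy 1-D microstructure field
for cycle in range(10):
    u = solver_steps(u, 5)                    # short DNS burst anchors the trajectory
    u = surrogate_jump(u, 50)                 # large, cheap jump from the neural operator
```

The speed-up comes from replacing many small solver steps with a single surrogate evaluation, while the periodic solver bursts keep the trajectory from drifting.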
Analytical and semi-analytical models for stream depletion with transient stream stage drawdown induced by groundwater pumping are developed to address a deficiency in existing models, namely, the use of a fixed stream stage condition at the stream–aquifer interface. Here, field data are presented to demonstrate that stream stage drawdown does indeed occur in response to groundwater pumping near aquifer-connected streams. A model that predicts stream depletion with transient stream drawdown is developed based on stream channel mass conservation and finite stream channel storage. The resulting models are shown to reduce to existing fixed-stage models in the limit as stream channel storage becomes infinitely large, and to the confined aquifer flow with a no-flow boundary at the streambed in the limit as stream storage becomes vanishingly small. The model is applied to field measurements of aquifer and stream drawdown, giving estimates of aquifer hydraulic parameters, streambed conductance, and a measure of stream channel storage. The results of the modeling and data analysis presented herein have implications for sustainable groundwater management.
This dataset comprises a library of atomistic structure files and corresponding X-ray diffraction (XRD) profiles and vibrational density of states (VDoS) profiles for bulk single-crystal silicon (Si), gold (Au), magnesium (Mg), and iron (Fe), with and without disorder introduced into the atomic structure and with and without mechanical loading. Included with the atomistic structure files are descriptor files that quantify the stress state, phase fractions, and dislocation content of the microstructures. All data were generated via molecular dynamics or molecular statics simulations using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code. This dataset can inform the understanding of how local or global changes to a material's microstructure alter its spectroscopic and diffraction behavior across a variety of initial structure types (cubic diamond, face-centered cubic (FCC), hexagonal close-packed (HCP), and body-centered cubic (BCC) for Si, Au, Mg, and Fe, respectively) and overlapping changes to the microstructure (i.e., both disorder insertion and mechanical loading).
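As an illustration of how an XRD profile can be derived from an atomistic structure file, the sketch below applies the generic Debye scattering equation with uniform toy form factors; it is one common route for simulated diffraction, not necessarily the workflow used to build this dataset.

```python
# Debye scattering equation for a finite cluster of atoms:
#   I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij), here with f_i = 1.
import numpy as np

def debye_xrd(positions, q):
    """XRD intensity I(q) for unit form factors.

    positions : (N, 3) array of atomic coordinates (Angstrom)
    q         : (M,) array of scattering vector magnitudes (1/Angstrom)
    """
    # pairwise distances (fine for small N; O(N^2) memory)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    rij = d[np.triu_indices_from(d, k=1)]     # unique i < j pairs
    qr = np.outer(q, rij)
    # N self terms (sin(x)/x -> 1) plus twice the off-diagonal sum;
    # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi
    return len(positions) + 2.0 * np.sum(np.sinc(qr / np.pi), axis=1)

# toy example: a small FCC gold-like cluster (2x2x2 conventional cells)
a = 4.08                                       # Au lattice constant, Angstrom
frac = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
pos = np.concatenate([(frac + shift) * a for shift in np.ndindex(2, 2, 2)])
q = np.linspace(1.0, 8.0, 400)
intensity = debye_xrd(pos, q)
```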
Efficient carbon capture requires engineered porous systems that selectively capture CO2 and offer low-energy regeneration pathways. Porous liquids (PLs), solvent-based systems that gain permanent porosity through the incorporation of a porous host, increase the CO2 adsorption capacity of the solvent. A proposed mechanism of PL regeneration is the application of isostatic pressure, in which the dissolved nanoporous host is compressed to alter the stability of gases in the internal pore. This regeneration mechanism relies on the flexibility of the porous host, which can be evaluated through molecular simulations. Here, the flexibility of porous organic cages (POCs) as representative porous hosts was evaluated; pore windows were found to narrow by 10-40% at 6 GPa. POCs with sterically smaller functional groups, such as the 1,2-ethane groups in the CC1 POC, exhibited greater imine cage flexibility than those with sterically larger functional groups, such as the cyclohexane groups in the CC3 POC, which shielded the imine cage from the applied pressure. Structural changes in the POC also caused CO2 adsorption to become thermodynamically unfavorable beginning at ∼2.2 GPa in the CC1 POC, ∼1.1 GPa in the CC3 POC, and ∼1.0 GPa in the CC13 POC, indicating that CO2 would be expelled from the POC at or above these pressures. Energy barriers for CO2 desorption from inside the POC varied with the geometry of the pore window, and all the POCs had at least one pore window with a sufficiently low energy barrier to allow CO2 desorption at ambient temperatures. These results indicate that the flexibility of the CC1, CC3, and CC13 POCs under compression can lead to the expulsion of captured gas molecules.
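One simple geometric measure of pore-window narrowing, sketched below with hypothetical coordinates, takes the effective window radius as the centroid-to-rim-atom distance minus the van der Waals radii; this is a generic measure, not the analysis pipeline of the study.

```python
# Effective open radius of a pore window from the atoms lining it.
# All coordinates and the contraction factor are hypothetical.
import numpy as np

def window_radius(rim_xyz, vdw_radii):
    """Effective open radius of a pore window.

    rim_xyz   : (N, 3) coordinates of the atoms lining the window
    vdw_radii : (N,) van der Waals radii of those atoms
    """
    center = rim_xyz.mean(axis=0)
    gaps = np.linalg.norm(rim_xyz - center, axis=1) - vdw_radii
    return gaps.min()          # tightest constriction controls passage

# hypothetical rim geometry at 0 GPa; ~10% contraction of rim
# positions as a stand-in for the compressed structure
rim_0gpa = np.array([[2.6, 0.0, 0.0], [-1.3, 2.25, 0.0], [-1.3, -2.25, 0.0]])
rim_6gpa = rim_0gpa * 0.9
r_c = np.full(3, 1.7)          # carbon vdW radius, Angstrom
change = 1 - window_radius(rim_6gpa, r_c) / window_radius(rim_0gpa, r_c)
print(f"window radius decrease: {change:.0%}")
```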
Stereo high-speed video of photovoltaic modules undergoing laboratory hail tests was processed using digital image correlation to determine module surface deformation during and immediately following impact. The purpose of this work was to demonstrate a methodology for characterizing differences in module impact response as a function of construction and incident hail parameters. Video capture and digital image analysis captured out-of-plane module deformation to a resolution of ±0.1 mm at 11 kHz on an in-plane grid of 10 × 10 mm over the area of a 1 × 2 m commercial photovoltaic module. With lighting and optical adjustments, the technique was adaptable to arbitrary module designs, including size, backsheet color, and cell interconnection. Impacts were observed to produce an initially localized dimple in the glass surface, with peak deflection proportional to the square root of incident energy. Subsequent deformation propagation and dissipation were also captured, along with the behavior of modules whose glass fractured. Natural frequencies of the module were identifiable by analyzing module oscillations after impact. Limitations of the measurement technique were that the impacting ice ball obscured the data field immediately surrounding the point of contact, and that both ice and glass fracture events occurred within 100 μs, which was not resolvable at the chosen frame rate. Increasing the frame rate and imaging the back surface of the module at impact could address these issues. Applications for these data include validating computational models of hail impacts, identifying the natural frequencies of a module, and identifying damage initiation mechanisms.
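The reported square-root scaling can be written as δ_peak = k·√E and fit by one-parameter least squares, as in the sketch below; the energy/deflection pairs are made-up placeholders, not measurements from this work.

```python
# One-parameter least-squares fit of peak deflection vs sqrt(energy).
import numpy as np

E = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # incident energy, J (placeholder)
delta = np.array([1.4, 2.1, 2.9, 4.2, 5.8])    # peak deflection, mm (placeholder)

# delta = k * sqrt(E): linear least squares in the regressor sqrt(E)
x = np.sqrt(E)
k = np.dot(x, delta) / np.dot(x, x)
residual = delta - k * x
print(f"k = {k:.3f} mm/sqrt(J), rms residual = {residual.std():.3f} mm")
```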
High-throughput image segmentation of atomic-resolution electron microscopy data poses an ongoing challenge for materials characterization. In this paper, we investigate the application of the polyhedral template matching (PTM) method, a technique widely employed for visualizing three-dimensional (3D) atomistic simulations, to the analysis of two-dimensional (2D) atomic-resolution electron microscopy images. This technique is complementary to other atomic-resolution data reduction techniques, such as the centrosymmetry parameter, that use the measured atomic peak positions as the starting input. Furthermore, since the template matching process also yields a measure of the local rotation, the method can be used to segment images based on local orientation. We begin by presenting a 2D implementation of the PTM method suitable for atomic-resolution images. We then demonstrate the technique's application to atomic-resolution scanning transmission electron microscopy images of close-packed metals, providing examples of the analysis of twins and other grain boundaries in FCC gold and of martensite phases in 304L austenitic stainless steel. Finally, we discuss factors, such as positional errors in the image peak locations, that can affect the accuracy and sensitivity of the structural determinations.
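A minimal 2D analogue of the template-matching step, sketched below under the assumption that neighbor vectors can be consistently angle-sorted, aligns the measured neighbors of a peak to an ideal template by an optimal rotation (2D Kabsch/Procrustes) and scores the match by RMSD; it illustrates the general idea rather than the paper's implementation.

```python
# 2D template matching: optimal rotation + RMSD for angle-sorted points.
import numpy as np

def match_template(neighbors, template):
    """Return (rmsd, rotation angle in rad) for 2D neighbor vectors.

    The angle is only defined modulo the template's rotational
    symmetry (60 degrees for the 6-fold template below).
    """
    def sort_by_angle(p):
        return p[np.argsort(np.arctan2(p[:, 1], p[:, 0]))]
    P = sort_by_angle(np.asarray(neighbors, float))
    Q = sort_by_angle(np.asarray(template, float))
    P /= np.linalg.norm(P)                 # scale-normalize so only
    Q /= np.linalg.norm(Q)                 # shape is compared
    # optimal rotation via SVD of the 2x2 cross-covariance (Kabsch)
    U, _, Vt = np.linalg.svd(Q.T @ P)
    R = U @ Vt
    if np.linalg.det(R) < 0:               # forbid reflections
        U[:, -1] *= -1
        R = U @ Vt
    rmsd = np.sqrt(np.mean(np.sum((Q - P @ R.T) ** 2, axis=1)))
    return rmsd, np.arctan2(R[1, 0], R[0, 0])

# ideal 6-fold template for a close-packed plane
hex_template = np.array([[np.cos(a), np.sin(a)]
                         for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
# synthetic "measured" neighbors: template rotated 10 degrees plus noise
theta = np.radians(10)
Rm = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta), np.cos(theta)]])
noisy = hex_template @ Rm.T + 0.02 * np.random.default_rng(0).normal(size=(6, 2))
print(match_template(noisy, hex_template))
```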
To make design decisions, engineers may seek to identify acceptable regions of the design domain in a computationally efficient manner. A design is typically considered acceptable if its reliability with respect to parametric uncertainty exceeds the designer's desired level of confidence. Despite major advancements in reliability estimation and in design classification via decision boundary estimation, the current literature still lacks a design classification strategy that incorporates both parametric uncertainty and the desired design confidence. To address this gap, this work offers a novel interpretation of the acceptance region by defining the decision boundary as the hypersurface that isolates the designs exceeding a user-defined level of confidence under parametric uncertainty. This work addresses the construction of this novel decision boundary using computationally efficient algorithms originally developed for reliability analysis and decision boundary estimation. The proposed approach is verified on two physical examples from structural and thermal analysis using Support Vector Machines and Efficient Global Optimization-based contour estimation.
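A toy sketch of the confidence-based classification idea, using an arbitrary stand-in limit state rather than the paper's structural or thermal examples: estimate each design's reliability by Monte Carlo, label it acceptable when the reliability exceeds the target confidence, and train an SVM to approximate the resulting decision boundary.

```python
# Confidence-based design classification with a Monte Carlo reliability
# estimate and an SVM decision boundary.  The limit state g(x, p) is a
# hypothetical placeholder.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def reliability(x, n_mc=2000):
    """P(g >= 0) for design x with a normally distributed parameter p."""
    p = rng.normal(loc=1.0, scale=0.15, size=n_mc)    # parametric uncertainty
    g = 3.0 - p * (x[0] ** 2 + x[1] ** 2)             # stand-in limit state
    return np.mean(g >= 0.0)

confidence = 0.95                                      # designer's target
designs = rng.uniform(-2.0, 2.0, size=(300, 2))       # sampled design domain
labels = np.array([reliability(x) >= confidence for x in designs])

# the SVM decision boundary approximates the iso-confidence hypersurface
clf = SVC(kernel="rbf", gamma="scale").fit(designs, labels)
print("acceptable fraction:", labels.mean())
print("prediction at origin:", clf.predict([[0.0, 0.0]]))
```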