Here we report on AlGaN high electron mobility transistor (HEMT)-based logic development, using combined enhancement- and depletion-mode transistors to fabricate inverters with operation from room temperature up to 500°C. Our development approach included: (a) characterizing temperature-dependent carrier transport for different AlGaN HEMT heterostructures, (b) developing a gate metal scheme suitable for high-temperature use, and (c) over-temperature testing of discrete devices and inverters. Hall mobility data (from 30°C to 500°C) revealed that the reference GaN-channel HEMT experienced a 6.9x reduction in mobility, whereas the AlGaN-channel HEMTs experienced about a 3.1x reduction. Furthermore, a greater aluminum contrast between the barrier and channel enabled higher carrier densities in the two-dimensional electron gas for all temperatures. The combination of reduced variation in mobility with temperature and high sheet carrier concentration showed that an Al-rich AlGaN-channel HEMT with a high barrier-to-channel aluminum contrast is the best option for an extreme temperature HEMT design. Three gate metal stacks were selected for low resistivity, high melting point, low thermal expansion coefficient, and high expected barrier height. The impact of thermal cycling was examined through electrical characterization of samples measured before and after rapid thermal anneal. The 200-nm tungsten gate metallization was the top performer, with minimal reduction in drain current, a slightly positive threshold voltage shift, and about an order of magnitude advantage over the other gates in on-to-off current ratio. After incorporating the tungsten gate metal stack in device fabrication, characterization of transistors and inverters from room temperature up to 500°C was performed. The enhancement-mode (e-mode) devices' resistance started increasing at about 200°C, resulting in drain current degradation.
This phenomenon was not observed in depletion-mode (d-mode) devices but highlights a challenge for inverters in an e-mode driver and d-mode load configuration.
Hydrocarbon polymers are used in a wide variety of practical applications. In the field of dynamic compression at extreme pressures, these polymers are used at several high energy density (HED) experimental facilities. One of the most common polymers is poly(methyl methacrylate) or PMMA, also sold as Plexiglas® or Lucite®. Here, we present high-fidelity, hundreds-of-GPa-range experimental shock compression data measured on Sandia's Z machine. We extend the principal shock Hugoniot for PMMA to more than threefold compression up to 650 GPa and re-shock Hugoniot states up to 1020 GPa in an off-Hugoniot regime, where experimental data are even sparser. These data can be used to put additional constraints on tabular equation of state (EOS) models. The present results provide clear evidence for the need to re-examine the existing tabular EOS models for PMMA above ∼120 GPa as well as perhaps revisit EOSs of similar hydrocarbon polymers commonly used in HED experiments investigating dynamic compression, hydrodynamics, or inertial confinement fusion.
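The Hugoniot states referenced above follow from the standard Rankine-Hugoniot jump conditions, which relate measured shock and particle velocities to the pressure and density behind a single steady shock. A minimal sketch, using a nominal literature density for PMMA and illustrative (not measured) velocities:

```python
# Rankine-Hugoniot jump conditions for a single steady shock.
# Velocities below are illustrative, not the paper's measured PMMA data.

RHO0 = 1.186  # g/cm^3, nominal ambient PMMA density

def hugoniot_state(us_km_s, up_km_s, rho0=RHO0):
    """Return (pressure_GPa, density_g_cc) behind the shock front."""
    # Momentum conservation: P - P0 = rho0 * Us * up   (P0 ~ 0 here)
    # Units work out directly: g/cm^3 * (km/s)^2 == GPa
    pressure = rho0 * us_km_s * up_km_s
    # Mass conservation: rho = rho0 * Us / (Us - up)
    density = rho0 * us_km_s / (us_km_s - up_km_s)
    return pressure, density

p, rho = hugoniot_state(20.0, 13.0)
print(f"P = {p:.1f} GPa, compression = {rho / RHO0:.2f}x")
```

Pairs of (Us, up) measurements traced out in this way define the principal Hugoniot curve that tabular EOS models are constrained against.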
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
We report on a two-step technique for post-bond III-V substrate removal involving precision mechanical milling and selective chemical etching. We show results on GaAs, GaSb, InP, and InAs substrates and from mm-scale chips to wafers.
The structure-property linkage is, along with the process-structure linkage, one of the two most important relationships in materials science, especially for metals and polycrystalline alloys. The stochastic nature of microstructures demands a robust approach to reliably address this linkage, and uncertainty quantification (UQ) therefore plays an important role that cannot be ignored. To probe the structure-property linkage, many multi-scale integrated computational materials engineering (ICME) tools have been proposed and developed over the last decade to accelerate the material design process in the spirit of the Materials Genome Initiative (MGI), notably the crystal plasticity finite element method (CPFEM) and phase-field simulations. Machine learning (ML) methods, including deep learning and physics-informed/-constrained approaches, can also be applied to approximate the computationally expensive ICME models, allowing one to navigate both structure and property spaces efficiently. Since UQ also plays a crucial role in verification and validation for both ICME and ML models, it is important to include UQ in the picture. In this paper, we summarize a few of our recent research efforts addressing UQ aspects of homogenized properties using CPFEM in a big-picture context.
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP).
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the temperature sensitivity of the decay lifetime of the phosphor Gd2O2S:Tb after excitation from a pulsed x-ray source. These results are compared to the lifetime decays found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable sensitivity to temperature for both excitation sources over a temperature range from 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
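Lifetime-based thermometry of this kind reduces to estimating the decay constant τ from the post-excitation emission, I(t) ≈ A·exp(−t/τ), and mapping the fitted τ to temperature via a calibration. A minimal sketch with synthetic, noiseless data (the lifetime value is illustrative, not a measured Gd2O2S:Tb property):

```python
import numpy as np

# Synthetic single-exponential decay trace; tau_true is an assumed value.
tau_true = 500e-6                      # s, illustrative decay lifetime
t = np.linspace(0.0, 2e-3, 200)        # s, sampling window after the pulse
intensity = 3.0 * np.exp(-t / tau_true)

# Log-linear least squares: ln I = ln A - t / tau, so the slope is -1/tau.
slope, intercept = np.polyfit(t, np.log(intensity), 1)
tau_fit = -1.0 / slope
print(f"fitted lifetime: {tau_fit * 1e6:.1f} us")
```

With real detector data one would fit only the portion of the trace above the noise floor, or use a nonlinear fit that accommodates a baseline offset.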
The use of high-fidelity, real-time physics engines of nuclear power plants (NPPs) in a cybersecurity training platform is feasible but requires additional research and development. This paper discusses recent developments for cybersecurity training leveraging open-source NPP simulators and network emulation tools. The paper details key elements of currently available environments for cybersecurity training. The key elements assessed for each environment are: (i) management and student user interfaces, (ii) pre-developed baseline and cyber-attack effects, and (iii) capture of student results and performance. Representative and dynamic environments require integration of physics models, network emulation, commercial off-the-shelf hardware, and technologies that connect these together. Further, orchestration tools for managing the holistic set of models and technologies decrease setup and maintenance time and allow for a click-to-deploy capability. The paper describes and discusses the Sandia-developed environment and open-source tools that incorporate these technologies with click-to-deploy capability. This environment was deployed for delivery of an undergraduate/graduate course with the University of São Paulo, Brazil, in July 2022 and has been used to investigate new concepts involving Cyber-STPA analysis. This paper captures the identified future improvements, development activities, and lessons learned from the course.
A large-scale numerical computation of five wind farms was performed as a part of the American WAKE experimeNt (AWAKEN). This high-fidelity computation used the ExaWind/AMR-Wind LES solver to simulate a 100 km × 100 km domain containing 541 turbines under unstable atmospheric conditions matching previous measurements. The turbines were represented by Joukowski and OpenFAST coupled actuator disk models. Results of this qualitative comparison illustrate the interactions of wind farms with large-scale ABL structures in the flow, as well as the extent of downstream wake penetration in the flow and blockage effects around wind farms.
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own, very closely spaced towers avoid these disadvantages but create a significant disadvantage: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to designing a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions from paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with hub-to-hub separation distances of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
Springs play important roles in many mechanisms, including critical safety components employed by Sandia National Laboratories. Due to the nature of these safety component applications, serious concerns arise if their springs become damaged or unhook from their posts. Finite element analysis (FEA) is one technique employed to ensure such adverse scenarios do not occur. Ideally, a very fine spring mesh would be used to make the simulation as accurate as possible with respect to mesh convergence. While this method does yield the best results, it is also the most time-consuming and therefore most computationally expensive process. In some situations, reduced order models (ROMs) can be adopted to lower this cost at the expense of some accuracy. This study quantifies the error present between a fine, solid element mesh and a reduced order spring beam model, with the aim of finding the best balance of a low computational cost and high accuracy analysis. Two types of analyses were performed: a quasi-static displacement-controlled pull and a haversine shock. The first used implicit methods to examine basic properties as the elastic limit of the spring material was reached. This analysis was also used to study the convergence and residual tolerance of the models. The second used explicit dynamics methods to investigate spring dynamics and stress/strain properties, as well as examine the impact of the chosen friction coefficient. Both the implicit displacement-controlled pull test and the explicit haversine shock test showed good agreement between the hexahedral and beam meshes. The results were especially favorable when comparing reaction force and stress trends and maxima. However, the equivalent plastic strain (EQPS) results were not as favorable. This could be due to differences in how the shear stress is calculated in the two models, and future studies will need to investigate the exact causes.
The data indicate that the beam model may be less likely to correctly predict spring failure, defined as inappropriate application of tensile and/or compressive forces to a larger assembly. Additionally, this study was able to quantify the computational cost advantage of using a reduced order model beam mesh. In the transverse haversine shock case, the hexahedral mesh took over three days with 228 processors to solve, compared to under 10 hours for the ROM using just a single processor. Depending on the required use case for the results, using the beam mesh will significantly improve the speed of workflows, especially when integrated into larger safety component models. However, appropriate use of the ROM should carefully balance these optimized run times with its reduction in accuracy, especially when examining spring failure and outputting variables such as equivalent plastic strain. Current investigations are broadening the scope of this work to include a validation study comparing the beam ROM to physical testing data.
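The quoted run times translate into a core-hour comparison as follows (a back-of-envelope bound using only the wall-clock figures stated above):

```python
# Core-hour comparison from the quoted run times.
# "over three days" and "under 10 hours" make this a lower bound on the ratio.
hex_hours = 3 * 24          # wall-clock hours, hexahedral mesh (lower bound)
hex_procs = 228             # processors used for the hexahedral solve
rom_hours = 10              # wall-clock hours, beam ROM (upper bound)
rom_procs = 1               # single processor for the ROM

hex_cpu_hours = hex_hours * hex_procs
rom_cpu_hours = rom_hours * rom_procs
ratio = hex_cpu_hours / rom_cpu_hours
print(f"core-hour ratio >= {ratio:.0f}x")
```

So in core-hours the ROM is cheaper by at least three orders of magnitude, which is the quantity that matters when the spring model is embedded in a larger assembly simulation.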
Uncertainty quantification (UQ) plays a critical role in verifying and validating forward integrated computational materials engineering (ICME) models. Among numerous ICME models, the crystal plasticity finite element method (CPFEM) is a powerful tool that enables one to assess microstructure-sensitive behaviors and thus bridge material structure to performance. Nevertheless, given the form of its constitutive models and the randomness of microstructures, CPFEM is exposed to both aleatory uncertainty (microstructural variability) and epistemic uncertainty (parametric and model-form error). Therefore, the observations are often corrupted by the microstructure-induced uncertainty, as well as the ICME approximation and numerical errors. In this work, we highlight several ongoing research topics in UQ, optimization, and machine learning applications for CPFEM to efficiently solve forward and inverse problems. The first aspect of this work addresses the UQ of constitutive models for epistemic uncertainty, including both phenomenological and dislocation-density-based constitutive models, where the quantities of interest (QoIs) are related to the initial yield behaviors. We apply a stochastic collocation (SC) method to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different types of crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). The second aspect of this work addresses the aleatory and epistemic uncertainty with multiple mesh resolutions and multiple constitutive models by the multi-index Monte Carlo method, where the QoI is also related to homogenized materials properties.
We present a unified approach that accounts for various fidelity parameters, such as mesh resolutions, integration time-steps, and constitutive models simultaneously. We illustrate how multilevel sampling methods, such as multilevel Monte Carlo (MLMC) and multi-index Monte Carlo (MIMC), can be applied to assess the impact of variations in the microstructure of polycrystalline materials on the predictions of macroscopic mechanical properties. The third aspect of this work addresses the crystallographic texture study of a single void in a cube. Using a parametric reduced-order model (also known as parametric proper orthogonal decomposition) with a global orthonormal basis as a model reduction technique, we demonstrate that the localized dynamic stress and strain fields can be predicted as a spatiotemporal problem.
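The multilevel sampling idea described above can be sketched with a toy model chain, in which cheap synthetic "level" models stand in for CPFEM solves at increasing mesh resolution; all functions, noise magnitudes, and sample counts here are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_level(level, n):
    """Correlated samples of (Q_l, Q_{l-1}) on the same random inputs.

    The discretization error shrinks with level (noise ~ 2^-(level+1)),
    mimicking a hierarchy of mesh resolutions.
    """
    x = rng.standard_normal(n)  # stand-in for a random microstructure draw
    q_fine = np.sin(x) + 2.0 ** -(level + 1) * rng.standard_normal(n)
    if level == 0:
        return q_fine, np.zeros(n)
    q_coarse = np.sin(x) + 2.0 ** -level * rng.standard_normal(n)
    return q_fine, q_coarse

def mlmc_estimate(n_per_level):
    """Telescoping sum: E[Q_0] + sum_l E[Q_l - Q_{l-1}]."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        fine, coarse = sample_level(level, n)
        total += np.mean(fine - coarse)
    return total

# Many cheap coarse samples, few expensive fine ones.
print(f"MLMC estimate of E[Q]: {mlmc_estimate([4000, 1000, 250]):.3f}")
```

The key point is that the level-wise corrections Q_l − Q_{l−1} have small variance because both terms share the same random input, so only a handful of fine-level (expensive) samples are needed; MIMC extends the same telescoping idea to several fidelity axes (mesh, time step, constitutive model) simultaneously.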
We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders, and skip connections and trained with multifidelity data. Besides requiring significantly fewer trainable parameters than equivalent fully connected networks, encoder, decoder, encoder-decoder, or decoder-encoder architectures can learn mappings from inputs to outputs of arbitrary dimensionality. We demonstrate their accuracy when trained on a few high-fidelity and many low-fidelity data generated from models ranging from one-dimensional functions to Poisson equation solvers in two dimensions. We finally discuss a number of implementation choices that improve the reliability of the uncertainty estimates generated by Monte Carlo DropBlocks, and compare uncertainty estimates among low-, high-, and multifidelity approaches.
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Laros, James H.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted on a wave tank model of a bottom-raised oscillating surge wave energy converter (OSWEC) in regular waves. The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave maker's frequency range. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study, in which system parameters, including damping and added mass values, are varied. The likely contributors to the differences between predictions and experiments are attributed to tank reflections, standing waves that can occur in long, narrow wave tanks, as well as the thin plate assumption employed in the analytical approach.
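The pitch resonance behavior discussed above follows the standard single-degree-of-freedom frequency-domain model of a hinged flap, in which the torsion springs shift the natural frequency. A sketch with illustrative coefficients (not the experimental or analytically derived values from the study, and treating added mass and damping as frequency-independent for simplicity):

```python
import numpy as np

# 1-DOF pitch model: (I + A) theta'' + B theta' + (C + k) theta = M exp(i w t)
I_A = 120.0    # kg m^2, flap inertia plus added mass (assumed constant)
B_rad = 15.0   # N m s/rad, radiation (plus viscous) pitch damping
C_hyd = 300.0  # N m/rad, hydrostatic pitch stiffness
k_spr = 180.0  # N m/rad, torsion-spring stiffness added at the hinge

# Springs tune the undamped pitch natural frequency:
omega_n = np.sqrt((C_hyd + k_spr) / I_A)

def pitch_rao(omega, m_exc=1.0):
    """Pitch amplitude per unit excitation torque at wave frequency omega."""
    denom = np.sqrt((C_hyd + k_spr - I_A * omega**2) ** 2 + (B_rad * omega) ** 2)
    return m_exc / denom

print(f"tuned natural frequency: {omega_n:.2f} rad/s")
```

Near omega_n the stiffness and inertia terms cancel, so the predicted response is controlled almost entirely by the damping value, which is why the pitch response close to resonance is so sensitive to the damping and added-mass assumptions varied in the sensitivity study.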
Computational simulation allows scientists to explore, observe, and test physical regimes thought to be unattainable. Validation and uncertainty quantification play crucial roles in extrapolating the use of physics-based models. Bayesian analysis provides a natural framework for incorporating the uncertainties that undeniably exist in computational modeling. However, the ability to perform quality Bayesian and uncertainty analyses is often limited by the computational expense of first-principles physics models. In the absence of a reliable low-fidelity physics model, phenomenological surrogate or machine-learned models can be used to mitigate this expense; however, these data-driven models may not adhere to known physics or properties. Furthermore, the interactions of complex physics in high-fidelity codes lead to dependencies between quantities of interest (QoIs) that are difficult to quantify and capture when individual surrogates are used for each observable. Although this is not always problematic, predicting multiple QoIs with a single surrogate preserves valuable insights regarding the correlated behavior of the target observables and maximizes the information gained from available data. A method of constructing a Gaussian Process (GP) that emulates multiple QoIs simultaneously is presented. As an exemplar, we consider Magnetized Liner Inertial Fusion (MagLIF), a fusion concept that relies on the direct compression of magnetized, laser-heated fuel by a metal liner to achieve thermonuclear ignition. Magnetohydrodynamic (MHD) codes calculate diagnostics to infer the state of the fuel during experiments, which cannot be measured directly. The calibration of these diagnostic metrics is complicated by sparse experimental data and the expense of high-fidelity neutron transport models. The development of an appropriate surrogate raises long-standing issues in modeling and simulation, including calibration, validation, and uncertainty quantification.
The performance of the proposed multi-output GP surrogate model, which preserves correlations between QoIs, is compared to the standard single-output GP for a 1D realization of the MagLIF experiment.
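One common way to build a multi-output GP that preserves correlations between QoIs is an intrinsic coregionalization model, cov((x, i), (x′, j)) = B[i, j]·k(x, x′), where B couples the outputs and k is a shared input kernel. The sketch below uses an assumed kernel, an assumed coregionalization matrix B, and toy data; it is not the MagLIF surrogate itself:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel matrix between 1D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Two correlated toy QoIs driven by the same latent function.
x_train = np.linspace(0.0, 1.0, 12)
f = np.sin(2 * np.pi * x_train)
y = np.concatenate([f, 0.5 * f + 0.1])     # stacked outputs [QoI1; QoI2]

B = np.array([[1.0, 0.5],                  # inter-output covariance (assumed)
              [0.5, 0.3]])
# Joint covariance over all (input, output) pairs, with a small jitter term.
K = np.kron(B, rbf(x_train, x_train)) + 1e-6 * np.eye(2 * x_train.size)

x_test = np.array([0.25])
k_star = np.kron(B, rbf(x_train, x_test))  # shape (24, 2)
mean = k_star.T @ np.linalg.solve(K, y)    # joint predictive mean, both QoIs
print(f"QoI1(0.25) ~ {mean[0]:.3f}, QoI2(0.25) ~ {mean[1]:.3f}")
```

Because both outputs share K, observations of one QoI inform predictions of the other, which is the property that a collection of independent single-output GPs discards.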