Phosphor thermometry has become an established remote sensing technique for measuring the temperature of surfaces and gas-phase flows. Typically, phosphors are excited by a light source (usually emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectrum or from the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic under x-ray excitation, the radiative decay lifetime was not investigated. The present study characterizes the temperature sensitivity of the decay lifetime of the phosphor Gd2O2S:Tb after excitation by a pulsed x-ray source. These results are compared to the decay lifetimes obtained when the phosphor is excited with a pulsed UV laser. Results show that the lifetime exhibits comparable temperature sensitivity under both excitation sources over the range 21 °C to 140 °C, measured in 20 °C increments. This work introduces a novel thermometry method for researchers employing x-rays for diagnostics.
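As a rough illustration of the lifetime analysis, the sketch below fits a single-exponential decay model I(t) = I0·exp(−t/τ) to a synthetic signal using a log-linear least-squares fit. The lifetime value and sampling parameters are hypothetical, and real Gd2O2S:Tb decays may require multi-exponential or windowed fits; this is only a minimal sketch of the idea.

```python
import math

def fit_lifetime(times, intensities):
    """Estimate the decay lifetime tau of I(t) = I0*exp(-t/tau)
    by linear least squares on ln(I) versus t (slope = -1/tau)."""
    ys = [math.log(i) for i in intensities]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    return -sxx / sxy  # slope of ln(I) vs t is -1/tau

# synthetic decay with a hypothetical 0.5 ms lifetime, sampled every 10 us
tau_true = 0.5e-3
ts = [k * 1.0e-5 for k in range(200)]
signal = [math.exp(-t / tau_true) for t in ts]
tau_est = fit_lifetime(ts, signal)
```

Once τ is calibrated against temperature (here, over 21 °C to 140 °C), an inverse lookup of the fitted lifetime yields the surface temperature.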
We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders, and skip connections and trained with multifidelity data. Besides requiring significantly fewer trainable parameters than equivalent fully connected networks, encoder, decoder, encoder-decoder, or decoder-encoder architectures can learn mappings between inputs and outputs of arbitrary dimensionality. We demonstrate their accuracy when trained on a few high-fidelity and many low-fidelity data generated from models ranging from one-dimensional functions to two-dimensional Poisson equation solvers. We finally discuss a number of implementation choices that improve the reliability of the uncertainty estimates generated by Monte Carlo DropBlocks, and compare uncertainty estimates among low-, high-, and multifidelity approaches.
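To make the parameter-count claim concrete, the sketch below compares the trainable parameters of a single 3×3 convolutional layer against a fully connected layer realizing a mapping between the same input and output sizes. The layer sizes are hypothetical and chosen only for illustration.

```python
def conv2d_params(c_in, c_out, k):
    """Trainable parameters of a 2D convolution: weights plus one bias per output channel."""
    return c_out * (c_in * k * k + 1)

def dense_params(n_in, n_out):
    """Trainable parameters of a fully connected layer: weights plus biases."""
    return n_out * (n_in + 1)

# hypothetical layer mapping a 64x64 single-channel field to 16 feature maps
conv = conv2d_params(1, 16, 3)         # 160 parameters
dense = dense_params(64 * 64, 16 * 64 * 64)  # 268,500,992 parameters
```

The convolutional layer's cost is independent of the field resolution, which is why these architectures scale to inputs and outputs of arbitrary dimensionality.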
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
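As background, breakdown in a Townsend experiment is governed by the self-sustainment criterion γ(e^{αd} − 1) = 1. The sketch below solves this criterion numerically using a common exponential fit for the first ionization coefficient, α/p = A·exp(−B/(E/p)), with rough air-like coefficients; the quantum photoemission and 0D discharge models of the study [3], [4] are far more detailed than this illustration.

```python
import math

# Illustrative Townsend coefficients, roughly appropriate for air
# (A in 1/(cm*Torr), B in V/(cm*Torr)); assumed values, not fitted data.
A, B = 15.0, 365.0

def townsend_criterion(E, p, d, gamma):
    """gamma*(exp(alpha*d) - 1) - 1; positive once the discharge is self-sustained."""
    alpha = A * p * math.exp(-B * p / E)  # first ionization coefficient, 1/cm
    if alpha * d > 50.0:                  # avoid overflow; clearly self-sustained
        return 1.0
    return gamma * math.expm1(alpha * d) - 1.0

def breakdown_field(p, d, gamma, lo=1.0, hi=1.0e6):
    """Bisection on E (V/cm): the criterion increases monotonically with E."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if townsend_criterion(mid, p, d, gamma) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical conditions: 1 atm, 1 mm gap, secondary emission coefficient 0.01
E_bd = breakdown_field(p=760.0, d=0.1, gamma=0.01)  # V/cm
V_bd = E_bd * 0.1                                   # breakdown voltage, V
```

Laser photoemission effectively seeds additional electrons at the cathode, which in such models lowers the voltage or delay at which the avalanche becomes self-sustained.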
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own very closely spaced towers avoid these disadvantages but create a significant one of their own: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to designing a layout that maximizes the useful directions and minimizes the losses and stress at the others. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions of paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with a hub-to-hub separation of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
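The geometric intuition behind the wind-direction dependence can be sketched with a top-hat wake overlap model: for 1.5 D hub spacing, compute what fraction of the downwind rotor disk the upstream wake covers as the wind rotates away from the on-design (perpendicular) direction. The top-hat wake and the expansion factor below are illustrative simplifications, not the LES model used in the study.

```python
import math

def wake_overlap_fraction(direction_deg, sep=1.5, expansion=1.0):
    """Fraction of the downwind rotor disk covered by the upstream wake.

    direction_deg: wind direction measured from the on-design direction
                   (perpendicular to the line connecting the two towers).
    sep:           hub-to-hub spacing in rotor diameters (1.5 D here).
    expansion:     wake diameter in rotor diameters (1.0 = no growth; a
                   hypothetical simplification -- real wakes expand downstream).
    """
    r_rotor = 0.5                    # rotor radius in diameters
    r_wake = 0.5 * expansion
    # lateral offset between the wake centerline and the downwind hub
    c = sep * math.cos(math.radians(direction_deg))
    if c >= r_rotor + r_wake:        # wake misses the rotor entirely
        return 0.0
    if c <= abs(r_wake - r_rotor):   # one circle contains the other
        return min(r_wake, r_rotor) ** 2 / r_rotor ** 2
    # lens area of two intersecting circles
    d1 = (c * c + r_rotor ** 2 - r_wake ** 2) / (2.0 * c)
    d2 = c - d1
    area = (r_rotor ** 2 * math.acos(d1 / r_rotor)
            - d1 * math.sqrt(r_rotor ** 2 - d1 ** 2)
            + r_wake ** 2 * math.acos(d2 / r_wake)
            - d2 * math.sqrt(r_wake ** 2 - d2 ** 2))
    return area / (math.pi * r_rotor ** 2)

# with a modest (assumed) wake expansion, 26.6 deg still misses the
# downwind rotor, while 45.0 deg begins to clip it
f26 = wake_overlap_fraction(26.6, expansion=1.3)
f45 = wake_overlap_fraction(45.0, expansion=1.3)
```

This crude geometry is consistent with the trend reported above: little interaction at 26.6° and significant rotor-wake interaction by 45.0°.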
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP). At WIPP, these containers would be placed in underground disposal rooms, which will naturally close and compact the containers closer to one another over several centuries. This report details simulations to predict the final container configuration as an input to nuclear criticality assessments. Each container was discretely modeled, including the plywood and stainless steel pipe inside the 55-gallon drum, in order to capture its complex mechanical behavior. Although these high-fidelity simulations were computationally intensive, several different material models were considered in an attempt to reasonably bound the horizontal and vertical compaction percentages. When exceptionally strong materials were used for the containers, the horizontal and vertical closure respectively stabilized at 43.9 % and 93.7 %. At the other extreme, when the containers completely degraded and the clay seams between the salt layers were glued, the horizontal and vertical closure reached respective final values of 48.6 % and 100 %.
Mega-ampere class pulsed power machines drive intense currents into small volumes to study high-energy-density environments. Power loss during these events is a difficult yet critical problem to solve. For example, facilities such as Sandia National Laboratories’ Z machine experience meaningful power loss, which can be linked to non-linear ohmic heating at high currents (i.e., 26 MA on Z) leading to thermal desorption of contaminants and subsequent shunt plasma formation. Characterizing and understanding this type of thermal desorption is key to the design optimizations necessary to minimize current loss, which will be even more important for next-generation pulsed power. This type of characterization requires the ability to identify analytes and determine their concentrations with nanosecond resolution, given that the pulse width of Z is on the order of 100 ns. This report summarizes progress on a small exploratory project investigating options to meet this challenge using mass spectrometry. The main focus of these efforts was an Energy and Velocity Analyzer for Distributions of Electric Rockets, used to determine how quickly transient data could be resolved. This probe combines an electrostatic analyzer with a Wien velocity filter (ExB) to obtain ion energy and velocity distributions. Primary results indicate that significant additional work is needed to demonstrate a nanosecond-time-scale mass spectrometer for this application, and also highlight that alternative detection methods, such as laser-based diagnostics, should be considered to meet the need for ultra-fast detection.
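The mass-resolving principle of such a probe can be sketched as follows: the Wien filter passes only ions with velocity v = E/B, and combined with the energy-per-charge selected by the electrostatic analyzer this fixes the mass-per-charge, m/q = 2(E_k/q)/v². The field strengths and analyzer setting below are hypothetical values chosen only to show the arithmetic.

```python
# Physical constants
Q_E = 1.602176634e-19    # elementary charge, C
AMU = 1.66053906660e-27  # atomic mass unit, kg

# Hypothetical instrument settings (not measured values)
E_FIELD = 2.0e4          # Wien filter electric field, V/m
B_FIELD = 0.1            # Wien filter magnetic field, T
ENERGY_PER_CHARGE = 1000.0  # electrostatic analyzer pass setting, V

# The Wien filter transmits ions whose electric and magnetic forces balance
v_pass = E_FIELD / B_FIELD                  # selected velocity, m/s

# The ESA fixes kinetic energy per charge; together they fix mass per charge
m_over_q = 2.0 * ENERGY_PER_CHARGE / v_pass ** 2   # kg/C
mass_amu = m_over_q * Q_E / AMU                    # assuming singly charged ions
```

Sweeping either field while holding the other fixed scans the transmitted mass, which is why the time needed to step through a sweep sets the effective temporal resolution of the diagnostic.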
Integrated computational materials engineering (ICME) models have become a crucial building block for modern materials development, relieving heavy reliance on experiments and significantly accelerating the materials design process. However, ICME models are also computationally expensive, particularly with respect to time integration for dynamics, which hinders the study of statistical ensembles and thermodynamic properties of large systems over long time scales. To alleviate this computational bottleneck, we propose to model the evolution of statistical microstructure descriptors as a continuous-time stochastic process using a non-linear Langevin equation, where the probability density function (PDF) of the descriptors, which are also the quantities of interest (QoIs), is governed by the Fokker-Planck equation. We discuss how to calibrate the drift and diffusion terms of the Fokker-Planck equation from theoretical and computational perspectives. The calibrated Fokker-Planck equation can then be used as a stochastic reduced-order model to simulate the evolution of the statistical microstructure descriptor PDF. We demonstrate the proposed methodology in three ICME models: kinetic Monte Carlo, phase field, and molecular dynamics simulations.
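A minimal sketch of the underlying idea, assuming a one-dimensional nonlinear Langevin equation with constant diffusion: integrate the SDE with the Euler-Maruyama scheme and compare the sampled statistics against the known stationary Fokker-Planck PDF. The drift, noise level, and step sizes here are hypothetical, not those calibrated in the study.

```python
import math
import random

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
    """Integrate dx = drift(x) dt + sigma dW with the Euler-Maruyama scheme."""
    x = x0
    path = []
    sqdt = math.sqrt(dt)
    for _ in range(n_steps):
        x = x + drift(x) * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative nonlinear Langevin equation: dx = -x^3 dt + sigma dW.
# Its stationary Fokker-Planck PDF is p(x) ~ exp(-x^4 / (2 sigma^2)),
# with variance ~0.478 for sigma = 1 (sqrt(2)*Gamma(3/4)/Gamma(1/4)).
rng = random.Random(0)
path = euler_maruyama(lambda x: -x ** 3, sigma=1.0, x0=0.0,
                      dt=0.01, n_steps=200_000, rng=rng)
samples = path[20_000:]               # discard burn-in transient
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

In the study's setting, the calibrated drift and diffusion replace the toy `-x**3` term, and the descriptor PDF evolves under the corresponding Fokker-Planck equation instead of requiring full ICME time integration.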
The Arroyo Seco Improvement Program (ASIP) is intended to provide active channel improvements and stream zone management activities that will reduce current flood and erosion risk while providing additional and improved habitat for critical species that may use the Arroyo Seco at Sandia National Laboratories, California (SNL/CA). The SNL/CA facility is operated by National Technology and Engineering Solutions of Sandia, LLC (NTESS) under a contract with the U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA). The DOE/NNSA's Sandia Field Office (SFO) oversees the operations of the site.
This report represents completion of milestone deliverable M2SF-23SN010309082, Annual Status Update for OWL, due on November 30, 2022. It provides the status of fiscal year 2022 (FY2022) updates to the Online Waste Library (OWL).
We present the Q Framework, a verification framework used at Sandia National Laboratories. Q is a collection of tools for verifying safety and correctness properties of high-consequence embedded systems; it captures the structure and compositionality of system specifications written as state machines in order to prove system-level properties about their implementations. Q consists of two main workflows: 1) compilation of temporal properties and state machine models (such as those made with Stateflow) into SMV models, and 2) generation of ACSL specifications for the C code implementation of the state machine models. Together these establish a refinement relation between the state machine model and its C code implementation, with proofs of properties checked by NuSMV (for SMV models) and Frama-C (for ACSL specifications).
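The kind of property checking delegated to NuSMV can be illustrated, very loosely, with a toy explicit-state invariant check: enumerate every reachable state of a small machine and verify a safety property in each. The traffic-light machine and invariant below are hypothetical examples for illustration, not Q Framework artifacts or actual SMV semantics.

```python
from collections import deque

# Toy state machine (hypothetical): states are (light, walk_signal) pairs.
# Safety invariant: the walk signal is never on while the light is green.
TRANSITIONS = {
    ("green", "off"):  [("yellow", "off")],
    ("yellow", "off"): [("red", "off"), ("red", "on")],
    ("red", "off"):    [("green", "off")],
    ("red", "on"):     [("red", "off")],
}

def check_invariant(initial, transitions, invariant):
    """Breadth-first reachability: return a violating state, or None if safe."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

violation = check_invariant(
    ("green", "off"), TRANSITIONS,
    lambda s: not (s[0] == "green" and s[1] == "on"))
# violation is None: the invariant holds in every reachable state
```

A symbolic model checker like NuSMV does the analogous search over the SMV model's state space, but symbolically and for full temporal-logic properties rather than simple invariants.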