Previous research has provided strong evidence that CO2 and H2O gasification reactions can provide non-negligible contributions to the consumption rates of pulverized coal (pc) char during combustion, particularly in oxy-fuel environments. Fully quantifying the contribution of these gasification reactions has proven difficult, owing to the dearth of knowledge of gasification rates at the elevated particle temperatures associated with typical pc char combustion processes, as well as the complex interaction of oxidation and gasification reactions. Gasification reactions tend to become more important at higher char particle temperatures (because of their high activation energy), and they tend to reduce pc oxidation through their endothermicity (i.e., cooling effect). The work reported here attempts to quantify the influence of the CO2 gasification reaction in a rigorous manner by combining experimental measurements of the particle temperatures and consumption rates of size-classified pc char particles in tailored oxy-fuel environments with simulations from a detailed reacting porous particle model. The results demonstrate that a specific gasification reaction rate relative to the oxidation rate (within approximately ±20% of the pre-exponential value) is consistent with the experimentally measured char particle temperatures and burnout rates in oxy-fuel combustion environments. Conversely, the results also show, in agreement with past calculations, that it is extremely difficult to construct a set of kinetics that does not substantially overpredict the particle temperature increase in strongly oxygen-enriched N2 environments. This latter result is believed to stem from deficiencies in standard oxidation mechanisms that fail to account for falloff in char oxidation rates at high temperatures.
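The competing temperature dependence described above can be illustrated with a minimal Arrhenius sketch: because the gasification reaction carries a much higher activation energy than oxidation, its relative importance grows steeply with particle temperature. The rate parameters below are purely hypothetical stand-ins, not the kinetics fitted in this work:

```python
import math

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)); Ea in J/mol, T in K."""
    R = 8.314
    return A * math.exp(-Ea / (R * T))

# Illustrative (hypothetical) kinetic parameters: gasification has a much
# higher activation energy than oxidation.
A_ox, Ea_ox = 1.0e5, 1.2e5    # char + O2  (assumed values)
A_ga, Ea_ga = 1.0e8, 2.5e5    # char + CO2 (assumed values)

# The gasification/oxidation rate ratio rises sharply with temperature.
for T in (1400.0, 1800.0, 2200.0):
    ratio = arrhenius(A_ga, Ea_ga, T) / arrhenius(A_ox, Ea_ox, T)
    print(f"T = {T:.0f} K: gasification/oxidation rate ratio = {ratio:.3g}")
```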
The Information Harm Triangle (IHT) is a novel approach that adapts intuitive engineering concepts to simplify defense in depth for instrumentation and control (I&C) systems at nuclear power plants. This approach combines digital harm, real-world harm, and unsafe control actions (UCAs) into a single graph named the “Information Harm Triangle.” The IHT is based on the premise that the consequences of cyberattacks targeting I&C systems can be expressed in terms of two orthogonal components: one representing the magnitude of data harm (DH) (i.e., digital information harm) and one representing physical information harm (PIH) (i.e., real-world harm, e.g., an inadvertent plant trip). The severity of the physical consequence is the aspect of risk that is of concern. The sum of these two components represents the total information harm. The IHT intuitively informs risk-informed cybersecurity strategies that employ independent measures that prevent, reduce, or mitigate DH or PIH. Another aspect of the IHT is that DH can result in cyber-initiated UCAs that produce severe physical consequences. The orthogonality of DH and PIH provides insights into designing effective defense in depth. The IHT can also represent cyberattacks that have the potential to impede, evade, or compromise countermeasures so that they cannot take appropriate action to reduce, stop, or mitigate the harm caused by such UCAs. Cyber-initiated UCAs transform DH into PIH.
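As a minimal illustration of the orthogonal-components idea, the total harm of two orthogonal components can be computed as their vector sum; the scales and values below are hypothetical, and the IHT itself does not prescribe this particular formula:

```python
import math

def total_information_harm(dh, pih):
    """Treat data harm (DH) and physical information harm (PIH) as
    orthogonal components and combine them as a vector sum.
    The harm scales here are hypothetical (e.g., normalized 0-1)."""
    return math.hypot(dh, pih)

# Example: a cyber-initiated unsafe control action converts some of the
# purely digital harm into real-world (physical) harm.
print(total_information_harm(0.6, 0.0))   # pure digital harm
print(total_information_harm(0.3, 0.5))   # after a UCA transforms DH to PIH
```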
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own, very closely spaced towers avoid these disadvantages but create a significant disadvantage of their own: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to designing a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions of paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with a hub-to-hub separation distance of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
The DevOps movement, which aims to accelerate the continuous delivery of high-quality software, has taken a leading role in reshaping the software industry. Likewise, there is growing interest in applying DevOps tools and practices in the domains of computational science and engineering (CSE) to meet the ever-growing demand for scalable simulation and analysis. Translating insights from industry to research computing, however, remains an ongoing challenge; DevOps for science and engineering demands adaptation and innovation in those tools and practices. There is a need to better understand the challenges faced by DevOps practitioners in CSE contexts in bridging this divide. To that end, we conducted a participatory action research study to collect and analyze the experiences of DevOps practitioners at a major US national laboratory through the use of storytelling techniques. We share lessons learned and present opportunities for future investigation into DevOps practice in the CSE domain.
The Reynolds-averaged Navier–Stokes (RANS) equations remain a workhorse technology for simulating compressible fluid flows of practical interest. Due to model-form errors, however, RANS models can yield erroneous predictions that preclude their use on mission-critical problems. This work presents a data-driven turbulence modeling strategy aimed at improving RANS models for compressible fluid flows. The strategy outlined has three core aspects: (1) prediction of the discrepancy in the Reynolds stress tensor and turbulent heat flux via machine learning (ML), (2) estimation of uncertainties in ML model outputs via out-of-distribution detection, and (3) multi-step training strategies to improve feature-response consistency. Results are presented across a range of cases publicly available on NASA’s turbulence modeling resource involving wall-bounded flows, jet flows, and hypersonic boundary layer flows with cold walls. We find that one ML turbulence model is able to provide consistent improvements for numerous quantities of interest across all cases.
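A toy sketch of the first two aspects of such a strategy, using a linear discrepancy model and a Mahalanobis-distance out-of-distribution detector in place of the actual ML architecture; all features, targets, and thresholds are illustrative, not the models used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: local "flow features" mapped to a scalar
# stand-in for a Reynolds-stress discrepancy component.
X_train = rng.normal(size=(200, 3))
y_train = X_train @ np.array([0.5, -1.0, 0.2]) + 0.01 * rng.normal(size=200)

# (1) Ridge-regularized least squares for the discrepancy model.
lam = 1e-3
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(3), X_train.T @ y_train)

# (2) Mahalanobis-distance OOD detector built on the training features:
# queries far from the training distribution are flagged as untrustworthy.
mu = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train.T))

def predict_with_flag(x, threshold=5.0):
    d2 = (x - mu) @ cov_inv @ (x - mu)      # squared Mahalanobis distance
    return x @ w, bool(d2 > threshold)      # (discrepancy, OOD flag)

y_in, ood_in = predict_with_flag(np.zeros(3))
y_out, ood_out = predict_with_flag(np.array([10.0, 10.0, 10.0]))
print(ood_in, ood_out)   # in-distribution query vs far-from-training query
```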
Two-dimensional (2D) layered oxides have recently attracted wide attention owing to the strong coupling among charges, spins, lattice, and strain, which allows great flexibility and opportunities in structure design as well as multifunctionality exploration. In parallel, plasmonic hybrid nanostructures exhibit exotic localized surface plasmon resonance (LSPR), providing a broad range of applications in nanophotonic devices and sensors. A hybrid material platform combining the unique multifunctional 2D layered oxides and plasmonic nanostructures brings optical tuning to a new level. In this work, a novel self-assembled Bi2MoO6 (BMO) 2D layered oxide incorporated with plasmonic Au nanoinclusions has been demonstrated via a one-step pulsed laser deposition (PLD) technique. Comprehensive microstructural characterizations, including scanning transmission electron microscopy (STEM), differential phase contrast (DPC) imaging, and STEM tomography, have demonstrated the high epitaxial quality and particle-in-matrix morphology of the BMO-Au nanocomposite film. DPC-STEM imaging clarifies the magnetic domain structures of the BMO matrix. Three different BMO structures, including a layered supercell (LSC) and superlattices, have been revealed; this variety is attributed to the variable strain states throughout the BMO-Au film. Owing to the combination of plasmonic Au and the layered structure of BMO, the nanocomposite film exhibits a typical LSPR in the visible wavelength region and strong anisotropy in terms of its optical and ferromagnetic properties. This study opens a new avenue for developing novel 2D layered complex oxides incorporated with plasmonic metal or semiconductor phases, showing great potential for applications in multifunctional nanoelectronic devices.
This work investigates the low- and high-temperature ignition and combustion processes in the Engine Combustion Network Spray A flame, combining advanced optical diagnostics and large-eddy simulation (LES). Simultaneous high-speed (50 kHz) formaldehyde (CH2O) planar laser-induced fluorescence (PLIF) and line-of-sight OH* chemiluminescence imaging were used to measure the low- and high-temperature flame, both during ignition and during quasi-steady combustion. By tracking the cool flame at the laser sheet plane, the present experimental setup allows detection of distinct ignition spots and dynamic fluctuations of the lift-off length over time, which overcomes limitations of flame tracking with schlieren imaging [Sim et al., Proc. Combust. Inst. 38 (4) (2021) 5713–5721]. After significant development to improve LES prediction of the low- and high-temperature flame position, both during the ignition process and quasi-steady combustion, the simulations were analyzed to gain understanding of the mixture variance and how this variance affects formation/consumption of CH2O. Analysis of the high-temperature ignition period shows that a key improvement in the LES is the ability to predict heterogeneous ignition sites, not only in the head of the jet, but also in shear layers at the jet edge close to the position where flame lift-off eventually stabilizes. The LES analysis also shows concentrated pockets of CH2O, in the center of the jet and at 20 mm downstream of the injector (in regions where the equivalence ratio is greater than 6), that are of similar length scale and frequency as in the experiment (approximately 5–6 kHz). The periodic oscillations of CH2O match the frequency of pressure waves generated during auto-ignition and reflected within the constant-volume vessel throughout injection.
The ability of LES to capture the periodic appearance and destruction of CH2O is particularly important because these structures travel downstream and become rich premixed flames that affect soot production.
This study investigated the durability of four high-temperature coatings for use as a Gardon gauge foil coating. Failure modes and effects analysis has identified the Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high-intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity properties were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest rapid high-temperature cycling did not significantly impact coating optical properties and physical state. In contrast, prolonged exposure of coatings to high temperatures degraded coating optical properties and physical state. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6–24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provides the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest flux gauge foil coatings could benefit from long-duration high-temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high-flux and high-temperature applications.
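A small sketch of tracking optical stability with a figure of merit. Note the abstract does not define the FOM, so the absorptance-to-emissivity ratio used here is an assumption, as are the property values:

```python
def figure_of_merit(absorptance, emissivity):
    """One plausible FOM for a Gardon gauge foil coating: the ratio of
    solar absorptance to thermal emissivity. The actual FOM used at the
    NSTTF is not given in the abstract; this definition is an assumption."""
    return absorptance / emissivity

# Track FOM drift across a simulated aging exposure (illustrative values:
# absorptance falls and emissivity rises as the coating degrades).
baseline = figure_of_merit(0.95, 0.88)
aged = figure_of_merit(0.90, 0.91)
drift_pct = 100.0 * (aged - baseline) / baseline
print(f"FOM drift after aging: {drift_pct:.1f}%")
```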
Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.
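The warm-starting step amounts to behavior cloning: fitting a policy to (state, action) pairs generated by the classical guidance law before PPO fine-tuning begins. A minimal sketch with a hypothetical linear guidance law and a linear policy (the actual work uses 6-DOF flight dynamics and, presumably, a neural policy trained by gradient descent):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the classical guidance law being imitated (assumed form):
# maps a 6-DOF state vector to a control command.
def classical_guidance(state):
    K = np.array([[0.8, -0.2, 0.1, 0.0, 0.3, -0.1]])
    return K @ state

# Warm start via behavior cloning: supervised regression of the policy
# onto expert (state, action) pairs before any PPO updates.
states = rng.normal(size=(500, 6))
actions = np.array([classical_guidance(s) for s in states]).reshape(500, 1)

# Least-squares fit of a linear policy; a neural policy would minimize
# the same imitation loss with gradient descent.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

mse = float(np.mean((states @ W - actions) ** 2))
print(f"imitation MSE after warm start: {mse:.2e}")
# PPO fine-tuning would then start from W instead of a random policy.
```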
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and the Light Water Reactor Sustainability Program, have conducted testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 91.4 cm in depth) and more complex three-dimensional (circular cross sections of longer lengths up to 9.1 m and with changes in direction) opening configurations. The primary impact of this effort is to define the physical designs through which an adversary could successfully pass, as well as the designs of potentially complex openings that an adversary would not be expected to successfully traverse. These data can then be used to support risk-informed decision making.
Raffaelle, Patrick R.; Wang, George T.; Shestopalov, Alexander A.
The focus of this study was to demonstrate the vapor-phase halogenation of Si(100) and subsequently evaluate the inhibiting ability of the halogenated surfaces toward atomic layer deposition (ALD) of aluminum oxide (Al2O3). Hydrogen-terminated Si(100) (H−Si(100)) was halogenated using N-chlorosuccinimide (NCS), N-bromosuccinimide (NBS), and N-iodosuccinimide (NIS) in a vacuum-based chemical process. The composition and physical properties of the prepared monolayers were analyzed using X-ray photoelectron spectroscopy (XPS) and contact angle (CA) goniometry. These measurements confirmed that all three reagents were more effective in halogenating H−Si(100) than OH−Si(100) in the vapor phase. The stability of the modified surfaces in air was also tested, with the chlorinated surface showing the greatest resistance to monolayer degradation and silicon oxide (SiO2) generation within the first 24 h of exposure to air. XPS and atomic force microscopy (AFM) measurements showed that the succinimide-derived Hal-Si(100) surfaces exhibited blocking ability superior to that of H−Si(100), a commonly used ALD resist. This halogenation method provides a dry-chemistry alternative for creating halogen-based ALD resists on Si(100) in near-ambient environments.
An inherited containment vessel design that has been used in the past to contain items in an environmental testing unit was brought to the Explosives Applications Lab to be analyzed and modified. The goal was to modify the vessel to contain an explosive event of 4 g TNT equivalence at least once without failure or significant girth expansion while maintaining a seal. A total of ten energetic tests were performed on multiple vessels. In these tests, the 7075-T6 aluminum vessels were instrumented with thin-film resistive strain gauges and both static and dynamic pressure gauges to study their ability to withstand an oversize explosive charge of 8 g. Additionally, high-precision girth (pi tape) measurements were taken before and after each test to measure the plastic growth of the vessel due to the event. Concurrent with this explosive testing, hydrocode modeling of the containment vessel and charge was performed. The modeling results were shown to agree with the results measured in the explosive field testing. Based on the data obtained during this testing, this vessel design can be safely used at least once to contain explosive detonations of 8 g at the center of the chamber for a charge that will not result in damaging fragments.
Molten Salt Reactor (MSR) systems can be divided into two basic categories: liquid-fueled MSRs, in which the fuel is dissolved in the salt, and solid-fueled systems such as the Fluoride-salt-cooled High-temperature Reactor (FHR). The molten salt provides an impediment to fission product release, as actinides and many fission products are soluble in molten salt. Nonetheless, under accident conditions, some radionuclides may escape the salt by vaporization and aerosol formation, which may lead to release into the environment. We present recent enhancements to MELCOR to represent the transport of radionuclides in the salt and releases from the salt. Some soluble but volatile radionuclides may vaporize and subsequently condense to aerosol. Insoluble fission products can deposit on structures. Thermochimica, an open-source Gibbs Energy Minimization (GEM) code, has been integrated into MELCOR. With the appropriate thermochemical database, Thermochimica provides the solubility and vapor pressure of species as a function of temperature, pressure, and composition, which are needed to characterize the vaporization rate and the state of the salt with fission products. Since thermochemical databases are still under active development for molten salt systems, thermodynamic data for fission product solubility and vapor pressure may be user-specified. This enables preliminary assessments of fission product transport in molten salt systems. In this paper, we discuss modeling of soluble and insoluble fission product releases in an MSR with Thermochimica incorporated into MELCOR. Separate-effects experiments performed as part of the Molten Salt Reactor Experiment in which radioactive aerosol was released are discussed, as they are needed for determining the source term.
This presentation describes a new effort to better understand insulator flashover in high-current, high-voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashover events that initiate at the anode triple junction (anode-vacuum-dielectric).
Surrogate construction is an essential component of all non-deterministic analyses in science and engineering. The efficient construction of easier- and cheaper-to-run alternatives to a computationally expensive code paves the way for outer-loop workflows for forward and inverse uncertainty quantification and optimization. Unfortunately, the accurate construction of a surrogate often still requires a prohibitive number of computations, making the approach unattainable for large-scale and high-fidelity applications. Multifidelity approaches offer the possibility of lowering the computational expense required of the high-fidelity code by fusing data from additional sources. In this context, we have demonstrated that multifidelity Bayesian networks (MFNets) can efficiently fuse information derived from models with an underlying complex dependency structure. In this contribution, we expand on our previous work by adopting a basis adaptation procedure for the selection of the linear model representing each data source. Our numerical results demonstrate that this procedure is computationally advantageous because it can maximize the use of limited data to learn and exploit the important structures shared among models. Two examples are considered to demonstrate the benefits of the proposed approach: an analytical problem and a nuclear fuel finite element assembly. From these two applications, a lower dependency of MFNets on the model graph has also been observed.
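The flavor of multifidelity data fusion can be sketched with a toy problem in which the low-fidelity model output serves as an adapted basis function for a surrogate trained on scarce high-fidelity data. This is a simplification of the MFNets formalism, and both models below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two illustrative data sources: a plentiful but biased low-fidelity model
# and a scarce, expensive high-fidelity model.
def lofi(x):
    return np.sin(x)                 # cheap, biased source

def hifi(x):
    return np.sin(x) + 0.3 * x       # expensive source, few samples

x_hi = rng.uniform(-2, 2, size=8)    # only 8 high-fidelity evaluations

# High-fidelity surrogate: linear model on the basis [1, x, lofi(x)],
# i.e., the low-fidelity output is promoted to a basis function, so the
# correction needs very little high-fidelity data.
Phi = np.column_stack([np.ones_like(x_hi), x_hi, lofi(x_hi)])
coef, *_ = np.linalg.lstsq(Phi, hifi(x_hi), rcond=None)

x_test = np.linspace(-2, 2, 50)
pred = coef[0] + coef[1] * x_test + coef[2] * lofi(x_test)
err = float(np.max(np.abs(pred - hifi(x_test))))
print(f"max surrogate error on test grid: {err:.2e}")
```

Because the high-fidelity response lies exactly in the span of the adapted basis here, eight samples recover it to machine precision; real applications trade this exactness for robustness across many fused sources.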
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes’ capability to calculate accurately; however, both CTH and EPIC predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. The front-face chamfer angle has the largest influence on stability, with low angles being more stable.
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
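The dual-zone absorption model treats the measured absorbance as the sum of contributions from two uniform zones along the line of sight. A minimal sketch with idealized Gaussian lineshapes and hypothetical per-zone parameters (an actual analysis would use Voigt profiles and tabulated CO line data, with each zone's strength set by its temperature, pressure, and CO column):

```python
import numpy as np

def zone_absorbance(nu, nu0, strength, width):
    """Idealized Gaussian lineshape for one uniform zone; 'strength'
    lumps the line strength S(T), mole fraction, pressure, and path."""
    return strength * np.exp(-0.5 * ((nu - nu0) / width) ** 2)

nu = np.linspace(2008.0, 2009.0, 400)            # wavenumber axis, cm^-1

# Hypothetical per-zone parameters for a hot outer shell and cooler core.
shell = zone_absorbance(nu, 2008.5, 1.2, 0.04)   # hot outer shell
core = zone_absorbance(nu, 2008.5, 0.4, 0.02)    # cooler core

# Beer-Lambert absorbances of sequential uniform zones add along the LOS.
total = shell + core
print(f"peak combined absorbance: {total.max():.2f}")
```

Fitting both zones' parameters to a measured spectrum is what allows temperature, pressure, and CO to be extracted in the two regions simultaneously.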
Bao, Jichao; Lee, Jonghyun; Yoon, Hongkyu Y.; Pyrak-Nolte, Laura
Characterization of geologic heterogeneity at an enhanced geothermal system (EGS) is crucial for cost-effective stimulation planning and reliable heat production. With recent advances in computational power and sensor technology, large-scale fine-resolution simulations of coupled thermal-hydraulic-mechanical (THM) processes have become available. However, traditional large-scale inversion approaches have limited utility for sites with complex subsurface structures unless one can afford high, often prohibitive, computational costs. The key computational burdens are predominantly associated with the large number of large-scale coupled numerical simulations and the large dense matrix multiplications that arise from fine discretization of the field-site domain and a large number of THM and chemical (THMC) measurements. In this work, we present deep-generative-model-based Bayesian inversion methods for the computationally efficient and accurate characterization of EGS sites. Deep generative models are used to learn the approximate subsurface property (e.g., permeability, thermal conductivity, and elastic rock properties) distribution from multipoint-geostatistics-derived training images or discrete fracture network models as a prior, and accelerated stochastic inversion is performed on the low-dimensional latent space in a Bayesian framework. Numerical examples with synthetic permeability fields with fracture inclusions and THM data sets based on the Utah FORGE geothermal site will be presented to test the accuracy, speed, and uncertainty quantification capability of our proposed joint data inversion method.
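The latent-space inversion idea can be sketched with a toy linear "decoder" standing in for a trained deep generative model, and random-walk Metropolis sampling in the low-dimensional latent space; the dimensions, noise levels, and decoder below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "decoder" standing in for a trained deep generative model: maps a
# low-dimensional latent vector z to a discretized permeability field.
n_lat, n_field = 4, 50
G = rng.normal(size=(n_field, n_lat))

def decode(z):
    return G @ z

z_true = np.array([1.0, -0.5, 0.8, 0.3])                # hypothetical truth
data = decode(z_true) + 0.2 * rng.normal(size=n_field)  # noisy THM-like data

def log_post(z, sigma=0.2):
    """Gaussian likelihood plus standard-normal latent prior."""
    misfit = data - decode(z)
    return -0.5 * np.sum(misfit**2) / sigma**2 - 0.5 * np.sum(z**2)

# Accelerated stochastic inversion: random-walk Metropolis in latent space
# (4 dimensions instead of the 50-dimensional field).
z, lp, accepted = np.zeros(n_lat), log_post(np.zeros(n_lat)), 0
for _ in range(5000):
    z_prop = z + 0.05 * rng.normal(size=n_lat)
    lp_prop = log_post(z_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        z, lp, accepted = z_prop, lp_prop, accepted + 1

err = float(np.linalg.norm(z - z_true) / np.linalg.norm(z_true))
print(f"relative latent error: {err:.2f}, acceptance: {accepted / 5000:.2f}")
```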
Extreme meteorological events, such as hurricanes and floods, cause significant infrastructure damage and, as a result, prolonged grid outages. To mitigate the negative effects of these outages and enhance the resilience of communities, microgrids consisting of solar photovoltaics (PV), energy storage (ES) technologies, and backup diesel generation are being considered. Furthermore, it is necessary to take into account how the extreme event affects the systems' performance during the outage, often referred to as black-sky conditions. In this paper, an optimization model is introduced to properly size ES and PV technologies to meet various durations of grid outages for selected critical infrastructure while considering black-sky conditions. A case study of the municipality of Villalba, Puerto Rico is presented to identify several potential microgrid configurations that increase the community's resilience. Sensitivity analyses are performed around the grid outage durations and black-sky conditions to better determine what factors should be considered when scoping potential microgrids for community resilience.
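A minimal sketch of the sizing question: search for PV and storage capacities that carry a critical load through an outage, with PV output derated to represent black-sky (post-hurricane) conditions. All loads, profiles, and derate values are hypothetical, and a real study would use an optimization model rather than brute force:

```python
import numpy as np

def outage_survived(pv_kw, ess_kwh, hours=72, load_kw=50.0, derate=0.5):
    """Hourly energy balance over the outage: storage starts full, PV
    follows a simple daylight sine profile derated for black-sky weather."""
    soc = ess_kwh                                   # start fully charged
    for h in range(hours):
        solar = derate * pv_kw * max(0.0, np.sin(np.pi * (h % 24 - 6) / 12))
        soc = min(ess_kwh, soc + solar - load_kw)   # capacity-limited balance
        if soc < 0:
            return False                            # load shed: outage failed
    return True

# Brute-force sweep over candidate sizes (a MILP or similar optimization
# model would replace this loop in practice); report the first feasible mix.
for pv in range(0, 501, 50):
    for ess in range(0, 5001, 500):
        if outage_survived(pv, ess):
            print(f"feasible: {pv} kW PV, {ess} kWh storage")
            break
    else:
        continue
    break
```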
A challenge for TW-class accelerators, such as Sandia's Z machine, is efficient power coupling due to current loss in the final power feed. It is also important to understand how such losses will scale to larger next-generation pulsed power (NGPP) facilities. While modeling efforts are studying these power flow losses, it is important to have diagnostics that can experimentally measure plasmas in these conditions and help inform the simulations. The plasmas formed in the power flow region can be challenging to diagnose due to both limited lines of sight and temperatures and densities significantly lower than those of typical plasmas studied on Z. This necessitates special diagnostic development to accurately measure the power flow plasma on Z.
This is an investigation of two experimental datasets of laminar hypersonic flows over a double-cone geometry, acquired in the Calspan—University at Buffalo Research Center’s Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could partly be due to mis-specified inlet conditions. The authors of this paper solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset. However, that inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier–Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that the deterministic inversion yields inlet conditions that do not agree with what was stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
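Deterministic inversion with Gaussian uncertainty bounds can be sketched as nonlinear least squares followed by a Laplace approximation, where the posterior covariance is taken as sigma^2 (J^T J)^{-1} at the optimum (for uncorrelated observation noise). The toy forward model below is a stand-in for the Navier–Stokes simulator, and all values are illustrative:

```python
import numpy as np

# Toy forward model standing in for the Navier-Stokes simulator: predicts
# 20 observables from two "inlet condition" parameters theta = (a, b).
def forward(theta):
    a, b = theta
    x = np.linspace(0.0, 1.0, 20)
    return a * np.exp(b * x)

def jacobian(theta, eps=1e-6):
    """Central finite-difference Jacobian of the forward model."""
    J = np.empty((20, 2))
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        J[:, i] = (forward(theta + d) - forward(theta - d)) / (2 * eps)
    return J

rng = np.random.default_rng(4)
sigma = 0.01                                     # observation noise std
data = forward(np.array([2.0, 0.5])) + sigma * rng.normal(size=20)

theta = np.array([1.5, 0.3])                     # initial guess
for _ in range(20):                              # Gauss-Newton iterations
    J = jacobian(theta)
    r = data - forward(theta)
    theta = theta + np.linalg.solve(J.T @ J, J.T @ r)

# Laplace approximation: Gaussian posterior centered at the deterministic
# optimum with covariance sigma^2 (J^T J)^{-1}.
cov = sigma**2 * np.linalg.inv(jacobian(theta).T @ jacobian(theta))
print("inlet estimate:", theta, "posterior std:", np.sqrt(np.diag(cov)))
```

Checking whether the true posterior is well described by this Gaussian, for example by comparing it against samples or higher-order curvature, is the role of the a posteriori validity check mentioned above.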