Harmonic Balance Method for Large-Scale Models with Krylov Subspace Recycling
Abstract not provided.
Motivation: Determine the length and opening of two lab-grown cracks, designated LT-14 and LT-28, that are representative of stress corrosion cracks in spent nuclear fuel dry storage casks, to supplement future testing of gas and aerosol transport. Problem: The extreme aspect ratio of crack length to opening requires that imaging occur in stages, with the results merged before final analysis. Method: High-magnification (1500x) optical images of both sides of the two plates were acquired. Stitched 20x laser scanning confocal microscopy (LSCM) images were acquired along the full length of each crack and leveled with the newly developed PLATES Method in MATLAB®. Conclusion for LT-14: Side 1 is 47.25 mm long and has 366 separate crack features with an average length of 23.50 µm and an average opening of 8.27 µm. Side 2 is 69.44 mm long and has 550 separate crack features with an average length of 81.63 µm and an average opening of 67.70 µm. Conclusion for LT-28: Side 1 is 71.95 mm long and has 1,127 separate crack features with an average length of 42.27 µm and an average opening of 10.31 µm. Side 2 is 74.88 mm long and has 520 separate crack features with an average length of 98.13 µm and an average opening of 14.99 µm. The adjacent crack on side 1 is 18.95 mm long and has 37 separate crack features with an average length of 17.46 µm and an average opening of 10.42 µm. The adjacent crack on side 2 is 26.40 mm long and has 55 separate crack features with an average length of 87.26 µm and an average opening of 48.29 µm. Each adjacent crack is approximately 26 mm from the main crack.
This work summarizes the findings of a reduced order model (ROM) study performed using Sierra ROM module Pressio_Aria on Sandia National Laboratories' (SNL) Crash-Burn L2 milestone thermal model with pristine geometry. Comparisons are made to full order model (FOM) results for this same Crash-Burn model using Sierra multiphysics module Aria.
Sandia National Laboratories, California (Sandia/CA) is a research and development facility owned by the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA). The laboratory is located in the City of Livermore (the City) and comprises approximately 410 acres. The Sandia/CA facility is operated by National Technology and Engineering Solutions of Sandia, LLC (NTESS) under a contract with the DOE/NNSA. The DOE/NNSA’s Sandia Field Office (SFO) oversees the operations of the site. North of the Sandia/CA facility is the Lawrence Livermore National Laboratory (LLNL), whose sewer system Sandia/CA’s sewer system joins before discharging to the City’s Publicly Owned Treatment Works (POTW) for final treatment and processing. The City’s POTW authorizes the wastewater discharge from Sandia/CA via the assigned Wastewater Discharge Permit #1251 (the Permit), which is issued to the DOE/NNSA’s main office for Sandia National Laboratories, located in New Mexico (Sandia/NM). The Permit requires the submittal of this Monthly Sewer Monitoring Report to the City by the twenty-fifth day of each month.
In this report we demonstrate some relatively simple and inexpensive methods to effectively account for various sources of epistemic lack-of-knowledge type uncertainty in inverse problems. The demonstration problem involves inverse estimation of six parameters of a bolted joint that attaches a kettlebell shaped object to a thick plate. The parameters are efficiently inverted in a modal-based model calibration using gradient-based optimization. Two material properties of the kettlebell are treated as uncertain to within given epistemic uncertainty bounds. We apply and test interval and sparse-sample probabilistic approaches to account for uncertainty in the estimated parameters (and various scalar functionals of the parameters as generic quantities of interest, QOIs) due to uncertainties in the material properties. We also investigate the error effects of limited numbers of vibration sensors (accelerometers) on the kettlebell and plate, and therefore of abbreviated excitation/response information, in the parameter inversions. We propose and demonstrate a Leave-K-Sensors-Out “cross-prediction” UQ approach to estimate the related uncertainties on the parameters and QOI functionals. We indicate how uncertainties from material properties and limited sensors are treated in a combined manner. The economical combined UQ approach involves just three to five samples (i.e., three to five inverse simulations), with no added complication or error/uncertainty from the use of surrogate models for affordability. Finally, we describe a related economical UQ approach for handling potential parameter solution non-uniqueness and precision uncertainties related to numerical optimization in the estimated parameter values. Directions for further research are identified.
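To make the resampling idea concrete, the following is a minimal Python sketch of a generic leave-K-sensors-out loop; the `invert` callable, the sensor labels, and the (min, mean, max) summary are illustrative placeholders for the modal-based, gradient-based inversion described above, not the report's actual implementation.

```python
# Minimal sketch of a leave-K-sensors-out "cross-prediction" UQ loop.
# `invert` stands in (hypothetically) for the modal-based parameter inversion:
# it takes a list of sensor labels and returns the estimated joint parameters.
from itertools import combinations
import statistics


def leave_k_sensors_out(invert, all_sensors, k):
    """Re-run the inversion with every subset that omits k sensors and
    summarize the spread of each estimated parameter across the re-runs."""
    estimates = [
        invert([s for s in all_sensors if s not in omitted])
        for omitted in combinations(all_sensors, k)
    ]
    n_params = len(estimates[0])
    summary = []
    for i in range(n_params):
        values = [est[i] for est in estimates]
        summary.append((min(values), statistics.mean(values), max(values)))
    return summary  # per-parameter (min, mean, max) over the re-inversions
```

The per-parameter spread produced by such a loop could then be reported as interval bounds or fed into the sparse-sample probabilistic treatment discussed above.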
The Spent Fuel Waste Disposition (SFWD) program under the U.S. Department of Energy (DOE) is planning a seismic shake table test of full-scale dry storage systems of spent nuclear fuel (SNF) to close the gap related to the seismic loads on the fuel assemblies in dry storage systems. This test will allow for quantifying the strains and accelerations on a surrogate fuel assembly during representative earthquakes. A concrete layer will be installed on the shake table before the test to simulate conditions representative of an independent spent fuel storage installation (ISFSI) pad. In the shake table tests with the vertical cask, the cask will be free-standing, which is representative of all but two ISFSIs in the U.S. with vertical dry storage casks. The static and dynamic friction coefficients between the steel bottom of the cask and the concrete layer on the shake table are important parameters that will affect cask behavior during the test. These parameters must be known for the pre- and post-test modeling, data analysis, and model validation. The friction experiment was performed at the Engineering Department of the University of New Mexico (UNM) to determine the friction coefficients between a steel plate with the same finish as the bottom of the vertical cask manufactured for the test and different concrete surfaces. In this experiment the steel plate was fixed and the concrete sample was pulled over the plate at a constant displacement rate using an MTS machine. This allowed for collecting continuous horizontal force data over the length of the steel plate. Four displacement rates and three vertical loads were used. The tests were performed with four concrete blocks with different degrees of surface roughness: light sandblast, light-to-medium sandblast, medium bush hammer, and heavy sandblast. The total number of tests was 48. The data were used to calculate static and dynamic friction coefficients.
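As a hedged illustration of how static and dynamic friction coefficients can be extracted from a continuous pull-test force record, the short Python sketch below assumes a measured horizontal force trace and a known constant vertical load; the peak force before sliding gives the static coefficient and the mean force during steady sliding gives the dynamic coefficient. The variable names and the simple peak/mean reduction are illustrative, not the UNM test procedure.

```python
import numpy as np


def friction_coefficients(horizontal_force, vertical_load, sliding_start_index):
    """Estimate static and dynamic friction coefficients from a pull test.

    horizontal_force    : 1-D array of measured horizontal force [N]
    vertical_load       : constant applied normal load [N]
    sliding_start_index : sample index after which steady sliding is assumed
    """
    horizontal_force = np.asarray(horizontal_force, dtype=float)
    # Static coefficient: peak force required to initiate sliding.
    mu_static = horizontal_force[:sliding_start_index].max() / vertical_load
    # Dynamic coefficient: average force during steady sliding.
    mu_dynamic = horizontal_force[sliding_start_index:].mean() / vertical_load
    return mu_static, mu_dynamic


# Example with synthetic data: 2 kN normal load, force rises to a peak and
# then settles to a lower steady sliding value.
force = np.concatenate([np.linspace(0.0, 1200.0, 200), np.full(800, 900.0)])
print(friction_coefficients(force, vertical_load=2000.0, sliding_start_index=200))
```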
This Section covers an introduction to the objectives and techniques used in this analysis. The objectives of the report are given in Subsection 1.1. An introduction to aqueous thermodynamics and how variance might propagate through the relevant thermodynamic equations is given in Subsection 1.2. An introduction to Bayesian inference and its application to thermodynamic modeling is given in Subsection 1.3.
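For orientation, two generic relations underpinning that discussion are first-order variance propagation through a thermodynamic function f of parameters x_i and Bayes' rule for inferring thermodynamic parameters θ from data D; these are standard textbook forms, not the specific equations developed in the later subsections.

```latex
\sigma_f^{2} \approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_{x_i}^{2}
             + 2\sum_{i<j}\frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,
               \operatorname{cov}(x_i,x_j),
\qquad
p(\theta \mid D) = \frac{p(D \mid \theta)\,p(\theta)}{p(D)}
```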
Journal of Applied Physics
The impact of 1.8 MeV proton irradiation on metalorganic chemical vapor deposition grown (010) β-Ga2O3 Schottky diodes is presented. It is found that after a 10.8 × 10¹³ cm⁻² proton fluence the Schottky barrier height (1.40 ± 0.05 eV) and the ideality factor (1.05 ± 0.05) are unaffected. Capacitance-voltage extracted net ionized doping curves indicate a carrier removal rate of 268 ± 10 cm⁻¹. The defect states responsible for the observed carrier removal are studied through a combination of deep level transient and optical spectroscopies (DLTS/DLOS) as well as lighted capacitance-voltage (LCV) measurements. The dominating effect on the defect spectrum is due to the EC − 2.0 eV defect state observed in DLOS and LCV. This state accounts for ∼75% of the total trap introduction rate and is the primary source of carrier removal from proton irradiation. Of the DLTS-detected states, the EC − 0.72 eV state dominated but had a comparably smaller contribution to the trap introduction. These two traps have previously been correlated with acceptor-like gallium vacancy-related defects. Several other trap states at EC − 0.36, EC − 0.63, and EC − 1.09 eV were newly detected after proton irradiation, and two pre-existing states at EC − 1.2 and EC − 4.4 eV showed a slight increase in concentration after irradiation, together accounting for the remainder of trap introduction. However, a pre-existing trap at EC − 0.40 eV was found to be insensitive to proton irradiation and, therefore, is likely of extrinsic origin. The comprehensive defect characterization of 1.8 MeV proton irradiation damage can aid the modeling and design of a range of radiation tolerant devices.
Journal of Microelectronics and Electronic Packaging
Here we report on AlGaN high electron mobility transistor (HEMT)-based logic development, using combined enhancement- and depletion-mode transistors to fabricate inverters with operation from room temperature up to 500°C. Our development approach included: (a) characterizing temperature-dependent carrier transport for different AlGaN HEMT heterostructures, (b) developing a suitable gate metal scheme for use in high temperatures, and (c) over-temperature testing of discrete devices and inverters. Hall mobility data (from 30°C to 500°C) revealed the reference GaN-channel HEMT experienced a 6.9x reduction in mobility, whereas the AlGaN channel HEMTs experienced about a 3.1x reduction. Furthermore, a greater aluminum contrast between the barrier and channel enabled higher carrier densities in the two-dimensional electron gas for all temperatures. The combination of reduced variation in mobility with temperature and high sheet carrier concentration showed that an Al-rich AlGaN-channel HEMT with a high barrier-to-channel aluminum contrast is the best option for an extreme temperature HEMT design. Three gate metal stacks were selected for low resistivity, high melting point, low thermal expansion coefficient, and high expected barrier height. The impact of thermal cycling was examined through electrical characterization of samples measured before and after rapid thermal anneal. The 200-nm tungsten gate metallization was the top performer with minimal reduction in drain current, a slightly positive threshold voltage shift, and about an order of magnitude advantage over the other gates in on-to-off current ratio. After incorporating the tungsten gate metal stack in device fabrication, characterization of transistors and inverters from room temperature up to 500°C was performed. The enhancement-mode (e-mode) devices’ resistance started increasing at about 200°C, resulting in drain current degradation. This phenomenon was not observed in depletion-mode (d-mode) devices but highlights a challenge for inverters in an e-mode driver and d-mode load configuration.
Macromolecules
Ionizable polymers form dynamic networks with domains controlled by two distinct energy scales, ionic interactions and van der Waals forces; both evolve under elongational flows during processing into viable materials. Molecular-level insight into their nonlinear response, paramount to controlling their structure, is attained by fully atomistic molecular dynamics simulations of a model ionizable polymer, polystyrene sulfonate. As a function of increasing elongational flow rate, the systems display an initial elastic response, followed by an ionic fraction-dependent strain hardening, stress overshoot, and eventually strain thinning. As the sulfonation fraction increases, the chain elongation becomes more heterogeneous. Finally, the flow-driven dynamics of ionic assemblies that continuously break and re-form control the response of the system.
Journal of Physical Chemistry A
Automation of rate-coefficient calculations for gas-phase organic species became possible in recent years and has transformed how we explore these complicated systems computationally. Kinetics workflow tools bring rigor and speed and eliminate a large fraction of manual labor and related error sources. In this paper we give an overview of this quickly evolving field and illustrate, through five detailed examples, the capabilities of our own automated tool, KinBot. We bring examples from combustion and atmospheric chemistry of C-, H-, O-, and N-atom-containing species that are relevant to molecular weight growth and autoxidation processes. The examples shed light on the capabilities of automation and also highlight particular challenges associated with the various chemical systems that need to be addressed in future work.
The Synchronic Web is a distributed network for securing data provenance on the World Wide Web. By enabling clients around the world to freely commit digital information into a single shared view of history, it provides a foundational basis of truth on which to build decentralized and scalable trust across the Internet. Its core cryptographical capability allows mutually distrusting parties to create and verify statements of the following form: “I commit to this information—and only this information—at this moment in time.” The backbone of the Synchronic Web infrastructure is a simple, small, and semantic-free blockchain that is accessible to any Internet-enabled entity. The infrastructure is maintained by a permissioned network of well-known servers, called notaries, and accessed by a permissionless group of clients, called journals. Through an evolving stack of flexible and composable semantic specifications, the parties cooperate to generate synchronic commitments over arbitrary data. When integrated with existing infrastructures, adapted to diverse domains, and scaled across the breadth of cyberspace, the Synchronic Web provides a ubiquitous mechanism to lock the world’s data into unique points in discrete time and digital space. This document provides a technical description of the core Synchronic Web system. The distinguishing innovation in our design—and the enabling mechanism behind the model—is the novel use of verifiable maps to place authenticated content into canonically defined locations off-chain. While concrete specifications and software implementations of the Synchronic Web continue to evolve, the information covered in the body of this document should remain stable. We aim to present this information clearly and concisely for technical non-experts to understand the essential functionality and value proposition of the network. In the interest of promoting discourse, we take some liberty in projecting the potential implications of the new model.
Geochimica et Cosmochimica Acta
Calcite (CaCO3) composition and properties are defined by the chemical environment in which CaCO3 forms. However, a complete understanding of the relationship between aqueous chemistry during calcite precipitation and the resulting chemical and physical CaCO3 properties remains elusive; therefore, we present an investigation into the coupled effects of the divalent cations Sr2+ and Mg2+ on CaCO3 precipitation and subsequent crystal growth. Through chemical analysis of the aqueous phases and microscopy of the resulting calcite phases, complemented by density functional theory calculations, we elucidate the relationship between crystal growth and the resulting composition (elemental and isotopic) of calcite. The results of this experimental and modeling work suggest that Mg2+ and Sr2+ have cation-specific impacts that inhibit calcite crystal growth, including: (1) Sr2+ incorporates more readily into calcite than Mg2+ (DSr > DMg), and increasing [Sr2+]t or [Mg2+]t increases DSr; (2) the inclusion of Mg2+ into the structure leads to a reduction in the calcite unit cell volume, whereas Sr2+ leads to an expansion; (3) the inclusion of both Mg2+ and Sr2+ results in a distribution of unit cell impacts based on the relative positions of the Sr2+ and Mg2+ in the lattice. These experiments were conducted at CaCO3 saturation indices of ~4.1, favoring rapid precipitation. This rapid precipitation resulted in observable Sr isotope fractionation, confirming that Sr isotopic fractionation depends on the precipitation rate. We further note that the precipitation and growth of calcite favor the incorporation of the lighter 86Sr isotope over the heavier 87Sr isotope, regardless of the initial solution conditions, and that the degree of fractionation increases with DSr. In sum, these results demonstrate the influence of the solution environment on the incorporation and crystal growth behavior of calcite. These factors are important to understand in order to effectively use geochemical signatures resulting from calcite precipitation or dissolution to gain specific information.
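For reference, the partition (distribution) coefficients DSr and DMg used above are conventionally defined from molar ratios in the solid and the coexisting solution, for example:

```latex
D_{\mathrm{Sr}} = \frac{(\mathrm{Sr}/\mathrm{Ca})_{\mathrm{calcite}}}{(\mathrm{Sr}/\mathrm{Ca})_{\mathrm{solution}}},
\qquad
D_{\mathrm{Mg}} = \frac{(\mathrm{Mg}/\mathrm{Ca})_{\mathrm{calcite}}}{(\mathrm{Mg}/\mathrm{Ca})_{\mathrm{solution}}}
```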
Mechanical Systems and Signal Processing
Bifurcations are commonly encountered during force controlled swept and stepped sine testing of nonlinear structures, which generally leads to the so-called jump-down or jump-up phenomena between stable solutions. There are various experimental closed-loop control algorithms, such as control-based continuation and phase-locked loop, to stabilize dynamical systems through these bifurcations, but they generally rely on specialized control algorithms that are not readily available with many commercial data acquisition software packages. A recent method was developed to experimentally apply sequential continuation using the shaker voltage that can be readily deployed using commercially available software. By utilizing the stabilizing effects of electrodynamic shakers and the force dropout phenomena in fixed frequency voltage control sine tests, this approach has been demonstrated to stabilize the unstable branch of a nonlinear system with three branches, allowing for three multivalued solutions to be identified within a specific frequency bandwidth near resonance. Recent testing on a strongly nonlinear system with vibro-impact nonlinearity has revealed jumping behavior when performing sequential continuation along the voltage parameter, like the jump phenomena seen during more traditional force controlled swept and stepped sine testing. Here, this paper investigates the stabilizing effects of an electrodynamic shaker on strongly nonlinear structures in fixed frequency voltage control tests using both numerical and experimental methods. The harmonic balance method is applied to the coupled shaker-structure system with an electromechanical model to simulate the fixed voltage control tests and predict the stabilization for different parameters of the model. The simulated results are leveraged to inform the design of a set of experiments to demonstrate the stabilization characteristics on a fixture-pylon assembly with a vibro-impact nonlinearity. Through numerical simulation and experimental testing on two different strongly nonlinear systems, the various parameters that influence the stability of the coupled shaker-structure are revealed to better understand the performance of fixed frequency voltage control tests.
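For context, the harmonic balance method referenced here approximates the periodic response of the coupled shaker-structure equations of motion by a truncated Fourier series and balances each harmonic, reducing the differential equations to algebraic ones. A generic statement (not the paper's specific electromechanical model) is:

```latex
\mathbf{M}\ddot{\mathbf{x}} + \mathbf{C}\dot{\mathbf{x}} + \mathbf{K}\mathbf{x}
  + \mathbf{f}_{\mathrm{nl}}(\mathbf{x},\dot{\mathbf{x}}) = \mathbf{F}\cos(\Omega t),
\qquad
\mathbf{x}(t) \approx \mathbf{a}_0 + \sum_{k=1}^{N_h}
  \left[\mathbf{a}_k \cos(k\Omega t) + \mathbf{b}_k \sin(k\Omega t)\right]
```

Substituting the ansatz and projecting onto each harmonic yields a nonlinear algebraic system for the coefficients a_k and b_k, which can be solved at fixed voltage or frequency and continued along either parameter.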
Chemistry of Materials
Vibrational spectroscopy is a nondestructive technique commonly used in chemical and physical analyses to determine atomic structures and associated properties. However, the evaluation and interpretation of spectroscopic profiles based on human-identifiable peaks can be difficult and convoluted. To address this challenge, we present a reliable protocol based on supervised manifold learning techniques meant to connect vibrational spectra to a variety of complex and diverse atomic structure configurations. As an illustration, we examined a large database of virtual vibrational spectroscopy profiles generated from atomistic simulations for silicon structures subjected to different stress, amorphization, and disordering states. We evaluated representative features in those spectra via various linear and nonlinear dimensionality reduction techniques and used the reduced representation of those features with decision trees to correlate them with structural information unavailable through classical human-identifiable peak analysis. We show that our trained model accurately (over 97% accuracy) and robustly (insensitive to noise) disentangles the contribution from the different material states, hence demonstrating a comprehensive decoding of spectroscopic profiles beyond classical (human-identifiable) peak analysis.
Journal of Computational Physics
For computational physics simulations, code verification plays a major role in establishing the credibility of the results by assessing the correctness of the implementation of the underlying numerical methods. In computational electromagnetics, surface integral equations, such as the method-of-moments implementation of the magnetic-field integral equation, are frequently used to solve Maxwell's equations on the surfaces of electromagnetic scatterers. These electromagnetic surface integral equations yield many code-verification challenges due to the various sources of numerical error and their possible interactions. In this paper, we provide approaches to separately measure the numerical errors arising from these different error sources. We demonstrate the effectiveness of these approaches for cases with and without coding errors.
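For reference, one common textbook form of the magnetic-field integral equation for a closed perfect-electric-conductor surface S, as typically discretized by the method of moments, is shown below (a generic statement, not necessarily the exact formulation verified in the paper):

```latex
\frac{\mathbf{J}(\mathbf{r})}{2}
 - \hat{\mathbf{n}}(\mathbf{r}) \times \mathrm{p.v.}\!\int_{S}
   \mathbf{J}(\mathbf{r}') \times \nabla' G(\mathbf{r},\mathbf{r}')\,\mathrm{d}S'
 = \hat{\mathbf{n}}(\mathbf{r}) \times \mathbf{H}^{\mathrm{inc}}(\mathbf{r}),
\qquad
G(\mathbf{r},\mathbf{r}') = \frac{e^{-jk|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}
```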
Robust in situ power harvesting underlies all efforts to enable downhole autonomous sensors for real-time and long-term monitoring of CO2 plume movement and permeance, wellbore health, and induced seismicity. This project evaluated the potential use of downhole thermopile arrays, known as thermoelectric generators (TEGs), as power sources to charge sensors for in situ real-time, long-term data capture and transmission. Real-time downhole monitoring will enable “Big Data” techniques and machine learning, using massive amounts of continuous data from embedded sensors, to quantify short- and long-term stability and safety of enhanced oil recovery and/or commercial-scale geologic CO2 storage. This project evaluated possible placement of the TEGs at two different wellbore locations: on the outside of the casing; or on the production tubing. TEGs convert heat flux to electrical power, and in the borehole environment, would convert heat flux into or out of the borehole into power for downhole sensors. Such heat flux would be driven by pumping of cold or hot fluids into the borehole—for instance, injecting supercritical CO2—creating a thermal pulse that could power the downhole sensors. Hence, wireless power generation could be accomplished with in situ TEG energy harvesting. This final report summarizes the project’s efforts that accomplished the creation of a fully operational thermopile field unit, including selection of materials, laboratory benchtop experiments and thermal-hydrologic modeling for design and optimization of the field-scale power generation test unit. Finally, the report describes the field unit that has been built and presents results of performance and survivability testing. The performance and survivability testing evaluated the following: 1) downhole power generation in response to a thermal gradient produced by pumping a heated fluid down a borehole and through the field unit; and 2) component survivability and operation at elevated temperature and pressure conditions representative of field conditions. The performance and survivability testing show that TEG arrays are viable for generating ample energy to power downhole sensors, although it is important to note that developing or connecting to sensors was beyond the scope of this project. This project’s accomplishments thus traversed from a low Technical Readiness Level (TRL) on fundamental concepts of the application and modeling to TRL-5 via testing of the fully integrated field unit for power generation in relevant environments. A fully issued United States Patent covers the wellbore power harvesting technology and applications developed by this project.
Journal of Applied Physics
Hydrocarbon polymers are used in a wide variety of practical applications. In the field of dynamic compression at extreme pressures, these polymers are used at several high energy density (HED) experimental facilities. One of the most common polymers is poly(methyl methacrylate) or PMMA, also called Plexiglass® or Lucite®. Here, we present high-fidelity, hundreds of GPa range experimental shock compression data measured on Sandia's Z machine. We extend the principal shock Hugoniot for PMMA to more than threefold compression up to 650 GPa and re-shock Hugoniot states up to 1020 GPa in an off-Hugoniot regime, where experimental data are even sparser. These data can be used to put additional constraints on tabular equation of state (EOS) models. The present results provide clear evidence for the need to re-examine the existing tabular EOS models for PMMA above ∼120 GPa as well as perhaps revisit EOSs of similar hydrocarbon polymers commonly used in HED experiments investigating dynamic compression, hydrodynamics, or inertial confinement fusion.
iScience
Perovskite solar cells (PSCs) promise high efficiencies and low manufacturing costs. Most formulations, however, contain lead, which raises health and environmental concerns. In this review, we use a risk assessment approach to identify and evaluate the technology risks to the environment and human health. We analyze the risks by following the technology from production to transportation to installation to disposal and examine existing environmental and safety regulations in each context. We review published data from leaching and air emissions testing and highlight gaps in current knowledge and a need for more standardization. Methods to avoid lead release through introduction of absorbing materials or use of alternative PSC formulations are reviewed. We conclude with the recommendation to develop recycling programs for PSCs and further standardized testing to understand risks related to leaching and fires.
Physics of Plasmas
Improving the performance of inertial confinement fusion implosions requires physics models that can accurately predict the response to changes in the experimental inputs. Good predictive capability has been demonstrated for the fusion yield using a statistical mapping of simulated outcomes to experimental data [Gopalaswamy et al., Nature 565(771), 581–586 (2019)]. In this paper, a physics-based statistical mapping approach is used to extract and quantify all the major sources of degradation of fusion yield for direct-drive implosions on the OMEGA laser. Here, the yield is found to be dependent on the age of the deuterium tritium fill, the ℓ = 1 asymmetry in the implosion core, the laser beam-to-target size ratio, and parameters related to the hydrodynamic stability. A controlled set of experiments was carried out in which only the target fill age was varied while keeping all other parameters constant. The measurements were found to be in excellent agreement with the fill age dependency inferred using the mapping model. In addition, a new implosion design was created, guided by the statistical mapping model, by optimizing the trade-offs between increased laser energy coupling at larger target size and the degradations caused by the laser beam-to-target size ratio and hydrodynamic instabilities. When this design was fielded experimentally, increased fusion yield was demonstrated in targets with larger diameters.
International Journal of Non-Linear Mechanics
Accurately modeling the impact force used in the analysis of loosely constrained cantilevered pipes conveying fluid is imperative. If little information is known about the motion-limiting constraints used in experiments, the analysis of the system may yield inaccurate predictions. In this work, multiple forcing representations of the impact force are defined and analyzed for a cantilevered pipe that conveys fluid. Depending on the representation of the impact force, the dynamics of the pipe can vary greatly when only the stiffness of the constraints is known from experiments. Three gap sizes of the constraints are analyzed, and the representation of the impact force used to analyze the system is found to significantly affect the response of the pipe at each gap size. The effects of the vibro-impact force representation are investigated using basin-of-attraction analysis and nonlinear characterization of the system’s response.
AIAA SCITECH 2023 Forum
As the path towards Urban Air Mobility (UAM) continues to take shape, there are outstanding technical challenges to achieving safe and effective air transportation operations under this new paradigm. To inform and guide technology development for UAM, NASA is investigating the current state of the art in key technology areas including traffic management, detect-and-avoid, and autonomy. In support of this effort, a new perception testbed was developed at NASA Ames Research Center to collect data from an array of sensing systems representative of those that could be found on a future UAM vehicle. This testbed, featuring a Light-Detection-and-Ranging (LIDAR) instrument, a long-wave infrared sensor, and a visible spectrum camera, was deployed for a multiday test campaign in the Fog Chamber at Sandia National Laboratories (SNL) in Albuquerque, New Mexico. During the test campaign, fog conditions were created for tests with targets including a human, a resolution chart, and a small unmanned aerial vehicle (sUAV). This paper describes in detail the developed perception testbed, the experimental setup in the fog chamber, and the resulting data, and presents an initial result from analysis of the data: an evaluation of methods to increase contrast through filtering techniques.
Journal of Applied Physics
Cesium vapor thermionic converters are an attractive method of converting high-temperature heat directly to electricity, but theoretical descriptions of the systems have been difficult due to the multi-step ionization of Cs through inelastic electron-neutral collisions. This work presents particle-in-cell simulations of these converters, using a direct simulation Monte Carlo collision model to track 52 excited states of Cs. These simulations show the dominant role of multi-step ionization, which also varies significantly based on both the applied voltage bias and pressure. The electron energy distribution functions are shown to be highly non-Maxwellian in the cases analyzed here. A comparison with previous approaches is presented, and large differences are found in ionization rates due especially to the fact that previous approaches have assumed Maxwellian electron distributions. Finally, an open question regarding the nature of the plasma sheaths in the obstructed regime is discussed. The one-dimensional simulations did not produce stable obstructed regime operation and thereby do not support the double-sheath hypothesis.
The Astrophysical Journal. Letters
Magnetized turbulence is ubiquitous in many astrophysical and terrestrial plasmas but no universal theory exists. Even the detailed energy dynamics in magnetohydrodynamic (MHD) turbulence are still not well understood. We present a suite of subsonic, super-Alfvénic, high plasma beta MHD turbulence simulations that only vary in their dynamical range, i.e., in their separation between the large-scale forcing and dissipation scales, and their dissipation mechanism (implicit large eddy simulation, ILES, and direct numerical simulation (DNS)). Using an energy transfer analysis framework we calculate the effective numerical viscosities and resistivities, and demonstrate that all ILES calculations of MHD turbulence are resolved and correspond to an equivalent visco-resistive MHD turbulence calculation. Increasing the number of grid points used in an ILES corresponds to lowering the dissipation coefficients, i.e., larger (kinetic and magnetic) Reynolds numbers for a constant forcing scale. Independently, we use this same framework to demonstrate that—contrary to hydrodynamic turbulence—the cross-scale energy fluxes are not constant in MHD turbulence. This applies both to different mediators (such as cascade processes or magnetic tension) for a given dynamical range as well as to a dependence on the dynamical range itself, which determines the physical properties of the flow. We do not observe any indication of convergence even at the highest resolution (largest Reynolds numbers) simulation at 2048³ cells, calling into question whether an asymptotic regime in MHD turbulence exists, and, if so, what it looks like.
Ammonia (NH3) is an energy-dense chemical and a vital component of fertilizer. In addition, it is a carbon-neutral liquid fuel and a potential candidate for thermochemical energy storage for high-temperature concentrating solar power (CSP). Currently, NH3 synthesis occurs via the Haber-Bosch process, which requires high pressures (15-25 MPa) and medium to high temperatures (400-500 °C). N2 and H2 are essential feedstocks for this NH3 production process. H2 is generally derived from methane via steam reforming; N2 is sourced from air, after oxygen removal via combustion of hydrocarbons. Both processes consume hydrocarbons, resulting in the release of CO2. In addition, hydrocarbon fuels are burned to produce the heat and mechanical energy required to perform the NH3 reaction, further increasing CO2 emissions. Overall, the production of ammonia via the Haber-Bosch (H-B) process is responsible for up to 1.4% of the world’s carbon emissions. The development of a renewable pathway to NH3 synthesis, which utilizes concentrated solar irradiation as a process heat instead of fossil fuels and operates under low or ambient pressure, will result in a decrease (or elimination) of greenhouse gas emissions as well as avoid the cost, complexity, and safety issues inherent in high-pressure processes. Most current efforts to “green” ammonia production involve either electrolysis or simply replacing the energy source for H-B with renewable electricity, but otherwise leaving the process intact. The effort proposed here would create a new paradigm for the synthesis of NH3 utilizing solar-thermal heat, water, and air as feedstocks, providing a truly green method of production. The overall objective of the STAP (Solar Thermal Ammonia Production) project was to develop a solar thermochemical looping technology to produce and store nitrogen (N2) from air for the subsequent production of ammonia (NH3) via an advanced two-stage process. The goal is a cost-effective and energy efficient technology for the renewable N2 production and synthesis of NH3 from H2 (produced from H2O) and air using solar-thermal energy from concentrating sunlight, under pressures an order of magnitude lower than H-B NH3 production. Our process involves two looping cycles, which do not require catalysts and can be recycled. Over the course of the STAP project, we (1) developed and deeply characterized oxide materials for N2 separation; (2) developed a method for the synthesis of metal nitrides, producing a series of quaternary compounds that have been heretofore unreported; (3) modeled, designed, and fabricated bench-scale tube and on-sun reactors for the N2 production step and demonstrated the ability to separate N2 over multiple cycles in the tube reactor; (4) designed and fabricated a bench-scale Ammonia Synthesis Reactor (ASR) and demonstrated the proof of concept of NH3 synthesis via a novel looping process using metal nitrides over multiple cycles; and (5) completed a systems- and technoeconomic analysis showing the feasibility of ammonia production on a larger scale via the STAP process. The development of renewable, low-cost NH3 will be of great interest to the chemicals industry, particularly agricultural sectors. The CSP industry should be both an important customer and potential end-user of this technology, as it affords the capability of synthesizing a promising thermochemical storage material on-site. 
Since the NH3 synthesis step also requires H2, there will exist a symbiotic relationship between this technology and solar-thermochemical water-splitting applications. Green ammonia synthesis will result in the decarbonization of a hydrocarbon-intensive industry, helping to meet the Administration goal of industrial decarbonization by 2050. The resulting decrease in CO2 and related pollutants will improve health and well-being of society, particularly for those living in the vicinity of commercial production plants.
Journal of Power Sources
Fracture and short circuit in the Li7La3Zr2O12 (LLZO) solid electrolyte are two key issues that prevent its adoption in battery cells. In this paper, we utilize phase-field simulations that couple electrochemistry and fracture to evaluate the maximum electric potential that LLZO electrolytes can support as a function of crack density. In the case of a single crack, we find that the applied potential at the onset of crack propagation exhibits inverse square root scaling with respect to crack length, analogous to classical fracture mechanics. Here, we further find that the short-circuit potential scales linearly with crack length. In the realistic case where the solid electrolyte contains multiple cracks, we reveal that failure fits the Weibull model. The failure distributions shift to favor failure at lower overpotentials as areal crack density increases. Furthermore, when flawless interfacial buffers are placed between the applied potential and the bulk of the electrolyte, failure is mitigated. When constant currents are applied, current focuses in near-surface flaws, leading to crack propagation and short circuit. We find that buffered samples sustain larger currents without reaching unstable overpotentials and without failing. Our findings suggest several mitigation strategies for improving the ability of LLZO to support larger currents and improve operability.
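As a point of reference, a two-parameter Weibull description of the failure statistics mentioned above can be written in the generic form below, where Φ is the applied potential, Φ₀ a scale parameter, and m the Weibull modulus fitted to the simulated failure data (the symbols here are chosen for illustration):

```latex
P_f(\Phi) = 1 - \exp\!\left[-\left(\frac{\Phi}{\Phi_0}\right)^{m}\right]
```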
Applied Physics Letters
We demonstrate a monolithic all-epitaxial resonant-cavity architecture for long-wave infrared photodetectors with substrate-side illumination. An nBn detector with an ultra-thin (t ≈ 350 nm) absorber layer is integrated into a leaky resonant cavity, formed using semi-transparent highly doped (n++) epitaxial layers, and aligned to the anti-node of the cavity's standing wave. The devices are characterized electrically and optically and demonstrate an external quantum efficiency of ∼25% at T = 180 K in an architecture compatible with focal plane array configurations.
Physical Chemistry Chemical Physics. PCCP
The interplay between hydrogen and dislocations (e.g., core and elastic energies, and dislocation–dislocation interactions) has implications for hydrogen embrittlement but is poorly understood. Continuum models of hydrogen-enhanced local plasticity have not considered the effect of hydrogen on dislocation core energies. Energy-minimization atomistic simulations can only resolve dislocation core energies in hydrogen-free systems because hydrogen motion is omitted, so hydrogen atmosphere formation cannot occur. Additionally, previous studies have focused more on face-centered-cubic than body-centered-cubic metals. Discrete dislocation dynamics studies of hydrogen–dislocation interactions assume isotropic elasticity, but the validity of this assumption is not understood. Here, we perform time-averaged molecular dynamics simulations to study the effect of hydrogen on dislocation energies in body-centered-cubic iron for several dislocation character angles. We observe atmosphere formation and obtain highly converged dislocation energies. We find that hydrogen reduces dislocation core energies but can increase or decrease the elastic energies of isolated dislocations and dislocation–dislocation interaction energies depending on character angle. We also find that isotropic elasticity can be well fitted to dislocation energies obtained from simulations if the isotropic elastic constants are not constrained to their anisotropic counterparts. These results are relevant to ongoing efforts in understanding hydrogen embrittlement and provide a foundation for future work in this field.
International Journal of Hydrogen Energy
In this work, we investigate the potential of liquid hydrogen (LH2) storage on-board Class-8 heavy-duty trucks to resolve many of the range, weight, volume, refueling time, and cost issues associated with 350- or 700-bar compressed H2 storage in Type-3 or Type-4 composite tanks. We present and discuss conceptual storage system configurations capable of supplying H2 to fuel cells at 5 bar with or without on-board LH2 pumps. Structural aspects of storing LH2 in double-walled, vacuum-insulated, low-pressure Type-1 tanks are investigated. Structural materials and insulation methods are discussed for service at cryogenic temperatures and mitigation of heat leak to prevent LH2 boiloff. Failure modes of the liner and shell are identified and analyzed using the regulatory codes and detailed finite element (FE) methods. The conceptual systems are subjected to a failure modes and effects analysis (FMEA) and a safety, codes, and standards (SCS) review to rank failures and identify safety gaps. The results indicate that the conceptual systems can reach 19.6% usable gravimetric capacity, 40.9 g-H2/L usable volumetric capacity, and a cost of $174-183/kg-H2 (2016 USD) when manufactured at a volume of 100,000 systems annually.
2023 IEEE Symposium on Electromagnetic Compatibility and Signal/Power Integrity, EMC+SIPI 2023
High-altitude electromagnetic pulse events are a growing concern for electric power grid vulnerability assessments and mitigation planning, and accurate modeling of surge arrester mitigations installed on the grid is necessary to predict pulse effects on existing equipment and to plan future mitigation. While some models of surge arresters at high frequency have been proposed, experimental backing for any given model has not been shown. This work examines a ZnO lightning surge arrester modeling approach previously developed for accurate prediction of nanosecond-scale pulse response. Four ZnO metal-oxide varistor pucks with different sizes and voltage ratings were tested for voltage and current response on a conducted electromagnetic pulse testbed. The measured clamping response was compared to SPICE circuit models to compare the electromagnetic pulse response and validate model accuracy. Results showed good agreement between simulation results and the experimental measurements, after accounting for stray testbed inductance between 100 and 250 nH.
Frontiers in Neuroinformatics
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published this very same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics, such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10¹³ floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10¹⁸ floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is actually quite remarkable that—apart from the change in semantics for the parallelization—this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
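The quoted figure of almost 17 doublings follows directly from the performance numbers given above:

```latex
\frac{10^{18}\ \mathrm{FLOPS}}{10^{13}\ \mathrm{FLOPS}} = 10^{5},
\qquad
\log_2\!\left(10^{5}\right) \approx 16.6 \approx 17\ \text{doublings over 22 years}
```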
Proceedings of the Annual Hawaii International Conference on System Sciences
The challenge of cyberattack detection can be illustrated by the complexity of the MITRE ATT&CK™ matrix, which catalogues >200 attack techniques (most with multiple sub-techniques). To reliably detect cyberattacks, we propose an evidence-based approach which fuses multiple cyber events over varying time periods to help differentiate normal from malicious behavior. We use Bayesian Networks (BNs) - probabilistic graphical models consisting of a set of variables and their conditional dependencies - for fusion/classification due to their interpretable nature, ability to tolerate sparse or imbalanced data, and resistance to overfitting. Our technique utilizes a small collection of expert-informed cyber intrusion indicators to create a hybrid detection system that combines data-driven training with expert knowledge to form a host-based intrusion detection system (HIDS). We demonstrate a software pipeline for efficiently generating and evaluating various BN classifier architectures for specific datasets and discuss explainability benefits thereof.
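As a simplified, hedged illustration of the evidence-fusion idea (a naive-Bayes special case of a Bayesian network, not the authors' pipeline), the Python sketch below combines a handful of hypothetical expert-informed indicators into a posterior probability that observed host behavior is malicious; the indicator names, likelihoods, and base rate are illustrative only.

```python
# Simplified evidence fusion: a naive-Bayes special case of a Bayesian network.
# Indicator names and probabilities are illustrative placeholders.
import math

PRIOR_MALICIOUS = 0.01  # assumed base rate of malicious activity

# For each indicator: (P(observed | malicious), P(observed | benign))
INDICATOR_LIKELIHOODS = {
    "new_scheduled_task":   (0.60, 0.05),
    "credential_dump_tool": (0.40, 0.001),
    "unusual_outbound_tls": (0.70, 0.10),
}


def posterior_malicious(observed):
    """Fuse a set of observed indicator names into P(malicious | evidence)."""
    log_mal = math.log(PRIOR_MALICIOUS)
    log_ben = math.log(1.0 - PRIOR_MALICIOUS)
    for name, (p_mal, p_ben) in INDICATOR_LIKELIHOODS.items():
        if name in observed:
            log_mal += math.log(p_mal)
            log_ben += math.log(p_ben)
        else:
            log_mal += math.log(1.0 - p_mal)
            log_ben += math.log(1.0 - p_ben)
    # Normalize in log space for numerical stability.
    m = max(log_mal, log_ben)
    num = math.exp(log_mal - m)
    return num / (num + math.exp(log_ben - m))


print(posterior_malicious({"credential_dump_tool", "unusual_outbound_tls"}))
```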
Conference Record of the IEEE Photovoltaic Specialists Conference
A method is presented to detect clear-sky periods for plane-of-array, time-averaged irradiance data that is based on the algorithm originally described by Reno and Hansen. We show this new method improves the state-of-the-art by providing accurate detection at longer data intervals, and by detecting clear periods in plane-of-array data, which is novel. We illustrate how accurate determination of clear-sky conditions helps to eliminate data noise and bias in the assessment of long-term performance of PV plants.
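The following Python sketch illustrates the general window-based comparison idea behind such algorithms, flagging periods where measured irradiance statistics track a clear-sky model within thresholds. It is a simplified stand-in rather than the presented method, and the window length and threshold values are arbitrary.

```python
import numpy as np


def flag_clear_windows(measured, modeled_clear, window=10,
                       mean_tol=0.10, var_tol=0.02):
    """Flag samples whose window statistics track a clear-sky model.

    measured, modeled_clear : 1-D irradiance arrays [W/m^2] on a uniform time grid
    window                  : number of samples per evaluation window
    mean_tol                : allowed relative error of the window mean
    var_tol                 : allowed difference in sample-to-sample variability,
                              relative to the clear-sky window mean
    """
    measured = np.asarray(measured, dtype=float)
    modeled_clear = np.asarray(modeled_clear, dtype=float)
    clear = np.zeros(measured.size, dtype=bool)
    for start in range(0, measured.size - window + 1):
        m = measured[start:start + window]
        c = modeled_clear[start:start + window]
        mean_ok = abs(m.mean() - c.mean()) <= mean_tol * c.mean()
        var_ok = abs(np.std(np.diff(m)) - np.std(np.diff(c))) <= var_tol * c.mean()
        if mean_ok and var_ok:
            clear[start:start + window] = True  # whole window passes both checks
    return clear
```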
Proceedings of ASME 2023 17th International Conference on Energy Sustainability, ES 2023
This study investigated the durability of four high temperature coatings for use as a Gardon gauge foil coating. Failure modes and effects analysis have identified Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity properties were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest rapid high temperature cycling did not significantly impact coating optical properties and physical state. In contrast, prolonged exposure of coatings to high temperatures degraded coating optical properties and physical state. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6-24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provide the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest flux gauge foil coatings could benefit from long duration high temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high flux and high temperature applications.
Proceedings of 13th Nuclear Plant Instrumentation, Control and Human-Machine Interface Technologies, NPIC and HMIT 2023
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
Conference Proceedings of the Society for Experimental Mechanics Series
While research in multiple-input/multiple-output (MIMO) random vibration testing techniques, control methods, and test design has been increasing in recent years, research into specifications for these types of tests has not kept pace. This is perhaps due to the very particular requirement for most MIMO random vibration control specifications – they must be narrowband, fully populated cross-power spectral density matrices. This requirement puts constraints on the specification derivation process and restricts the application of many of the traditional techniques used to define single-axis random vibration specifications, such as averaging or straight-lining. This requirement also restricts the applicability of MIMO testing by requiring a very specific and rich field test data set to serve as the basis for the MIMO test specification. Here, frequency-warping and channel averaging techniques are proposed to soften the requirements for MIMO specifications with the goal of expanding the applicability of MIMO random vibration testing and enabling tests to be run in the absence of the necessary field test data.
IEEE International Conference on Plasma Science
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
AIAA SciTech Forum and Exposition, 2023
Measurements of gas-phase temperature and pressure in hypersonic flows are important for understanding gas-phase fluctuations, which can drive dynamic loading on model surfaces, and for studying fundamental compressible flow turbulence. To achieve this capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied in Sandia National Laboratories’ cold-flow hypersonic wind tunnel facility. Measurements were performed for tunnel freestream temperatures of 42–58 K and pressures of 1.5–2.2 Torr. The CARS measurement volume was translated in the flow direction during a 30-second tunnel run using a single computer-controlled translation stage. After broadband femtosecond laser excitation, the rotational Raman coherence was probed twice, once at an early time where the collisional environment has not yet affected the Raman coherence, and again at a later time after the collisional environment has led to significant dephasing of the Raman coherence. The gas-phase temperature was obtained primarily from the early-probe CARS spectra, while the gas-phase pressure was obtained primarily from the late-probe CARS spectra. Challenges in implementing fs CARS in this facility, such as changes in the nonresonant spectrum at different measurement locations, are discussed.
2023 IEEE PES Innovative Smart Grid Technologies Latin America, ISGT-LA 2023
Due to their increased levels of reliability, meshed low-voltage (LV) grid and spot networks are common topologies for supplying power to dense urban areas and critical customers. Protection schemes for LV networks often use highly sensitive reverse current trip settings to detect faults in the medium-voltage system. As a result, interconnecting even low levels of distributed energy resources (DERs) can impact the reliability of the protection system and cause nuisance tripping. This work analyzes the possibility of modifying the reverse current relay trip settings to increase the DER hosting capacity of LV networks without impacting fault detection performance. The results suggest that adjusting relay settings can significantly increase DER hosting capacity on LV networks without adverse effects, and that existing guidance on connecting DERs to secondary networks, such as that contained in IEEE Std 1547-2018, could potentially be modified to allow higher DER deployment levels.
Proceedings of SPIE - The International Society for Optical Engineering
Despite state-of-the-art deep learning-based computer vision models achieving high accuracy on object recognition tasks, x-ray screening of baggage at checkpoints is largely performed by hand. Part of the challenge in automation of this task is the relatively small amount of available labeled training data. Furthermore, realistic threat objects may have forms or orientations that do not appear in any training data, and radiographs suffer from high amounts of occlusion. Using deep generative models, we explore data augmentation techniques to expand the intra-class variation of threat objects synthetically injected into baggage radiographs using openly available baggage x-ray datasets. We also benchmark the performance of object detection algorithms on raw and augmented data.
ASHRAE Transactions
Puerto Rico faced a double strike from hurricanes Irma and Maria in 2017. The resulting damage required a comprehensive rebuild of electric infrastructure. There are plans and pilot projects to rebuild with microgrids to increase resilience. This paper provides a techno-economic analysis technique and a case study of a potential future community in Puerto Rico that combines probabilistic microgrid design analysis with tiered circuits in building energy modeling. Tiered circuits in buildings allow electric load reduction via remote disconnection of non-critical circuits during an emergency. When coupled to a microgrid, tiered circuitry can reduce the chances of a microgrid's storage and generation resources being depleted. The analysis technique is applied to show 1) approximate cost savings due to a tiered circuit structure and 2) approximate cost savings gained by simultaneously considering resilience and sustainability constraints in the microgrid optimization. The analysis technique uses a resistive-capacitive thermal model with load profiles for four tiers (tiers 1-3 and non-critical loads). Three analyses were conducted using two tools: 1) the open-source software Tiered Energy in Buildings and 2) the Microgrid Design Toolkit. For a fossil fuel based microgrid, cost savings of 30% of the total microgrid costs of 1.18 million USD were calculated, where the non-tiered case keeps all loads 99.9% available and the tiered case keeps tier 1 at 99.9%, tier 2 at 95%, and tier 3 at 80% availability, with no requirement on non-critical loads. The same comparison for a sustainable microgrid showed 8% cost savings on a 5.10 million USD microgrid due to tiered circuits. The results also showed 6-7% cost savings when our analysis technique optimizes sustainability and resilience simultaneously in comparison to doing microgrid resilience analysis and renewables net present value analysis independently. Though highly specific to our case study, similar assessments using our analysis technique can elucidate the value of tiered circuits and of simultaneous consideration of sustainability and resilience in other locations.
American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP
The V31 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is 24 lb (11 kg) TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lb (11 kg) of Composition C-4 (30 lb [14 kg] TNT equivalent). This test was considered the maximum load case, based on modeling and simulation methods performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge, located central to the vessel interior of 19.2 lb (8.72 kg) of Composition C-4 (24 lb [11 kg] TNT equivalent). Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb (908 g) each, distributed evenly inside the vessel (totaling 19.2 lb [8.72 kg] of C-4, or 24 lb [11 kg] TNT equivalent). All vessel acceptance criteria were met.
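As a consistency check on the quoted charge weights, the reported C-4 masses and TNT-equivalent ratings imply a TNT-equivalence factor of about 1.25 for Composition C-4:

\[ \frac{30\ \text{lb TNT}}{24\ \text{lb C-4}} = \frac{24\ \text{lb TNT}}{19.2\ \text{lb C-4}} = 1.25 . \]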
SIAM Journal on Scientific Computing
Advanced finite-element discretizations and preconditioners for models of poroelasticity have attracted significant attention in recent years. The equations of poroelasticity offer significant challenges in both areas, due to the potentially strong coupling between unknowns in the system, saddle-point structure, and the need to account for wide ranges of parameter values, including limiting behavior such as incompressible elasticity. This paper was motivated by an attempt to develop monolithic multigrid preconditioners for the discretization developed in [C. Rodrigo et al., Comput. Methods Appl. Mech. Engrg., 341 (2018), pp. 467-484]; we show here why this is a difficult task and, as a result, we modify the discretization in [Rodrigo et al.] through the use of a reduced-quadrature approximation, yielding a more “solver-friendly” discretization. Local Fourier analysis is used to optimize parameters in the resulting monolithic multigrid method, allowing a fair comparison between the performance and costs of methods based on Vanka and Braess-Sarazin relaxation. Numerical results are presented to validate the local Fourier analysis predictions and demonstrate efficiency of the algorithms. Finally, a comparison to existing block-factorization preconditioners is also given.
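For orientation only, the classical two-field Biot system below illustrates the displacement-pressure coupling and saddle-point structure referred to above; it is a generic statement of the model class, not the specific three-field discretization of Rodrigo et al.:

\[ -\nabla\cdot\bigl(2\mu\,\varepsilon(\mathbf{u}) + \lambda(\nabla\cdot\mathbf{u})\,I\bigr) + \alpha\,\nabla p = \mathbf{f}, \qquad \partial_t\!\bigl(c_0\,p + \alpha\,\nabla\cdot\mathbf{u}\bigr) - \nabla\cdot(\kappa\,\nabla p) = g, \]

with displacement u, pore pressure p, Lamé parameters μ and λ, Biot coefficient α, storage coefficient c0, and hydraulic conductivity κ; the incompressible-elasticity limit noted above corresponds to λ → ∞.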
Journal of Physics: Conference Series
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own very-closely-spaced towers avoid these disadvantages but create a significant one of their own: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to design a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions from paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with hub-to-hub separation distances of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP
Austenitic stainless steels are used in high-pressure hydrogen containment infrastructure for their resistance to hydrogen embrittlement. Applications for the use of austenitic stainless steels include pressure vessels, tubing, piping, valves, fittings, and other piping components. Despite their resistance to brittle behavior in the presence of hydrogen, austenitic stainless steels can exhibit degraded fracture performance. The mechanisms of hydrogen-assisted fracture, however, remain elusive, which has motivated continued research on these alloys. There are two principal approaches to evaluating the influence of gaseous hydrogen on mechanical properties: internal hydrogen and external hydrogen. The austenite phase has high solubility and low diffusivity of hydrogen at room temperature, which enables introduction of hydrogen into the material through thermal precharging at elevated temperature and pressure, a condition referred to as internal hydrogen. H-precharged material can subsequently be tested in ambient conditions. Alternatively, mechanical testing can be performed while test coupons are immersed in gaseous hydrogen, thereby evaluating the effects of external hydrogen on property degradation. The slow diffusivity of hydrogen in austenite at room temperature can often be a limiting factor in external hydrogen tests and may not properly characterize lower-bound fracture behavior in components exposed to hydrogen for long time periods. In this study, the differences between internal and external hydrogen environments are evaluated in the context of fracture resistance measurements. Fracture testing was performed on two different forged austenitic stainless steel alloys (304L and XM-11) in three different environments: 1) non-charged and tested in gaseous hydrogen at a pressure of 1,000 bar (external H2), 2) hydrogen precharged and tested in air (internal H), and 3) hydrogen precharged and tested in 1,000 bar H2 (internal H + external H2). For all environments, elastic-plastic fracture measurements were conducted to establish J-R curves following the methods of ASTM E1820. Following fracture testing, fracture surfaces were examined to reveal predominant fracture mechanisms for the different conditions and to characterize differences (and similarities) in the macroscale fracture processes associated with these environmental conditions.
2023 IEEE Design Methodologies Conference, DMC 2023
High-reliability (Hi-Rel) electronics for mission-critical applications are handled with extreme care; stress testing upon full assembly can increase the likelihood of degrading these systems before their deployment. Moreover, novel material parts, such as wide bandgap semiconductor devices, tend to have more complicated fabrication processing needs, which could ultimately result in larger part variability or potential defects. Therefore, an intelligent screening and inspection technique for electronic parts, in particular gallium nitride (GaN) power transistors, is presented in this paper. We present a machine-learning-based, non-intrusive technique that can enhance part-selection decisions by categorizing part samples according to the population's expected electrical characteristics. This technique provides relevant information about GaN HEMT device characteristics without having to operate all of these devices in the high-current region of the transfer and output characteristics, lowering the risk of damaging the parts prematurely. The proposed non-intrusive technique injects small-signal pulse width modulation (PWM) at various frequencies, ranging from 10 kHz to 500 kHz, into the transistor terminals; the corresponding output signals are observed and used as the training dataset. Unsupervised clustering with K-means and feature dimensionality reduction through principal component analysis (PCA) have been used to correlate a population of GaN HEMT transistors to the expected mean of the devices' electrical characteristic performance.
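A minimal sketch of the clustering step described above, assuming the measured PWM responses have already been reduced to a per-device feature matrix; the file name, component count, and cluster count are illustrative placeholders, not values from the paper.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# One row per GaN HEMT sample, one column per feature extracted from the
# measured responses to 10 kHz - 500 kHz PWM excitation (hypothetical file).
X = np.load("gan_hemt_pwm_features.npy")

# Standardize, reduce dimensionality with PCA, then cluster with K-means.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Devices sharing a cluster label are expected to share similar electrical
# characteristics, informing part selection without high-current stress tests.
print(np.bincount(labels))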
Frontiers in Optics: Proceedings Frontiers in Optics + Laser Science 2023, FiO, LS 2023
Complex angle theory can offer new fundamental insights into refraction at an absorptive interface. In this work, we propose a new method to induce isofrequency opening via the addition of scattering in a dual-interface system.
IEEE International Conference on Plasma Science
A challenge for TW-class accelerators, such as Sandia's Z machine, is efficient power coupling due to current loss in the final power feed. It is also important to understand how such losses will scale to larger next-generation pulsed power (NGPP) facilities. While modeling efforts are studying these power flow losses, it is important to have diagnostics that can experimentally measure plasmas in these conditions and help inform simulations. The plasmas formed in the power flow region can be challenging to diagnose due to both limited lines of sight and their significantly lower temperatures and densities compared to typical plasmas studied on Z. This necessitates special diagnostic development to accurately measure the power flow plasma on Z.
Physics of Fluids
Kolmogorov's theory of turbulence assumes that the small-scale turbulent structures in the energy cascade are universal and are determined by the energy dissipation rate and the kinematic viscosity alone. However, thermal fluctuations, absent from the continuum description, terminate the energy cascade near the Kolmogorov length scale. Here, we propose a simple superposition model to account for the effects of thermal fluctuations on small-scale turbulence statistics. For compressible Taylor-Green vortex flow, we demonstrate that the superposition model in conjunction with data from direct numerical simulation of the Navier-Stokes equations yields spectra and structure functions that agree with the corresponding quantities computed from the direct simulation Monte Carlo method of molecular gas dynamics, verifying the importance of thermal fluctuations in the dissipation range.
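For reference, the Kolmogorov length scale invoked above is set by the dissipation rate ε and kinematic viscosity ν alone,

\[ \eta = \left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4}, \]

which is the scale near which, as noted above, thermal fluctuations terminate the continuum energy cascade.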
Conference Proceedings - IEEE SOUTHEASTCON
The error detection performance of cyclic redundancy check (CRC) codes combined with bit framing in digital serial communication systems is evaluated. Advantages and disadvantages of the combined method are treated in light of the probability of undetected errors. It is shown that bit framing can increase the burst error detection of the CRC, but it can also adversely affect CRC random error detection performance. To quantify the effect of bit framing on CRC error detection, the concept of error "exposure" is introduced. Our investigations lead us to propose resilient generator polynomials that, when combined with bit framing, can result in improved CRC error detection performance at no additional implementation cost. Example results are generated for short codewords showing that proper choice of CRC generator polynomial can improve error detection performance when combined with bit framing. The implication is that CRC combined with bit framing can reduce the probability of undetected errors even under high error rate conditions.
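As a concrete illustration of the quantity being evaluated, the sketch below estimates the fraction of undetected errors for a short codeword by sampling random error patterns against a fixed generator polynomial. The polynomial, codeword length, and error model are placeholders; bit framing and the proposed resilient polynomials are not modeled here.

import random

def divisible_by_generator(bits, poly):
    # True if the bit pattern, viewed as a polynomial over GF(2), is divisible
    # by the generator polynomial, i.e., it has zero CRC remainder.
    work = list(bits)
    for i in range(len(work) - len(poly) + 1):
        if work[i]:
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return not any(work[-(len(poly) - 1):])

POLY = [1, 0, 0, 0, 0, 0, 1, 1, 1]   # x^8 + x^2 + x + 1 (illustrative CRC-8)
N, TRIALS, BER = 64, 200_000, 0.05   # codeword length, samples, bit error rate

undetected = 0
rng = random.Random(0)
for _ in range(TRIALS):
    # Because CRC codewords are exactly the multiples of the generator, an
    # error pattern goes undetected iff it is itself divisible by the generator.
    error = [1 if rng.random() < BER else 0 for _ in range(N)]
    if any(error) and divisible_by_generator(error, POLY):
        undetected += 1
print("fraction of sampled trials with an undetected error:", undetected / TRIALS)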
Proceedings of the Thermal and Fluids Engineering Summer Conference
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests the buy-down of risk by considering the fireball is minimal when considering the blast hazards. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
Proceedings of SPIE the International Society for Optical Engineering
Event-based sensors are a novel sensing technology that captures the dynamics of a scene via pixel-level change detection. This technology operates with high speed (>10 kHz), low latency (10 µs), low power consumption (<1 W), and high dynamic range (120 dB). Compared to conventional, frame-based architectures that consistently report data for each pixel at a given frame rate, event-based sensor pixels only report data if a change in pixel intensity occurred. This affords the possibility of dramatically reducing the data reported in bandwidth-limited environments (e.g., remote sensing), and thus the data that must be processed, while still recovering significant events. Degraded visual environments, such as those generated by fog, often hinder situational awareness by decreasing optical resolution and transmission range via random scattering of light. To respond to this challenge, we present the deployment of an event-based sensor in a controlled, experimentally generated, well-characterized degraded visual environment (a fog analogue) for detection of a modulated signal and comparison of data collected from an event-based sensor and from a traditional framing sensor.
Proceedings - Electronic Components and Technology Conference
This paper presents a die-embedded glass interposer with minimum warpage for 5G/6G applications. The interposer achieves high integration with low-loss interconnects by embedding multiple chips in the same glass substrate and interconnecting the chips through redistribution layers (RDL). Novel processes for cavity creation, multi-die embedding, carrier-less RDL build-up, and heat spreader attachment are proposed and demonstrated in this work. The performance of the interposer from 1 GHz to 110 GHz is evaluated. This work provides an advanced packaging solution for low-loss die-to-die and die-to-package interconnects, which is essential to high-performance wireless system integration.
Conference Proceedings of the Society for Experimental Mechanics Series
Unlike traditional base excitation vibration qualification testing, multi-axis vibration testing methods can be significantly faster and more accurate. Here, a 12-shaker multiple-input/multiple-output (MIMO) test method called intrinsic connection excitation (ICE) is developed and assessed for use on an example aerospace component. In this study, the ICE technique utilizes 12 shakers, 1 for each boundary condition attachment degree of freedom to the component, specially designed fixtures, and MIMO control to provide an accurate set of loads and boundary conditions during the test. Acceleration, force, and voltage control provide insight into the viability of this testing method. System field test and ICE test results are compared to traditional single degree of freedom specification development and testing. Results indicate the multi-shaker ICE test provided a much more accurate replication of system field test response compared with single degree of freedom testing.
Proceedings - IEEE Symposium on Security and Privacy
Modern Industrial Control Systems (ICS) attacks evade existing tools by using knowledge of ICS processes to blend their activities with benign Supervisory Control and Data Acquisition (SCADA) operation, causing physical-world damage. We present Scaphy to detect ICS attacks in SCADA by leveraging the unique execution phases of SCADA to identify the limited set of legitimate behaviors that control the physical world in different phases, which differentiates them from an attacker's activities. For example, it is typical for SCADA to set up ICS device objects during initialization, but anomalous during process-control. To extract unique behaviors of SCADA execution phases, Scaphy first leverages open ICS conventions to generate a novel physical process dependency and impact graph (PDIG) to identify disruptive physical states. Scaphy then uses PDIG to inform a physical process-aware dynamic analysis, whereby code paths of SCADA process-control execution are induced to reveal API call behaviors unique to legitimate process-control phases. Using this established behavior, Scaphy selectively monitors an attacker's physical-world-targeted activities that violate legitimate process-control behaviors. We evaluated Scaphy at a U.S. national lab ICS testbed environment. Using diverse ICS deployment scenarios and attacks across 4 ICS industries, Scaphy achieved 95% accuracy and 3.5% false positives (FP), compared to 47.5% accuracy and 25% FP for existing work. We analyze Scaphy's resilience to futuristic attacks where the attacker knows our approach.
AIAA SciTech Forum and Exposition, 2023
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the lifetime decay of the phosphor Gd2O2S:Tb for temperature sensitivity after excitation from a pulsed x-ray source. These results are compared to the lifetime decays found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable sensitivity to temperature between both excitation sources over a temperature range from 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
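For context, lifetime-based phosphor thermometry typically fits the post-excitation emission decay to a single-exponential form and maps the fitted lifetime to temperature through a calibration,

\[ I(t) = I_0\, e^{-t/\tau} + C, \qquad \tau = \tau(T), \]

so the temperature sensitivity compared above for x-ray and UV excitation is the sensitivity of the fitted lifetime τ to T over the stated 21 °C to 140 °C range.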
Nuclear Technology
The Information Harm Triangle (IHT) is a novel approach that aims to adapt intuitive engineering concepts to simplify defense in depth for instrumentation and control (I&C) systems at nuclear power plants. This approach combines digital harm, real-world harm, and unsafe control actions (UCAs) into a single graph named “Information Harm Triangle.” The IHT is based on the postulation that the consequences of cyberattacks targeting I&C systems can be expressed in terms of two orthogonal components: a component representing the magnitude of data harm (DH) (i.e., digital information harm) and a component representing physical information harm (PIH) (i.e., real-world harm, e.g., an inadvertent plant trip). The magnitude of the severity of the physical consequence is the aspect of risk that is of concern. The sum of these two components represents the total information harm. The IHT intuitively informs risk-informed cybersecurity strategies that employ independent measures that either act to prevent, reduce, or mitigate DH or PIH. Another aspect of the IHT is that the DH can result in cyber-initiated UCAs that result in severe physical consequences. The orthogonality of DH and PIH provides insights into designing effective defense in depth. The IHT can also represent cyberattacks that have the potential to impede, evade, or compromise countermeasures from taking appropriate action to reduce, stop, or mitigate the harm caused by such UCAs. Cyber-initiated UCAs transform DH to PIH.
Minerals, Metals and Materials Series
The structure-property linkage is one of the two most important relationships in materials science besides the process-structure linkage, especially for metals and polycrystalline alloys. The stochastic nature of microstructures calls for a robust approach to reliably address the linkage. As such, uncertainty quantification (UQ) plays an important role in this regard and cannot be ignored. To probe the structure-property linkage, many multi-scale integrated computational materials engineering (ICME) tools have been proposed and developed over the last decade to accelerate the material design process in the spirit of the Materials Genome Initiative (MGI), notably crystal plasticity finite element modeling (CPFEM) and phase-field simulations. Machine learning (ML) methods, including deep learning and physics-informed/-constrained approaches, can also be conveniently applied to approximate the computationally expensive ICME models, allowing one to navigate both structure and property spaces efficiently. Since UQ also plays a crucial role in verification and validation for both ICME and ML models, it is important to include UQ in the picture. In this paper, we summarize a few of our recent research efforts addressing UQ aspects of homogenized properties using CPFEM in a big-picture context.
CIRP Annals
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with suitable properties and quality for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented from metal systems of varied workability, and strip product scale in terms of size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for production of commercial strip for electric motor applications and battery electrodes are discussed.
Conference Proceedings of the Society for Experimental Mechanics Series
Multiple Input Multiple Output (MIMO) vibration testing provides the capability to expose a system to a field environment in a laboratory setting, saving both time and money by mitigating the need to perform multiple and costly large-scale field tests. However, MIMO vibration test design is not straightforward, often relying on engineering judgment and multiple test iterations to determine the proper selection of response Degrees of Freedom (DOF) and input locations that yield a successful test. This work investigates two DOF selection techniques for MIMO vibration testing to assist with test design: an iterative algorithm introduced in previous work and an Optimal Experiment Design (OED) approach. The iterative approach downselects the control set by removing DOF that have the smallest impact on overall error given a target Cross Power Spectral Density (CPSD) matrix and laboratory Frequency Response Function (FRF) matrix. The OED approach is formulated as a convex optimization problem using the laboratory FRF matrix and solved with a gradient-based optimization algorithm that seeks a set of weighted measurement DOF that minimize a measure of model prediction uncertainty. The DOF selection approaches are used to design MIMO vibration tests using candidate finite element models and simulated target environments. The results are generalized and compared to exemplify the quality of the MIMO test using the selected DOF.
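A minimal sketch of the iterative downselection idea summarized above: greedily remove the control DOF whose removal least degrades reconstruction of the target response, given a laboratory FRF matrix and target CPSD matrix. The single-frequency treatment, error metric, and stopping criterion here are simplifications, not the paper's exact formulation.

import numpy as np

def greedy_downselect(H, Sxx_target, keep):
    # H: (n_dof x n_input) laboratory FRF at one frequency line (simplified).
    # Sxx_target: (n_dof x n_dof) target cross-power spectral density matrix.
    active = list(range(H.shape[0]))
    while len(active) > keep:
        errors = []
        for d in active:
            trial = [i for i in active if i != d]
            Hr_pinv = np.linalg.pinv(H[trial, :])
            # Inputs estimated from the reduced control set, then used to
            # reconstruct the response CPSD at all DOF.
            Sqq = Hr_pinv @ Sxx_target[np.ix_(trial, trial)] @ Hr_pinv.conj().T
            errors.append(np.linalg.norm(H @ Sqq @ H.conj().T - Sxx_target))
        # Drop the DOF whose removal increases the error the least.
        active.pop(int(np.argmin(errors)))
    return active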
Proceedings of the 16th Hypervelocity Impact Symposium, HVIS 2022
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capability to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front face chamfer angle has the largest influence on stability, with low angles being more stable.
2023 IEEE PES Innovative Smart Grid Technologies Latin America, ISGT-LA 2023
The widespread adoption of residential solar PV requires distribution system studies to ensure that the addition of solar PV at a customer location does not violate system constraints; the maximum addition that can be accommodated is referred to as the locational hosting capacity (HC). These model-based analyses are prone to error due to their dependence on the accuracy of the system information. Model-free approaches to estimating the solar PV hosting capacity for a customer can be a good alternative, as their accuracy does not depend on detailed system information. In this paper, an Adaptive Boosting (AdaBoost) algorithm is deployed to utilize the statistical properties (mean, minimum, maximum, and standard deviation) of the customer's historical data (real power, reactive power, voltage) as inputs to estimate the voltage-constrained PV HC for the customer. A baseline comparison approach is also built that utilizes just the maximum voltage of the customer to predict PV HC. The results show that the ensemble-based AdaBoost algorithm outperformed the proposed baseline approach. The developed methods are also compared with and validated against existing state-of-the-art model-free PV HC estimation methods.
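A minimal sketch of the model-free workflow described above, using scikit-learn's AdaBoost regressor on per-customer statistical features; the file names, column names, and hyperparameters are illustrative placeholders.

import pandas as pd
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split

# Hypothetical inputs: per-customer time series of real power, reactive power,
# and voltage, plus voltage-constrained hosting-capacity labels from offline studies.
ts = pd.read_csv("customer_timeseries.csv")      # columns: customer, P, Q, V
labels = pd.read_csv("customer_hc_labels.csv")   # columns: customer, hc_kw

# Statistical features per customer: mean, min, max, and std of P, Q, and V.
feats = ts.groupby("customer")[["P", "Q", "V"]].agg(["mean", "min", "max", "std"])
feats.columns = ["_".join(col) for col in feats.columns]    # e.g., "V_max"
data = feats.join(labels.set_index("customer")).dropna()

X = data.drop(columns="hc_kw").values
y = data["hc_kw"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = AdaBoostRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))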
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Experiments were conducted in regular waves on a wave tank model of a bottom-raised oscillating surge wave energy converter (OSWEC). The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study, in which system parameters, including damping and added mass values, are varied. The likely contributors to the differences between predictions and experiments are attributed to tank reflections, standing waves that can occur in long, narrow wave tanks, as well as the thin plate assumption employed in the analytical approach.
Proceedings of the ASME Design Engineering Technical Conference
Computational simulation allows scientists to explore, observe, and test physical regimes thought to be unattainable. Validation and uncertainty quantification play crucial roles in extrapolating the use of physics-based models. Bayesian analysis provides a natural framework for incorporating the uncertainties that undeniably exist in computational modeling. However, the ability to perform quality Bayesian and uncertainty analyses is often limited by the computational expense of first-principles physics models. In the absence of a reliable low-fidelity physics model, phenomenological surrogate or machine learned models can be used to mitigate this expense; however, these data-driven models may not adhere to known physics or properties. Furthermore, the interactions of complex physics in high-fidelity codes lead to dependencies between quantities of interest (QoIs) that are difficult to quantify and capture when individual surrogates are used for each observable. Although this is not always problematic, predicting multiple QoIs with a single surrogate preserves valuable insights regarding the correlated behavior of the target observables and maximizes the information gained from available data. A method of constructing a Gaussian Process (GP) that emulates multiple QoIs simultaneously is presented. As an exemplar, we consider Magnetized Liner Inertial Fusion, a fusion concept that relies on the direct compression of magnetized, laser-heated fuel by a metal liner to achieve thermonuclear ignition. Magneto-hydrodynamics (MHD) codes calculate diagnostics to infer the state of the fuel during experiments, which cannot be measured directly. The calibration of these diagnostic metrics is complicated by sparse experimental data and the expense of high-fidelity neutron transport models. The development of an appropriate surrogate raises long-standing issues in modeling and simulation, including calibration, validation, and uncertainty quantification. The performance of the proposed multi-output GP surrogate model, which preserves correlations between QoIs, is compared to the standard single-output GP for a 1D realization of the MagLIF experiment.
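One common way to build a multi-output GP that retains correlations between QoIs is an intrinsic coregionalization (separable) covariance; the numpy sketch below is a generic toy construction under simplifying assumptions (shared inputs, fixed hyperparameters, synthetic data) rather than the surrogate developed here.

import numpy as np

def rbf(X1, X2, ell=0.3):
    # Squared-exponential kernel over the input space.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 2))                                  # training inputs
Y = np.column_stack([np.sin(X @ [3.0, 1.0]),                   # two correlated toy QoIs
                     np.cos(X @ [3.0, 1.0])])

B = np.cov(Y, rowvar=False) + 1e-6 * np.eye(Y.shape[1])        # coregionalization (output) covariance
K = np.kron(B, rbf(X, X)) + 1e-6 * np.eye(B.shape[0] * X.shape[0])

# Joint posterior mean over both outputs at new inputs.
Xs = rng.uniform(size=(5, 2))
Ks = np.kron(B, rbf(Xs, X))
alpha = np.linalg.solve(K, Y.T.reshape(-1))                    # outputs stacked to match kron ordering
mean = (Ks @ alpha).reshape(Y.shape[1], -1).T                  # rows: test points, columns: QoIs
print(mean)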
AIAA SciTech Forum and Exposition, 2023
This paper describes the methodology of designing a replacement blade tip and winglet for a wind turbine blade to demonstrate the potential of additive manufacturing for wind energy. The team will later field-demonstrate this additively manufactured, system-integrated tip (AMSIT) on a wind turbine. The blade tip aims to reduce the cost of wind energy by improving aerodynamic performance and reliability, while reducing transportation costs. This paper focuses on the design and modeling of a winglet for increased power production while maintaining acceptable structural loads of the original Vestas V27 blade design. A free-wake vortex model, WindDVE, was used for the winglet design analysis. A summary of the aerodynamic design process is presented along with a case study of a specific design.
Proceedings - International Symposium on Discharges and Electrical Insulation in Vacuum, ISDEIV
This presentation describes a new effort to better understand insulator flashover in high-current, high-voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashover events that initiate at the anode triple junction (anode-vacuum-dielectric).
2023 IEEE 24th Workshop on Control and Modeling for Power Electronics, COMPEL 2023
A high altitude electromagnetic pulse (HEMP) or other similar geomagnetic disturbance (GMD) has the potential to severely impact the operation of large-scale electric power grids. By introducing low-frequency common-mode (CM) currents, these events can impact the performance of key system components such as large power transformers. In this work, a solid-state transformer (SST) that can replace susceptible equipment and improve grid resiliency by safely absorbing these CM insults is described. An overview of the proposed SST power electronics and controls architecture is provided, a system model is developed, and the performance of the SST in response to a simulated CM insult is evaluated. Compared to a conventional magnetic transformer, the SST is found to recover quickly from the insult while maintaining nominal ac input/output behavior.
Conference Proceedings of the Society for Experimental Mechanics Series
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electric signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology. Therefore, a wide range of analyses is required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated structure was studied using high-fidelity simulations. Structural dynamics simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics. Subsequent multi-physics simulations are discussed to relate the contact mechanics associated with the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized by data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in both the simulation and experimental approaches, so that the relationship between the two could be established.
International Conference on Nuclear Engineering, Proceedings, ICONE
Prescriptive approaches for the cybersecurity of digital nuclear instrumentation and control (I&C) systems can be cumbersome and costly. These considerations are of particular concern for advanced reactors that implement digital technologies for monitoring, diagnostics, and control. A risk-informed performance-based approach is needed to enable the efficient design of secure digital I&C systems for nuclear power plants. This paper presents a tiered cybersecurity analysis (TCA) methodology as a graded approach for cybersecurity design. The TCA is a sequence of analyses that align with the plant, system, and component stages of design. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant's safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Tier 3 is not performed in this analysis because of the design maturity required for this tier of analysis.
Proceedings of the Combustion Institute
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
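For readers less familiar with LAS, the column quantities reported above enter through the Beer-Lambert relation for spectral absorbance along the line of sight, shown here in its uniform-path form for orientation (the dual-zone model described above generalizes this to two regions):

\[ \alpha(\nu) = -\ln\!\left(\frac{I_t(\nu)}{I_0(\nu)}\right) = S(T)\,\phi(\nu; T, P)\, P_{\mathrm{CO}}\, L, \]

where S is the transition linestrength, φ the lineshape function, P_CO the CO partial pressure, and L the optical path length; the product P_CO·L corresponds to the CO column quantity inferred from the measured absorbance.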
Lecture Notes in Networks and Systems
The DevOps movement, which aims to accelerate the continuous delivery of high-quality software, has taken a leading role in reshaping the software industry. Likewise, there is growing interest in applying DevOps tools and practices in the domains of computational science and engineering (CSE) to meet the ever-growing demand for scalable simulation and analysis. Translating insights from industry to research computing, however, remains an ongoing challenge; DevOps for science and engineering demands adaptation and innovation in those tools and practices. There is a need to better understand the challenges faced by DevOps practitioners in CSE contexts in bridging this divide. To that end, we conducted a participatory action research study to collect and analyze the experiences of DevOps practitioners at a major US national laboratory through the use of storytelling techniques. We share lessons learned and present opportunities for future investigation into DevOps practice in the CSE domain.
IEEE Radiation Effects Data Workshop
We present the SEU sensitivity and SEL results from proton and heavy ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
Conference Record of the IEEE Photovoltaic Specialists Conference
Subhourly changes in solar irradiance can lead to energy models being biased high if realistic distributions of irradiance values are not reflected in the resource data and model. This is particularly true in solar facility designs with high inverter loading ratios (ILRs). When resource data with sufficient temporal and spatial resolution is not available for a site, synthetic variability can be added to the data that is available in an attempt to address this issue. In this work, we demonstrate the use of anonymized commercial resource datasets with synthetic variability and compare results with previous estimates of model bias due to inverter clipping and increasing ILR.
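For reference, the inverter loading ratio referred to above is the ratio of installed DC array capacity to AC inverter capacity,

\[ \mathrm{ILR} = \frac{P_{\mathrm{DC}}}{P_{\mathrm{AC}}}, \]

so at high ILR the inverter clips more often during irradiance peaks, and resource data that smooths out subhourly variability understates clipping losses, biasing modeled energy high.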
AIAA SciTech Forum and Exposition, 2023
Here we examine models for particle curtain dispersion using drag-based formalisms and their connection to streamwise pressure difference closures. Focusing on drag models, we specifically demonstrate that scaling arguments developed in DeMauro et al. [1] using early-time drag modeling can be extended to include late-time particle curtain dispersion behavior by weighting the dynamic portion of the drag relative velocity, e.g., (Formula Presented), by the inverse of the particle volume fraction to the ¼ power. The additional parameter α introduced in this scaling is related to the model drag parameters by employing an early-time/late-time matching argument. Comparison with the scaled measurements of DeMauro et al. suggests that the proposed modification is an effective formalism. Next, the connection between drag-based models and streamwise pressure-difference-based expressions is explored by formulating simple analytical models that verify an empirical upstream-downstream expression (Daniel and Wagner [2]). Though simple, these models provide a physics-based approach to describing shock-particle curtain interaction behavior.
IEEE Open Access Journal of Power and Energy
Geomagnetic disturbances (GMDs) give rise to geomagnetically induced currents (GICs) on the earth's surface, which find their way into power systems via grounded transformer neutrals. The quasi-dc nature of the GICs results in half-cycle saturation of power grid transformers, which in turn results in transformer failure, life reduction, and other adverse effects. Therefore, transformers need to be more resilient to dc excitation. This paper sets forth dc immunity metrics for transformers. Furthermore, it proposes a novel transformer architecture and a design methodology that employs the dc immunity metrics to make the transformer more resilient to dc excitation. This is demonstrated using a time-stepping 2D finite element analysis (FEA) simulation. It was found that a relatively small change in the core geometry significantly increases transformer resiliency with respect to dc excitation.
Nuclear Technology
Spent nuclear fuel repository simulations are currently not able to incorporate detailed fuel matrix degradation (FMD) process models due to their computational cost, especially when large numbers of waste packages breach. The current paper uses machine learning to develop artificial neural network and k-nearest neighbor regression surrogate models that approximate the detailed FMD process model while being computationally much faster to evaluate. Using fuel cask temperature, dose rate, and the environmental concentrations of CO₃²⁻, O₂, Fe²⁺, and H₂ as inputs, these surrogates show good agreement with the FMD process model predictions of the UO₂ degradation rate for conditions within the range of the training data. A demonstration in a full-scale shale repository reference case simulation shows that the incorporation of the surrogate models captures local and temporal environmental effects on fuel degradation rates while retaining good computational efficiency.
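A minimal sketch of the kind of surrogate described above, here a k-nearest-neighbor regressor mapping the listed inputs to a UO₂ degradation rate; the training file, column names, and hyperparameters are placeholders (the surrogates above were trained on detailed FMD process-model output).

import pandas as pd
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical training table generated by the detailed FMD process model.
data = pd.read_csv("fmd_training_runs.csv")
X = data[["temperature", "dose_rate", "carbonate", "oxygen", "ferrous_iron", "hydrogen"]].values
y = data["uo2_degradation_rate"].values

surrogate = make_pipeline(StandardScaler(),
                          KNeighborsRegressor(n_neighbors=5, weights="distance"))
print("cross-validated R^2:", cross_val_score(surrogate, X, y, cv=5).mean())

# Fit on all runs, then query inside the repository simulation loop.
surrogate.fit(X, y)
rate = surrogate.predict(X[:1])    # illustrative query at one input point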
AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2023
The design of thermal protection systems (TPS), including heat shields for reentry vehicles, relies increasingly on computational simulation tools for design optimization and uncertainty quantification. Since high-fidelity simulations are computationally expensive for full vehicle geometries, analysts primarily use reduced-physics models instead. Recent work has shown that projection-based reduced-order models (ROMs) can provide accurate approximations of high-fidelity models at a lower computational cost. ROMs are preferable to alternative approximation approaches for high-consequence applications due to the presence of rigorous error bounds. The following paper extends our previous work on projection-based ROMs for ablative TPS by considering hyperreduction methods, which yield further reductions in computational cost, and by demonstrating the approach for simulations of a three-dimensional flight vehicle. We compare the accuracy and potential performance of several different hyperreduction methods and mesh sampling strategies. This paper shows that, with the correct implementation, hyperreduction can make ROMs one to three orders of magnitude faster than the full-order model by evaluating the residual at only a small fraction of the mesh nodes.