This work summarizes the findings of a reduced order model (ROM) study performed using Sierra ROM module Pressio_Aria on Sandia National Laboratories' (SNL) Crash-Burn L2 milestone thermal model with pristine geometry. Comparisons are made to full order model (FOM) results for this same Crash-Burn model using Sierra multiphysics module Aria.
Marine energy generation technologies such as wave and tidal power have great potential in meeting the need for renewable energy in the years ahead. Yet, many challenges remain associated with marine-based systems because of the corrosive environment. Conventional materials like metals are subject to rapid corrosive breakdown, severely limiting the lifespan of structures in such environments. Fiber-reinforced polymer composites offer an appealing alternative in their strength and corrosion resistance, but can experience degradation of mechanical properties as a result of moisture absorption. An investigation is conducted to test whether multicontinuum theory, a micromechanical analysis technique for composites demonstrated in past works, can predict the effects of prolonged moisture absorption on the performance of fiber-reinforced composites. Experimental tensile tests are performed on composite coupons with and without prolonged exposure to a salt water solution to obtain stiffness and strength properties. Multicontinuum theory is applied in conjunction with micromechanical modeling to deduce the effects of moisture absorption on the behavior of constituent materials within the composites. The results are consistent with experimental observations when guided by known mechanisms and trends from previous studies, indicating that multicontinuum theory is a potentially effective tool for predicting the long-term performance of composites in marine environments.
Laser propagation experiments using four beams of the National Ignition Facility to deliver up to 35 kJ of laser energy at 351 nm laser wavelength to heat magnetized liner inertial fusion-scale (1 cm-long), hydrocarbon-filled gas pipe targets to ∼keV electron temperatures have demonstrated energy coupling >20 kJ with essentially no backscatter in 15% critical electron density gas fills with 0-19 T applied axial magnetic fields. The energy coupling is also investigated for an electron density of 11.5% critical and for applied field strengths up to 24 T at both densities. This spans a range of Hall parameters 0 < ω_ce τ_ei ≲ 2, where a Hall parameter of 0.5 is expected to reduce electron thermal conduction across the field lines by a factor of 4-5 for the conditions of these experiments. At sufficiently high applied field strength (and therefore Hall parameter), the measured laser propagation speed through the targets increases, consistent with reduced perpendicular electron thermal transport; this reduces the coupled energy to the target once the laser burns through the gas pipe. The results compare well with a 1D analytic propagation model for inverse Bremsstrahlung absorption.
The previous separation distances in the National Fire Protection Association (NFPA) Hydrogen Technologies Code (NFPA 2, 2020 Edition) for bulk liquid hydrogen systems lack a well-documented basis and can be onerous. This report describes the technical justifications for revisions of the bulk liquid hydrogen storage setback distances in NFPA 2, 2023 Edition. Distances are calculated based on a leak area that is 5% of the nominal pipe flow area. Models from the open-source HyRAM+ toolkit are used to justify the leak size as well as calculate consequence-based separation distances from that leak size. Validation and verification of the numerical models are provided, as well as justification for the harm criteria used for the determination of the setback distances for each exposure type. This report also reviews mitigations that could result in setback distance reduction. The resulting updates to the liquid hydrogen separation distances are well-documented, retrievable, repeatable, revisable, and independently verified, with experimental results used to validate the models.
Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. Gaussian process (GP) regression is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three different material datasets: one experimental and two computational. The monotonic GP is compared against the regular GP, where a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime, the monotonic effect starts fading away as one goes beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost, compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy, and monotonicity is supported by strong physical evidence.
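As a sketch of the baseline method this abstract compares against, the following minimal Gaussian process regression (the plain, unconstrained variant; the monotonic version additionally imposes derivative constraints, which are not shown here) computes the posterior mean and variance under an RBF kernel. All function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4, length_scale=1.0):
    """Posterior mean and pointwise variance of a zero-mean GP regression."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test, length_scale)
    Kss = rbf_kernel(x_test, x_test, length_scale)
    alpha = np.linalg.solve(K, y_train)       # K^-1 y
    mean = Ks.T @ alpha                       # posterior mean at x_test
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks) # posterior covariance
    return mean, np.diag(cov)
```

The posterior variance collapses near the training points and reverts to the prior far from them, which is the extrapolation regime where the abstract reports the monotonic effect fading away.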
Numerical integration is a basic step in the implementation of more complex numerical algorithms suitable, for example, to solve ordinary and partial differential equations. The straightforward extension of a one-dimensional integration rule to a multidimensional grid by the tensor product of the spatial directions is deemed to be practically infeasible beyond a relatively small number of dimensions, e.g., three or four. In fact, the computational burden in terms of storage and floating point operations scales exponentially with the number of dimensions. This phenomenon is known as the curse of dimensionality and motivated the development of alternative methods such as the Monte Carlo method. The tensor product approach can be very effective for high-dimensional numerical integration if we can resort to an accurate low-rank tensor-train representation of the integrand function. In this work, we discuss this approach and present numerical evidence showing that it is very competitive with the Monte Carlo method in terms of accuracy and computational costs up to several hundred dimensions if the integrand function is regular enough and a sufficiently accurate low-rank approximation is available.
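To make the low-rank argument concrete, the sketch below (illustrative only, not the paper's tensor-train code) integrates a rank-1, i.e., exactly separable, function over [0,1]^d. Because the integrand factorizes, the d-dimensional tensor-product quadrature collapses to a product of d identical 1-D Gauss-Legendre rules, costing O(d·n) evaluations instead of n^d; a general tensor-train representation extends this idea to integrands that are only approximately low-rank. A plain Monte Carlo estimate is included for comparison.

```python
import numpy as np

def integrate_separable(f1d, dim, n_nodes=10):
    """Integrate prod_i f1d(x_i) over [0,1]^dim as a product of 1-D
    Gauss-Legendre quadratures: O(dim * n_nodes) work, not n_nodes**dim."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    x = 0.5 * (nodes + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * weights
    one_dim = np.sum(w * f1d(x))
    return one_dim ** dim

def integrate_mc(f1d, dim, n_samples=100_000, seed=0):
    """Plain Monte Carlo estimate of the same integral for comparison."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, dim))
    return np.mean(np.prod(f1d(x), axis=1))
```

For a smooth integrand such as exp(-x²), the 10-node quadrature is accurate to near machine precision per dimension, while the Monte Carlo error decays only as O(n_samples^-1/2) regardless of smoothness.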
This study investigates the effects of magnetic constraints on a piezoelectric energy harvesting absorber while simultaneously controlling a primary structure and harnessing energy. An accurate forcing representation of the magnetic force is investigated and developed. A reduced-order model is derived using the Euler–Lagrange principle, and the impact of the magnetic force on the absorber’s static position and on the coupled natural frequencies of the harvester and primary-structure system is evaluated. The results show that attractive magnet configurations cannot improve the system substantially before pull-in occurs. A rigorous eigenvalue problem analysis is performed on the absorber’s substrate thickness and tip mass to effectively design an energy harvesting absorber for multiple initial gap sizes for the repulsive configurations. Then, the effects of the forcing amplitude on the primary structure absorber are studied and characterized by determining an effective design of the system for a simultaneous reduction in the primary structure’s motion and improvement in the harvester’s efficiency.
This document defines a proposed specification for representing gamma radiation spectra, as commonly produced by handheld Radioisotope Identifiers, as a QR code, or as a Uniform Resource Identifier (URI).
The cost of photovoltaic (PV) modules has declined by 85% since 2010. To achieve this reduction, manufacturers altered module designs and bills of materials, changes that could affect module durability and reliability. To determine if these changes have affected module durability, we measured the performance degradation of 834 fielded PV modules representing 13 module types from 7 manufacturers in 3 climates over 5 years. Degradation rates (Rd) are highly nonlinear over time, and seasonal variations are present in some module types. Mean and median degradation rate values of −0.62%/year and −0.58%/year, respectively, are consistent with rates measured for older modules. Of the 23 systems studied, 6 have degradation rates that will exceed the warranty limits in the future, whereas 13 systems demonstrate the potential of achieving lifetimes beyond 30 years, assuming Rd trends have stabilized.
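As a back-of-envelope illustration of the lifetime comparison (assuming, hypothetically, a constant linear degradation rate and a typical warranty floor of 80% of nameplate power; neither assumption is stated in the abstract), the median rate of −0.58%/year reaches that floor only after roughly 34 years:

```python
def years_to_floor(rd_pct_per_year, floor_pct=80.0):
    """Years until output falls to floor_pct of nameplate power,
    assuming a constant (linear) degradation rate in %/year.
    The 80% floor is a typical warranty figure, not from the study."""
    return (100.0 - floor_pct) / abs(rd_pct_per_year)
```

For example, years_to_floor(-0.58) ≈ 34.5 years, consistent with the finding that many systems may exceed 30-year lifetimes, whereas a system degrading at −1.0%/year would hit the same floor at 20 years.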
The production of biochar from biomass and industrial wastes provides both environmental and economic sustainability. An effective way to ensure the sustainability of biochar is to produce high value-added activated carbon. The desirable characteristic of activated carbon is its high surface area for efficient adsorption of contaminants. Feedstocks can include a number of locally available materials with little or negative value, such as orchard slash and crop residue. In this context, it is necessary to determine how candidate feedstocks behave during conversion to activated carbon. In the study conducted for this purpose, several samples (piñon wood, pecan wood, hardwood, dried grass, Wyoming coal dust, Illinois coal dust, Missouri coal dust, and tire residue) of biomass and industrial waste products were investigated for their conversion into activated carbon. Small samples (approximately 0.02 g) of the feedstocks were pyrolyzed under inert or mildly oxidizing conditions in a thermal analyzer to determine their mass loss as a function of temperature and atmosphere. Once suitable conditions were established, larger quantities (up to 0.6 g) were pyrolyzed in a tube furnace and harvested for characterization of their surface area and porosity via gas sorption analysis. Among the samples used, piñon wood gave the best results, and pyrolysis temperatures between 600 and 650 °C gave the highest yield. Slow pyrolysis and hydrothermal carbonization have emerged as the recommended methods for converting biochar, which can be produced from biomass and industrial wastes, into activated carbon.
Nakagawa, Seiji; Kibikas, William M.; Chang, Chun; Kneafsey, Timothy; Dobson, Patrick; Samuel, Abraham; Bruce, Stephen; Kaargeson-Loe, Nils; Bauer, Stephen J.
Strong gas-mineral interactions or slow adsorption kinetics require a molecular-level understanding of both adsorption and diffusion for these interactions to be properly described in transport models. In this combined molecular simulation and experimental study, noble gas adsorption and mobility are investigated in two naturally abundant zeolites whose pores are similar in size (clinoptilolite) and greater than (mordenite) the gas diameters. Simulated adsorption isotherms obtained from grand canonical Monte Carlo simulations indicate that both zeolites can accommodate even the largest gas (Rn). However, gas mobility in clinoptilolite is significantly hindered at pore-limiting window sites, as seen from molecular dynamics simulations in both bulk and slab zeolite models. Experimental gas adsorption isotherms for clinoptilolite confirm the presence of a kinetic barrier to Xe uptake, resulting in the unusual property of reverse Kr/Xe selectivity. Finally, a kinetic model is used to fit the simulated gas loading profiles, allowing a comparison of trends in gas diffusivity in the zeolite pores.
IEEE Transactions on Components, Packaging and Manufacturing Technology
Jia, Xiaofan; Li, Xingchen; Erdogan, Serhat; Moon, Kyoung S.; Kim, Joon W.; Jordan, Matthew J.; Swaminathan, Madhavan
This article presents the antenna-integrated glass interposer for D-band 6G wireless applications using die-embedding technology. We implement the die-embedded package on glass substrates and characterize the electrical performance in the D-band. The electrical characterization employs embedded test dies with the 50-Ω ground-signal-ground (GSG) ports and coplanar waveguides. We achieve low-loss die-to-package transitions by using staggered dielectric vias, which are compared with the transitions of wire-bonding and flip-chip assembly. This article provides detailed information on the design, modeling, fabrication, and characterization of the die-to-package interconnects. This article also demonstrates the integration of microstrip patch antenna array and embedded dies in the D-band. The results show superior electrical performance provided by the die-embedded glass interposer. The die-to-package interconnect exhibits good matching (S11 below −10 dB) and low loss (0.2 dB) in the D-band. The integrated 1×8 patch antenna array shows 11.6-dB broadside gain and good matching with the embedded die. In addition, by using a temporary carrier, the antenna-integrated glass interposer also has great potential for further heterogeneous integration and thermal management.
A fundamental task of radar, beyond merely detecting a target, is to estimate some parameters associated with it. For example, this might include range, direction, velocity, etc. In any case, multiple measurements, often noisy, need to be processed to yield a ‘best estimate’ of the parameter. A common mathematical method for doing so is called “Regression” analysis. The goal is to minimize the expected squared error in the estimate. Even when alternate algorithms are considered, the least squares estimate typically serves as the baseline for comparison.
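The least squares estimate described above has a simple closed form via the normal equations. As a hedged sketch (the quantities and names are illustrative, not from this report), a line fit to noisy range-versus-time measurements yields a best estimate of initial range and radial velocity:

```python
import numpy as np

def least_squares_line(t, r):
    """Fit r ≈ r0 + v*t by minimizing the sum of squared errors.
    Returns (r0, v), the normal-equation (least squares) solution."""
    X = np.column_stack([np.ones_like(t), t])  # design matrix [1, t]
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    return coef[0], coef[1]
```

With noiseless data the fit recovers the exact parameters; with noisy measurements the estimate converges toward the true values as more measurements are processed, which is the point of combining many noisy observations into one best estimate.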
Here we report on AlGaN high electron mobility transistor (HEMT)-based logic development, using combined enhancement- and depletion-mode transistors to fabricate inverters with operation from room temperature up to 500°C. Our development approach included: (a) characterizing temperature-dependent carrier transport for different AlGaN HEMT heterostructures, (b) developing a suitable gate metal scheme for use in high temperatures, and (c) over-temperature testing of discrete devices and inverters. Hall mobility data (from 30°C to 500°C) revealed the reference GaN-channel HEMT experienced a 6.9x reduction in mobility, whereas the AlGaN channel HEMTs experienced about a 3.1x reduction. Furthermore, a greater aluminum contrast between the barrier and channel enabled higher carrier densities in the two-dimensional electron gas for all temperatures. The combination of reduced variation in mobility with temperature and high sheet carrier concentration showed that an Al-rich AlGaN-channel HEMT with a high barrier-to-channel aluminum contrast is the best option for an extreme temperature HEMT design. Three gate metal stacks were selected for low resistivity, high melting point, low thermal expansion coefficient, and high expected barrier height. The impact of thermal cycling was examined through electrical characterization of samples measured before and after rapid thermal anneal. The 200-nm tungsten gate metallization was the top performer with minimal reduction in drain current, a slightly positive threshold voltage shift, and about an order of magnitude advantage over the other gates in on-to-off current ratio. After incorporating the tungsten gate metal stack in device fabrication, characterization of transistors and inverters from room temperature up to 500°C was performed. The enhancement-mode (e-mode) devices’ resistance started increasing at about 200°C, resulting in drain current degradation. 
This phenomenon was not observed in depletion-mode (d-mode) devices but highlights a challenge for inverters in an e-mode driver and d-mode load configuration.
Mcglone, Joe F.; Ghadi, Hemant; Cornuelle, Evan; Armstrong, Andrew A.; Burns, George B.; Feng, Zixuan; Uddin Bhuiyan, A.F.M.A.; Zhao, Hongping; Arehart, Aaron R.; Ringel, Steven A.
The impact of 1.8 MeV proton irradiation on metalorganic chemical vapor deposition grown (010) β-Ga2O3 Schottky diodes is presented. It is found that after a 10.8 × 10¹³ cm⁻² proton fluence the Schottky barrier height (1.40 ± 0.05 eV) and the ideality factor (1.05 ± 0.05) are unaffected. Capacitance-voltage extracted net ionized doping curves indicate a carrier removal rate of 268 ± 10 cm⁻¹. The defect states responsible for the observed carrier removal are studied through a combination of deep level transient and optical spectroscopies (DLTS/DLOS) as well as lighted capacitance-voltage (LCV) measurements. The dominating effect on the defect spectrum is due to the E_C − 2.0 eV defect state observed in DLOS and LCV. This state accounts for ∼75% of the total trap introduction rate and is the primary source of carrier removal from proton irradiation. Of the DLTS detected states, the E_C − 0.72 eV state dominated but had a comparably smaller contribution to the trap introduction. These two traps have previously been correlated with acceptor-like gallium vacancy-related defects. Several other trap states at E_C − 0.36, E_C − 0.63, and E_C − 1.09 eV were newly detected after proton irradiation, and two pre-existing states at E_C − 1.2 and E_C − 4.4 eV showed a slight increase in concentration after irradiation, together accounting for the remainder of trap introduction. However, a pre-existing trap at E_C − 0.40 eV was found to be insensitive to proton irradiation and, therefore, is likely of extrinsic origin. The comprehensive defect characterization of 1.8 MeV proton irradiation damage can aid the modeling and design for a range of radiation tolerant devices.
Mohottalalage, Supun S.; Kosgallana, Chathurika; Meedin, Shalika; Connor, Gary S.; Grest, Gary S.; Perahia, Dvora
Ionizable polymers form dynamic networks with domains controlled by two distinct energy scales, ionic interactions and van der Waals forces; both evolve under elongational flows during their processing into viable materials. A molecular level insight of their nonlinear response, paramount to controlling their structure, is attained by fully atomistic molecular dynamics simulations of a model ionizable polymer, polystyrene sulfonate. As a function of increasing elongational flow rate, the systems display an initial elastic response, followed by an ionic fraction-dependent strain hardening, stress overshoot, and eventually strain-thinning. As the sulfonation fraction increases, the chain elongation becomes more heterogeneous. Finally, the flow-driven dynamics of ionic assemblies that continuously break and re-form control the response of the system.
Automation of rate-coefficient calculations for gas-phase organic species became possible in recent years and has transformed how we explore these complicated systems computationally. Kinetics workflow tools bring rigor and speed and eliminate a large fraction of manual labor and related error sources. In this paper we give an overview of this quickly evolving field and illustrate, through five detailed examples, the capabilities of our own automated tool, KinBot. We bring examples from combustion and atmospheric chemistry of C-, H-, O-, and N-atom-containing species that are relevant to molecular weight growth and autoxidation processes. The examples shed light on the capabilities of automation and also highlight particular challenges associated with the various chemical systems that need to be addressed in future work.
Bifurcations are commonly encountered during force controlled swept and stepped sine testing of nonlinear structures, which generally leads to the so-called jump-down or jump-up phenomena between stable solutions. There are various experimental closed-loop control algorithms, such as control-based continuation and phase-locked loop, to stabilize dynamical systems through these bifurcations, but they generally rely on specialized control algorithms that are not readily available with many commercial data acquisition software packages. A recent method was developed to experimentally apply sequential continuation using the shaker voltage that can be readily deployed using commercially available software. By utilizing the stabilizing effects of electrodynamic shakers and the force dropout phenomena in fixed frequency voltage control sine tests, this approach has been demonstrated to stabilize the unstable branch of a nonlinear system with three branches, allowing for three multivalued solutions to be identified within a specific frequency bandwidth near resonance. Recent testing on a strongly nonlinear system with vibro-impact nonlinearity has revealed jumping behavior when performing sequential continuation along the voltage parameter, like the jump phenomena seen during more traditional force controlled swept and stepped sine testing. Here, this paper investigates the stabilizing effects of an electrodynamic shaker on strongly nonlinear structures in fixed frequency voltage control tests using both numerical and experimental methods. The harmonic balance method is applied to the coupled shaker-structure system with an electromechanical model to simulate the fixed voltage control tests and predict the stabilization for different parameters of the model. The simulated results are leveraged to inform the design of a set of experiments to demonstrate the stabilization characteristics on a fixture-pylon assembly with a vibro-impact nonlinearity. 
Through numerical simulation and experimental testing on two different strongly nonlinear systems, the various parameters that influence the stability of the coupled shaker-structure are revealed to better understand the performance of fixed frequency voltage control tests.
Vibrational spectroscopy is a nondestructive technique commonly used in chemical and physical analyses to determine atomic structures and associated properties. However, the evaluation and interpretation of spectroscopic profiles based on human-identifiable peaks can be difficult and convoluted. To address this challenge, we present a reliable protocol based on supervised manifold learning techniques meant to connect vibrational spectra to a variety of complex and diverse atomic structure configurations. As an illustration, we examined a large database of virtual vibrational spectroscopy profiles generated from atomistic simulations for silicon structures subjected to different stress, amorphization, and disordering states. We evaluated representative features in those spectra via various linear and nonlinear dimensionality reduction techniques and used the reduced representation of those features with decision trees to correlate them with structural information unavailable through classical human-identifiable peak analysis. We show that our trained model accurately (over 97% accuracy) and robustly (insensitive to noise) disentangles the contribution from the different material states, hence demonstrating a comprehensive decoding of spectroscopic profiles beyond classical (human-identifiable) peak analysis.
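The pipeline described above (dimensionality reduction of spectra, then a supervised classifier on the reduced features) can be sketched in miniature. The code below is an illustrative stand-in, not the paper's implementation: it uses plain PCA via SVD and a nearest-centroid classifier in place of the paper's manifold-learning and decision-tree models, and all names are hypothetical.

```python
import numpy as np

def pca_reduce(spectra, n_components=2):
    """Project spectra (rows = samples) onto their leading principal
    components; returns reduced features, the mean, and the components."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return centered @ components.T, mean, components

def nearest_centroid_predict(z_train, labels, z_query):
    """Classify reduced spectra by distance to per-class centroids."""
    classes = np.unique(labels)
    centroids = np.array([z_train[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(z_query[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

On synthetic two-class "spectra" whose peaks sit at different positions, the two leading components separate the classes cleanly, mimicking how a reduced representation can encode structural states that are hard to read off raw peak positions.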
Hydrocarbon polymers are used in a wide variety of practical applications. In the field of dynamic compression at extreme pressures, these polymers are used at several high energy density (HED) experimental facilities. One of the most common polymers is poly(methyl methacrylate) or PMMA, also called Plexiglass® or Lucite®. Here, we present high-fidelity, hundreds of GPa range experimental shock compression data measured on Sandia's Z machine. We extend the principal shock Hugoniot for PMMA to more than threefold compression up to 650 GPa and re-shock Hugoniot states up to 1020 GPa in an off-Hugoniot regime, where experimental data are even sparser. These data can be used to put additional constraints on tabular equation of state (EOS) models. The present results provide clear evidence for the need to re-examine the existing tabular EOS models for PMMA above ∼120 GPa as well as perhaps revisit EOSs of similar hydrocarbon polymers commonly used in HED experiments investigating dynamic compression, hydrodynamics, or inertial confinement fusion.
Accurately modeling the impact force used in the analysis of loosely constrained cantilevered pipes conveying fluid is imperative. If little information is known of the motion-limiting constraints used in experiments, the analysis of the system may yield inaccurate predictions. In this work, multiple forcing representations of the impact force are defined and analyzed for a cantilevered pipe that conveys fluid. Depending on the representation of the impact force, the dynamics of the pipe can vary greatly when only the stiffness of the constraints is known from experiments. Three gap sizes of the constraints are analyzed, and the representation of the impact force used to analyze the system is found to significantly affect the response of the pipe at each gap size. An investigation on the effects of the vibro-impact force representation is performed using basin-of-attraction analysis and nonlinear characterization of the system’s response.
Torrence, Christa E.; Libby, Cara S.; Nie, Wanyi; Stein, Joshua S.
Perovskite solar cells (PSCs) promise high efficiencies and low manufacturing costs. Most formulations, however, contain lead, which raises health and environmental concerns. In this review, we use a risk assessment approach to identify and evaluate the technology risks to the environment and human health. We analyze the risks by following the technology from production to transportation to installation to disposal and examine existing environmental and safety regulations in each context. We review published data from leaching and air emissions testing and highlight gaps in current knowledge and a need for more standardization. Methods to avoid lead release through introduction of absorbing materials or use of alternative PSC formulations are reviewed. We conclude with the recommendation to develop recycling programs for PSCs and further standardized testing to understand risks related to leaching and fires.
Lees, Arnee; Betti, Riccardo; Knauer, James P.; Gopalaswamy, Varchas; Patel, Dhrumir; Woo, Ka M.; Anderson, Ken S.; Campbell, E.M.; Cao, Duc; Carroll-Nellenback, Jonathan; Epstein, Reuben; Forrest, Chad J.; Goncharov, Valeri N.; Harding, David R.; Hu, Suxing; Igumenshchev, Igor V.; Janezic, Roger T.; Mannion, Owen M.; Bahukutumbi, Radha; Regan, Sean P.; Shvydky, Alex; Shah, Rahul C.; Shmayda, Walter T.; Stoeckl, Christian; Theobald, Wolfgang; Thomas, Cliff A.
Improving the performance of inertial confinement fusion implosions requires physics models that can accurately predict the response to changes in the experimental inputs. Good predictive capability has been demonstrated for the fusion yield using a statistical mapping of simulated outcomes to experimental data [Gopalaswamy et al., Nature 565, 581–586 (2019)]. In this paper, a physics-based statistical mapping approach is used to extract and quantify all the major sources of degradation of fusion yield for direct-drive implosions on the OMEGA laser. Here, the yield is found to be dependent on the age of the deuterium tritium fill, the ℓ = 1 asymmetry in the implosion core, the laser beam-to-target size ratio, and parameters related to the hydrodynamic stability. A controlled set of experiments was carried out where only the target fill age was varied while keeping all other parameters constant. The measurements were found to be in excellent agreement with the fill age dependency inferred using the mapping model. In addition, a new implosion design was created, guided by the statistical mapping model, optimizing the trade-offs between increased laser energy coupling at larger target size and the degradations caused by the laser beam-to-target size ratio and hydrodynamic instabilities. When experimentally performed, an increased fusion yield was demonstrated in targets with larger diameters.
Cesium vapor thermionic converters are an attractive method of converting high-temperature heat directly to electricity, but theoretical descriptions of the systems have been difficult due to the multi-step ionization of Cs through inelastic electron-neutral collisions. This work presents particle-in-cell simulations of these converters, using a direct simulation Monte Carlo collision model to track 52 excited states of Cs. These simulations show the dominant role of multi-step ionization, which also varies significantly based on both the applied voltage bias and pressure. The electron energy distribution functions are shown to be highly non-Maxwellian in the cases analyzed here. A comparison with previous approaches is presented, and large differences are found in ionization rates due especially to the fact that previous approaches have assumed Maxwellian electron distributions. Finally, an open question regarding the nature of the plasma sheaths in the obstructed regime is discussed. The one-dimensional simulations did not produce stable obstructed regime operation and thereby do not support the double-sheath hypothesis.
Ammonia (NH3) is an energy-dense chemical and a vital component of fertilizer. In addition, it is a carbon-neutral liquid fuel and a potential candidate for thermochemical energy storage for high-temperature concentrating solar power (CSP). Currently, NH3 synthesis occurs via the Haber-Bosch process, which requires high pressures (15-25 MPa) and medium to high temperatures (400-500 °C). N2 and H2 are essential feedstocks for this NH3 production process. H2 is generally derived from methane via steam reforming; N2 is sourced from air, after oxygen removal via combustion of hydrocarbons. Both processes consume hydrocarbons, resulting in the release of CO2. In addition, hydrocarbon fuels are burned to produce the heat and mechanical energy required to perform the NH3 reaction, further increasing CO2 emissions. Overall, the production of ammonia via the Haber-Bosch (H-B) process is responsible for up to 1.4% of the world’s carbon emissions. The development of a renewable pathway to NH3 synthesis, which utilizes concentrated solar irradiation as process heat instead of fossil fuels and operates under low or ambient pressure, will result in a decrease (or elimination) of greenhouse gas emissions as well as avoid the cost, complexity, and safety issues inherent in high-pressure processes. Most current efforts to “green” ammonia production involve either electrolysis or simply replacing the energy source for H-B with renewable electricity, but otherwise leaving the process intact. The effort proposed here would create a new paradigm for the synthesis of NH3 utilizing solar-thermal heat, water, and air as feedstocks, providing a truly green method of production. The overall objective of the STAP (Solar Thermal Ammonia Production) project was to develop a solar thermochemical looping technology to produce and store nitrogen (N2) from air for the subsequent production of ammonia (NH3) via an advanced two-stage process.
The goal is a cost-effective and energy-efficient technology for renewable N2 production and synthesis of NH3 from H2 (produced from H2O) and air using solar-thermal energy from concentrating sunlight, under pressures an order of magnitude lower than H-B NH3 production. Our process involves two looping cycles, which do not require catalysts and can be recycled. Over the course of the STAP project, we (1) developed and deeply characterized oxide materials for N2 separation; (2) developed a method for the synthesis of metal nitrides, producing a series of quaternary compounds that have been heretofore unreported; (3) modeled, designed, and fabricated bench-scale tube and on-sun reactors for the N2 production step and demonstrated the ability to separate N2 over multiple cycles in the tube reactor; (4) designed and fabricated a bench-scale Ammonia Synthesis Reactor (ASR) and demonstrated the proof of concept of NH3 synthesis via a novel looping process using metal nitrides over multiple cycles; and (5) completed a systems- and technoeconomic analysis showing the feasibility of ammonia production on a larger scale via the STAP process. The development of renewable, low-cost NH3 will be of great interest to the chemicals industry, particularly agricultural sectors. The CSP industry should be both an important customer and potential end-user of this technology, as it affords the capability of synthesizing a promising thermochemical storage material on-site. Since the NH3 synthesis step also requires H2, there will exist a symbiotic relationship between this technology and solar-thermochemical water-splitting applications. Green ammonia synthesis will result in the decarbonization of a hydrocarbon-intensive industry, helping to meet the Administration’s goal of industrial decarbonization by 2050.
The resulting decrease in CO2 and related pollutants will improve the health and well-being of society, particularly for those living in the vicinity of commercial production plants.
Magnetized turbulence is ubiquitous in many astrophysical and terrestrial plasmas but no universal theory exists. Even the detailed energy dynamics in magnetohydrodynamic (MHD) turbulence are still not well understood. We present a suite of subsonic, super-Alfvénic, high plasma beta MHD turbulence simulations that only vary in their dynamical range, i.e., in their separation between the large-scale forcing and dissipation scales, and their dissipation mechanism (implicit large eddy simulation, ILES, and direct numerical simulation, DNS). Using an energy transfer analysis framework we calculate the effective numerical viscosities and resistivities, and demonstrate that all ILES calculations of MHD turbulence are resolved and correspond to an equivalent visco-resistive MHD turbulence calculation. Increasing the number of grid points used in an ILES corresponds to lowering the dissipation coefficients, i.e., larger (kinetic and magnetic) Reynolds numbers for a constant forcing scale. Independently, we use this same framework to demonstrate that, contrary to hydrodynamic turbulence, the cross-scale energy fluxes are not constant in MHD turbulence. This applies both to different mediators (such as cascade processes or magnetic tension) for a given dynamical range as well as to a dependence on the dynamical range itself, which determines the physical properties of the flow. We do not observe any indication of convergence even at the highest resolution (largest Reynolds numbers) simulation at 2048³ cells, calling into question whether an asymptotic regime in MHD turbulence exists, and, if so, what it looks like.
Fracture and short circuit in the Li7La3Zr2O12 (LLZO) solid electrolyte are two key issues that prevent its adoption in battery cells. In this paper, we utilize phase-field simulations that couple electrochemistry and fracture to evaluate the maximum electric potential that LLZO electrolytes can support as a function of crack density. In the case of a single crack, we find that the applied potential at the onset of crack propagation exhibits inverse square root scaling with respect to crack length, analogous to classical fracture mechanics. Here, we further find that the short-circuit potential scales linearly with crack length. In the realistic case where the solid electrolyte contains multiple cracks, we reveal that the failure statistics fit a Weibull model. The failure distributions shift to favor failure at lower overpotentials as areal crack density increases. Furthermore, when flawless interfacial buffers are placed between the applied potential and the bulk of the electrolyte, failure is mitigated. When constant currents are applied, current concentrates in near-surface flaws, leading to crack propagation and short circuit. We find that buffered samples sustain larger currents without reaching unstable overpotentials and without failing. Our findings suggest several mitigation strategies for improving the ability of LLZO to support larger currents and improve operability.
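As an illustration of the Weibull failure statistics described above, the sketch below fits a two-parameter Weibull model to a sample of simulated failure potentials using median-rank regression. The helper name, ranking formula, and data are ours, not the paper's phase-field workflow.

```python
import numpy as np

def weibull_fit(failure_potentials):
    # Median-rank regression fit of a two-parameter Weibull model,
    # F(V) = 1 - exp(-(V/eta)^m), to a sample of failure potentials.
    # Linearization: ln(-ln(1 - F)) = m ln V - m ln eta.
    v = np.sort(np.asarray(failure_potentials, dtype=float))
    n = v.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(np.log(v), y, 1)            # slope m, intercept c
    return m, np.exp(-c / m)                      # shape m, scale eta
```

In this parameterization, a shift of the fitted scale eta toward lower values would correspond to the abstract's observation that failure favors lower overpotentials as crack density increases.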
The interplay between hydrogen and dislocations (e.g., core and elastic energies, and dislocation-dislocation interactions) has implications on hydrogen embrittlement but is poorly understood. Continuum models of hydrogen enhanced local plasticity have not considered the effect of hydrogen on dislocation core energies. Energy minimization atomistic simulations can only resolve dislocation core energies in hydrogen-free systems because hydrogen motion is omitted, so hydrogen atmosphere formation cannot occur. Additionally, previous studies focused more on face-centered-cubic than body-centered-cubic metals. Discrete dislocation dynamics studies of hydrogen-dislocation interactions assume isotropic elasticity, but the validity of this assumption is not understood. We perform time-averaged molecular dynamics simulations to study the effect of hydrogen on dislocation energies in body-centered-cubic iron for several dislocation character angles. We observe atmosphere formation and obtain highly converged dislocation energies. We find that hydrogen reduces dislocation core energies but can increase or decrease elastic energies of isolated dislocations and dislocation-dislocation interaction energies depending on character angle. We also find that isotropic elasticity can be well fitted to dislocation energies obtained from simulations if the isotropic elastic constants are not constrained to their anisotropic counterparts. These results are relevant to ongoing efforts in understanding hydrogen embrittlement and provide a foundation for future work in this field.
We demonstrate a monolithic all-epitaxial resonant-cavity architecture for long-wave infrared photodetectors with substrate-side illumination. An nBn detector with an ultra-thin (t ≈ 350 nm) absorber layer is integrated into a leaky resonant cavity, formed using semi-transparent highly doped (n + +) epitaxial layers, and aligned to the anti-node of the cavity's standing wave. The devices are characterized electrically and optically and demonstrate an external quantum efficiency of ∼25% at T = 180 K in an architecture compatible with focal plane array configurations.
While research in multiple-input/multiple-output (MIMO) random vibration testing techniques, control methods, and test design has been increasing in recent years, research into specifications for these types of tests has not kept pace. This is perhaps due to the very particular requirement for most MIMO random vibration control specifications – they must be narrowband, fully populated cross-power spectral density matrices. This requirement puts constraints on the specification derivation process and restricts the application of many of the traditional techniques used to define single-axis random vibration specifications, such as averaging or straight-lining. This requirement also restricts the applicability of MIMO testing by requiring a very specific and rich field test data set to serve as the basis for the MIMO test specification. Here, frequency-warping and channel averaging techniques are proposed to soften the requirements for MIMO specifications with the goal of expanding the applicability of MIMO random vibration testing and enabling tests to be run in the absence of the necessary field test data.
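The fully populated cross-power spectral density (CPSD) matrices that MIMO specifications require can be estimated from multichannel field data with a Welch-style procedure. The sketch below is a minimal numpy-only illustration, assuming a Hann window and 50% overlap; the function name and defaults are ours, not a standard from the MIMO testing literature.

```python
import numpy as np

def cpsd_matrix(data, fs, nperseg=256):
    # Welch-style estimate of the fully populated, one-sided CPSD matrix
    # for an n-channel record (channels on rows): average the per-bin
    # outer products of windowed segment spectra across segments.
    n_ch, n_samp = data.shape
    win = np.hanning(nperseg)
    step = nperseg // 2                              # 50% overlap
    starts = range(0, n_samp - nperseg + 1, step)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    S = np.zeros((f.size, n_ch, n_ch), dtype=complex)
    for s in starts:
        X = np.fft.rfft(data[:, s:s + nperseg] * win, axis=1)
        S += np.einsum('if,jf->fij', X, X.conj())    # outer product per bin
    S /= len(starts) * fs * (win ** 2).sum()         # Welch density scaling
    S[1:-1] *= 2.0                                   # one-sided (not DC/Nyquist)
    return f, S
```

Channel-averaging or frequency-warping techniques of the kind proposed in the abstract would then operate on the resulting array S, which is Hermitian at every frequency line by construction.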
Adler, James H.; He, Yunhui; Hu, Xiaozhe; Maclachlan, Scott; Ohm, Peter B.
Advanced finite-element discretizations and preconditioners for models of poroelasticity have attracted significant attention in recent years. The equations of poroelasticity offer significant challenges in both areas, due to the potentially strong coupling between unknowns in the system, saddle-point structure, and the need to account for wide ranges of parameter values, including limiting behavior such as incompressible elasticity. This paper was motivated by an attempt to develop monolithic multigrid preconditioners for the discretization developed in [C. Rodrigo et al., Comput. Methods Appl. Mech. Engrg., 341 (2018), pp. 467-484]; we show here why this is a difficult task and, as a result, we modify the discretization in [Rodrigo et al.] through the use of a reduced-quadrature approximation, yielding a more “solver-friendly” discretization. Local Fourier analysis is used to optimize parameters in the resulting monolithic multigrid method, allowing a fair comparison between the performance and costs of methods based on Vanka and Braess-Sarazin relaxation. Numerical results are presented to validate the local Fourier analysis predictions and demonstrate the efficiency of the algorithms. Finally, a comparison to existing block-factorization preconditioners is also given.
This work investigates the low- and high-temperature ignition and combustion processes, applied to the Engine Combustion Network Spray A flame, combining advanced optical diagnostics and large-eddy simulations (LES). Simultaneous high-speed (50 kHz) formaldehyde (CH2O) planar laser-induced fluorescence (PLIF) and line-of-sight OH* chemiluminescence imaging were used to measure the low- and high-temperature flame, during ignition as well as during quasi-steady combustion. While tracking the cool flame at the laser sheet plane, the present experimental setup allows detection of distinct ignition spots and dynamic fluctuations of the lift-off length over time, which overcomes limitations for flame tracking when using schlieren imaging [Sim et al., Proc. Combust. Inst. 38 (4) (2021) 5713–5721]. After significant development to improve LES prediction of the low- and high-temperature flame position, both during the ignition processes and quasi-steady combustion, the simulations were analyzed to gain understanding of the mixture variance and how this variance affects formation/consumption of CH2O. Analysis of the high-temperature ignition period shows that a key improvement in the LES is the ability to predict heterogeneous ignition sites, not only in the head of the jet, but in shear layers at the jet edge close to the position where flame lift-off eventually stabilizes. The LES analysis also shows concentrated pockets of CH2O, in the center of the jet and at 20 mm downstream of the injector (in regions where the equivalence ratio is greater than 6), that are of similar length scale and frequency as the experiment (approximately 5–6 kHz). The periodic oscillations of CH2O match the frequency of pressure waves generated during auto-ignition and reflected within the constant-volume vessel throughout injection.
The ability of LES to capture the periodic appearance and destruction of CH2O is particularly important because these structures travel downstream and become rich premixed flames that affect soot production.
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front face chamfer angle has the largest influence on stability, with low angles being more stable.
Computational simulation allows scientists to explore, observe, and test physical regimes thought to be unattainable. Validation and uncertainty quantification play crucial roles in extrapolating the use of physics-based models. Bayesian analysis provides a natural framework for incorporating the uncertainties that undeniably exist in computational modeling. However, the ability to perform quality Bayesian and uncertainty analyses is often limited by the computational expense of first-principles physics models. In the absence of a reliable low-fidelity physics model, phenomenological surrogate or machine learned models can be used to mitigate this expense; however, these data-driven models may not adhere to known physics or properties. Furthermore, the interactions of complex physics in high-fidelity codes lead to dependencies between quantities of interest (QoIs) that are difficult to quantify and capture when individual surrogates are used for each observable. Although this is not always problematic, predicting multiple QoIs with a single surrogate preserves valuable insights regarding the correlated behavior of the target observables and maximizes the information gained from available data. A method of constructing a Gaussian Process (GP) that emulates multiple QoIs simultaneously is presented. As an exemplar, we consider Magnetized Liner Inertial Fusion, a fusion concept that relies on the direct compression of magnetized, laser-heated fuel by a metal liner to achieve thermonuclear ignition. Magneto-hydrodynamics (MHD) codes calculate diagnostics to infer the state of the fuel during experiments, which cannot be measured directly. The calibration of these diagnostic metrics is complicated by sparse experimental data and the expense of high-fidelity neutron transport models. The development of an appropriate surrogate raises long-standing issues in modeling and simulation, including calibration, validation, and uncertainty quantification. 
The performance of the proposed multi-output GP surrogate model, which preserves correlations between QoIs, is compared to the standard single-output GP for a 1D realization of the MagLIF experiment.
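One common way to build a multi-output GP that preserves correlations between QoIs is the intrinsic coregionalization model, in which a single covariance kron(B, k(X, X')) couples the outputs through a matrix B. The sketch below, with an RBF kernel and B estimated from sample output covariances, is an illustrative assumption, not necessarily the construction used in this work.

```python
import numpy as np

def icm_gp_mean(X, Y, Xs, ell=1.0, noise=1e-4):
    # Intrinsic-coregionalization sketch of a multi-output GP posterior
    # mean: one covariance kron(B, k(X, X')) couples the p QoIs through
    # the output matrix B, here estimated from sample output covariances.
    # Kernel choice, ell, and noise are illustrative placeholders.
    def k(A, C):
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)          # RBF kernel
    n, p = Y.shape
    B = np.cov(Y.T) + 1e-6 * np.eye(p)               # QoI coupling matrix
    K = np.kron(B, k(X, X)) + noise * np.eye(n * p)
    Ks = np.kron(B, k(Xs, X))
    alpha = np.linalg.solve(K, Y.T.reshape(-1))      # outputs stacked blockwise
    return (Ks @ alpha).reshape(p, -1).T             # (m, p) posterior mean
```

Because the same input kernel and the coupling matrix B are shared across outputs, predictions for one QoI draw on training data from all QoIs, in contrast to independent single-output GPs.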
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP).
The International Electrotechnical Commission (IEC) Subcommittee SC45A has been active in the development of cybersecurity standards and technical reports on the protection of Instrumentation and Control (I&C) and Electrical Power Systems (EPS) that perform significant functions necessary for the safe and secure operation of Nuclear Power Plants (NPPs). These international standards and reports advance and promote the implementation of good practices around the world. In recent years, there have been advances in NPP cybersecurity risk management nationally and internationally. For example, IAEA publications NSS 17-T [1] and NSS 33-T [2] propose a framework for computer security risk management that implements a risk management program at both the facility and individual system levels. These international approaches (i.e., IAEA), national approaches (e.g., Canada’s HTRA [3]) and technical methods (e.g., HAZCADS [4], Cyber Informed Engineering [5], France’s EBIOS [6]) have advanced risk management within NPP cybersecurity programmes that implement international and national standards. This paper summarizes key elements of the analysis that developed the new IEC Technical Report. The paper identifies the eleven challenges for applying ISO/IEC 27005:2018 [7] cybersecurity risk management to I&C Systems and EPS of NPPs and provides a summary comparison of how national approaches address these challenges.
Pulsed dielectric barrier discharges (DBD) in He-H2O and He-H2O-O2 mixtures are studied in near atmospheric conditions using temporally and spatially resolved quantitative 2D imaging of the hydroxyl radical (OH) and hydrogen peroxide (H2O2). The primary goal was to detect and quantify the production of these strongly oxidative species in water-laden helium discharges in a DBD jet configuration, which is of interest for biomedical applications such as disinfection of surfaces and treatment of biological samples. Hydroxyl profiles are obtained by laser-induced fluorescence (LIF) measurements using 282 nm laser excitation. Hydrogen peroxide profiles are measured by photo-fragmentation LIF (PF-LIF), which involves photo-dissociating H2O2 into OH with a 212.8 nm laser sheet and detecting the OH fragments by LIF. The H2O2 profiles are calibrated by measuring PF-LIF profiles in a reference mixture of He seeded with a known amount of H2O2. OH profiles are calibrated by measuring OH-radical decay times and comparing these with predictions from a chemical kinetics model. Two different burst discharge modes with five and ten pulses per burst are studied, both with a burst repetition rate of 50 Hz. In both cases, dynamics of OH and H2O2 distributions in the afterglow of the discharge are investigated. Gas temperatures determined from the OH-LIF spectra indicate that gas heating due to the plasma is insignificant. The addition of 5% O2 in the He admixture decreases the OH densities and increases the H2O2 densities. The increased coupled energy in the ten-pulse discharge increases OH and H2O2 mole fractions, except for the H2O2 in the He-H2O-O2 mixture which is relatively insensitive to the additional pulses.
Neural ordinary differential equations (NODEs) have recently regained popularity as large-depth limits of a large class of neural networks. In particular, residual neural networks (ResNets) are equivalent to an explicit Euler discretization of an underlying NODE, where the transition from one layer to the next is one time step of the discretization. The relationship between continuous and discrete neural networks has been of particular interest. Notably, analysis from the ordinary differential equation viewpoint can potentially lead to new insights for understanding the behavior of neural networks in general. In this work, we take inspiration from differential equations to define the concept of stiffness for a ResNet via the interpretation of a ResNet as the discretization of a NODE. We then examine the effects of stiffness on the ability of a ResNet to generalize, via computational studies on example problems coming from climate and chemistry models. We find that penalizing stiffness does have a unique regularizing effect, but we see no benefit to penalizing stiffness over L2 regularization (penalization of network parameter norms) in terms of predictive performance.
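A minimal sketch of the ResNet-as-Euler-discretization viewpoint, assuming a simple two-layer residual function f(x) = W2 tanh(W1 x): one ResNet block advances the NODE state by a unit time step, and an ODE-style stiffness proxy can be computed from the eigenvalues of the Jacobian of f. The function names and the particular stiffness measure are ours, not the paper's.

```python
import numpy as np

def resnet_block(x, W1, W2):
    # One ResNet block is one explicit Euler step (dt = 1) of the NODE
    # dx/dt = f(x) with residual function f(x) = W2 tanh(W1 x).
    return x + W2 @ np.tanh(W1 @ x)

def stiffness_ratio(x, W1, W2):
    # ODE-style stiffness proxy at state x: ratio of largest to smallest
    # |Re(lambda)| over eigenvalues of the Jacobian of f,
    # J = W2 diag(1 - tanh^2(W1 x)) W1.
    J = W2 @ np.diag(1.0 - np.tanh(W1 @ x) ** 2) @ W1
    re = np.abs(np.linalg.eigvals(J).real)
    return re.max() / max(re.min(), 1e-12)
```

A stiffness penalty of the kind studied in the abstract would add such a quantity (or a smooth surrogate of it) to the training loss, to be compared against plain L2 weight decay.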
The current interest in hypersonic flows and the growing importance of plasma applications necessitate the development of diagnostics for high-enthalpy flow environments. Reliable and novel experimental data at relevant conditions will drive engineering and modeling efforts forward significantly. This study demonstrates the use of nanosecond Coherent Anti-Stokes Raman Scattering (CARS) to measure temperature in an atmospheric, high-temperature (> 5500 K) air plasma. The experimental configuration is of interest as the plasma is close to thermodynamic equilibrium and the setup is a test-bed for heat shield materials. The determination of the non-resonant background at such high temperatures is explored and rotational-vibrational equilibrium temperatures of the N2 ground state are determined via fits of the theory to measured spectra. Results show that the accuracy of the temperature measurements is affected by slow periodic variations in the plasma, causing sampling error. Moreover, depending on the experimental configuration, the measurements can be affected by two-beam interaction, which causes a bias towards lower temperatures, and stimulated Raman pumping, which causes a bias towards higher temperatures. The successful demonstration of CARS at the present conditions, and the exploration of its sensitivities, paves the way towards more complex measurements, e.g. close to interfaces in high-enthalpy plasma flows.
We present the SEU sensitivity and SEL results from proton and heavy ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
Manin, Julien L.; Vander Wal, Randy L.; Singh, Madhu; Bachalo, William; Payne, Greg; Howard, Robert
Carbonaceous particulate produced by a diesel engine and turbojet engine combustor are analyzed by transmission electron microscopy (TEM) for differences in nanostructure before and after pulsed laser annealing. Soot is examined between low/high diesel engine torque and low/high turbojet engine thrust. Small differences in nascent nanostructure are magnified by the action of high-temperature annealing induced by pulsed laser heating. Lamellae length distributions show occurrence of graphitization while tortuosity analyses reveal lamellae straightening. Differences in internal particle structure (hollow shells versus internal graphitic ribbons) are interpreted as due to higher internal sp3 and O-atom content under the higher power conditions with hypothesized greater turbulence and resulting partial premixing. TEM in concert with fringe analyses reveal that a similar degree of annealing occurs in the primary particles in soot from both diesel engine and turbojet engine combustors—despite the aggregate and primary size differences between these sources. Implications of these results for source identification of the combustion particulate and for laser-induced incandescence (LII) measurements of concentration are discussed with inter-instrument comparison of soot mass from both diesel and turbojet soot sources.
While significant investments have been made in the exploration of ethics in computation, recent advances in high performance computing (HPC) and artificial intelligence (AI) have reignited a discussion for more responsible and ethical computing with respect to the design and development of pervasive sociotechnical systems within the context of existing and evolving societal norms and cultures. The ubiquity of HPC in everyday life presents complex sociotechnical challenges for all who seek to practice responsible computing and ethical technological innovation. The present paper provides guidelines that scientists, researchers, educators, and practitioners alike can employ to become more aware of the personal values system that may unconsciously shape one’s approach to computation and ethics.
A method is presented to detect clear-sky periods in plane-of-array, time-averaged irradiance data, based on the algorithm originally described by Reno and Hansen. We show this new method improves the state of the art by providing accurate detection at longer data intervals and by detecting clear periods in plane-of-array data, which is novel. We illustrate how accurate determination of clear-sky conditions helps to eliminate data noise and bias in the assessment of long-term performance of PV plants.
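A simplified sketch of Reno-Hansen-style clear-sky detection: a sliding window of measured irradiance is compared against a clear-sky model using mean, maximum, and slope-variability criteria, and any window passing all tests marks its points as clear. The thresholds and window length below are illustrative placeholders, not the published values.

```python
import numpy as np

def clear_window(meas, model, mean_tol=75.0, max_tol=75.0, slope_tol=8.0):
    # One sliding-window test: compare measured irradiance (W/m^2) to a
    # clear-sky model via mean, max, and slope-variability criteria.
    # Thresholds are illustrative placeholders, not the published values.
    d = np.diff(meas) - np.diff(model)
    return (abs(meas.mean() - model.mean()) < mean_tol
            and abs(meas.max() - model.max()) < max_tol
            and np.std(d) < slope_tol)

def detect_clear(meas, model, window=10):
    # Mark a point clear if any window containing it passes all criteria.
    meas = np.asarray(meas, dtype=float)
    model = np.asarray(model, dtype=float)
    flags = np.zeros(meas.size, dtype=bool)
    for i in range(meas.size - window + 1):
        if clear_window(meas[i:i + window], model[i:i + window]):
            flags[i:i + window] = True
    return flags
```

The slope-variability criterion is what rejects cloudy periods: broken clouds produce point-to-point irradiance changes far larger than the smooth clear-sky model predicts, even when window means happen to agree.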
This presentation describes a new effort to better understand insulator flashover in high-current, high-voltage pulsed power systems. Both experimental and modeling investigations are described. Particular emphasis is placed on understanding flashover events that initiate at the anode triple junction (the anode-vacuum-dielectric interface).
Measurements of gas-phase temperature and pressure in hypersonic flows are important for understanding gas-phase fluctuations that can drive dynamic loading on model surfaces and for studying fundamental compressible flow turbulence. To achieve this capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied in Sandia National Laboratories’ cold-flow hypersonic wind tunnel facility. Measurements were performed for tunnel freestream temperatures of 42–58 K and pressures of 1.5–2.2 Torr. The CARS measurement volume was translated in the flow direction during a 30-second tunnel run using a single computer-controlled translation stage. After broadband femtosecond laser excitation, the rotational Raman coherence was probed twice: once at an early time, before the collisional environment had affected the Raman coherence, and again at a later time, after the collisional environment had led to significant dephasing of the coherence. The gas-phase temperature was obtained primarily from the early-probe CARS spectra, while the gas-phase pressure was obtained primarily from the late-probe CARS spectra. Challenges in implementing fs CARS in this facility, such as changes in the nonresonant spectrum at different measurement locations, are discussed.
A comprehensive study of the mechanical response of a 316 stainless steel is presented. The split-Hopkinson bar technique was used to evaluate the mechanical behavior at dynamic strain rates of 500 s−1, 1500 s−1, and 3000 s−1 and temperatures of 22 °C and 300 °C under tension and compression loading, while the Drop-Hopkinson bar was used to characterize the tension behavior at an intermediate strain rate of 200 s−1. The experimental results show that the tension and compression flow stress are reasonably symmetric, exhibit positive strain rate sensitivity, and are inversely dependent on temperature. The true failure strain was determined by measuring the minimum diameter of the post-test tension specimen. The 316 stainless steel exhibited a ductile response, and the true failure strain increased with increasing temperature and decreased with increasing strain rate.
A challenge for TW-class accelerators, such as Sandia's Z machine, is efficient power coupling due to current loss in the final power feed. It is also important to understand how such losses will scale to larger next generation pulsed power (NGPP) facilities. While modeling efforts are studying these power flow losses, it is important to have diagnostics that can experimentally measure plasmas in these conditions and help inform simulations. The plasmas formed in the power flow region can be challenging to diagnose due to both limited lines of sight and temperatures and densities significantly lower than those of typical plasmas studied on Z. This necessitates special diagnostic development to accurately measure the power flow plasma on Z.
In this work, a modular and open-source platform has been developed for integrating hybrid battery energy storage systems intended for grid applications. Alongside integration, this platform facilitates testing and optimal operation of hybrid storage technologies. A hardware testbed and control software have been designed: the former comprises commercial lithium iron phosphate (LiFePO4) and lead-acid (Pb-acid) cells, custom-built Dual Active Bridge (DAB) DC-DC converters, and a commercial DC-AC conversion system. In this testbed the batteries have an operating voltage range of 11-15 V, the DC-AC conversion stage has a DC link voltage of 24 V, and it connects to a 208 V three-phase grid. The hardware testbed can be scaled up to higher voltages. The control software is developed in Python, and the firmware for all the hardware components is developed in C. This software implements hybrid charge/discharge protocols suitable for each battery technology to prevent cell degradation, and performs uninterrupted quality checks on selected battery packs. The developed platform provides flexibility, modularity, safety, and economic benefits for utility-scale storage integration.
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
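The dual-zone absorption model in strategy (2) rests on the additivity of Beer-Lambert absorbance along the line of sight: each zone contributes a term of the form S(T)·phi(nu)·x·P·L to the total spectral absorbance. The sketch below illustrates this with Gaussian lineshapes and placeholder parameters; it is not the actual CO spectral model used for the measurements.

```python
import numpy as np

def gaussian_profile(nu, nu0, width):
    # Area-normalized Gaussian lineshape (Doppler-dominated limit).
    return (np.exp(-0.5 * ((nu - nu0) / width) ** 2)
            / (width * np.sqrt(2.0 * np.pi)))

def multi_zone_absorbance(nu, zones):
    # Beer-Lambert sketch for a nonuniform LOS: total spectral absorbance
    # is the sum over zones of S(T) * phi(nu) * x * P * L, where each zone
    # carries its own temperature-dependent linestrength S, mole fraction
    # x, pressure P, and path length L. All values here are placeholders,
    # not HITRAN CO line data.
    alpha = np.zeros_like(nu, dtype=float)
    for z in zones:
        alpha += (z['S'] * gaussian_profile(nu, z['nu0'], z['width'])
                  * z['x'] * z['P'] * z['L'])
    return alpha
```

Fitting such a two-zone model to a measured spectrum is what allows temperature, pressure, and CO to be extracted separately for the hot outer shell and the fireball core.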
The Information Harm Triangle (IHT) is a novel approach that aims to adapt intuitive engineering concepts to simplify defense in depth for instrumentation and control (I&C) systems at nuclear power plants. This approach combines digital harm, real-world harm, and unsafe control actions (UCAs) into a single graph named the “Information Harm Triangle.” The IHT is based on the postulation that the consequences of cyberattacks targeting I&C systems can be expressed in terms of two orthogonal components: one representing the magnitude of data harm (DH) (i.e., digital information harm) and one representing physical information harm (PIH) (i.e., real-world harm, e.g., an inadvertent plant trip). The severity of the physical consequence is the aspect of risk of primary concern. The sum of these two components represents the total information harm. The IHT intuitively informs risk-informed cybersecurity strategies that employ independent measures to prevent, reduce, or mitigate DH or PIH. Another aspect of the IHT is that DH can result in cyber-initiated UCAs that produce severe physical consequences. The orthogonality of DH and PIH provides insights into designing effective defense in depth. The IHT can also represent cyberattacks that have the potential to impede, evade, or compromise countermeasures from taking appropriate action to reduce, stop, or mitigate the harm caused by such UCAs. Cyber-initiated UCAs transform DH into PIH.
The V31 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is 24 lb (11 kg) TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lb (11 kg) of Composition C-4 (30 lb [14 kg] TNT equivalent). This test was considered the maximum load case, based on modeling and simulation methods performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge, located central to the vessel interior of 19.2 lb (8.72 kg) of Composition C-4 (24 lb [11 kg] TNT equivalent). Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb (908 g) each, distributed evenly inside the vessel (totaling 19.2 lb [8.72 kg] of C-4, or 24 lb [11 kg] TNT equivalent). All vessel acceptance criteria were met.
The DevOps movement, which aims to accelerate the continuous delivery of high-quality software, has taken a leading role in reshaping the software industry. Likewise, there is growing interest in applying DevOps tools and practices in the domains of computational science and engineering (CSE) to meet the ever-growing demand for scalable simulation and analysis. Translating insights from industry to research computing, however, remains an ongoing challenge; DevOps for science and engineering demands adaptation and innovation in those tools and practices. There is a need to better understand the challenges faced by DevOps practitioners in CSE contexts in bridging this divide. To that end, we conducted a participatory action research study to collect and analyze the experiences of DevOps practitioners at a major US national laboratory through the use of storytelling techniques. We share lessons learned and present opportunities for future investigation into DevOps practice in the CSE domain.
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Laros, James H.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted on a wave tank model of a bottom raised oscillating surge wave energy converter (OSWEC) model in regular waves. The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed form expressions for added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study, in which system parameters, including damping and added mass values, are varied. The likely contributors to the differences between predictions and experiments are attributed to tank reflections, standing waves that can occur in long, narrow wave tanks, as well as the thin plate assumption employed in the analytical approach.
Due to their increased levels of reliability, meshed low-voltage (LV) grid and spot networks are common topologies for supplying power to dense urban areas and critical customers. Protection schemes for LV networks often use highly sensitive reverse current trip settings to detect faults in the medium-voltage system. As a result, interconnecting even low levels of distributed energy resources (DERs) can impact the reliability of the protection system and cause nuisance tripping. This work analyzes the possibility of modifying the reverse current relay trip settings to increase the DER hosting capacity of LV networks without impacting fault detection performance. The results suggest that adjusting relay settings can significantly increase DER hosting capacity on LV networks without adverse effects, and that existing guidance on connecting DERs to secondary networks, such as that contained in IEEE Std 1547-2018, could potentially be modified to allow higher DER deployment levels.
This paper describes the methodology of designing a replacement blade tip and winglet for a wind turbine blade to demonstrate the potential of additive manufacturing for wind energy. The team will later field-demonstrate this additive-manufactured, system-integrated tip (AMSIT) on a wind turbine. The blade tip aims to reduce the cost of wind energy by improving aerodynamic performance and reliability, while reducing transportation costs. This paper focuses on the design and modeling of a winglet for increased power production while maintaining acceptable structural loads of the original Vestas V27 blade design. A free-wake vortex model, WindDVE, was used for the winglet design analysis. A summary of the aerodynamic design process is presented along with a case study of a specific design.
Uncertainty quantification (UQ) plays a critical role in verifying and validating forward integrated computational materials engineering (ICME) models. Among numerous ICME models, the crystal plasticity finite element method (CPFEM) is a powerful tool that enables one to assess microstructure-sensitive behaviors and thus bridge material structure to performance. Nevertheless, given the nature of its constitutive model form and the randomness of microstructures, CPFEM is exposed to both aleatory uncertainty (microstructural variability) and epistemic uncertainty (parametric and model-form error). Therefore, the observations are often corrupted by microstructure-induced uncertainty, as well as ICME approximation and numerical errors. In this work, we highlight several ongoing research topics in UQ, optimization, and machine learning applications for CPFEM to efficiently solve forward and inverse problems. The first aspect of this work addresses the UQ of constitutive models for epistemic uncertainty, including both phenomenological and dislocation-density-based constitutive models, where the quantities of interest (QoIs) are related to the initial yield behaviors. We apply a stochastic collocation (SC) method to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different types of crystal structures, namely face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). The second aspect of this work addresses the aleatory and epistemic uncertainty with multiple mesh resolutions and multiple constitutive models by the multi-index Monte Carlo method, where the QoI is also related to homogenized material properties.
We present a unified approach that accounts for various fidelity parameters, such as mesh resolutions, integration time-steps, and constitutive models simultaneously. We illustrate how multilevel sampling methods, such as multilevel Monte Carlo (MLMC) and multi-index Monte Carlo (MIMC), can be applied to assess the impact of variations in the microstructure of polycrystalline materials on the predictions of macroscopic mechanical properties. The third aspect of this work addresses the crystallographic texture study of a single void in a cube. Using a parametric reduced-order model (also known as parametric proper orthogonal decomposition) with a global orthonormal basis as a model reduction technique, we demonstrate that the localized dynamic stress and strain fields can be predicted as a spatiotemporal problem.
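The multilevel telescoping idea behind MLMC can be illustrated with a minimal sketch. The toy "fidelity hierarchy" below (a truncated Taylor series, invented purely for illustration and unrelated to the CPFEM models) decomposes the finest-level expectation as E[P_L] = E[P_0] + Σ E[P_l − P_{l−1}], drawing most samples at the cheap coarse level and coupling fine and coarse evaluations on the same random input:

```python
import math
import random

def model(x, level):
    """Toy fidelity hierarchy: sin(x) approximated by a Taylor series
    truncated after level+1 terms (finer level => smaller bias)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(level + 1))

def mlmc_estimate(levels, samples_per_level, rng):
    """Multilevel Monte Carlo estimate of E[sin(X)], X ~ U(0, 1), via the
    telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    estimate = 0.0
    for level, n in zip(levels, samples_per_level):
        total = 0.0
        for _ in range(n):
            x = rng.random()  # same draw used at both fidelities (coupling)
            fine = model(x, level)
            coarse = model(x, level - 1) if level > 0 else 0.0
            total += fine - coarse
        estimate += total / n
    return estimate

rng = random.Random(0)
# Many cheap coarse samples, few expensive fine samples.
est = mlmc_estimate(levels=[0, 1, 2, 3],
                    samples_per_level=[4000, 1000, 250, 60], rng=rng)
# Exact value for comparison: E[sin(X)] = 1 - cos(1)
```

Because the level differences P_l − P_{l−1} have small variance, the fine levels need only a handful of samples, which is the source of the cost savings exploited in the multi-index extension.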
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms, allowing for measuring algorithm improvement over time. An absence of such tests contributes to the proliferation of fitting methods and inhibits achieving consensus on best practices. Benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
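As a sketch of how such a benchmark curve can be generated, the snippet below evaluates the standard single-diode model at known parameter values (the parameter values are illustrative placeholders, not those of the actual benchmark suite); a fitting algorithm would then be scored on how well it recovers the known parameters from the simulated points:

```python
import math

def single_diode_current(v, il, i0, rs, rsh, a, tol=1e-12):
    """Solve the implicit single-diode equation
    I = IL - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
    for the terminal current I at voltage V using Newton's method."""
    i = il  # initial guess: the photocurrent
    for _ in range(100):
        e = math.exp((v + i * rs) / a)
        f = il - i0 * (e - 1.0) - (v + i * rs) / rsh - i
        df = -i0 * rs / a * e - rs / rsh - 1.0
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i

# Illustrative "known solution": photocurrent, saturation current,
# series/shunt resistance, and modified ideality factor a (volts).
params = dict(il=5.0, i0=1e-9, rs=0.2, rsh=300.0, a=1.5)
curve = [(v, single_diode_current(v, **params))
         for v in [0.5 * k for k in range(67)]]  # V from 0 to 33 V
```

Simulated measurement error, as mentioned above, could then be added by perturbing the (V, I) pairs with noise of a documented distribution before handing the curve to the algorithm under test.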
Spent nuclear fuel repository simulations are currently not able to incorporate detailed fuel matrix degradation (FMD) process models due to their computational cost, especially when large numbers of waste packages breach. The current paper uses machine learning to develop artificial neural network and k-nearest neighbor regression surrogate models that approximate the detailed FMD process model while being computationally much faster to evaluate. Using fuel cask temperature, dose rate, and the environmental concentrations of CO₃²⁻, O₂, Fe²⁺, and H₂ as inputs, these surrogates show good agreement with the FMD process model predictions of the UO₂ degradation rate for conditions within the range of the training data. A demonstration in a full-scale shale repository reference case simulation shows that the incorporation of the surrogate models captures local and temporal environmental effects on fuel degradation rates while retaining good computational efficiency.
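A k-nearest-neighbor regression surrogate of the kind described can be sketched in a few lines of pure Python (the training data here are made-up stand-ins for the FMD process-model runs, and the standardization step and choice of k are illustrative, not the paper's settings):

```python
import math

def standardize(rows):
    """Column-wise z-scoring so that inputs with different units
    (temperature, dose rate, concentrations) contribute comparably."""
    cols = list(zip(*rows))
    stats = []
    for c in cols:
        mean = sum(c) / len(c)
        std = (sum((x - mean) ** 2 for x in c) / len(c)) ** 0.5 or 1.0
        stats.append((mean, std))
    scaled = [[(x - m) / s for x, (m, s) in zip(row, stats)] for row in rows]
    return scaled, stats

def knn_predict(x, train_x, train_y, k=3):
    """Predict by averaging the responses of the k nearest training points."""
    dists = sorted((math.dist(x, xt), yt) for xt, yt in zip(train_x, train_y))
    return sum(y for _, y in dists[:k]) / k

# Toy training set: 2-D inputs, response y = x0 + 2*x1.
train_x = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2]]
train_y = [0.0, 2.0, 1.0, 3.0, 6.0]
```

In a repository simulation, the expensive process-model call at each breached package would be replaced by a lookup of this form over precomputed training runs, which is where the reported speedup comes from.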
A jet is formed from the venting gases of lithium-ion batteries during thermal runaway. Heat fluxes to surrounding surfaces from the vented gases are calculated with simulations of an impinging jet in a narrow gap. Heat transfer correlations for the impinging jet are used as a point of reference. Three cases of different gap sizes and jet velocities are investigated and safety hazards are assessed. Local and global safety hazards are addressed based on average heat flux, average temperature, and average temperature rise in a cell. The results show that about 40% to 70% of the venting gas energy can leave the module gap, where it can be transferred to other modules or cause combustion at the end of the gap if suitable conditions are satisfied. This work shows that multiple venting events are needed to raise the temperatures of the other modules' cells enough to drive them into thermal runaway. This work is a preliminary assessment for future analysis that will consider heat transfer to adjacent modules from multiple venting events.
Geomagnetic disturbances (GMDs) give rise to geomagnetically induced currents (GICs) on the earth's surface which find their way into power systems via grounded transformer neutrals. The quasi-dc nature of the GICs results in half-cycle saturation of the power grid transformers which in turn results in transformer failure, life reduction, and other adverse effects. Therefore, transformers need to be more resilient to dc excitation. This paper sets forth dc immunity metrics for transformers. Furthermore, this paper sets forth a novel transformer architecture and a design methodology which employs the dc immunity metrics to make it more resilient to dc excitation. This is demonstrated using a time-stepping 2D finite element analysis (FEA) simulation. It was found that a relatively small change in the core geometry significantly increases transformer resiliency with respect to dc excitation.
The error detection performance of cyclic redundancy check (CRC) codes combined with bit framing in digital serial communication systems is evaluated. Advantages and disadvantages of the combined method are treated in light of the probability of undetected errors. It is shown that bit framing can increase the burst error detection of the CRC, but it can also adversely affect CRC random error detection performance. To quantify the effect of bit framing on CRC error detection, the concept of error "exposure" is introduced. Our investigations lead us to propose resilient generator polynomials that, when combined with bit framing, can result in improved CRC error detection performance at no additional implementation cost. Example results are generated for short codewords showing that proper choice of CRC generator polynomial can improve error detection performance when combined with bit framing. The implication is that CRC combined with bit framing can reduce the probability of undetected errors even under high error rate conditions.
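The basic CRC mechanics under discussion (append a remainder to the message, then flag any received word whose remainder is nonzero) can be sketched as polynomial long division over GF(2); the small generator below is purely illustrative and is not one of the resilient polynomials the paper proposes:

```python
def crc_remainder(bits, poly):
    """Remainder of the bit sequence (times x^(deg poly)) divided by the
    generator polynomial, both given MSB-first as 0/1 lists, over GF(2)."""
    bits = bits + [0] * (len(poly) - 1)  # append space for the check field
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p  # XOR = subtraction in GF(2)
    return bits[-(len(poly) - 1):]

def encode(msg, poly):
    """Codeword = message followed by its CRC remainder."""
    return list(msg) + crc_remainder(list(msg), poly)

def detects(codeword, poly):
    """A nonzero remainder at the receiver signals a detected error."""
    return any(crc_remainder(list(codeword), poly))

poly = [1, 0, 1, 1]  # illustrative CRC-3 generator: x^3 + x + 1
```

Bit framing, as studied in the paper, would insert or remove framing bits between the encoder and the channel; the "exposure" concept quantifies how that remapping changes which channel error patterns land on codewords the CRC cannot distinguish.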
Low loss silicon nitride ring resonator reflectors provide feedback to a III/V gain chip, achieving single-mode lasing at 772 nm. The Si₃N₄ is fabricated in a CMOS foundry compatible process that achieves loss values of 0.036 dB/cm.
A high altitude electromagnetic pulse (HEMP) or other similar geomagnetic disturbance (GMD) has the potential to severely impact the operation of large-scale electric power grids. By introducing low-frequency common-mode (CM) currents, these events can impact the performance of key system components such as large power transformers. In this work, a solid-state transformer (SST) that can replace susceptible equipment and improve grid resiliency by safely absorbing these CM insults is described. An overview of the proposed SST power electronics and controls architecture is provided, a system model is developed, and the performance of the SST in response to a simulated CM insult is evaluated. Compared to a conventional magnetic transformer, the SST is found to recover quickly from the insult while maintaining nominal ac input/output behavior.
Two-dimensional (2D) layered oxides have recently attracted wide attention owing to the strong coupling among charges, spins, lattice, and strain, which allows great flexibility and opportunities in structure design as well as multifunctionality exploration. In parallel, plasmonic hybrid nanostructures exhibit exotic localized surface plasmon resonance (LSPR), providing a broad range of applications in nanophotonic devices and sensors. A hybrid material platform combining the unique multifunctional 2D layered oxides and plasmonic nanostructures brings optical tuning to a new level. In this work, a novel self-assembled Bi₂MoO₆ (BMO) 2D layered oxide incorporated with plasmonic Au nanoinclusions has been demonstrated via a one-step pulsed laser deposition (PLD) technique. Comprehensive microstructural characterizations, including scanning transmission electron microscopy (STEM), differential phase contrast (DPC) imaging, and STEM tomography, have demonstrated the high epitaxial quality and particle-in-matrix morphology of the BMO-Au nanocomposite film. DPC-STEM imaging clarifies the magnetic domain structures of the BMO matrix. Three different BMO structures, including a layered supercell (LSC) and superlattices, have been revealed, which is attributed to the variable strain states throughout the BMO-Au film. Owing to the combination of plasmonic Au and the layered structure of BMO, the nanocomposite film exhibits a typical LSPR in the visible wavelength region and strong anisotropy in terms of its optical and ferromagnetic properties. This study opens a new avenue for developing novel 2D layered complex oxides incorporated with plasmonic metal or semiconductor phases, showing great potential for applications in multifunctional nanoelectronic devices.
Extreme meteorological events, such as hurricanes and floods, cause significant infrastructure damage and, as a result, prolonged grid outages. To mitigate the negative effect of these outages and enhance the resilience of communities, microgrids consisting of solar photovoltaics (PV), energy storage (ES) technologies, and backup diesel generation are being considered. Furthermore, it is necessary to take into account how the extreme event affects the systems' performance during the outage, often referred to as black-sky conditions. In this paper, an optimization model is introduced to properly size ES and PV technologies to meet various durations of grid outages for selected critical infrastructure while considering black-sky conditions. A case study of the municipality of Villalba, Puerto Rico is presented to identify several potential microgrid configurations that increase the community's resilience. Sensitivity analyses are performed around the grid outage durations and black-sky conditions to better decide what factors should be considered when scoping potential microgrids for community resilience.
Multiple rotors on single structures have long been proposed to increase wind turbine energy capture with no increase in rotor size, but at the cost of additional mechanical complexity in the yaw and tower designs. Standard turbines on their own very-closely-spaced towers avoid these disadvantages but create a significant disadvantage: for some wind directions, the wake turbulence of a rotor enters the swept area of a very close downwind rotor, causing low output, fatigue stress, and changes in wake recovery. Knowing how the performance of pairs of closely spaced rotors varies with wind direction is essential to design a layout that maximizes the useful directions and minimizes the losses and stress at other directions. In the current work, the high-fidelity large-eddy simulation (LES) code ExaWind/Nalu-Wind is used to simulate the wake interactions from paired-rotor configurations in a neutrally stratified atmospheric boundary layer to investigate performance and feasibility. Each rotor pair consists of two Vestas V27 turbines with hub-to-hub separation distances of 1.5 rotor diameters. The on-design wind direction results are consistent with previous literature. For an off-design wind direction of 26.6°, results indicate little change in power and far-wake recovery relative to the on-design case. At a direction of 45.0°, significant rotor-wake interactions produce an increase in power but also in far-wake velocity deficit and turbulence intensity. A severely off-design case is also considered.
This study investigated the durability of four high-temperature coatings for use as a Gardon gauge foil coating. Failure modes and effects analysis has identified the Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high-intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity properties were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest that rapid high-temperature cycling did not significantly impact coating optical properties and physical state. In contrast, prolonged exposure of coatings to high temperatures degraded coating optical properties and physical state. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6-24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provides the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest that flux gauge foil coatings could benefit from long-duration high-temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high-flux and high-temperature applications.
Puerto Rico faced a double strike from hurricanes Irma and Maria in 2017. The resulting damage required a comprehensive rebuild of electric infrastructure. There are plans and pilot projects to rebuild with microgrids to increase resilience. This paper provides a techno-economic analysis technique and case study of a potential future community in Puerto Rico that combines probabilistic microgrid design analysis with tiered circuits in building energy modeling. Tiered circuits in buildings allow electric load reduction via remote disconnection of non-critical circuits during an emergency. When coupled to a microgrid, tiered circuitry can reduce the chances of a microgrid's storage and generation resources being depleted. The analysis technique is applied to show 1) approximate cost savings due to a tiered circuit structure and 2) approximate cost savings gained by simultaneously considering resilience and sustainability constraints in the microgrid optimization. The analysis technique uses a resistive-capacitive thermal model with load profiles for four tiers (tiers 1-3 and non-critical loads). Three analyses were conducted using 1) open-source software called Tiered Energy in Buildings and 2) the Microgrid Design Toolkit. For a fossil-fuel-based microgrid, cost savings of 30% of the total microgrid costs of 1.18 million USD were calculated, where the non-tiered case keeps all loads 99.9% available and the tiered case keeps tier 1 at 99.9%, tier 2 at 95%, and tier 3 at 80% availability, with no requirement on non-critical loads. The same comparison for a sustainable microgrid showed 8% cost savings on a 5.10 million USD microgrid due to tiered circuits. The results also showed 6-7% cost savings when our analysis technique optimizes sustainability and resilience simultaneously, in comparison to doing microgrid resilience analysis and renewables net present value analysis independently.
Though highly specific to our case study, similar assessments using our analysis technique can elucidate the value of tiered circuits and of simultaneously considering sustainability and resilience in other locations.
This is an investigation of two experimental datasets of laminar hypersonic flows over a double-cone geometry, acquired in the Calspan-University at Buffalo Research Center's Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could partly be due to mis-specified inlet conditions. The authors of this paper solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset. However, the inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier–Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that deterministic inversion yields inlet conditions that do not agree with what was stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
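Under the Gaussian assumption discussed, deterministic inversion frameworks commonly obtain uncertainty bounds from a Laplace (linearized) approximation of the posterior about the MAP estimate; a standard form of this approximation, stated here as general background rather than as this paper's specific formulation, is

```latex
\pi_{\mathrm{post}}(m \mid d) \;\approx\;
  \mathcal{N}\!\Big( m_{\mathrm{MAP}},\;
  \big( J^{\mathsf{T}} \Gamma_{\mathrm{noise}}^{-1} J
        + \Gamma_{\mathrm{prior}}^{-1} \big)^{-1} \Big),
```

where m denotes the inlet-condition parameters, d the data, J the Jacobian of the forward (Navier–Stokes) model at the MAP point, and Γ_noise, Γ_prior the noise and prior covariances. The a posteriori check mentioned above would then assess how well the true posterior is described by such a Gaussian.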
Prescriptive approaches for the cybersecurity of digital nuclear instrumentation and control (I&C) systems can be cumbersome and costly. These considerations are of particular concern for advanced reactors that implement digital technologies for monitoring, diagnostics, and control. A risk-informed performance-based approach is needed to enable the efficient design of secure digital I&C systems for nuclear power plants. This paper presents a tiered cybersecurity analysis (TCA) methodology as a graded approach for cybersecurity design. The TCA is a sequence of analyses that align with the plant, system, and component stages of design. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant's safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Tier 3 is not performed in this analysis because of the design maturity required for this tier of analysis.
Proceedings - Electronic Components and Technology Conference
Li, Xingchen; Jia, Xiaofan; Kim, Joon W.; Moon, Kyoung S.; Jordan, Matthew J.; Swaminathan, Madhavan
This paper presents a die-embedded glass interposer with minimum warpage for 5G/6G applications. The interposer achieves high integration with low-loss interconnects by embedding multiple chips in the same glass substrate and interconnecting the chips through redistribution layers (RDL). Novel processes for cavity creation, multi-die embedding, carrier-less RDL build-up, and heat spreader attachment are proposed and demonstrated in this work. Performance of the interposer from 1 GHz to 110 GHz is evaluated. This work provides an advanced packaging solution for low-loss die-to-die and die-to-package interconnects, which is essential to high-performance wireless system integration.
Michelsen, Hope A.; Boigne, Emeric; Schrader, Paul E.; Johansson, K.O.; Campbell, Matthew F.; Bambha, Ray P.; Ihme, Matthias
We have developed a new method for extracting particulates and gas-phase species from flames. This technique involves directing a small jet of inert gas through the flame to entrain the sample, which is then collected by a probe on the other side of the flame. This sampling technique does not require inserting a probe or sampling surface into the flame and thus avoids effects on the flame due to conductive cooling by the probe and recombination, quenching, and deposition reactions at the sampling surface in contact with the flame. This approach thus allows for quenching and diluting the sample during extraction while minimizing the perturbations to the flame that have a substantial impact on flame chemistry. It also circumvents clogging of the probe with soot, a problem that commonly occurs when a probe is inserted into a hydrocarbon-rich premixed or diffusion flame. In this paper, we present experimental results demonstrating the application of this technique to the extraction of soot particles from a co-flow ethylene/air diffusion flame. The extracted samples were analyzed using transmission electron microscopy (TEM), and the results are compared with measurements using in situ diagnostics, i.e., laser-induced incandescence and small-angle X-ray scattering. We also compare TEM images of particles sampled using this approach with those sampled using rapid-insertion thermophoretic sampling, a common technique for extracting particles from flames. In addition, we have performed detailed numerical simulations of the flow field associated with this new sampling approach to assess the impact it has on the flame structure and sample following extraction. The results presented in this paper demonstrate that this jet-entrainment sampling technique has significant advantages over other common sample-extraction methods.
Kolmogorov's theory of turbulence assumes that the small-scale turbulent structures in the energy cascade are universal and are determined by the energy dissipation rate and the kinematic viscosity alone. However, thermal fluctuations, absent from the continuum description, terminate the energy cascade near the Kolmogorov length scale. Here, we propose a simple superposition model to account for the effects of thermal fluctuations on small-scale turbulence statistics. For compressible Taylor-Green vortex flow, we demonstrate that the superposition model in conjunction with data from direct numerical simulation of the Navier-Stokes equations yields spectra and structure functions that agree with the corresponding quantities computed from the direct simulation Monte Carlo method of molecular gas dynamics, verifying the importance of thermal fluctuations in the dissipation range.
A microgrid is characterized by a high R/X ratio, making the voltage more sensitive to active power changes, unlike in bulk power systems where voltage is mostly regulated by reactive power. Because of this sensitivity, a voltage control approach for microgrids should incorporate active power as well and is therefore very different from that of conventional power systems; the energy costs associated with active and reactive power also differ. Furthermore, because of diverse generation sources and components such as distributed energy resources and energy storage systems, model-based control approaches might not perform well. This paper proposes a reinforcement learning-based voltage support framework for a microgrid in which an agent learns a control policy by interacting with the microgrid, without requiring a mathematical model of the system. A MATLAB/Simulink simulation study on a test system from Cordova, Alaska shows a large reduction in voltage deviation (about 2.5-4.5 times). This reduction in voltage deviation can improve the power quality of the microgrid, ensuring a reliable supply, longer equipment lifespan, and stable user operations.
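The learning loop of such a model-free framework can be illustrated with a minimal tabular Q-learning sketch on a toy single-bus voltage model (the dynamics, discretization, and hyperparameters below are all invented for illustration and bear no relation to the Cordova test system or the paper's actual agent):

```python
import random

rng = random.Random(1)
ACTIONS = [-1, 0, 1]  # decrease / hold / increase active power injection

def step(v, action):
    """Toy plant: active power injection nudges the bus voltage (p.u.)."""
    v_next = min(1.2, max(0.8, v + 0.02 * action))
    reward = -abs(v_next - 1.0)  # penalize deviation from nominal voltage
    return v_next, reward

def bucket(v):
    """Discretize the 0.8-1.2 p.u. range into integer states."""
    return round((v - 0.8) / 0.02)

Q = {}  # state-action value table, learned purely from interaction
for episode in range(2000):
    v = rng.uniform(0.8, 1.2)
    for _ in range(20):
        s = bucket(v)
        if rng.random() < 0.3:  # epsilon-greedy exploration
            a = rng.randrange(3)
        else:
            a = max(range(3), key=lambda i: Q.get((s, i), 0.0))
        v_next, r = step(v, ACTIONS[a])
        best_next = max(Q.get((bucket(v_next), i), 0.0) for i in range(3))
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + 0.5 * (r + 0.9 * best_next - q)  # Q-learning update
        v = v_next

def policy(v):
    """Greedy learned policy: pick the action with the highest Q value."""
    s = bucket(v)
    return ACTIONS[max(range(3), key=lambda i: Q.get((s, i), 0.0))]
```

The key point, as in the paper, is that the update rule uses only observed transitions and rewards; no analytical model of the voltage dynamics enters the controller.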
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and Light Water Reactor Sustainability Programs, have conducted testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 91.4 cm in depth) and more complex three-dimensional (circular cross sections of longer lengths up to 9.1 m and changes in direction) opening configurations. The primary impact of this effort is to define the physical design in which an adversary could successfully pass through a potentially complex opening, as well as to define the designs in which an adversary would not be expected to successfully traverse a complex opening. These data can then be used to support risk-informed decision making.
Austenitic stainless steels are used in high-pressure hydrogen containment infrastructure for their resistance to hydrogen embrittlement. Applications for the use of austenitic stainless steels include pressure vessels, tubing, piping, valves, fittings, and other piping components. Despite their resistance to brittle behavior in the presence of hydrogen, austenitic stainless steels can exhibit degraded fracture performance. The mechanisms of hydrogen-assisted fracture, however, remain elusive, which has motivated continued research on these alloys. There are two principal approaches to evaluate the influence of gaseous hydrogen on mechanical properties: internal hydrogen and external hydrogen. The austenite phase has high solubility and low diffusivity of hydrogen at room temperature, which enables introduction of hydrogen into the material through thermal precharging at elevated temperature and pressure, a condition referred to as internal hydrogen. H-precharged material can subsequently be tested in ambient conditions. Alternatively, mechanical testing can be performed while test coupons are immersed in gaseous hydrogen, thereby evaluating the effects of external hydrogen on property degradation. The slow diffusivity of hydrogen in austenite at room temperature can often be a limiting factor in external hydrogen tests and may not properly characterize lower-bound fracture behavior in components exposed to hydrogen for long time periods. In this study, the differences between internal and external hydrogen environments are evaluated in the context of fracture resistance measurements. Fracture testing was performed on two different forged austenitic stainless steel alloys (304L and XM-11) in three different environments: 1) non-charged and tested in gaseous hydrogen at a pressure of 1,000 bar (external H₂), 2) hydrogen-precharged and tested in air (internal H), and 3) hydrogen-precharged and tested in 1,000 bar H₂ (internal H + external H₂).
For all environments, elastic-plastic fracture measurements were conducted to establish J-R curves following the methods of ASTM E1820. Following fracture testing, fracture surfaces were examined to reveal predominant fracture mechanisms for the different conditions and to characterize differences (and similarities) in the macroscale fracture processes associated with these environmental conditions.
Surrogate construction is an essential component of all non-deterministic analyses in science and engineering. The efficient construction of easier and cheaper-to-run alternatives to a computationally expensive code paves the way for outer-loop workflows for forward and inverse uncertainty quantification and optimization. Unfortunately, the accurate construction of a surrogate still remains a task that often requires a prohibitive number of computations, making the approach unattainable for large-scale and high-fidelity applications. Multifidelity approaches offer the possibility to lower the computational expense required of the high-fidelity code by fusing data from additional sources. In this context, we have demonstrated that multifidelity Bayesian Networks (MFNets) can efficiently fuse information derived from models with an underlying complex dependency structure. In this contribution, we expand on our previous work by adopting a basis adaptation procedure for the selection of the linear model representing each data source. Our numerical results demonstrate that this procedure is computationally advantageous because it can maximize the use of limited data to learn and exploit the important structures shared among models. Two examples are considered to demonstrate the benefits of the proposed approach: an analytical problem and a nuclear fuel finite element assembly. From these two applications, a lower dependency of MFNets on the model graph structure has also been observed.
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electric signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology. Therefore, a wide range of analyses is required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated receptacle was studied using high-fidelity simulations. Structural dynamic simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics. Subsequent multi-physics simulations are discussed that relate the contact mechanics associated with the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized by data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in both the simulation and experimental approaches, so that the relationship between the two could be established.
Diffusion bonding of two immiscible binary metallic systems, Cu-Ta and Cu-W, was employed to make repeatable and predictable dual-layer impactors for shock-reshock experiments. The diffusion bonded impactors were characterized using ultrasonic imaging and optical microscopy to ensure bonding and the absence of excessive Cu grain coarsening. The diffusion bonded impactors were launched via a two-stage gas gun at [100] LiF windows instrumented with multiple interferometry probes spanning nearly the entire impactor area. Consistent interferometry data were obtained from all experiments with no evidence of release prior to recompression, indicating a uniform bond. Comparisons to hydrocode simulations show excellent agreement for all experiments, facilitating easy application of these impactors to future experiments.
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests that the buy-down of risk gained by considering the fireball is minimal relative to the blast hazards. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published that same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics, such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: Supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10^13 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10^18 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years.
Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is actually quite remarkable that—apart from the change in semantics for the parallelization—this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
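The growth arithmetic quoted above can be checked in a few lines; a minimal sketch:

```python
import math

# Supercomputer performance figures cited in the text
early_2000s_flops = 1e13  # ~10 TeraFLOPS
year_2022_flops = 1e18    # ~1 ExaFLOPS

fold_increase = year_2022_flops / early_2000s_flops
doublings = math.log2(fold_increase)

print(fold_increase)        # 100000.0 -> the 100,000-fold increase
print(round(doublings, 1))  # 16.6 -> "almost 17 doublings" in 22 years
```

A doubling every ~16 months over 22 years, i.e. faster than the 18–24 month Moore's law cadence alone, which is why larger machines account for the remainder.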
Previous research has provided strong evidence that CO2 and H2O gasification reactions can provide non-negligible contributions to the consumption rates of pulverized coal (pc) char during combustion, particularly in oxy-fuel environments. Fully quantifying the contribution of these gasification reactions has proven to be difficult, due to the dearth of knowledge of gasification rates at the elevated particle temperatures associated with typical pc char combustion processes, as well as the complex interaction of oxidation and gasification reactions. Gasification reactions tend to become more important at higher char particle temperatures (because of their high activation energy) and they tend to reduce pc oxidation due to their endothermicity (i.e., a cooling effect). The work reported here attempts to quantify the influence of the gasification reaction of CO2 in a rigorous manner by combining experimental measurements of the particle temperatures and consumption rates of size-classified pc char particles in tailored oxy-fuel environments with simulations from a detailed reacting porous particle model. The results demonstrate that a specific gasification reaction rate relative to the oxidation rate (within an accuracy of approximately +/- 20% of the pre-exponential value) is consistent with the experimentally measured char particle temperatures and burnout rates in oxy-fuel combustion environments. Conversely, the results also show, in agreement with past calculations, that it is extremely difficult to construct a set of kinetics that does not substantially overpredict particle temperature increase in strongly oxygen-enriched N2 environments. This latter result is believed to stem from deficiencies in standard oxidation mechanisms that fail to account for falloff in char oxidation rates at high temperatures.
This paper elaborates the results of the hardware implementation of a traveling wave (TW) protection device (PD) for DC microgrids. The proposed TWPD is implemented on a commercial digital signal processor (DSP) board. In the developed TWPD, first, the DSP board's Analog to Digital Converter (ADC) is used to sample the input at a 1 MHz sampling rate. The analog input card of the DSP board measures the pole current at the TWPD location in the DC microgrid. Then, a TW detection algorithm is applied to the output of the ADC to detect the fault occurrence instant. Once this instant is detected, multi-resolution analysis (MRA) is performed on a 128-sample data buffer that is created around the fault instant. The MRA utilizes the discrete wavelet transform (DWT) to extract the high-frequency signatures of the measured pole current. To quantify the extracted TW features, Parseval's theorem is used to calculate the Parseval energy of the reconstructed wavelet coefficients created by the MRA. These Parseval energy values are later used as inputs to a polynomial linear regression tool to estimate the fault location. The performance of the created TWPD is verified using an experimental testbed.
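The MRA/Parseval-energy feature extraction can be sketched in a few lines. The abstract does not state which wavelet family or how many decomposition levels the device uses, so the orthonormal Haar wavelet and three levels below are illustrative assumptions:

```python
import math

def haar_dwt(signal):
    """One level of an orthonormal Haar DWT: returns (approximation, detail)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def parseval_energies(buffer, levels=3):
    """MRA of a sample buffer: Parseval energy of the detail band at each level."""
    energies = []
    approx = list(buffer)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    return energies

# Toy 128-sample buffer with a single sharp transient at the "fault instant"
features = parseval_energies([1.0, 0.0] + [0.0] * 126)
```

Per-level energies such as `features` would then serve as the inputs to the polynomial regression that estimates fault location.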
The design of thermal protection systems (TPS), including heat shields for reentry vehicles, relies increasingly on computational simulation tools for design optimization and uncertainty quantification. Since high-fidelity simulations are computationally expensive for full vehicle geometries, analysts primarily use reduced-physics models instead. Recent work has shown that projection-based reduced-order models (ROMs) can provide accurate approximations of high-fidelity models at a lower computational cost. ROMs are preferable to alternative approximation approaches for high-consequence applications due to the presence of rigorous error bounds. This paper extends our previous work on projection-based ROMs for ablative TPS by considering hyperreduction methods which yield further reductions in computational cost and by demonstrating the approach for simulations of a three-dimensional flight vehicle. We compare the accuracy and potential performance of several different hyperreduction methods and mesh sampling strategies. This paper shows that with the correct implementation, hyperreduction can make ROMs 1-3 orders of magnitude faster than the full order model by evaluating the residual at only a small fraction of the mesh nodes.
The Big Hill SPR site has a rich data set consisting of multi-arm caliper (MAC) logs collected from the cavern wells. This data set provides insight into the on-going casing deformation at the Big Hill site. This report summarizes the MAC surveys for each well and presents well longevity estimates where possible. Included in the report is an examination of the well twins for each cavern and a discussion on what may or may not be responsible for the different levels of deformation between some of the well twins. The report also takes a systematic view of the MAC data presenting spatial patterns of casing deformation and deformation orientation in an effort to better understand the underlying causes. The conclusions present a hypothesis suggesting the small-scale variations in casing deformation are attributable to similar scale variations in the character of the salt-caprock interface. These variations do not appear directly related to shear zones or faults.
Filamentous fungi can synthesize a variety of nanoparticles (NPs), a process referred to as mycosynthesis that requires little energy input, does not require the use of harsh chemicals, occurs at near-neutral pH, and does not produce toxic byproducts. While NP synthesis involves reactions between metal ions and exudates produced by the fungi, the chemical and biochemical parameters underlying this process remain poorly understood. Here, the role of fungal species and precursor salt on the mycosynthesis of zinc oxide (ZnO) NPs is investigated. These data demonstrate that all five fungal species tested are able to produce ZnO structures that can be morphologically classified into i) well-defined NPs, ii) coalesced/dissolving NPs, and iii) micron-sized square plates. Further, species-dependent preferences for these morphologies are observed, suggesting potential differences in the profile or concentration of the biochemical constituents in their individual exudates. These data also demonstrate that mycosynthesis of ZnO NPs is independent of the anion species, with nitrate, sulfate, and chloride showing no effect on NP production. Finally, these results enhance the understanding of factors controlling the mycosynthesis of ceramic NPs, supporting future studies that can enable control over the physical and chemical properties of NPs formed through this “green” synthesis method.
The challenge of cyberattack detection can be illustrated by the complexity of the MITRE ATT&CK™ matrix, which catalogues >200 attack techniques (most with multiple sub-techniques). To reliably detect cyberattacks, we propose an evidence-based approach which fuses multiple cyber events over varying time periods to help differentiate normal from malicious behavior. We use Bayesian Networks (BNs) - probabilistic graphical models consisting of a set of variables and their conditional dependencies - for fusion/classification due to their interpretable nature, ability to tolerate sparse or imbalanced data, and resistance to overfitting. Our technique utilizes a small collection of expert-informed cyber intrusion indicators to create a hybrid detection system that combines data-driven training with expert knowledge to form a host-based intrusion detection system (HIDS). We demonstrate a software pipeline for efficiently generating and evaluating various BN classifier architectures for specific datasets and discuss explainability benefits thereof.
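The fusion idea can be sketched with a toy Bayesian classifier over binary indicators. The indicator names, probabilities, and the naive-Bayes (independence) structure below are invented for illustration; the paper's expert-informed indicators and BN architectures are not given in the abstract:

```python
# Toy evidence fusion: P(malicious | observed indicators) under a naive-Bayes BN.
PRIOR_MALICIOUS = 0.01  # assumed base rate of malicious hosts

# name -> (P(fires | malicious), P(fires | benign)); all values illustrative
INDICATORS = {
    "unusual_parent_process": (0.70, 0.05),
    "credential_dump_api":    (0.60, 0.01),
    "off_hours_logon":        (0.50, 0.10),
}

def posterior_malicious(observed):
    """Fuse fired/not-fired indicator evidence via Bayes' rule."""
    num = PRIOR_MALICIOUS        # accumulates P(evidence, malicious)
    den = 1.0 - PRIOR_MALICIOUS  # accumulates P(evidence, benign)
    for name, (p_mal, p_ben) in INDICATORS.items():
        fired = name in observed
        num *= p_mal if fired else (1.0 - p_mal)
        den *= p_ben if fired else (1.0 - p_ben)
    return num / (num + den)
```

A single fired indicator moves the posterior only modestly, while several fired together push it well past the prior, which is the fusion behavior the approach relies on.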
As the width and depth of quantum circuits implemented by state-of-the-art quantum processors rapidly increase, circuit analysis and assessment via classical simulation are becoming unfeasible. It is crucial, therefore, to develop new methods to identify significant error sources in large and complex quantum circuits. In this work, we present a technique that pinpoints the sections of a quantum circuit that affect the circuit output the most and thus helps to identify the most significant sources of error. The technique requires no classical verification of the circuit output and is thus a scalable tool for debugging large quantum programs in the form of circuits. We demonstrate the practicality and efficacy of the proposed technique by applying it to example algorithmic circuits implemented on IBM quantum machines.
Mann, James B.; Mohanty, Debapriya P.; Kustas, Andrew K.; Stiven Puentes Rodriguez, B.; Issahaq, Mohammed N.; Udupa, Anirudh; Sugihara, Tatsuya; Trumble, Kevin P.; M'Saoubi, Rachid; Chandrasekar, Srinivasan
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with suitable properties and quality for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented across metal systems of varied workability and across strip product scales in terms of size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for the production of commercial strip for electric motor applications and battery electrodes are discussed.
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms, allowing for measuring algorithm improvement over time. An absence of such tests contributes to the proliferation of fitting methods and inhibits achieving consensus on best practices. Benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
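The abstract does not name a device model, but PV IV-curve fitting benchmarks commonly target the single-diode model; the sketch below shows how a benchmark curve with a known parameter solution could be generated, solving the implicit equation by bisection. All parameter values are illustrative assumptions:

```python
import math

# Single-diode model: I = IL - I0*(exp((V + I*Rs)/(n*Vth)) - 1) - (V + I*Rs)/Rsh
def single_diode_current(v, il=5.0, i0=1e-9, rs=0.02, rsh=300.0, n=1.3, vth=0.02585):
    """Current at terminal voltage v, found by bisection on the implicit equation."""
    def residual(i):
        vd = v + i * rs  # diode junction voltage
        return il - i0 * (math.exp(vd / (n * vth)) - 1.0) - vd / rsh - i

    lo, hi = -il, il + 1.0  # residual(lo) > 0 > residual(hi) for these parameters
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Sweeping `v` yields a simulated curve with a known parameter solution; fitting algorithms would then be scored on recovering `il`, `i0`, `rs`, `rsh`, and `n`, with and without added measurement error.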
Multiple Input Multiple Output (MIMO) vibration testing provides the capability to expose a system to a field environment in a laboratory setting, saving both time and money by mitigating the need to perform multiple and costly large-scale field tests. However, MIMO vibration test design is not straightforward, oftentimes relying on engineering judgment and multiple test iterations to determine the proper selection of response degrees of freedom (DOF) and input locations that yield a successful test. This work investigates two DOF selection techniques for MIMO vibration testing to assist with test design: an iterative algorithm introduced in previous work and an Optimal Experiment Design (OED) approach. The iterative approach downselects the control set by removing the DOF that have the smallest impact on overall error, given a target Cross Power Spectral Density matrix and laboratory Frequency Response Function (FRF) matrix. The OED approach is formulated with the laboratory FRF matrix as a convex optimization problem and solved with a gradient-based optimization algorithm that seeks a set of weighted measurement DOF that minimizes a measure of model prediction uncertainty. The DOF selection approaches are used to design MIMO vibration tests using candidate finite element models and simulated target environments. The results are generalized and compared to exemplify the quality of the MIMO test using the selected DOF.
Conference Record of the IEEE Photovoltaic Specialists Conference
Hobbs, William B.; Black, Chloe L.; Holmgren, William F.; Anderson, Kevin
Subhourly changes in solar irradiance can lead to energy models being biased high if realistic distributions of irradiance values are not reflected in the resource data and model. This is particularly true in solar facility designs with high inverter loading ratios (ILRs). When resource data with sufficient temporal and spatial resolution is not available for a site, synthetic variability can be added to the data that is available in an attempt to address this issue. In this work, we demonstrate the use of anonymized commercial resource datasets with synthetic variability and compare results with previous estimates of model bias due to inverter clipping and increasing ILR.
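A toy calculation (assumed numbers, not from the paper) illustrates the bias mechanism: clipping applied to hourly-averaged power can miss losses that resolved subhourly variability produces, which matters most at high ILR.

```python
# One hour of DC power at an inverter with a 100-unit AC limit,
# sampled at 15-minute resolution (illustrative values; mean is exactly 100).
AC_LIMIT = 100.0
subhourly_dc = [60.0, 140.0, 80.0, 120.0]

# Clip each subhourly sample, then average (variability resolved):
clipped_subhourly = sum(min(p, AC_LIMIT) for p in subhourly_dc) / len(subhourly_dc)

# Average first, then clip (what coarse hourly resource data effectively does):
clipped_hourly = min(sum(subhourly_dc) / len(subhourly_dc), AC_LIMIT)

print(clipped_subhourly)  # 85.0  -> clipping loss appears
print(clipped_hourly)     # 100.0 -> hourly averaging hides it, biasing energy high
```

Adding synthetic variability to coarse data is an attempt to recover the first behavior when true subhourly measurements are unavailable.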
Modern Industrial Control Systems (ICS) attacks evade existing tools by using knowledge of ICS processes to blend their activities with benign Supervisory Control and Data Acquisition (SCADA) operation, causing physical world damages. We present Scaphy to detect ICS attacks in SCADA by leveraging the unique execution phases of SCADA to identify the limited set of legitimate behaviors to control the physical world in different phases, which differentiates from attacker's activities. For example, it is typical for SCADA to setup ICS device objects during initialization, but anomalous during process-control. To extract unique behaviors of SCADA execution phases, Scaphy first leverages open ICS conventions to generate a novel physical process dependency and impact graph (PDIG) to identify disruptive physical states. Scaphy then uses PDIG to inform a physical process-aware dynamic analysis, whereby code paths of SCADA process-control execution are induced to reveal API call behaviors unique to legitimate process-control phases. Using this established behavior, Scaphy selectively monitors the attacker's physical-world-targeted activities that violate legitimate process-control behaviors. We evaluated Scaphy at a U.S. national lab ICS testbed environment. Using diverse ICS deployment scenarios and attacks across 4 ICS industries, Scaphy achieved 95% accuracy & 3.5% false positives (FP), compared to 47.5% accuracy and 25% FP of existing work. We analyze Scaphy's resilience to futuristic attacks where the attacker knows our approach.
With increasing penetration of variable renewable generation, battery energy storage systems (BESS) are becoming important for power system stability due to their operational flexibility. In this paper, we propose a method for determining the minimum BESS rated power that guarantees security constraints in a grid subject to disturbances induced by variable renewable generation. The proposed framework leverages sensitivity-based inverse uncertainty propagation where the dynamical responses of the states are parameterized with respect to random variables. Using this approach, the original nonlinear optimization problem for finding the security-constrained uncertainty interval may be formulated as a quadratically-constrained linear program. The resulting estimated uncertainty interval is utilized to find the BESS rated power required to satisfy grid stability constraints.
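For reference, a quadratically-constrained linear program has the generic form below; the paper's specific sensitivity-based parameterization of the security constraints is not given in the abstract, so this is only the standard template such a formulation maps onto, with $x$ collecting the uncertainty-interval decision variables:

```latex
\begin{aligned}
\min_{x}\quad & c^{\top} x \\
\text{s.t.}\quad & x^{\top} Q_i\, x + a_i^{\top} x \le b_i, \qquad i = 1,\dots,m .
\end{aligned}
```

The objective stays linear; only the constraints (here, the parameterized security limits) carry the quadratic terms, which keeps the problem far more tractable than the original nonlinear program.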
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
High-altitude electromagnetic pulse events are a growing concern for electric power grid vulnerability assessments and mitigation planning, and accurate modeling of surge arrester mitigations installed on the grid is necessary to predict pulse effects on existing equipment and to plan future mitigation. While some models of surge arresters at high frequency have been proposed, experimental backing for any given model has not been shown. This work examines a ZnO lightning surge arrester modeling approach previously developed for accurate prediction of nanosecond-scale pulse response. Four ZnO metal-oxide varistor pucks with different sizes and voltage ratings were tested for voltage and current response on a conducted electromagnetic pulse testbed. The measured clamping response was compared to SPICE circuit models to compare the electromagnetic pulse response and validate model accuracy. Results showed good agreement between simulation results and the experimental measurements, after accounting for stray testbed inductance between 100 and 250 nH.
Compared with traditional base-excitation vibration qualification testing, multi-axis vibration testing methods can be significantly faster and more accurate. Here, a 12-shaker multiple-input/multiple-output (MIMO) test method called intrinsic connection excitation (ICE) is developed and assessed for use on an example aerospace component. In this study, the ICE technique utilizes 12 shakers (one for each boundary-condition attachment degree of freedom of the component), specially designed fixtures, and MIMO control to provide an accurate set of loads and boundary conditions during the test. Acceleration, force, and voltage control provide insight into the viability of this testing method. System field test and ICE test results are compared to traditional single-degree-of-freedom specification development and testing. Results indicate the multi-shaker ICE test provided a much more accurate replication of the system field test response compared with single-degree-of-freedom testing.
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the lifetime decay of the phosphor Gd2O2S:Tb for temperature sensitivity after excitation from a pulsed x-ray source. These results are compared to the lifetime decays found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable temperature sensitivity for both excitation sources over a temperature range from 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
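The lifetime step can be sketched as a least-squares estimate of tau in I(t) = I0 exp(-t/tau); the log-linearization below is a common minimal approach, not necessarily the authors' fitting procedure, and the numbers are illustrative:

```python
import math

def fit_lifetime(times, intensities):
    """Estimate tau in I(t) = I0 * exp(-t / tau) by fitting a line to
    log(I) versus t; the fitted slope equals -1/tau."""
    ys = [math.log(i) for i in intensities]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic decay trace with a known 0.5 ms lifetime
tau_true = 5e-4
ts = [i * 1e-5 for i in range(100)]
trace = [math.exp(-t / tau_true) for t in ts]
tau_fit = fit_lifetime(ts, trace)
```

Repeating such a fit over a calibration set of known temperatures yields the tau(T) curve from which an unknown surface temperature is then read back.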
The structure-property linkage is one of the two most important relationships in materials science besides the process-structure linkage, especially for metals and polycrystalline alloys. The stochastic nature of microstructures calls for a robust approach to reliably address the linkage. As such, uncertainty quantification (UQ) plays an important role in this regard and cannot be ignored. To probe the structure-property linkage, many multi-scale integrated computational materials engineering (ICME) tools have been proposed and developed over the last decade to accelerate the material design process in the spirit of the Materials Genome Initiative (MGI), notably the crystal plasticity finite element model (CPFEM) and phase-field simulations. Machine learning (ML) methods, including deep learning and physics-informed/-constrained approaches, can also be conveniently applied to approximate the computationally expensive ICME models, allowing one to efficiently navigate in both structure and property spaces. Since UQ also plays a crucial role in verification and validation for both ICME and ML models, it is important to include UQ in the picture. In this paper, we summarize a few of our recent research efforts addressing UQ aspects of homogenized properties using CPFEM in a big-picture context.