Decarbonizing natural gas networks is a challenging enterprise. Replacing natural gas with renewable hydrogen is one option under global consideration to decarbonize heating, power, and residential uses of natural gas. Hydrogen is known to degrade fatigue and fracture properties of structural steels, including pipeline steels. In this study, we describe environmental testing strategies aimed at generating baseline fatigue and fracture trends with efficient use of testing resources. For example, by controlling the stress intensity factor (K) in both K-increasing and K-decreasing modes, fatigue crack growth can be measured for multiple load ratios with a single specimen. Additionally, tests can be designed such that fracture tests can be performed at the conclusion of the fatigue crack growth test, further reducing the resources needed to evaluate the fracture mechanics parameters utilized in design. These testing strategies are employed to establish the fatigue crack growth behavior and fracture resistance of API grade steels in gaseous hydrogen environments. In particular, we explore the effects of load ratio and hydrogen partial pressure on the baseline fatigue and fracture trends of line pipe steels in gaseous hydrogen. These data are then used to test the applicability of a simple, universal fatigue crack growth model that accounts for both load ratio and hydrogen partial pressure. The appropriateness of this model for use as an upper bound on fatigue crack growth is discussed.
This is an addendum to the Sierra/SolidMechanics 5.4 User’s Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State’s International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra/SolidMechanics 5.4 User’s Guide should be referenced for most general descriptions of code capability and use.
We evaluate the use of reference modules for monitoring effective irradiance in PV power plants, as compared with traditional plane-of-array (POA) irradiance sensors, for PV monitoring and capacity tests. Common POA sensors such as pyranometers and reference cells are unable to capture module-level irradiance nonuniformity and require several correction factors to accurately represent the conditions for fielded modules. These problems are compounded for bifacial systems, where the power loss due to rear-side shading and rear-side plane-of-array (RPOA) irradiance gradients are greater and more difficult to quantify. The resulting inaccuracy can have costly real-world consequences, particularly when the data are used to perform power ratings and capacity tests. Here we analyze data from a bifacial single-axis tracking PV power plant (175.6 MWdc) using 5 meteorological (MET) stations located on corresponding inverter blocks with capacities over 4 MWdc. Each MET station consists of bifacial reference modules as well as pyranometers mounted in traditional POA and RPOA installations across the PV power plant. Short-circuit current measurements of the reference modules are converted to effective irradiance with temperature correction and scaling based on flash-test or nameplate short-circuit values. Our work shows that bifacial effective irradiance measured by pyranometers averages 3.6% higher than the effective irradiance measured by bifacial reference modules, even when accounting for spectral, angle-of-incidence, and irradiance nonuniformity effects. We also performed capacity tests using effective irradiance measured by pyranometers and reference modules for each of the 5 bifacial single-axis tracking inverter blocks mentioned above. These capacity tests evaluated bifacial plant performance as ∼3.9% lower when using bifacial effective irradiance from pyranometers as compared to the same calculation performed with reference modules.
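The short-circuit-current-to-irradiance conversion described above can be sketched as follows; this is a minimal illustration, not the plant's actual processing chain, and the temperature coefficient, STC reference value, and example numbers are placeholders.

```python
def effective_irradiance(isc_meas, cell_temp_c, isc_ref_stc, alpha_isc=0.0005):
    """Convert a reference-module short-circuit current to effective irradiance.

    isc_meas    : measured short-circuit current (A)
    cell_temp_c : measured cell temperature (deg C)
    isc_ref_stc : flash-test or nameplate Isc at STC (A, at 1000 W/m^2 and 25 C)
    alpha_isc   : relative Isc temperature coefficient (1/deg C), module specific
    """
    # Correct the measured Isc back to 25 C, then scale by the STC reference value.
    isc_25c = isc_meas / (1.0 + alpha_isc * (cell_temp_c - 25.0))
    return 1000.0 * isc_25c / isc_ref_stc  # effective irradiance in W/m^2

# Example: 9.1 A measured at 48 C on a module with a 9.6 A nameplate Isc.
print(effective_irradiance(9.1, 48.0, 9.6))
```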
In transit visualization offers a desirable approach to performing in situ visualization by decoupling the simulation and visualization components. This decoupling requires that the data be transferred from the simulation to the visualization, which is typically done using some form of aggregation and redistribution. As the data distribution is adjusted to match the visualization’s parallelism during redistribution, the data transport layer must have knowledge of the input data structures to partition or merge them. In this chapter, we will discuss an alternative approach suitable for quickly integrating in transit visualization into simulations without incurring significant overhead or aggregation cost. Our approach adopts an abstract view of the input simulation data and works only on regions of space owned by the simulation ranks, which are sent to visualization clients on demand.
Structural alloys may experience corrosion when exposed to molten chloride salts due to selective dissolution of active alloying elements. One way to prevent this is to make the molten salt reducing. For the KCl + MgCl2 eutectic salt mixture, pure Mg can be added to achieve this. However, Mg can form intermetallic compounds with nickel at high temperatures, which may cause alloy embrittlement. This study shows that an optimum level of excess Mg can be added to the molten salt to prevent corrosion of alloys such as 316H while not forming any detectable Ni-Mg intermetallic phases on Ni-rich alloy surfaces.
Large scale non-intrusive inspection (NII) of commercial vehicles is being adopted in the U.S. at a pace and scale that will result in a commensurate growth in adjudication burdens at land ports of entry. The use of computer vision and machine learning models to augment human operator capabilities is critical in this sector to ensure the flow of commerce and to maintain efficient and reliable security operations. The development of models for this scale and speed requires novel approaches to object detection and novel adjudication pipelines. Here we propose a notional combination of existing object detection tools using a novel ensembling framework to demonstrate the potential for hierarchical and recursive operations. Further, we explore the combination of object detection with image similarity as an adjacent capability to provide post-hoc oversight to the detection framework. The experiments described herein, while notional and intended for illustrative purposes, demonstrate that the judicious combination of diverse algorithms can result in a resilient workflow for the NII environment.
Fires of practical interest are often large in scale and involve turbulent behavior. Fire simulation tools are often utilized in under-resolved predictions to assess fire behavior. Data are scarce for large fires because they are difficult to instrument. A helium plume scenario has been used as a surrogate for much of the fire phenomenology (O'Hern et al., 2005), including buoyancy, mixing, and advection. A clean dataset of this nature makes an excellent platform for assessing model accuracy. We have been participating in a community effort to validate fire simulation tools, and the SIERRA/Fuego code is compared here with the historical dataset. Our predictions span a wide range of length scales, and comparisons are made to species mass fraction and two velocity components for a number of heights in the core of the plume. We detail our approach to the comparisons, which involves some accommodation for the uncertainty in the inflow boundary condition from the test. We show evolving improvement in simulation accuracy with increasing mesh resolution and benchmark the accuracy through comparisons with the data.
The detonation of explosives produces luminous fireballs often containing particulates such as carbon soot or remnants of partially reacted explosives. The spatial distribution of these particulates is of great interest for the derivation and validation of models. In this work, three ultra-high-speed imaging techniques (diffuse back-illumination extinction, schlieren, and emission imaging) are utilized to investigate the particulate quantity, spatial distribution, and structure in a small-scale fireball. The measurements show the evolution of the particulate cloud in the fireball, identifying possible emission sources and regions of high optical thickness. Extinction measurements performed at two wavelengths show that extinction follows the inverse-wavelength behavior expected of absorptive particles in the Rayleigh scattering regime. The estimated mass from these extinction measurements shows an average soot yield consistent with previous soot collection experiments. The imaging diagnostics discussed in the current work can provide detailed information on the spatial distribution and concentration of soot, crucial for validation opportunities in the future.
This report describes recommended abuse testing procedures for rechargeable energy storage systems (RESSs) for electric vehicles. This report serves as a revision to the USABC Electrical Energy Storage System Abuse Test Manual for Electric and Hybrid Electric Vehicle Applications (SAND99-0497).
This user’s guide documents capabilities in Sierra/SolidMechanics which remain “in-development” and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.4 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
Creation of streaming video stimuli that allow for strict experimental control while providing ease of scene manipulation is difficult to achieve but desired by researchers seeking to approach ecological validity in contexts that involve processing streaming visual information. To that end, we propose leveraging video game modding tools as a method of creating research-quality stimuli. As a pilot effort, we used a video game sandbox tool (Garry’s Mod) to create three streaming video scenarios designed to mimic video feeds that physical security personnel might observe. All scenarios required participants to identify the presence of a threat appearing during the video feed. Each scenario differed in level of complexity: one scenario required only location monitoring, one required location and action monitoring, and one required location, action, and conjunction monitoring, in which an action was considered a threat only when performed by a certain character model. While there was no behavioral effect of scenario in terms of accuracy or response times, in all scenarios we found evidence of a P300 when comparing responses to threatening stimuli with those to standard stimuli. Results therefore indicate that sufficient levels of experimental control may be achieved to allow for the precise timing required for ERP analysis. Thus, we demonstrate the feasibility of using existing modding tools to create video scenarios amenable to neuroimaging analysis.
We have extended the computational singular perturbation (CSP) method to differential algebraic equation (DAE) systems and demonstrated its application in a heterogeneous-catalysis problem. The extended method obtains the CSP basis vectors for DAEs from a reduced Jacobian matrix that takes the algebraic constraints into account. We use a canonical problem in heterogeneous catalysis, the transient continuous stirred tank reactor (T-CSTR), for illustration. The T-CSTR problem is modelled fundamentally as an ordinary differential equation (ODE) system, but it can be transformed to a DAE system if one approximates typically fast surface processes using algebraic constraints for the surface species. We demonstrate the application of CSP analysis for both ODE and DAE constructions of a T-CSTR problem, illustrating the dynamical response of the system in each case. We also highlight the utility of the analysis in commenting on the quality of any particular DAE approximation built using the quasi-steady state approximation (QSSA), relative to the ODE reference case.
Simple but mission-critical internet-based applications that require extremely high reliability, availability, and verifiability (e.g., auditability) could benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is normally publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. In this work, we investigate using MPC techniques to protect the privacy of a blockchain computation. While our main goal is to hide both the data and the computed function itself, we also consider the standard MPC setting where the function is public. We describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a blockchain MPC architecture and system. The GABLE architecture specifies the roles and capabilities of the players. GABLE includes two approaches for implementing MPC over blockchain: Garbled Circuits (GC), evaluating universal circuits, and Garbled Finite State Automata (GFSA). We formally model and prove the security of GABLE implemented over garbling schemes, a popular abstraction of GC and GFSA from (Bellare et al., CCS 2012). We analyze in detail the performance (including Ethereum gas costs) of both approaches and discuss the trade-offs. We implement a simple prototype of GABLE and report on the implementation issues and experience.
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support to the system has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLIs) (most commonly photovoltaic inverters), it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs) (used for energy storage systems, standalone systems, and as uninterruptable power supplies), these requirements are either not yet documented or require a more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desired. With the proper control schemes, a GFMI can help maintain grid stability through fast response compared to rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviation, as well as the GFMI operating as a standalone system and subjected to various changes in loads.
We examine coupling into azimuthal slots on an infinite cylinder with an infinite-length interior cavity, operating both at the fundamental cavity modal frequencies, with small slots and a resonant slot, and at higher frequencies. The coupling model considers both radiation on an infinite cylindrical exterior as well as a half-space approximation. Bounding calculations based on maximum slot power reception and interior power balance are also discussed in detail and compared with the prior calculations. For higher frequencies, limitations on matching are imposed by restricting the load's ability to shift the slot operation to the nearest slot resonance; this is done in combination with maximizing the power reception as a function of angle of incidence. Finally, slot power mismatch based on limited cavity load quality factor is considered below the first slot resonance.
This paper presents a run-to-run (R2R) controller for mechanical serial sectioning (MSS). MSS is a destructive material analysis process which repeatedly removes a thin layer of material and images the exposed surface. The images are then used to gain insight into the material properties and often to construct a 3-dimensional reconstruction of the material sample. Currently, an experienced human operator selects the parameters of the MSS to achieve the desired thickness. The proposed R2R controller will automate this process while improving the precision of the material removal. The proposed R2R controller solves an optimization problem designed to minimize the variance of the material removal subject to achieving the expected target removal. This optimization problem was embedded in an R2R framework to provide iterative feedback for disturbance rejection and convergence to the target removal amount. Since an analytic model of the MSS system is unavailable, we adopted a data-driven approach to synthesize our R2R controller from historical data. The proposed R2R controller is demonstrated through simulations. Future work will empirically demonstrate the proposed R2R controller through experiments with a real MSS system.
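The variance-minimizing allocation described above can be sketched as a small convex program; this is an illustrative stand-in rather than the paper's data-driven formulation, and the per-setting removal statistics, target value, and use of cvxpy are assumptions.

```python
import cvxpy as cp
import numpy as np

# Empirical per-pass removal statistics for three candidate machine settings,
# as would be estimated from historical MSS runs (values are illustrative).
mu = np.array([0.8, 1.5, 2.9])       # mean removal per pass (um)
var = np.array([0.02, 0.08, 0.30])   # removal variance per pass (um^2)
target = 10.0                        # desired total removal for the run (um)

n = cp.Variable(3, nonneg=True)      # (relaxed) number of passes at each setting

# Independent passes -> total variance is the sum of per-pass variances.
problem = cp.Problem(cp.Minimize(var @ n), [mu @ n == target])
problem.solve()
print("passes per setting:", n.value, "predicted variance:", var @ n.value)
```

An R2R loop would re-solve this allocation each run, adjusting the target by the measured removal error from the previous run to reject disturbances.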
Stochasticity is ubiquitous in the world around us. However, our predominant computing paradigm is deterministic. Random number generation (RNG) can be a computationally inefficient operation in this paradigm, especially for larger workloads. Our work leverages the underlying physics of emerging devices to develop probabilistic neural circuits that serve as RNGs for a given distribution. However, codesign for novel circuits and systems that leverage inherent device stochasticity is a hard problem, mostly due to the large design space and the complexity of exploring it. It requires concurrent input from multiple areas in the design stack, from algorithms and architectures to circuits and devices. In this paper, we present examples of optimal circuits developed leveraging AI-enhanced codesign techniques using constraints from emerging devices and algorithms. Our AI-enhanced codesign approach accelerated design and enabled interactions between experts from different areas of the microelectronics design stack, including theory, algorithms, circuits, and devices. We demonstrate optimal probabilistic neural circuits using magnetic tunnel junction and tunnel diode devices that generate random numbers from a given distribution.
Zhang, Chen; Jacobson, Clas; Zhang, Qi; Biegler, Lorenz T.; Eslick, John C.; Zamarripa, Miguel A.; Stinchfeld, Georgia; Siirola, John D.; Laird, Carl D.
For many industries addressing varied customer needs means producing a family of products that satisfy a range of design requirements. Manufacturers seek to design this family of products while exploiting opportunities for shared components to reduce manufacturing cost and complexity. We present a mixed-integer programming formulation that determines the optimal design for each product, the number and design of shared components, and the allocation of those shared components across the products in the family. This formulation and workflow for product family design has created significant business impact on the industrial design of product families for large-scale commercial HVAC chillers in Carrier Global Corporation. We demonstrate the approach on an open case study based on a transcritical CO2 refrigeration cycle. This case study and our industrial experience show that the formulation is computationally tractable and can significantly reduce engineering time by replacing the manual design process with an automated approach.
In accident scenarios involving release of tritium during handling and storage, the level of risk to human health is dominated by the extent to which radioactive tritium is oxidized to the water form (T2O or THO). At some facilities, tritium inventories consist of very small quantities stored at sub-atmospheric pressure, which means that tritium release accident scenarios will likely produce concentrations in air that are well below the lower flammability limit. It is known that isotope effects on reaction rates should result in slower oxidation rates for heavier isotopes of hydrogen, but this effect has not previously been quantified for oxidation at concentrations well below the lower flammability limit for hydrogen. This work describes hydrogen isotope oxidation measurements in an atmospheric tube furnace reactor. These measurements span five concentration levels between 0.01% and 1% protium or deuterium and two residence times. Oxidation is observed to occur between about 550°C and 800°C, with higher levels of conversion achieved at lower temperatures for protium than for deuterium at the same volumetric inlet concentration and residence time. Computational fluid dynamics simulations of the experiments were used to customize reaction orders and Arrhenius parameters in a 1-step oxidation mechanism. The trends in the rates for protium and deuterium are extrapolated based on guidance from literature to produce kinetic rate parameters appropriate for tritium oxidation at low concentrations.
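The one-step oxidation rate with adjustable reaction orders and Arrhenius parameters referenced above takes the standard global form shown below; the symbols are generic, and the pre-exponential factor A, activation energy Ea, and orders a and b are the fitted quantities (not quoted here).

```latex
r = A\,[\mathrm{H_2}]^{a}\,[\mathrm{O_2}]^{b}\exp\!\left(-\frac{E_a}{RT}\right)
```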
In high temperature (HT) environments often encountered in geothermal wells, data rate transfers for downhole instrumentation are relatively limited due to transmission line bandwidth and insertion loss and the processing speed of HT microcontrollers. In previous research, the Sandia National Laboratories Geothermal Department obtained 3.8 Mbps data rates over 1524 m (5000 ft) of single conductor wireline cable with less than a 1×10⁻⁸ bit error rate utilizing low-temperature NI™ hardware (formerly National Instruments™). Our protocol technique was a combination of orthogonal frequency-division multiplexing and quadrature amplitude modulation across the bandwidth of the single conductor wireline. This showed it is possible to obtain high data rates in low bandwidth wirelines. This paper focuses on commercial HT microcontrollers (µC), rather than low-temperature NI™ modules, to enable high-speed communication in an HT environment. As part of this effort, four devices were evaluated, and an optimal device (SM320F28335-HT) was selected for its high clock rates, floating-point unit, and on-board analog-to-digital converter. A printed circuit board was assembled with the HT µC, an HT resistor digital-to-analog converter, and an HT line driver. The board was tested at the microcontroller's rated maximum temperature (210°C) for a week while transmitting through a 1524 m (5000 ft) wireline. A final test was conducted to the point of failure at elevated temperatures. This paper will discuss communication methods, achieved data rates, and hardware selection. This effort contributes to the enhancement of HT instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates.
Ship emissions can form linear cloud structures, or ship tracks, when atmospheric water vapor condenses on aerosols in the ship exhaust. These structures are of interest because they are observable and traceable examples of marine cloud brightening (MCB), a mechanism that has been studied as a potential approach for solar climate intervention. Ship tracks can be observed throughout the diurnal cycle via space-borne assets like the Advanced Baseline Imagers on the National Oceanic and Atmospheric Administration (NOAA) Geostationary Operational Environmental Satellites, the GOES-R series. Due to complex atmospheric dynamics, it can be difficult to track these aerosol perturbations over space and time to precisely characterize how long a single emission source can significantly contribute to indirect radiative forcing. We propose an optical flow approach to estimate the trajectories of ship-emitted aerosols after they begin mixing with low boundary layer clouds using GOES-17 satellite imagery. Most optical flow estimation methods have only been used to estimate large-scale atmospheric motion. We demonstrate the ability of our approach to precisely isolate the movement of ship tracks in low-lying clouds from the movement of large swaths of high clouds that often dominate the scene. This efficient approach shows that ship tracks persist as visible, linear features beyond 9 h and sometimes longer than 24 h.
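A minimal sketch of dense optical flow between consecutive GOES-17 frames is shown below using OpenCV's Farneback estimator; the file names, band choice, pixel location, and parameter values are placeholders, and the paper's actual flow estimator may differ.

```python
import cv2

# Two consecutive GOES-17 frames (e.g., a cloud-sensitive band) rescaled to
# 8-bit grayscale images; the file names are placeholders.
prev = cv2.imread("goes17_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("goes17_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow: one (dx, dy) pixel displacement per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=4, winsize=31,
    iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

# Advect a point on a ship track to its estimated position in the next frame.
row, col = 512, 768                               # placeholder ship-track pixel
dx, dy = flow[row, col, 0], flow[row, col, 1]
print("estimated displacement (pixels):", dx, dy)
```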
We propose a two-stage scenario-based stochastic optimization problem to determine investments that enhance power system resilience. The proposed optimization problem minimizes the Conditional Value at Risk (CVaR) of load loss to target low-probability, high-impact events. We provide results in the context of generator winterization investments in Texas using winter storm scenarios generated from historical data collected from Winter Storm Uri. Results illustrate how the CVaR metric can be used to minimize the tail of the distribution of load loss and illustrate how risk aversion impacts investment decisions.
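A minimal sketch of the CVaR objective over load-loss scenarios, in the standard Rockafellar-Uryasev form, is given below; the scenario data, the linear relation between winterization decisions and load loss, the budget, and the continuous relaxation of the investment variables are all illustrative assumptions rather than the paper's model.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
S, G = 100, 5                                      # scenarios, candidate generators
base_loss = rng.gamma(2.0, 50.0, size=S)           # MW load loss with no investment
reduction = rng.uniform(5.0, 20.0, size=(S, G))    # MW saved per winterized generator
cost = np.array([1.0, 1.2, 0.8, 1.5, 1.1])         # investment cost (M$)
budget, alpha = 3.0, 0.95

x = cp.Variable(G, nonneg=True)    # winterization decision, relaxed to [0, 1]
eta = cp.Variable()                # value-at-risk auxiliary variable
z = cp.Variable(S, nonneg=True)    # scenario excess load loss above eta

loss = base_loss - reduction @ x                   # per-scenario load loss
cvar = eta + cp.sum(z) / (S * (1.0 - alpha))       # Rockafellar-Uryasev CVaR

problem = cp.Problem(cp.Minimize(cvar),
                     [z >= loss - eta, x <= 1, cost @ x <= budget])
problem.solve()
print("winterize:", np.round(x.value, 2), "CVaR of load loss:", problem.value)
```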
Proceedings of the Nuclear Criticality Safety Division Topical Meeting, NCSD 2022 - Embedded with the 2022 ANS Annual Meeting
Salazar, Alex
The postclosure criticality safety assessment for the direct disposal of dual-purpose canisters (DPCs) in a geologic repository includes considerations of transient criticality phenomena. The power pulse from a hypothetical transient criticality event in an unsaturated alluvial repository is evaluated for a DPC containing 37 spent pressurized water reactor (PWR) assemblies. The scenario assumes that the conditions for baseline criticality are achieved through flooding with groundwater and progressive failure of neutron absorbing media. A preliminary series of steady-state criticality calculations is conducted to characterize reactivity feedback due to absorber degradation, Doppler broadening, and thermal expansion. These feedback coefficients are used in an analysis with a reactor kinetics code to characterize the transient pulse given a positive reactivity insertion for a given length of time. The time-integrated behavior of the pulse can be used to model effects on the DPC and surrounding barriers in future studies and determine if transient criticality effects are consequential.
The precise estimation of the performance loss rate (PLR) of photovoltaic (PV) systems is vital for reducing investment risks and increasing the bankability of the technology. Until recently, the PLR of fielded PV systems was mainly estimated through the extraction of a linear trend from a time series of performance indicators. However, operating PV systems exhibit failures and performance losses that cause variability in the performance and may bias the PLR results obtained from linear trend techniques. Change-point (CP) methods were thus introduced to identify nonlinear trend changes and behaviour. The aim of this work is to perform a comparative analysis among different CP techniques for estimating the annual PLR of eleven grid-connected PV systems installed in Cyprus. Outdoor field measurements over an 8-year period (June 2006-June 2014) were used for the analysis. The obtained results when applying different CP algorithms to the performance ratio time series (aggregated into monthly blocks) demonstrated that the extracted trend may not always be linear but sometimes can exhibit nonlinearities. The application of different CP methods resulted in PLR values that differ by up to 0.85% per year (for the same number of CPs/segments).
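A minimal sketch of one change-point workflow of the kind compared above is shown below, using the ruptures package to segment a monthly performance-ratio series and a per-segment linear fit to report PLR; the synthetic data, cost model, and penalty value are placeholders, not the algorithms evaluated in the study.

```python
import numpy as np
import ruptures as rpt

# Synthetic monthly performance-ratio series over 8 years (placeholder data).
rng = np.random.default_rng(1)
months = np.arange(96)
pr = 0.85 - 0.0006 * months + 0.01 * rng.standard_normal(96)

# Detect change points with the PELT algorithm (the penalty is a tuning parameter).
breakpoints = rpt.Pelt(model="l2").fit(pr).predict(pen=0.05)

# Fit a linear trend within each segment and convert its slope to % per year.
start = 0
for end in breakpoints:
    slope = np.polyfit(months[start:end], pr[start:end], 1)[0]   # PR per month
    plr = 100.0 * 12.0 * slope / np.mean(pr[start:end])          # % per year
    print(f"months {start}-{end}: PLR = {plr:.2f} %/yr")
    start = end
```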
We present a procedure for randomly generating realistic steady-state contingency scenarios based on the historical outage data from a particular event. First, we divide generation into classes and fit a probability distribution of outage magnitude for each class. Second, we provide a method for randomly synthesizing generator resilience levels in a way that preserves the data-driven probability distributions of outage magnitude. Finally, we devise a simple method of scaling the storm effects based on a single global parameter. We apply our methods using data from historical Winter Storm Uri to simulate contingency events for the ACTIVSg2000 synthetic grid on the footprint of Texas.
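A simplified stand-in for the sampling procedure outlined above is sketched below: per-class outage-fraction distributions, a draw per generator, and a single global severity parameter. The distribution family, parameter values, and fleet are illustrative, and the paper's resilience-level synthesis step is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Per-class outage-fraction distributions fit to historical event data
# (Beta parameters here are illustrative, not fitted values).
classes = {"gas": (2.0, 3.0), "wind": (5.0, 2.0), "coal": (1.5, 4.0)}

# Fleet definition: (generation class, capacity in MW).
fleet = [("gas", 300.0), ("gas", 150.0), ("wind", 200.0), ("coal", 400.0)]

severity = 0.8   # single global parameter scaling the storm's intensity

def sample_contingency():
    """Return per-generator available capacity (MW) for one synthetic scenario."""
    available = []
    for cls, capacity in fleet:
        a, b = classes[cls]
        outage_fraction = severity * rng.beta(a, b)
        available.append(capacity * (1.0 - outage_fraction))
    return np.array(available)

print(sample_contingency())
```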
The penetration of renewable energy resources (RER) and energy storage systems (ESS) into the power grid has accelerated in recent times due to aggressive emission and RER penetration targets. The integrated resource planning (IRP) framework can help in ensuring long-term resource adequacy while satisfying RER integration and emission reduction targets in a cost-effective and reliable manner. In this paper, we present pIRP (probabilistic Integrated Resource Planning), an open-source Python-based software tool designed for optimal portfolio planning for an RER- and ESS-rich future grid and for addressing the capacity expansion problem. The tool, which is planned to be released publicly, offers ESS and RER modeling capabilities along with enhanced uncertainty handling that make it one of the more advanced non-commercial IRP tools currently available. Additionally, the tool is equipped with an intuitive graphical user interface and expansive plotting capabilities. Impacts of uncertainties in the system are captured using Monte Carlo simulations, and the tool lets users analyze hundreds of scenarios with detailed scenario reports. A linear programming based architecture is adopted, which ensures sufficiently fast solution times while considering hundreds of scenarios and characterizing profile risks with varying levels of RER and ESS penetration. Results for a test case using data from parts of the Eastern Interconnection are provided in this paper to demonstrate the capabilities offered by the tool.
This paper presents the formulation, implementation, and demonstration of a new, largely phenomenological, model for the damage-free (micro-crack-free) thermomechanical behavior of rock salt. Unlike most salt constitutive models, the new model includes both drag stress (isotropic) and back stress (kinematic) hardening. The implementation utilizes a semi-implicit scheme and a fall-back fully-implicit scheme to numerically integrate the model's differential equations. Particular attention was paid to the initial guesses for the fully-implicit scheme. Of the four guesses investigated, an initial guess that interpolated between the previous converged state and the fully saturated hardening state had the best performance. The numerical implementation was then used in simulations that highlighted the difference between drag stress hardening versus combined drag and back stress hardening. Simulations of multi-stage constant stress tests showed that only combined hardening could qualitatively represent reverse (inverse transient) creep, as well as the large transient strains experimentally observed upon switching from axisymmetric compression to axisymmetric extension. Simulations of a gas storage cavern subjected to high and low gas pressure cycles showed that combined hardening led to substantially greater volume loss over time than drag stress hardening alone.
Metasurface lenses are fabricated using membrane projection lithography following a CMOS-compatible process flow. The lenses are 10 mm in diameter and employ 3-dimensional unit cells designed to function in the mid-infrared spectral range.
Dynamical systems subject to intermittent contact are often modeled with piecewise-smooth contact forces. However, the discontinuous nature of the contact can cause inaccuracies in numerical results or failure in numerical solvers. Representing the piecewise contact force with a continuous and smooth function can mitigate these problems, but not all continuous representations may be appropriate for this use. In this work, five representations used by previous researchers (polynomial, rational polynomial, hyperbolic tangent, arctangent, and logarithm-arctangent functions) are studied to determine which ones most accurately capture nonlinear behaviors including super- and subharmonic resonances, multiple solutions, and chaos. The test case is a single-DOF forced Duffing oscillator with freeplay nonlinearity, solved using direct time integration. This work intends to expand on past studies by determining the limits of applicability for each representation and what numerical problems may occur.
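As an illustration of the kind of smooth representation compared above, the sketch below contrasts the exact piecewise-linear freeplay restoring force with a hyperbolic-tangent approximation; the stiffness, gap, and sharpness values are placeholders, and the other representations listed in the text follow the same pattern.

```python
import numpy as np

k, delta = 1.0, 0.1    # contact stiffness and freeplay half-gap (placeholders)
eps = 50.0             # sharpness of the tanh smoothing

def force_piecewise(x):
    """Exact freeplay force: zero inside the gap, linear contact outside it."""
    return np.where(x > delta, k * (x - delta),
                    np.where(x < -delta, k * (x + delta), 0.0))

def force_tanh(x):
    """Smooth hyperbolic-tangent approximation of the same freeplay force."""
    return 0.5 * k * ((x - delta) * (1.0 + np.tanh(eps * (x - delta)))
                      + (x + delta) * (1.0 - np.tanh(eps * (x + delta))))

x = np.linspace(-0.3, 0.3, 601)
print("max difference:", np.max(np.abs(force_piecewise(x) - force_tanh(x))))
```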
Proceedings of SPIE - The International Society for Optical Engineering
Fredricksen, C.J.; Peale, R.E.; Dhakal, N.; Barrett, C.L.; Boykin II, O.; Maukonen, D.; Davis, L.; Ferarri, B.; Chernyak, L.; Zeidan, O.A.; Hawkins, Samuel D.; Klem, John F.; Krishna, Sanjay; Kazemi, Alireza; Schuler-Sandy, Ted
Effects of gamma and proton irradiation, and of forward bias minority carrier injection, on minority carrier diffusion and photoresponse were investigated for long-wave (LW) and mid-wave (MW) infrared detectors with engineered majority-carrier barriers. The LWIR detector was a type-II GaSb/InAs strained-layer superlattice pBiBn structure. The MWIR detector was an InAsSb/AlAsSb nBp structure without superlattices. Room temperature gamma irradiations degraded the minority carrier diffusion length of the LWIR structure, and minority carrier injections caused dramatic improvements, though there was little effect from either treatment on photoresponse. For the MWIR detector, effects of room temperature gamma irradiation and injection on minority carrier diffusion and photoresponse were negligible. Subsequently, both types of detectors were subjected to gamma irradiation at 77 K. In-situ photoresponse was unchanged for the LWIR detectors, while that for the MWIR ones decreased 19% after a cumulative dose of ~500 krad(Si). Minority carrier injection had no effect on photoresponse for either. The LWIR detector was then subjected to 4 Mrad(Si) of 30 MeV proton irradiation at 77 K, and showed a 35% decrease in photoresponse, but again no effect from forward bias injection. These results suggest that photoresponse of the LWIR detectors is not limited by minority carrier diffusion.
We propose the use of balanced iterative reducing and clustering using hierarchies (BIRCH) combined with linear regression to predict the reduced Young's modulus and hardness of highly heterogeneous materials from a set of nanoindentation experiments. We first use BIRCH to cluster the dataset according to its mineral compositions, which are derived from the spectral matching of energy-dispersive spectroscopy data through the modular automated processing system (MAPS) platform. We observe that grouping our dataset into five clusters yields the best accuracy as well as a reasonable representation of mineralogy in each cluster. Subsequently, we test four types of regression models, namely linear regression, support vector regression, Gaussian process regression, and extreme gradient boosting regression. The linear regression and Gaussian process regression provide the most accurate predictions, and the proposed framework yields R2 = 0.93 for the test set. Although a more comprehensive study is needed, our results show that machine learning methods such as linear regression or Gaussian process regression can be used to accurately estimate mechanical properties with a proper number of groupings based on compositional data.
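The cluster-then-regress workflow described above can be sketched with scikit-learn as follows; the synthetic feature matrix stands in for the EDS-derived mineral compositions and the target for the measured modulus, while the five-cluster choice follows the text.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((400, 6))                                    # stand-in compositions
y = 40 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 2, 400)     # stand-in modulus (GPa)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Cluster the training compositions into five groups, as in the study.
birch = Birch(n_clusters=5).fit(X_tr)
models = {c: LinearRegression().fit(X_tr[birch.labels_ == c], y_tr[birch.labels_ == c])
          for c in np.unique(birch.labels_)}

# Predict each test sample with the regressor of its assigned cluster.
labels_te = birch.predict(X_te)
y_pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                   for c, x in zip(labels_te, X_te)])
print("R2 on the test set:", r2_score(y_te, y_pred))
```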
It is impossible in practice to comprehensively test even small software programs due to the vastness of the reachable state space; however, modern cyber-physical systems such as aircraft require a high degree of confidence in software safety and reliability. Here we explore methods of generating test sets to effectively and efficiently explore the state space for a module based on the Traffic Collision Avoidance System (TCAS) used on commercial aircraft. A formal model of TCAS in the model-checking language NuSMV provides an output oracle. We compare test sets generated using various methods, including covering arrays, random, and a low-complexity input paradigm applied to 28 versions of the TCAS C program containing seeded errors. Faults are triggered by tests for all 28 programs using a combination of covering arrays and random input generation. Complexity-based inputs perform more efficiently than covering arrays, and can be paired with random input generation to create efficient and effective test sets. A random forest classifier identifies variable values that can be targeted to generate tests even more efficiently in future work, by combining a machine-learned fuzzing algorithm with more complex model oracles developed in model-based systems engineering (MBSE) software.
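The fault-triggering idea described above reduces to differential testing: an input triggers a fault when a seeded-error variant disagrees with the oracle. The toy sketch below illustrates that loop with random input generation only; the 12-integer input vector mirrors the classic TCAS benchmark, while the oracle and variant functions here are placeholders rather than the NuSMV model or the 28 C programs.

```python
import random

def oracle(x):
    """Toy stand-in for the NuSMV-derived output oracle."""
    return 1 if x[0] > 600 and x[1] < 300 else 0

def faulty_variant(x):
    """Toy stand-in for a seeded-error program (off-by-one fault on x[0])."""
    return 1 if x[0] >= 600 and x[1] < 300 else 0

def random_input():
    """One random 12-integer input vector (ranges are placeholders)."""
    return [random.randint(0, 1000) for _ in range(12)]

def fault_triggering_tests(n_tests=20000):
    """Keep every randomly generated input on which the variant and oracle disagree."""
    return [x for x in (random_input() for _ in range(n_tests))
            if oracle(x) != faulty_variant(x)]

random.seed(1)
print(len(fault_triggering_tests()), "fault-triggering tests found")
```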
An array of Wave Energy Converters (WECs) is required to supply a significant power level to the grid. However, the control and optimization of such an array is still an open research question. This paper analyzes two aspects that have a significant impact on the power production. First, the spacing of the buoys in a WEC array is analyzed to determine the optimal shift between the buoys. Then the wave force interacting with the buoys is angled to create additional sequencing between the electrical signals. A cost function is proposed to minimize the power variation and energy storage while maximizing the delivered energy to the onshore point of common coupling to the electrical grid.
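A generic weighted objective consistent with the goals stated above is shown below for orientation only; this is not the paper's actual cost function, and the weights, horizon, and symbols (delivered power at the point of common coupling and required storage capacity) are assumptions.

```latex
J = w_1\,\operatorname{Var}\!\big(P_{\mathrm{PCC}}(t)\big)
  + w_2\,E_{\mathrm{storage}}
  - w_3 \int_0^{T} P_{\mathrm{PCC}}(t)\,dt
```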
Refractory complex concentrated alloys are an emerging class of materials attracting attention due to their stability and performance at high temperatures. In this study, we investigate the variations in the mechanical and thermal properties across a broad compositional space for the refractory MoNbTaTi quaternary using high-throughput ab-initio calculations and experimental characterization. For all the properties surveyed, we note a good agreement between our modeling predictions and the experimentally measured values. We reveal the particular role of molybdenum (Mo) in achieving high strength when present in high concentration. We trace the origin of this phenomenon to a shift from metallic to covalent bonding when the Mo content is increased. Additionally, a mechanistic, dislocation-based description of the yield strength further explains such high strength as arising from a combination of high bulk and shear moduli, accompanied by the relatively small size of the Mo atom compared to the other atoms in the alloy. Our analysis of the thermodynamic properties shows that regardless of the composition, this class of quaternary alloys exhibits good stability and low sensitivity to temperature. Taken together, these results pave the way for the design of new high-performance refractory alloys beyond the equimolar composition found in high-entropy alloys.
This work explores deriving transmissibility functions for a missile from a measured location at the base of the fairing to a desired location within the payload. A pressure on the outside of the fairing and the rocket motor’s excitation create accelerations at a measured location and a desired location. Typically, the desired location is not measured. In fact, it is typical that the payload may change, but the measured acceleration at the base of the fairing is generally similar to previous test flights. Given this knowledge, it is desired to use a finite-element model to create a transmissibility function which relates acceleration from the previous test flight’s measured location at the base of the fairing to acceleration at a location in the new payload. Four methods are explored for deriving this transmissibility, with the goal of finding an appropriate transmissibility when both the pressure and rocket motor excitation are equally present. These methods are assessed using transient results from a simple example problem, and it is found that one of the methods gives good agreement with the transient results for the full range of loads considered.
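One common frequency-domain form of such a transmissibility, given here purely for orientation and not as any of the paper's four methods, predicts the unmeasured payload response by scaling the measured fairing-base spectrum with a finite-element-derived ratio; the symbols below (model accelerations at the desired and measured locations, and the flight measurement) are assumptions.

```latex
T_{dm}(\omega) = \frac{X_d(\omega)}{X_m(\omega)}, \qquad
\hat{X}_d(\omega) = T_{dm}(\omega)\,X_m^{\mathrm{flight}}(\omega)
```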
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray Computed Tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but lacks spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratios in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen because their k-lines fall within distinct energy intervals of interest and because the metals are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high transmittance of X-rays. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 µm diameter of the impinging electron beam in the X-ray tube. This effectively shrinks the X-ray focal spot in the selected energy bands. Once fabricated, our transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
Frequent changes in penetration levels of distributed energy resources (DERs) and grid control objectives have caused the maintenance of accurate and reliable grid models for behind-the-meter (BTM) photovoltaic (PV) system impact studies to become an increasingly challenging task. At the same time, high adoption rates of advanced metering infrastructure (AMI) devices have improved load modeling techniques and have enabled the application of machine learning algorithms to a wide variety of model calibration tasks. Therefore, we propose that these algorithms can be applied to improve the quality of the input data and grid models used for PV impact studies. In this paper, these potential improvements were assessed for their ability to improve the accuracy of locational BTM PV hosting capacity analysis (HCA). Specifically, the voltage- and thermal-constrained hosting capacities of every customer location on a distribution feeder (1,379 in total) were calculated every 15 minutes for an entire year before and after each calibration algorithm or load modeling technique was applied. Overall, the HCA results were found to be highly sensitive to the various modeling deficiencies under investigation, illustrating the opportunity for more data-centric/model-free approaches to PV impact studies.
Transportation of sodium-bonded spent fuel appears to present no unique challenges. Storage systems for this fuel should be designed to keep water, both liquid and vapor, from contacting the spent fuel. This fuel is not suitable for geologic disposal; therefore, how the spent sodium-bonded fuel will be processed and the characteristics of the final disposal waste form(s) need to be considered. TRISO spent fuel appears to present no unique challenges in terms of transportation, storage, or disposal. If the graphite block is disposed of with the TRISO spent fuel, the 14C and 3H generated would need to be considered in the postclosure performance assessment. Salt waste from the molten salt reactor has yet to be transported or stored and might be a challenge to dispose of in a non-salt repository. Like sodium-bonded spent fuel, how the salt will be treated and the characteristics of the final disposal waste form(s) need to be considered. In addition, radiolysis in the frozen salt waste form continues to generate gas, which presents a hazard. Both HALEU and high-enriched uranium SNF are currently being stored and transported by the DOE. Disposal of fuels with enrichments greater than 5% was included in the disposal plan for Yucca Mountain. The increased potential for criticality associated with the higher-enriched SNF is mitigated by additional criticality control measures. Fuels that are similar to some ATFs were part of the disposal plan for Yucca Mountain. Some of the properties of these fuels (swelling, generation of 14C) would have to be considered as part of a postclosure performance assessment.
The goal of this paper is to present a set of measurements from a benchmark structure containing two bolted joints to support future efforts to predict the damping due to the joints and to model nonlinear coupling between the first two elastic modes. Bolted joints introduce nonlinearities in structures, typically causing a softening in the natural frequency and an increase in damping because of frictional slip between the contact interfaces within the joint. These nonlinearities pose significant challenges when characterizing the response of the structure under a large range of load amplitudes, especially when the modal responses become coupled, causing the effective damping and natural frequency to depend not only on the excitation amplitude of the targeted mode but also on the relative amplitudes of other modes. In this work, two nominally identical benchmark structures, known in some prior works as the S4 beam, are tested to characterize their nonlinear properties for the first two elastic modes. Detailed surface measurements are presented and validated through finite element analysis and reveal distinct contact interactions between the two sets of beams. The free-free test structures are excited with an impact hammer and the transient response is analyzed to extract the damping and frequency backbone curves. A range of impact amplitudes and drive points are used to isolate a single mode or to excite both modes simultaneously. Differences in the nonlinear response correlate with the relative strength of the modes that are excited, allowing one to characterize mode coupling. Each of the beams shows different nonlinear properties for each mode, which is attributed to the different contact pressure distributions between the parts, although the mode coupling relationship is found to be consistent between the two. The key findings from the test data are presented in this paper, and the supporting data are available in a public repository for interested researchers.
Many teams struggle to adapt and right-size software engineering best practices for quality assurance to fit their context. Introducing software quality is not usually framed in a way that motivates teams to take action, thus resulting in it becoming a "check the box for compliance" activity instead of a cultural practice that values software quality and the effort to achieve it. When and how can we provide effective incentives for software teams to adopt and integrate meaningful and enduring software quality practices? We explored this question through a persona-based ideation exercise at the 2021 Collegeville Workshop on Scientific Software in which we created three unique personas that represent different scientific software developer perspectives.
The focus of this study is on spectral equivalence results for higher-order tensor product finite elements in the H(curl), H(div), and L2 function spaces. For certain choices of the higher-order shape functions, the resulting mass and stiffness matrices are spectrally equivalent to those for an assembly of lowest-order edge-, face- or interior-based elements on the associated Gauss–Lobatto–Legendre (GLL) mesh.
The growing x-ray detection burden for vehicles at Ports of Entry in the US requires the development of efficient and reliable algorithms to assist human operators in detecting contraband. Developing algorithms for large-scale non-intrusive inspection (NII) that both meet operational performance requirements and are extensible for use in an evolving environment requires large volumes and varieties of training data, yet collecting and labeling data for these environments is prohibitively costly and time consuming. Given these constraints, generating synthetic data to augment algorithm training has been a focus of recent research. Here we discuss the use of synthetic imagery in an object detection framework, and describe a simulation-based approach to determining domain-informed threat image projection (TIP) augmentation.
Geothermal energy has been underutilized in the U.S., primarily due to the high cost of drilling in the harsh environments encountered during the development of geothermal resources. Drilling depths can approach 5,000 m with temperatures reaching 170 °C. In situ geothermal fluids are up to ten times more saline than seawater and highly corrosive, and hard rock formations often exceed 240 MPa compressive strength. This combination of extreme conditions pushes the limits of most conventional drilling equipment. Furthermore, enhanced geothermal systems are expected to reach depths of 10,000 m and temperatures of more than 300 °C. To address these drilling challenges, Sandia developed a proof-of-concept tool called the auto indexer under an annual operating plan task funded by the Geothermal Technologies Program (GTP) of the U.S. Department of Energy Geothermal Technologies Office. The auto indexer is a relatively simple, elastomer-free motor that was shown previously to be compatible with pneumatic hammers in bench-top testing. Pneumatic hammers can improve penetration rates and potentially reduce drilling costs when deployed in appropriate conditions. The current effort, also funded by DOE GTP, increased the technology readiness level of the auto indexer, producing a scaled prototype for drilling larger diameter boreholes using pneumatic hammers. The results presented herein include design details, modeling and simulation results, and testing results, as well as background on percussive hammers and downhole rotation.
Given a graph, finding the distance-2 maximal independent set (MIS-2) of the vertices is a problem that is useful in several contexts such as algebraic multigrid coarsening or multilevel graph partitioning. Such multilevel methods rely on finding the independent vertices so they can be used as seeds for aggregation in a multilevel scheme. We present a parallel MIS-2 algorithm to improve performance on modern accelerator hardware. This algorithm is implemented using the Kokkos programming model to enable performance portability. We demonstrate the portability of the algorithm and its performance on a variety of architectures (x86/ARM CPUs and NVIDIA/AMD GPUs). The resulting algorithm is also deterministic, producing an identical result for a given input across all of these platforms. The new MIS-2 implementation outperforms implementations in state-of-the-art libraries like CUSP and ViennaCL by 3-8x while producing similar quality results. We further demonstrate the benefits of this approach by developing parallel graph coarsening schemes for two different use cases. First, we develop an algebraic multigrid (AMG) aggregation scheme using parallel MIS-2 and demonstrate the benefits as opposed to previous approaches used in the MueLu multigrid package in Trilinos. We also describe an approach for implementing a parallel multicolor 'cluster' Gauss-Seidel preconditioner using this MIS-2 coarsening, and demonstrate better performance with an efficient, parallel, multicolor Gauss-Seidel algorithm.
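For reference, a minimal sequential sketch of the MIS-2 property itself is given below; the paper's contribution is a deterministic, parallel, Kokkos-based implementation, which this greedy loop does not attempt to reproduce.

```python
def mis2(adjacency):
    """Greedy distance-2 maximal independent set.

    adjacency: dict {vertex: set of neighbors} for an undirected graph.
    Returns a set S in which no two members are within two hops of each other,
    and every vertex outside S is within two hops of some member (maximality).
    """
    selected, blocked = set(), set()
    for v in sorted(adjacency):            # fixed order -> deterministic result
        if v in blocked:
            continue
        selected.add(v)
        blocked.add(v)
        for u in adjacency[v]:             # block everything within two hops of v
            blocked.add(u)
            blocked.update(adjacency[u])
    return selected

# Path graph 0-1-2-3-4: vertices 0 and 3 form a valid MIS-2.
adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(mis2(adjacency))
```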
Rock salt is being considered as a medium for energy storage and radioactive waste disposal. A Disturbed Rock Zone (DRZ) develops in the immediate vicinity of excavations in rock salt, with an increase in permeability, which alters the migration of gases and liquids around the excavation. When creep occurs adjacent to a stiff inclusion such as a concrete plug, it is expected that the stress state near the inclusion will become more hydrostatic and less deviatoric, promoting healing (permeability reduction) of the DRZ. In this scoping study, we measured the permeability of DRZ rock salt with time adjacent to inclusions (plugs) of varying stiffness to determine how the healing of rock salt, as reflected in the permeability changes, is a function of the stress and time. Samples were created with three different inclusion materials in a central hole along the axis of a salt core: (i) very soft silicone sealant, (ii) sorel cement, and (iii) carbon steel. The measured permeabilities are corrected for the gas slippage effect. We observed that the permeability change is a function of the inclusion material. The stiffer the inclusion, the more rapidly the permeability reduces with time.
Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and necessary security controls is a powerful tool for ensuring due diligence in evaluation and acquisition of mission critical algorithms. This paper describes the Seascape system and its place in such a process.
Density fluctuations in compressible turbulent boundary layers cause aero-optical distortions that affect the performance of optical systems such as sensors and lasers. The development of models for predicting the aero-optical distortions relies on theory and reference data that can be obtained from experiments and time-resolved simulations. This paper reports on wall-modeled large-eddy simulations of turbulent boundary layers over a flat plate at Mach 3.5, 7.87, and 13.64. The conditions for the Mach 3.5 case match those for the DNS presented by Miller et al. [1]. The conditions for the Mach 7.87 case match those inside the Hypersonic Wind Tunnel at Sandia National Laboratories. For the Mach 13.64 case, the conditions inside the Arnold Engineering Development Complex Hypervelocity Tunnel 9 are matched. Overall, adequate agreement of the velocity and temperature as well as Reynolds stress profiles with reference data from direct numerical simulations is obtained for the different Mach numbers. For all three cases, the normalized root-mean-square optical path difference was computed and compared with data obtained from the reference direct numerical simulations and experiments, as well as predictions obtained with a semi-analytical relationship from the University of Notre Dame. Above Mach 5, the normalized path difference obtained from the simulations is above the model prediction. This provides motivation for future work aimed at evaluating the assumptions behind the Notre Dame model for hypersonic boundary layer flows.
This paper demonstrates that a faster Automatic Generation Control (AGC) response provided by Inverter-Based Resources (IBRs) can improve a performance-based regulation (PBR) metric. The improvement in performance has a direct effect on operational income. The PBR metric used in this work was obtained from a California ISO (CAISO) example and is fully described herein. A single generator in a modified three area IEEE 39 bus system was replaced with a group of co-located IBRs to present possible responses using different plant controls and variable resource conditions. We show how a group of IBRs that rely on variable resources may negatively affect the described PBR metric of all connected areas if adequate plant control is not employed. However, increasing the dispatch rate of internal plant controls may positively affect the PBR metric of all connected areas despite variable resource conditions.
To keep pace with the demand for innovation through scientific computing, modern scientific software development is increasingly reliant upon a rich and diverse ecosystem of software libraries and toolchains. Research software engineers (RSEs) responsible for that infrastructure perform highly integrative work, acting as a bridge between the hardware, the needs of researchers, and the software layers situated between them; relatively little, however, has been written about the role played by RSEs in that work and what support they need to thrive. To that end, we present a two-part report on the development of half-precision floating point support in the Kokkos Ecosystem. Half-precision computation is a promising strategy for increasing performance in numerical computing and is particularly attractive for emerging application areas (e.g., machine learning), but developing practicable, portable, and user-friendly abstractions is a nontrivial task. In the first half of the paper, we conduct an engineering study on the technical implementation of the Kokkos half-precision scalar feature and showcase experimental results; in the second half, we offer an experience report on the challenges and lessons learned during feature development by the first author. We hope our study provides a holistic view on scientific library development and surfaces opportunities for future studies into effective strategies for RSEs engaged in such work.
Hu, Xuan; Walker, Benjamin W.; Garcia-Sanchez, Felipe; Edwards, Alexander J.; Zhou, Peng; Incorvia, Jean A.C.; Paler, Alexandru; Frank, Michael P.; Friedman, Joseph S.
Magnetic skyrmions are nanoscale whirls of magnetism that can be propagated with electrical currents. The repulsion between skyrmions inspires their use for reversible computing based on the elastic billiard ball collisions proposed for conservative logic in 1982. In this letter, we evaluate the logical and physical reversibility of this skyrmion logic paradigm, as well as the limitations that must be addressed before dissipation-free computation can be realized.
Femtosecond laser electronic excitation tagging (FLEET) is a powerful unseeded velocimetry technique typically used to measure one component of velocity along a line, or two or three components from a dot. In this Letter, we demonstrate a dotted-line FLEET technique which combines the dense profile capability of a line with the ability to perform two-component velocimetry with a single camera on a dot. Our set-up uses a single beam path to create multiple simultaneous spots, more than previously achieved in other FLEET spot configurations. We perform dotted-line FLEET measurements downstream of a highly turbulent, supersonic nitrogen free jet. Dotted-line FLEET is created by focusing light transmitted by a periodic mask with rectangular slits of 1.6 × 40 mm² and an edge-to-edge spacing of 0.5 mm, then focusing the imaged light at the measurement region. Up to seven symmetric dots spaced approximately 0.9 mm apart, with mean full-width at half-maximum diameters between 150 and 350 µm, are simultaneously imaged. Both streamwise and radial velocities are computed and presented in this Letter.
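As a rough illustration of the two-component measurement, the sketch below converts tracked dot centroids from two exposures into streamwise and radial velocities; the centroid values, inter-frame delay, and magnification are placeholders, not the parameters of the experiment described above.

```python
import numpy as np

# Minimal sketch of two-component velocimetry from dot displacements, assuming
# the centroid of each tagged dot has already been located in two exposures
# separated by a known delay. All numbers below are illustrative.

def dot_velocities(centroids_t0, centroids_t1, dt, scale):
    """centroids: (n_dots, 2) pixel coordinates; dt: inter-frame delay [s];
    scale: image magnification [m/pixel]. Returns (n_dots, 2) velocities [m/s]."""
    displacement_px = np.asarray(centroids_t1) - np.asarray(centroids_t0)
    return displacement_px * scale / dt

# Example: seven dots, 600 ns delay, 10 um/pixel magnification
c0 = np.array([[100.0, 50.0 + 9.0 * i] for i in range(7)])
c1 = c0 + np.array([30.0, 0.5])          # mostly streamwise displacement
print(dot_velocities(c0, c1, dt=600e-9, scale=10e-6))   # ~500 m/s streamwise
```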
Conference Proceedings of the Society for Experimental Mechanics Series
Singh, Aabhas; Wielgus, Kayla M.; Dimino, Ignazio; Kuether, Robert J.; Allen, Matthew S.
Morphing wings have great potential to dramatically improve the efficiency of future generations of aircraft and to reduce noise and emissions. Among many camber morphing wing concepts, shape-changing finger-like mechanisms consist of components such as torsion bars, bushings, bearings, and joints, all of which exhibit damping and stiffness nonlinearities that depend on excitation amplitude. These nonlinearities make the dynamic response difficult to model accurately with traditional simulation approaches. As a result, at high excitation levels, linear finite element models may be inaccurate, and a nonlinear modeling approach is required to capture the necessary physics. This work seeks to better understand the influence of nonlinearity on the effective damping and natural frequency of the morphing wing through the use of quasi-static modal analysis and model reduction techniques that employ multipoint constraints (i.e., spider elements). With over 500,000 elements and 39 frictional contact surfaces, this represents one of the most complicated models to which these methods have been applied to date. The results are summarized and lessons learned are highlighted.
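As context for how quasi-static modal analysis yields amplitude-dependent properties, the sketch below converts an assumed quasi-static modal force-deflection backbone into an effective natural frequency via the secant stiffness, and into an equivalent viscous damping ratio from the energy dissipated per cycle; the backbone curve, dissipation law, and unit-modal-mass normalization are illustrative assumptions, not results from the morphing wing model.

```python
import numpy as np

# Illustrative sketch of a common quasi-static modal analysis post-processing
# step: for a mass-normalized modal amplitude q, the secant stiffness of the
# quasi-static force-deflection backbone gives an effective natural frequency,
# and the energy dissipated per cycle gives an equivalent viscous damping ratio.
# The backbone and dissipation curves below are made-up placeholders.

q = np.linspace(1e-4, 1e-2, 50)                 # modal amplitude sweep
F = 4.0e4 * q - 2.0e5 * q**2                    # assumed softening backbone force

k_sec = F / q                                   # secant stiffness at each amplitude
omega_eff = np.sqrt(k_sec)                      # effective natural frequency [rad/s]
                                                # (unit modal mass assumed)

D = 5.0e2 * q**2.5                              # assumed energy dissipated per cycle
zeta_eff = D / (2.0 * np.pi * k_sec * q**2)     # equivalent viscous damping ratio

print(omega_eff[[0, -1]] / (2 * np.pi))         # effective frequency [Hz], low/high amplitude
print(zeta_eff[[0, -1]])                        # damping grows with amplitude here
```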
A sodium pool fire in the containment of a sodium-cooled fast reactor (SFR) plant can occur due to a pipe leak or break. Accumulation of the leaked sodium in a pool allows it to react with constituents of the containment atmosphere, such as oxygen, causing a fire. Sodium fires are important to model because of the heat addition and aerosol generation that occur. Any fission products trapped in the leaked sodium coolant may also be released into the containment, which can affect workers and the public if the containment is breached. This paper describes progress in an international collaborative research effort in SFR sodium fire modeling between the United States and Japan under the framework of the Civil Nuclear Energy Research and Development Working Group (CNWG). In this collaboration between Sandia National Laboratories (SNL) and the Japan Atomic Energy Agency (JAEA), the validation basis for, and modeling capabilities of, sodium pool fires in SNL's MELCOR and JAEA's SPHINCS codes are being assessed. Additional model improvements for the sodium pool fire in MELCOR are discussed. The MELCOR results with the enhanced sodium pool fire model agreed well with JAEA's F7 pool fire experiments and compared closely with SPHINCS.
Conference Proceedings of the Society for Experimental Mechanics Series
Saunders, Brian E.; Vasconcellos, Rui M.G.; Kuether, Robert J.; Abdelkefi, Abdessattar
Dynamical systems containing contact/impact between parts can be modeled as piecewise-smooth reduced-order models. The most common example is freeplay, which can manifest as a loose support, worn hinges, or backlash. Freeplay causes very complex, nonlinear responses in a system that range from isolated resonances to grazing bifurcations to chaos. This can be an issue because classical solution methods, such as direct time integration (e.g., Runge-Kutta) or harmonic balance methods, can fail to accurately detect some of the nonlinear behavior or fail to run altogether. To deal with this limitation, researchers often approximate piecewise freeplay terms in the equations of motion using continuous, fully smooth functions. While this strategy can be convenient, it may not always be appropriate for use. For example, past investigation of freeplay in an aeroelastic control surface showed that, compared to the exact piecewise representation, some approximations are not as effective at capturing freeplay behavior as others. Another potential issue is the effectiveness of continuous representations at capturing grazing contacts and grazing-type bifurcations. These can cause the system to transition to high-amplitude responses with frequent contact/impact and be particularly damaging. In this work, a bifurcation study is performed on a model of a forced Duffing oscillator with freeplay nonlinearity. Various representations are used to approximate the freeplay, including polynomial, absolute value, and hyperbolic tangent representations. Bifurcation analysis results for each type are compared to results using the exact piecewise-smooth representation computed using MATLAB® Event Location. The effectiveness of each representation is compared and ranked in terms of numerical accuracy, ability to capture multiple response types, ability to predict chaos, and computation time.
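To make the comparison concrete, the sketch below contrasts the exact piecewise-smooth freeplay restoring force with two of the smooth representations mentioned above (a hyperbolic tangent blend and a low-order polynomial); the stiffness, gap size, and smoothing parameter are illustrative, not the values studied in the paper.

```python
import numpy as np

# Minimal sketch contrasting the exact piecewise freeplay restoring force with
# two common smooth approximations. Stiffness k, half-gap delta, smoothing
# sharpness eps, and the cubic coefficient are illustrative placeholders.

k, delta = 1.0, 0.1   # support stiffness and freeplay half-gap

def freeplay_exact(x):
    # Zero force inside the gap, linear stiffness once contact is made
    return k * np.where(x > delta, x - delta, np.where(x < -delta, x + delta, 0.0))

def freeplay_tanh(x, eps=50.0):
    # Smooth blend of the two contact branches; larger eps -> sharper transition
    return 0.5 * k * ((x - delta) * (1 + np.tanh(eps * (x - delta)))
                      + (x + delta) * (1 - np.tanh(eps * (x + delta))))

def freeplay_cubic(x, a=0.2):
    # Low-order polynomial fit; cheap but poor near the gap edges
    return a * x**3

x = np.linspace(-0.3, 0.3, 7)
print(freeplay_exact(x))
print(freeplay_tanh(x))
print(freeplay_cubic(x))
```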
The Cramér-Rao Lower Bound (CRLB) is a classical benchmark for assessing estimators. Online algorithms for estimating modal properties from ambient data, i.e., mode meters, can benefit from accurate estimates of forced oscillations. The CRLB provides insight into how well forced oscillation parameters, e.g., frequency and amplitude, can be estimated. Previous work has derived the lower bound for a single-channel PMU measurement; this paper extends that work to study the CRLB under two-channel PMU measurements. The goal is to study how correlated and uncorrelated noise affect estimation accuracy. Interestingly, these studies show that correlated noise can decrease the CRLB in some cases. This paper derives the CRLB for the two-channel case and discusses the factors that affect the bound.
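For reference, the generic textbook form of the bound for a deterministic two-channel signal observed in zero-mean Gaussian noise with known covariance is given below; this is the standard expression, not the specific parameterization derived in the paper.

\[
\mathbf{I}(\boldsymbol{\theta}) \;=\; \sum_{t=1}^{N} \left(\frac{\partial \mathbf{s}_t}{\partial \boldsymbol{\theta}}\right)^{\!\top} \boldsymbol{\Sigma}^{-1} \left(\frac{\partial \mathbf{s}_t}{\partial \boldsymbol{\theta}}\right),
\qquad
\operatorname{var}\!\left(\hat{\theta}_i\right) \;\ge\; \left[\mathbf{I}(\boldsymbol{\theta})^{-1}\right]_{ii},
\]

where \(\mathbf{s}_t(\boldsymbol{\theta})\) is the two-channel forced-oscillation signal, \(\boldsymbol{\theta}\) collects its parameters (e.g., amplitude, frequency, phase), and the off-diagonal entry of the 2×2 covariance \(\boldsymbol{\Sigma}\) captures the correlation between the noise on the two channels, which is why correlated noise can either raise or lower the bound.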
Neural networks (NN) have become almost ubiquitous in image classification, but in their standard form produce point estimates with no measure of confidence. Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates through the posterior distribution. As NN are applied in more high-consequence applications, UQ is becoming a requirement. Automating systems can save time and money, but only if the operator can trust what the system outputs. BNN provide a solution to this problem by giving not only accurate predictions and estimates, but also an interval that includes reasonable values within a desired probability. Despite their positive attributes, BNN are notoriously difficult and time consuming to train. Traditional Bayesian methods use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being too slow. The most common method is variational inference (VI) due to its fast computation, but there are multiple concerns with its efficacy. MCMC is the gold standard and, given enough time, will produce the correct result. VI, alternatively, is an approximation that converges asymptotically. Unfortunately (or fortunately), high-consequence problems often do not live in the land of asymptopia, so solutions like MCMC are preferable to approximations. We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI), where materials of interest can be identified by their unique spectral signatures. This is a challenging field due to the numerous perturbing effects that practical collection of HSI has on measured spectra. Both models are trained using out-of-the-box tools on a high-fidelity HSI target detection scene. Both MCMC- and VI-trained BNN perform well overall at target detection on a simulated HSI scene. Splitting the test set predictions into two classes, high-confidence and low-confidence predictions, presents a path to automation. For the MCMC-trained BNN, the high-confidence predictions have a 0.95 probability of detection with a false alarm rate of 0.05 when considering pixels with a target abundance of 0.2. The VI-trained BNN has a 0.25 probability of detection for the same case, but its performance on high-confidence sets matched MCMC for abundances >0.4. However, the VI-trained BNN on this scene required significant expert tuning to achieve these results, while MCMC worked immediately. On neither scene was MCMC prohibitively time consuming, as is often assumed, but the networks we used were relatively small. This paper provides an example of how to utilize the benefits of UQ, but also aims to increase awareness that different training methods can give different results for the same model. If sufficient computational resources are available, the best approach, rather than the fastest or most efficient, should be used, especially for high-consequence problems.
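As a concrete, though greatly simplified, illustration of the two training routes compared above, the sketch below fits a tiny Bayesian neural network classifier with both NUTS-based MCMC and ADVI variational inference using PyMC; the architecture, priors, data, and sampler settings are placeholders rather than the models or HSI scenes used in the study.

```python
import numpy as np
import pymc as pm

# Minimal sketch of the MCMC-vs-VI comparison for a small Bayesian neural
# network binary classifier. Data and network size are illustrative only.

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))                             # stand-in for per-pixel spectra
y = (X @ rng.standard_normal(5) + 0.3 * rng.standard_normal(200) > 0).astype(int)

with pm.Model() as bnn:
    w1 = pm.Normal("w1", 0.0, 1.0, shape=(5, 8))              # input -> hidden weights
    w2 = pm.Normal("w2", 0.0, 1.0, shape=(8,))                # hidden -> output weights
    hidden = pm.math.tanh(pm.math.dot(X, w1))
    p = pm.math.sigmoid(pm.math.dot(hidden, w2))
    pm.Bernoulli("obs", p=p, observed=y)

    trace_mcmc = pm.sample(500, tune=500, chains=2, progressbar=False)  # NUTS (MCMC)
    approx_vi = pm.fit(n=10_000, method="advi")                         # variational inference
    trace_vi = approx_vi.sample(1000)

# Posterior predictive intervals from each trace can then be compared, and
# predictions split into high- and low-confidence sets by their interval width.
```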
Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
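The sketch below is a deliberately stripped-down caricature of the mirroring idea, appending a circuit's inverse so that the ideal output is efficiently predictable; the published construction additionally inserts random Pauli layers, a central randomizing layer, and randomized compilation, all omitted here, and the Qiskit-based helper is a hypothetical illustration rather than the benchmark implementation.

```python
from qiskit import QuantumCircuit

# Highly simplified caricature of circuit mirroring: append the inverse of a
# test circuit so that the ideal net action is the identity and success is
# efficiently verifiable (expected outcome: the all-zeros bitstring).

def simple_mirror(circ: QuantumCircuit) -> QuantumCircuit:
    mirrored = circ.compose(circ.inverse())   # ideal net action is the identity
    mirrored.measure_all()
    return mirrored

test = QuantumCircuit(3)
test.h(0)
test.cx(0, 1)
test.cx(1, 2)
test.rz(0.7, 2)
print(simple_mirror(test))
```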
A new reflected shock tunnel capable of generating hypersonic environments at realistic flight enthalpies has been commissioned at Sandia. The tunnel uses an existing free-piston driver and shock tube coupled to a conical nozzle to accelerate the flow to approximately Mach 9. The facility design process is outlined and compared to other ground test facilities. A representative flight enthalpy condition is designed using an in-house state-to-state solver and piston dynamics model and evaluated using quasi-1D modeling with the University of Queensland L1d code. This condition is demonstrated using canonical models and a calibration rake. A 25 cm core flow with 4.6 MJ/kg total enthalpy is achieved over an approximately 1 millisecond test time. Analysis shows that increasing the piston mass should extend the test time by a factor of 2-3.
Using the power balance method, we estimate the maximum electric field on a conducting wall of a cavity containing an interior structure that supports eccentric coaxial modes, in the frequency regime where the resonant modes are isolated from each other.
Magann, Alicia B.; Mccaul, Gerard; Rabitz, Herschel A.; Bondar, Denys I.
The characterization of mixtures of non-interacting, spectroscopically similar quantum components has important applications in chemistry, biology, and materials science. We introduce an approach based on quantum tracking control that allows for determining the relative concentrations of constituents in a quantum mixture, using a single pulse which enhances the distinguishability of components of the mixture and has a length that scales linearly with the number of mixture constituents. To illustrate the method, we consider two very distinct model systems: mixtures of diatomic molecules in the gas phase, as well as solid-state materials composed of a mixture of components. A set of numerical analyses are presented, showing strong performance in both settings.
ASHRAE and IBPSA-USA Building Simulation Conference
Villa, Daniel V.; Carvallo, Juan P.; Bianchi, Carlo; Lee, Sang H.
Heat waves are increasing in severity, duration, and frequency, making historical weather patterns insufficient for assessments of building resilience. This work introduces a stochastic weather generator called the multi-scenario extreme weather simulator (MEWS) that produces credible future heat waves. MEWS calculates statistical parameters from historical weather data and then shifts them using climate projections of increasing severity and frequency. MEWS is demonstrated using the EnergyPlus medium office prototype model for climate zone 4B using five climate scenarios to 2060. The results show how changes in climate and heat waves affect electric loads, peak loads, and thermal comfort with uncertainty.
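As a conceptual illustration of the parameter-shifting idea, the sketch below samples heat-wave events from distributions fit to stand-in historical statistics and then shifts their frequency and intensity for a hypothetical future scenario; none of the parameter values or shift factors are MEWS defaults or results.

```python
import numpy as np

# Conceptual sketch of stochastic heat-wave generation: sample event frequency
# and peak-temperature anomalies from distributions fit to history, then shift
# those parameters for a future climate scenario. All numbers are illustrative.

rng = np.random.default_rng(42)

historical = {"events_per_year": 2.0, "mean_peak_anomaly_C": 4.0, "std_peak_anomaly_C": 1.5}
scenario_shift = {"frequency_factor": 1.6, "intensity_delta_C": 1.2}   # e.g., a 2060 scenario

def sample_year(params, shift):
    n_events = rng.poisson(params["events_per_year"] * shift["frequency_factor"])
    peaks = rng.normal(params["mean_peak_anomaly_C"] + shift["intensity_delta_C"],
                       params["std_peak_anomaly_C"], size=n_events)
    durations = rng.integers(3, 10, size=n_events)     # event duration [days], illustrative
    return list(zip(peaks.round(1), durations))

print(sample_year(historical, scenario_shift))
```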
Communication-assisted adaptive protection can improve the speed and selectivity of the protection system. However, in the event that communication from the centralized adaptive protection system to the relays is disrupted, predicting the local relay protection settings is a viable alternative. This work evaluates the potential for machine learning to overcome this challenge by using the Prophet algorithm programmed into each relay to individually predict the time-dial setting (TDS) and pickup current (IPICKUP) settings. A modified IEEE 123 feeder was used to generate the data needed to train and test the Prophet algorithm to individually predict the TDS and IPICKUP settings. The models were evaluated using the mean absolute percentage error (MAPE) and the root mean squared error (RMSE) as metrics. The results show that the algorithms could accurately predict the IPICKUP setting with an average MAPE accuracy of 99.961% and the TDS setting with an average MAPE accuracy of 94.32%, which is sufficient for protection parameter prediction.
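A minimal sketch of how a relay-local forecast of a setting might be produced with the Prophet library is shown below, assuming each relay retains a time series of the TDS values previously issued by the central system; the column values, seasonality choice, and forecast horizon are illustrative assumptions, not the configuration used in this work.

```python
import numpy as np
import pandas as pd
from prophet import Prophet

# Minimal sketch of per-relay setting prediction with Prophet, assuming the
# relay keeps an hourly history of the TDS values it last received from the
# central adaptive protection system. The history below is a placeholder.

history = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=168, freq="h"),       # timestamps
    "y": 0.5 + 0.1 * ((np.arange(168) % 24) > 17),                   # placeholder TDS history
})

model = Prophet(daily_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=24, freq="h")           # predict the next 24 hours
forecast = model.predict(future)
predicted_tds = forecast[["ds", "yhat"]].tail(24)
print(predicted_tds.head())

# MAPE against the settings later issued by the central system would then be
# computed as 100 * mean(|y_true - yhat| / y_true).
```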
Early in 2018, Sandia recognized that the Microsystems Engineering, Science and Applications (MESA) Programmatic Asset Lifecycle Planning capability was unpredictable, inconsistent, reactive, and unable to provide strong linkage to the sponsor's needs. The impetus for this report is to share lessons learned from MESA's journey toward maturing this capability. This report describes re-building the foundational elements of MESA's Programmatic Asset Lifecycle Planning capability using a risk-based, Multi-Criteria Decision Analysis (MCDA) approach. To begin, MESA's decades-old Piano Chart + Ad Hoc hybrid methodology is described, with a narrative of its strengths and weaknesses. Its replacement, the MCDA/Analytical Hierarchy Process, is then introduced with a discussion of its strengths and weaknesses. To generate a realistic Programmatic Asset Lifecycle Planning budget outlook, MESA used its rolling 20-year Extended Life Program Plan (MELPP) as a baseline. The new MCDA risk-based prioritization methodology implements DOE/NNSA guidelines for prioritization of DOE activities and provides a reliable, structured framework for combining expert judgement and stakeholder preferences according to an established scientific technique. An in-house Hybrid Decision Support System (HDSS) software application was developed to facilitate production of several key deliverables. The application enables analysis of the prioritization decisions, with charts that display and link MESA's funding requests to the stakeholders' priorities, strategic objectives, nuclear deterrence programs, MESA priorities, and more.
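For readers unfamiliar with the prioritization machinery, the sketch below shows the core Analytical Hierarchy Process step of deriving criterion weights from a pairwise comparison matrix and checking its consistency; the criteria and judgements are invented for illustration and are not MESA's.

```python
import numpy as np

# Minimal sketch of the Analytical Hierarchy Process step at the core of many
# MCDA approaches: derive criterion weights from a pairwise comparison matrix
# via its principal eigenvector, and check consistency. Judgements are invented.

A = np.array([            # pairwise judgements on Saaty's 1-9 scale
    [1,   3,   5],        # e.g., mission risk vs. cost vs. schedule (illustrative)
    [1/3, 1,   2],
    [1/5, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # consistency ratio (random index RI = 0.58 for n = 3)
print(weights.round(3), round(cr, 3))         # CR below ~0.1 indicates acceptable consistency
```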
Area efficient self-correcting flip-flops for use with triple modular redundant (TMR) soft-error hardened logic are implemented in a 12-nm finFET process technology. The TMR flip-flop slave latches self-correct in the clock low phase using Muller C-elements in the latch feedback. These C-elements are driven by the two redundant stored values and not by the slave latch itself, saving area over a similar implementation using majority gate feedback. These flip-flops are implemented as large shift-register arrays on a test chip and have been experimentally tested for their soft-error mitigation in static and dynamic modes of operation using heavy ions and protons. We show how high clock skew can result in susceptibility to soft errors in the dynamic mode, and explain the potential failure mechanism.
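The behavioral sketch below illustrates the self-correction mechanism at a purely logical level: a Muller C-element holds its output when its inputs disagree, so refreshing each redundant copy from the other two restores a single upset value; this is an illustrative model only, not the circuit implementation described above.

```python
# Behavioral sketch of TMR self-correction with Muller C-elements. A C-element
# changes its output only when both inputs agree; on disagreement it holds its
# previous value, so a single upset copy is restored from the other two.

def c_element(a: int, b: int, prev: int) -> int:
    return a if a == b else prev          # hold previous value on disagreement

def correct_tmr(latches):
    # Each redundant copy is refreshed from the C-element of the other two copies.
    return [c_element(latches[(i + 1) % 3], latches[(i + 2) % 3], latches[i])
            for i in range(3)]

print(correct_tmr([1, 1, 0]))   # single upset in the third copy is corrected -> [1, 1, 1]
```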
A crucial component of field testing is the utilization of numerical models to better understand the system and the experimental data being collected. Meshing and modeling field tests is a complex and computationally demanding problem. Hexahedral elements cannot always reproduce experimental dimensions, leading to grid orientation or geometric errors. Voronoi meshes can match complex geometries without sacrificing orthogonality. As a result, we present a high-resolution 3D numerical study of the BATS heater test at the WIPP that compares a standard non-deformed Cartesian mesh and a Voronoi mesh against field data collected during a salt heater experiment.
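The short sketch below illustrates the orthogonality property that motivates Voronoi meshing, using SciPy's Voronoi construction on arbitrary 2-D seed points: each cell face is, by construction, perpendicular to the line joining the adjacent seed points; it is a geometric illustration only, not the meshing workflow used for the BATS model.

```python
import numpy as np
from scipy.spatial import Voronoi

# Geometric illustration of Voronoi-mesh orthogonality: the ridge shared by two
# Voronoi cells is perpendicular to the segment connecting their seed points.
# Seed points below are arbitrary.

rng = np.random.default_rng(7)
seeds = rng.uniform(0.0, 1.0, size=(50, 2))       # 2-D seed points (cell centers)
vor = Voronoi(seeds)

for (p, q), verts in list(zip(vor.ridge_points, vor.ridge_vertices))[:5]:
    if -1 in verts:
        continue                                   # skip unbounded ridges
    edge = vor.vertices[verts[1]] - vor.vertices[verts[0]]
    centers = seeds[q] - seeds[p]
    print(round(float(np.dot(edge, centers)), 10))  # ~0 -> face is orthogonal to center line
```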
Many, if not all, Waste Management Organisation (WMO) programs will include criticality safety. Because criticality safety in the long term, i.e., considered over post-closure timescales in dedicated disposal facilities, is a unique challenge for geological disposal, there is limited opportunity for sharing of experience within an individual organization or country. Sharing experience and knowledge between WMOs is therefore beneficial for understanding where approaches are similar, where they differ, and the reasons for this. To achieve this benefit, a project on Post-Closure Criticality Safety has been established through the Implementing Geological Disposal - Technology Platform, with the overall aim of facilitating the sharing of this knowledge. The project currently has 11 participating nations, including the United States, and this paper presents the current position in the United States.