Publications

Determination of the photoelastic constants of silicon nitride using piezo-optomechanical photonic integrated circuits and laser Doppler vibrometry

Optics InfoBase Conference Papers

Koppa, Matthew A.; Storey, Matthew J.; Dong, Mark; Heim, David; Leenheer, Andrew J.; Zimmermann, Matthew; Foulk, James W.; Gilbert, Gerald; Englund, Dirk; Eichenfield, Matt

We measure the photoelastic constants of piezo-optomechanical photonic integrated circuits incorporating specially formulated, silicon-depleted silicon nitride thin films, using a laser Doppler vibrometer to calibrate the strain produced by the integrated piezoelectric actuators.

A Comparative Study of SiC JFET Super-Cascode Topologies

2021 IEEE Energy Conversion Congress and Exposition, ECCE 2021 - Proceedings

Gill, Lee; Rodriguez, Luciano G.; Mueller, Jacob A.; Neely, Jason C.

In spite of several advantages of SiC JFETs over enhancement-mode SiC MOSFETs, the intrinsic normally-ON characteristic of the JFETs can be undesirable for many industrial power conversion applications due to the negative turn-OFF voltage requirement. This has prevented normally-ON JFETs from being widely accepted in industry. However, a cascode configuration, which uses a low-voltage (LV) Si MOSFET, can be used to enable normally-OFF behavior, making this approach an attractive solution for utilizing the benefits of SiC JFETs. For medium- and high-voltage applications that require a larger blocking voltage than the rating of each JFET, additional devices can be connected in series to increase the overall blocking voltage capability, creating a super-cascode configuration. This paper provides a review of several super-cascode topology variations and presents a comprehensive comparative study, evaluating similarities and differences in operating principles, equivalent circuits, and design considerations and limitations.

CHARACTERIZING HUMAN PERFORMANCE: DETECTING TARGETS AT HIGH FALSE ALARM RATES

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Speed, Ann E.; Wheeler, Jason; Russell, John; Oppel, Fred; Sanchez, Danielle N.; Silva, Austin R.; Chavez, Anna

The prevalence effect is the observation that, in visual search tasks, as the signal (target) to noise (non-target) ratio becomes smaller, humans are more likely to miss the target when it does occur. Studied extensively in the basic literature [e.g., 1, 2], this effect has implications for real-world settings such as security guards monitoring physical facilities for attacks. Importantly, what seems to drive the effect is the development of a response bias based on learned sensitivity to the statistical likelihood of a target [e.g., 3-5]. This paper presents results from two experiments aimed at understanding how target prevalence impacts the ability of individuals to detect a target on the 1,000th trial of a series of 1,000 trials. The first experiment employed the traditional prevalence effect paradigm, which involves search for a perfect capital letter T amidst imperfect Ts. In a between-subjects design, our subjects experienced target prevalence rates of 50/50, 1/10, 1/100, or 1/1000. In all conditions, the final trial was always a target. The second (ongoing) experiment replicates this design using a notional physical facility in a mod/sim environment. This simulation enables triggering different intrusion detection sensors by simulated characters and events (e.g., people, animals, weather). In this experiment, subjects viewed 1,000 "alarm" events and were asked to characterize each as either a nuisance alarm (e.g., set off by an animal) or an attack. As with the basic visual search study, the final trial was always an attack.

Dakota and Pyomo for Closed and Open Box Controller Gain Tuning

Proceedings of the IEEE Conference on Decision and Control

Williams, Kyle; Wilbanks, James J.; Schlossman, Rachel; Kozlowski, David M.; Parish, Julie M.

Pyomo and Dakota are openly available software packages developed by Sandia National Laboratories. In this tutorial, methods for automating the optimization of controller parameters for a nonlinear cart-pole system are presented. Two approaches are described and demonstrated on the cart-pole example problem for tuning a linear quadratic regulator and a partial feedback linearization controller. First, the problem is formulated as a pseudospectral optimization problem under an open-box methodology utilizing Pyomo, where the plant model is fully known to the optimizer. The second approach is a black-box method utilizing Dakota in concert with a MATLAB or Simulink plant model, where the plant model is unknown to the optimizer. A comparison of the two approaches provides the end user with the advantages and shortcomings of each method in order to pick the right tool for their problem. We find that complex system models and objectives are easily incorporated in the Dakota-based approach with minimal setup time, while the Pyomo-based approach provides rapid solutions once the system model has been developed.
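
To give a flavor of the open-box workflow, the sketch below uses Pyomo's DAE extension to tune state-feedback gains by collocation. This is a minimal illustration only: the double-integrator plant, gain bounds, and cost weights are our assumptions (not the paper's cart-pole formulation), and the Ipopt solver is assumed to be installed.

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               SolverFactory, TransformationFactory, minimize, value)
    from pyomo.dae import ContinuousSet, DerivativeVar, Integral

    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0, 5))
    m.x = Var(m.t)                                 # position
    m.v = Var(m.t)                                 # velocity
    m.dx = DerivativeVar(m.x, wrt=m.t)
    m.dv = DerivativeVar(m.v, wrt=m.t)
    m.k1 = Var(bounds=(0, 50), initialize=1.0)     # feedback gains to tune
    m.k2 = Var(bounds=(0, 50), initialize=1.0)

    # Closed-loop dynamics with state feedback u = -k1*x - k2*v
    m.ode_x = Constraint(m.t, rule=lambda m, t: m.dx[t] == m.v[t])
    m.ode_v = Constraint(m.t, rule=lambda m, t:
                         m.dv[t] == -m.k1 * m.x[t] - m.k2 * m.v[t])
    m.x[0].fix(1.0)
    m.v[0].fix(0.0)

    # Quadratic (LQR-like) cost on state deviation and control effort
    m.cost = Integral(m.t, wrt=m.t, rule=lambda m, t:
                      m.x[t]**2 + m.v[t]**2
                      + 0.1 * (m.k1 * m.x[t] + m.k2 * m.v[t])**2)
    m.obj = Objective(expr=m.cost, sense=minimize)

    TransformationFactory('dae.collocation').apply_to(m, nfe=30, ncp=3)
    SolverFactory('ipopt').solve(m)
    print(value(m.k1), value(m.k2))

Because the optimizer sees the full discretized dynamics, gradients are exact, which is the source of the rapid solutions noted above once the model is in hand.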

Impact of Load Allocation and High Penetration PV Modeling on QSTS-Based Curtailment Studies

IEEE Power and Energy Society General Meeting

Azzolini, Joseph A.; Reno, Matthew J.

The rising penetration levels of photovoltaic (PV) systems within distribution networks have driven considerable interest in the implementation of advanced inverter functions, like autonomous Volt-Var, to provide grid support in response to adverse conditions. Quasi-static time-series (QSTS) analyses are increasingly being utilized to evaluate advanced inverter functions for their potential benefits to the grid and to quantify the magnitude of PV power curtailment they may induce. However, these analyses require additional modeling efforts to appropriately capture the time-varying behavior of circuit elements like loads and PV systems. The contribution of this paper is to study QSTS-based curtailment evaluations with different load allocation and PV modeling practices under a variety of assumptions and data limitations. A total of 24 combinations of PV and load modeling scenarios were tested on a realistic test circuit with 1,379 loads and 701 PV systems. The results revealed that the average annual curtailment varied from the baseline value of 0.47% by an absolute difference of +0.55% to -0.43%, depending on the modeling scenario.

Sage Advice? The Impacts of Explanations for Machine Learning Models on Human Decision-Making in Spam Detection

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Stites, Mallory C.; Nyre-Yu, Megan; Moss, Blake; Smutz, Charles; Smith, Michael R.

The impact of machine learning (ML) explanations, and of different attributes of those explanations, on human performance was investigated in a simulated spam detection task. Participants decided whether the metadata presented about an email indicated that it was spam or benign. The task was completed with the aid of a ML model, whose prediction was displayed on every trial. The inclusion of an explanation and, if an explanation was presented, its attributes were manipulated within subjects: the number of model input features (3, 7) and the visualization of feature importance values (graph, table), as was trial type (i.e., hit, false alarm). Overall model accuracy (50% vs. 88%) was manipulated between subjects, and user trust in the model was measured as an individual difference metric. Results suggest that a user's trust in the model had the largest impact on the decision process. Users showed better performance with a more accurate model, but no differences in accuracy based on the number of input features or the visualization condition. Rather, users were more likely to detect false alarms made by the more accurate model; they were also more likely to comply with a model "miss" when more model explanation was provided. Finally, response times were longer in individuals reporting low model trust, especially when they did not comply with the model's prediction. Our findings suggest that the factors impacting the efficacy of ML explanations depend, minimally, on the task, the overall model accuracy, the likelihood of different model errors, and user trust.

Temporally resolved light emission and optical emission spectroscopy of surface flashover in vacuum

IEEE International Pulsed Power Conference

Clark, Raimi; Brooks, William; Hopkins, Matthew M.; Mankowski, John; Stephens, Jacob; Neuber, Andreas

Early light emission provides information about the dominant mechanisms culminating in vacuum surface flashover (anode-initiated vs. cathode-initiated) for particular geometries. From experimental evidence gathered elsewhere, for the case of an insulator oriented at 45° with respect to the anode, anode-initiated flashover is believed to dominate since the field at the anode triple point is roughly three times that of the cathode. Similar to previous work performed on cathode-initiated flashover, light emission from the voltage rise through the impedance collapse is collected into two optical fibers focused on light emanating from the insulator in regions near the anode and cathode. The optical fibers are either connected to PMTs for spectrally integrated localized light intensity information or to a spectrograph used in conjunction with an ICCD camera. Challenges associated with localizing the flashover for optical diagnostics and incorporating the optical diagnostics into the high-field environment are discussed. Initial results for cross-linked polystyrene (Rexolite 1422) support the premise that flashover is initiated from the anode for these geometries, as early light from the anode leads cathode light up to photocathode saturation. Early spectroscopy results show promise for future characterization of the spatio-temporal development of emission from desorbed gas species across the insulator surface and identification of bulk insulator involvement if it occurs.

High-Al-content heterostructures and devices

Semiconductors and Semimetals

Kaplar, Robert; Baca, Albert G.; Douglas, Erica A.; Klein, Brianna A.; Allerman, A.A.; Crawford, Mary H.; Reza, Shahed

Ultra-wide-bandgap aluminum gallium nitride (AlGaN) possesses several material properties that make it attractive for use in a variety of applications. This chapter focuses on power switching and radio-frequency (RF) devices based on Al-rich AlGaN heterostructures. The relevant figures of merit for both power switching and RF devices are discussed as motivation for the use of AlGaN heterostructures in such applications. The key physical parameters impacting these figures of merit include critical electric field, channel mobility, channel carrier density, and carrier saturation velocity; the factors influencing these and the trade-offs between them are discussed. Surveys of both power switching and RF devices are given, and their performance is described, including in special operating regimes such as high temperatures. Challenges to be overcome, such as the formation of low-resistivity Ohmic contacts, are presented. Finally, an overview of processing-related challenges, especially related to surfaces and interfaces, concludes the chapter.

Deep Conservation: A Latent-Dynamics Model for Exact Satisfaction of Physical Conservation Laws

35th AAAI Conference on Artificial Intelligence, AAAI 2021

Lee, Kookjin L.; Carlberg, Kevin T.

This work proposes an approach for latent-dynamics learning that exactly enforces physical conservation laws. The method comprises two steps. First, the method computes a low-dimensional embedding of the high-dimensional dynamical-system state using deep convolutional autoencoders. This defines a low-dimensional nonlinear manifold on which the state is subsequently enforced to evolve. Second, the method defines a latent-dynamics model that associates with the solution to a constrained optimization problem. Here, the objective function is defined as the sum of squares of conservation-law violations over control volumes within a finite-volume discretization of the problem; nonlinear equality constraints explicitly enforce conservation over prescribed subdomains of the problem. Under modest conditions, the resulting dynamics model guarantees that the time-evolution of the latent state exactly satisfies conservation laws over the prescribed subdomains.
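
In schematic form (our notation, not the authors'): with a decoder g mapping the latent state to the high-dimensional state and r̄ collecting the finite-volume conservation-law residuals, the latent dynamics described above can be read as the constrained least-squares problem

    \dot{\hat{x}}(t) \in \arg\min_{\hat{v}} \big\| \bar{r}\big(g(\hat{x}(t));\, \hat{v}\big) \big\|_2^2
    \quad \text{s.t.} \quad \bar{r}_{\Omega_j}\big(g(\hat{x}(t));\, \hat{v}\big) = 0 \ \text{ for each prescribed subdomain } \Omega_j,

so that minimizing total violation while the equality constraints pin conservation to zero on the chosen subdomains yields the exact-satisfaction guarantee.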

Comments on rendering synthetic aperture radar (SAR) images

Proceedings of SPIE - The International Society for Optical Engineering

Doerry, Armin W.

Once Synthetic Aperture Radar (SAR) images are formed, they typically need to be stored in some file format, which might restrict the dynamic range of what can be represented. Thereafter, for exploitation by human observers, the images might need to be displayed in a manner that reveals the subtle scene reflectivity characteristics the observer seeks, which generally requires further manipulation of dynamic range. Proper image scaling, for both storage and display, to maximize the perceived dynamic range of interest to an observer depends on many factors and on an understanding of the underlying data characteristics. While SAR images are typically rendered with grayscale, or at least monochromatic intensity variations, color might also be usefully employed in some cases. We analyze these and other issues pertaining to SAR image scaling, dynamic range, radiometric calibration, and display.

Motion measurement impact on synthetic aperture radar (SAR) geolocation

Proceedings of SPIE - The International Society for Optical Engineering

Doerry, Armin W.; Bickel, Douglas L.

Often a crucial exploitation of a Synthetic Aperture Radar (SAR) image requires accurate and precise knowledge of its geolocation, or at least the geolocation of a feature of interest in the image. However, SAR, like all radar modes of operation, makes its measurements relative to its own location or position. Consequently, it is crucial to understand how the radar's own position and motion impacts the ability to geolocate a feature in the SAR image. Furthermore, accuracy and precision of navigation aids like GPS directly impact the goodness of the geolocation solution.

Investigation of post-injection strategies for diesel engine Catalyst Heating Operation using a vapor-liquid-equilibrium-based spray model

Journal of Supercritical Fluids

Perini, Federico; Busch, Stephen; Reitz, Rolf D.

Most multidimensional engine simulations spend much time solving for non-equilibrium spray dynamics (atomization, collision, vaporization). However, their accuracy is limited by significant grid dependency and the need for extensive calibration. This is critical for modeling cold-start diesel fuel post-injections, which occur at low temperatures and pressures, far from typical model validation ranges. At the same time, resolving micron-scale spray phenomena would render full Eulerian multiphase calculations prohibitive. In this study, an improved equilibrium-phase (EP) approach was implemented and assessed for simulating diesel catalyst heating operation strategies. A phase equilibrium solver based on the model by Yue and Reitz [1] was implemented: a fully multiphase CFD solver is employed with an engineering-size engine grid, and fuel injection is modeled using the standard Lagrangian parcels approach. Mass and energy from the liquid parcels are released to the Eulerian multiphase mixture according to an equilibrium-based liquid jet model. An improved phase equilibrium solver was developed to handle large real-gas mixtures such as those from accurate chemical kinetics mechanisms. The liquid-jet model was improved such that momentum transfer to the Eulerian solver better reproduces the physical spray jet structure. Validation of liquid/vapor penetration predictions showed that the model yields accurate results with very limited tuning and low sensitivity to the few calibration constants. In-cylinder simulations of diesel catalyst heating operation strategies showed that capturing spray structure is paramount when short, transient injection pulses and low temperatures are present. Furthermore, the EP model provides improved predictions of post-injection spray structure and ignitability, while conventional spray modeling does not capture the increase of liquid penetration during the expansion stroke. Finally, the only important EP model calibration constant, Cliq, does not affect momentum transfer, but it changes the local charge cooling distribution through the local energy transfer, which makes it a candidate for additional research. The results confirm that non-equilibrium spray processes do not need to be resolved in engineering simulations of high-pressure diesel sprays.
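
For context, the workhorse of isothermal two-phase flash calculations in phase-equilibrium solvers of this kind is the Rachford-Rice condition (standard thermodynamics, not necessarily the exact form used in this solver), solved for the vapor fraction β given feed mole fractions z_i and equilibrium ratios K_i:

    \sum_i \frac{z_i\,(K_i - 1)}{1 + \beta\,(K_i - 1)} = 0.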

AMNESIA RADIUS VERSIONS OF CONDITIONAL POINT SAMPLING FOR RADIATION TRANSPORT IN 1D STOCHASTIC MEDIA

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Vu, Emily; Olson, Aaron

Conditional Point Sampling (CoPS) is a newly developed Monte Carlo method for computing radiation transport quantities in stochastic media. The algorithm involves a growing list of point-wise material designations during simulation, which causes potentially unbounded increases in memory and runtime, making the calculation of probability density functions (PDFs) computationally expensive. In this work, we adapt CoPS by omitting material points used in the computation from persistent storage if they are within a user-defined "amnesia radius" of neighboring material points already defined within a realization. We conduct numerical studies to investigate trade-offs between accuracy, required computer memory, and computation time. We demonstrate CoPS's ability to produce accurate mean leakage results and PDFs of leakage results while improving memory and runtime through use of an amnesia radius. We show that a non-zero amnesia radius imposes a limit on the required computer memory per cohort of histories and on the average runtime per history. We find that, for the benchmark set investigated, using an amnesia radius of r_a = 0.01 introduces minimal error (a 0.006 increase in CoPS3PO root mean squared relative error) while improving memory and runtime by an order of magnitude for a cohort size of 100.
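
The storage rule itself is simple; a one-dimensional sketch (our own illustration with hypothetical names, not the CoPS implementation):

    import bisect

    def maybe_store(points, x, r_a):
        """Store position x in the sorted list `points` only if no already
        remembered point lies within the amnesia radius r_a."""
        i = bisect.bisect_left(points, x)
        for j in (i - 1, i):                 # nearest stored neighbor on each side
            if 0 <= j < len(points) and abs(points[j] - x) < r_a:
                return False                 # too close: do not remember x
        points.insert(i, x)
        return True

Because the stored points keep a minimum spacing of r_a, the list length in any finite realization is bounded, which is the mechanism behind the memory and runtime limits reported above.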

USING DEEP NEURAL NETWORKS TO PREDICT MATERIAL TYPES IN CONDITIONAL POINT SAMPLING APPLIED TO MARKOVIAN MIXTURE MODELS

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Davis, Warren L.; Olson, Aaron; Popoola, Gabriel A.; Bolintineanu, Dan S.; Rodgers, Theron M.; Vu, Emily

Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1-D and 3-D calculations for binary mixtures with Markovian mixing statistics. In theory, CoPS has the capacity to be accurate for material structures beyond just those with Markovian statistics. However, realizing this capability will require development of conditional probability functions (CPFs) that are based, not on explicit Markovian properties, but rather on latent properties extracted from material structures. Here, we describe a first step towards extracting these properties by developing CPFs using deep neural networks (DNNs). Our new approach lays the groundwork for enabling accurate transport on many classes of stochastic media. We train DNNs on ternary stochastic media with Markovian mixing statistics and compare their CPF predictions to those made by existing CoPS CPFs, which are derived based on Markovian mixing properties. We find that the DNN CPF predictions usually outperform the existing approximate CPF predictions, but with wider variance. In addition, even when trained on only one material volume realization, the DNN CPFs are shown to make accurate predictions on other realizations that have the same internal mixing behavior. We show that it is possible to form a useful CoPS CPF by using a DNN to extract correlation properties from realizations of stochastically mixed media, thus establishing a foundation for creating CPFs for mixtures other than those with Markovian mixing, where it may not be possible to derive an accurate analytical CPF.

Response effects due to polygonal representation of pores in porous media thermal models

Proceedings of the 2021 ASME Verification and Validation Symposium, VVS 2021

Irick, Kevin W.; Fathi, Nima

Physics models (such as thermal, structural, and fluid models) of engineering systems often incorporate a geometric aspect such that the model resembles the shape of the true system that it represents. However, the physical domain of the model is only a geometric representation of the true system, where geometric features are often simplified for convenience in model construction and to avoid added computational expense in running simulations. The process of simplifying or neglecting different aspects of the system geometry is sometimes referred to as "defeaturing." Typically, modelers will choose to remove small features from the system model, such as fillets, holes, and fasteners. This simplification process can introduce inherent error into the computational model. A similar event can even take place when a computational mesh is generated, where smooth, curved features are represented by jagged, sharp geometries. The geometric representation and feature fidelity in a model can play a significant role in a corresponding simulation's computational solution. In this paper, a porous material system, represented by a single porous unit cell, is considered. The system of interest is a two-dimensional square cell with a centered circular pore, ranging in porosity from 1% to 78%. However, the circular pore was represented geometrically by a series of regular polygons with the number of sides ranging from 3 to 100. The system response quantity under investigation was the dimensionless effective thermal conductivity, k∗, of the porous unit cell. The results show significant change in the resulting k∗ value depending on the number of polygon sides used to represent the circular pore. In order to mitigate the convolution of discretization error with this type of model form error, a series of five systematically refined meshes was used for each pore representation. Using the finite element method (FEM), the heat equation was solved numerically across the porous unit cell domain. Code verification was performed using the Method of Manufactured Solutions (MMS) to assess the order of accuracy of the implemented FEM. Likewise, solution verification was performed to estimate the numerical uncertainty due to discretization in the problem of interest. Specifically, a modern grid convergence index (GCI) approach was employed to estimate the numerical uncertainty on the systematically refined meshes. The results of the analyses presented in this paper illustrate the importance of understanding the effects of geometric representation in engineering models and can help to predict some model form error introduced by the model geometry.
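
For reference, the classic Roache form of the GCI (the "modern" variant used in the paper may differ in detail): with fine- and coarse-grid solutions f_1 and f_2, grid refinement ratio r, observed order of accuracy p, and safety factor F_s,

    \mathrm{GCI}_{\mathrm{fine}} = \frac{F_s\,|\varepsilon|}{r^{p} - 1},
    \qquad \varepsilon = \frac{f_2 - f_1}{f_1}.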

Evaluating the Impact of Algorithm Confidence Ratings on Human Decision Making in Visual Search

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Jones, Aaron; Trumbo, Michael C.S.; Matzen, Laura E.; Stites, Mallory C.; Howell, Breannan C.; Divis, Kristin M.; Gastelum, Zoe N.

As the ability to collect and store data grows, so does the need to efficiently analyze that data. As human-machine teams that use machine learning (ML) algorithms to inform human decision-making grow in popularity, it becomes increasingly critical to understand the optimal methods of implementing algorithm-assisted search. In order to better understand how algorithm confidence values associated with object identification can influence participant accuracy and response times during a visual search task, we compared models that provided appropriate confidence, random confidence, and no confidence, as well as a model biased toward overconfidence and a model biased toward underconfidence. Results indicate that randomized confidence is likely harmful to performance, while non-random confidence values are likely better than no confidence value for maintaining accuracy over time. Providing participants with appropriate confidence values did not seem to benefit performance any more than providing participants with under- or overconfident models.

An asymptotically compatible approach for Neumann-type boundary condition on nonlocal problems

ESAIM: Mathematical Modelling and Numerical Analysis

You, Huaiqian; Lu, Xin Y.; Trask, Nathaniel A.; Yu, Yue

In this paper we consider 2D nonlocal diffusion models with a finite nonlocal horizon parameter δ characterizing the range of nonlocal interactions, and consider the treatment of Neumann-like boundary conditions that have proven challenging for discretizations of nonlocal models. We propose a new generalization of classical local Neumann conditions by converting the local flux to a correction term in the nonlocal model, which provides an estimate for the nonlocal interactions of each point with points outside the domain. While existing 2D nonlocal flux boundary conditions have been shown to exhibit at most first-order convergence to the local counterpart as δ → 0, the proposed Neumann-type boundary formulation recovers the local case as O(δ²) in the L∞(ω) norm, which is optimal considering the O(δ²) convergence of the nonlocal equation to its local limit away from the boundary. We analyze the application of this new boundary treatment to the nonlocal diffusion problem, and present conditions under which the solution of the nonlocal boundary value problem converges to the solution of the corresponding local Neumann problem as the horizon is reduced. To demonstrate the applicability of this nonlocal flux boundary condition to more complicated scenarios, we extend the approach to less regular domains, numerically verifying that we preserve second-order convergence for non-convex domains with corners. Based on the new formulation for the nonlocal boundary condition, we develop an asymptotically compatible meshfree discretization, obtaining a solution to the nonlocal diffusion equation with mixed boundary conditions that converges at O(δ²).

The Multiple Instance Learning Gaussian Process Probit Model

Proceedings of Machine Learning Research

Wang, Fulton; Pinar, Ali P.

In the Multiple Instance Learning (MIL) scenario, the training data consists of instances grouped into bags. Bag labels specify whether each bag contains at least one positive instance, but instance labels are not observed. Recently, Haußmann et al. [10] tackled the MIL instance label prediction task by introducing the Multiple Instance Learning Gaussian Process Logistic (MIL-GP-Logistic) model, an adaptation of the Gaussian Process Logistic Classification model that inherits its uncertainty quantification and flexibility. Notably, they give a fast mean-field variational inference procedure. However, due to their use of the logit link, they do not maximize the variational inference ELBO objective directly, but rather a lower bound on it. This approximation, as we show, hurts predictive performance. In this work, we propose the Multiple Instance Learning Gaussian Process Probit (MIL-GP-Probit) model, an adaptation of the Gaussian Process Probit Classification model to solve the MIL instance label prediction problem. Leveraging the analytical tractability of the probit link, we give a variational inference procedure based on variable augmentation that maximizes the ELBO objective directly. Applying it, we show MIL-GP-Probit is more calibrated than MIL-GP-Logistic on all 20 datasets of the benchmark 20 Newsgroups dataset collection, and achieves higher AUC than MIL-GP-Logistic on an additional 51 out of 59 datasets. Finally, we show how the probit formulation enables principled bag label predictions and a Gibbs sampling scheme. This is the first exact inference scheme for any Bayesian model for the MIL scenario.
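
For orientation, the probit link ties each latent instance value f_i to a label probability through the standard normal CDF Φ; under the usual MIL reading that a bag is positive when at least one of its instances is positive, a generic bag likelihood (our schematic, not necessarily the paper's exact formulation) is

    p(y_i = 1 \mid f_i) = \Phi(f_i), \qquad
    p(Y = 1 \mid f_1, \dots, f_n) = 1 - \prod_{i=1}^{n} \bigl(1 - \Phi(f_i)\bigr).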

HIERARCHICAL PARALLELISM FOR TRANSIENT SOLID MECHANICS SIMULATIONS

World Congress in Computational Mechanics and ECCOMAS Congress

Littlewood, David J.; Jones, Reese E.; Foulk, James W.; Plews, Julia A.; Hetmaniuk, Ulrich; Lifflander, Jonathan J.

Software development for high-performance scientific computing continues to evolve in response to increased parallelism and the advent of on-node accelerators, in particular GPUs. While these hardware advancements have the potential to significantly reduce turnaround times, they also present implementation and design challenges for engineering codes. We investigate the use of two strategies to mitigate these challenges: the Kokkos library for performance portability across disparate architectures, and the DARMA/vt library for asynchronous many-task scheduling. We investigate the application of Kokkos within the NimbleSM finite element code and the LAMÉ constitutive model library. We explore the performance of DARMA/vt applied to NimbleSM contact mechanics algorithms. Software engineering strategies are discussed, followed by performance analyses of relevant solid mechanics simulations which demonstrate the promise of Kokkos and DARMA/vt for accelerated engineering simulators.

Computing potential of the mean force profiles for ion permeation through channelrhodopsin Chimera, C1C2

Methods in Molecular Biology

Rempe, Susan; Priest, Chad; Vangordon, Monika R.; Rempe, Caroline; Stevens, Mark J.; Rick, Steve

Umbrella sampling, coupled with a weighted histogram analysis method (US-WHAM), can be used to construct potentials of mean force (PMFs) for studying the complex ion permeation pathways of membrane transport proteins. Despite the widespread use of US-WHAM, obtaining a physically meaningful PMF can be challenging. Here, we provide a protocol to resolve that issue. Then, we apply that protocol to compute a meaningful PMF for sodium ion permeation through channelrhodopsin chimera, C1C2, for illustration.

Etched-and-Regrown GaN pn-Diodes with 1600 V Blocking Voltage

IEEE Journal of the Electron Devices Society

Armstrong, Andrew A.; Allerman, A.A.; Pickrell, Gregory W.; Crawford, Mary H.; Glaser, Caleb E.; Smith, Trevor

Etched-and-regrown GaN pn-diodes capable of high breakdown voltage (1610 V), low reverse leakage current (1 nA, equal to 6 μA/cm², at 1250 V), excellent forward characteristics (ideality factor 1.6), and low specific on-resistance (1.1 mΩ·cm²) were realized by mitigating plasma etch-related defects at the regrown interface. Epitaxial n-GaN layers grown by metal-organic chemical vapor deposition on free-standing GaN substrates were etched using inductively coupled plasma (ICP) etching, and we demonstrate that a slow reactive ion etch (RIE) prior to p-GaN regrowth dramatically increases diode electrical performance compared to wet chemical surface treatments. Etched-and-regrown diodes without a junction termination extension (JTE) were characterized to compare diode performance using the post-ICP RIE method with prior studies of other post-ICP treatments. Then, etched-and-regrown diodes using the post-ICP RIE etch steps prior to regrowth were fabricated with a multi-step JTE to demonstrate kV-class operation.

Simultaneous 10 kHz three-dimensional CH2O and tomographic PIV measurements in a lifted partially-premixed jet flame

Proceedings of the Combustion Institute

Zhou, Bo; Li, Tao; Frank, Jonathan H.; Dreizler, Andreas; Bohm, Benjamin

High-speed, three-dimensional (3D) scalar-velocity field measurements were demonstrated in a lifted partially-premixed dimethyl-ether/air jet flame using simultaneous laser-induced fluorescence (LIF) of formaldehyde and tomographic particle image velocimetry (TPIV). The 3D LIF measurements were conducted by raster scanning the laser beam from a 100 kHz pulse-burst laser across the probe volume using an acousto-optic deflector. The volumetric reconstruction of the LIF signal from ten parallel planes provides quasi-instantaneous 3D LIF measurements that are synchronized with 10 kHz TPIV measurements. The temporally resolved formaldehyde-LIF and velocity field data were employed to analyze Lagrangian particle trajectories and displacement speeds at the base of the lifted flame. The particle trajectories revealed flow structures that are difficult to observe in an Eulerian reference frame. Positive and negative displacement speeds were observed at the formaldehyde-LIF surfaces at the inner and outer regions of the jet flame with a maximum displacement speed of approximately eight times the laminar flame speed of a stoichiometric dimethyl-ether/air mixture.

Spatio-Temporal Progression of Two-Stage Autoignition for Diesel Sprays in a Low-Reactivity Ambient: N-Heptane Pilot-Ignited Premixed Natural Gas

SAE Technical Papers

Rajasegar, Rajavasanth; Niki, Yoichi; Garcia-Oliver, Jose M.; Li, Zheming; Musculus, Mark P.B.

The spatial and temporal locations of autoignition depend on fuel chemistry and the temperature, pressure, and mixing trajectories in the fuel jets. Dual-fuel systems can provide insight into fuel-chemistry aspects through variation of the proportions of fuels with different reactivities, and engine operating condition variations can provide information on physical effects. In this context, the spatial and temporal progression of two-stage autoignition of a diesel-fuel surrogate, n-heptane, in a lean-premixed charge of synthetic natural gas (NG) and air is imaged in an optically accessible heavy-duty diesel engine. The lean-premixed charge of NG is prepared by fumigation upstream of the engine intake manifold. Optical diagnostics include: infrared (IR) imaging for quantifying both the in-cylinder NG concentration and the pilot-jet penetration rate and spreading angle, high-speed cool-flame chemiluminescence imaging as an indicator of low-temperature heat release (LTHR), and high-speed OH* chemiluminescence imaging as an indicator of high-temperature heat release (HTHR). To aid interpretation of the experimental observations, zero-dimensional chemical kinetics simulations provide further understanding of the underlying interplay between the physical and chemical processes of mixing (pilot fuel-jet entrainment) and autoignition (two-stage ignition chemistry). Increasing the premixed NG concentration prolongs the ignition delay of the pilot fuel and increases the combustion duration. Due to the relatively short pilot-fuel injections utilized, the transient increase in entrainment near the end of injection (the entrainment wave) plays an important role in mixing. To achieve desired combustion characteristics, i.e., ignition and combustion timing (e.g., for combustion phasing) and location (e.g., for reducing wall heat transfer or tailoring charge stratification), injection parameters can be suitably selected to yield the necessary mixing trajectories that potentially help offset changes in fuel ignition chemistry, which could be a valuable tool for combustion design.

Near-Surface Imaging of the Multicomponent Gas Phase above a Silver Catalyst during Partial Oxidation of Methanol

ACS Catalysis

Zhou, Bo; Huang, Erxiong; Almeida, Raybel; Gurses, Sadi; Ungar, Alexander; Zetterberg, Johan; Kulkarni, Ambarish; Kronawitter, Coleman X.; Osborn, David L.; Hansen, Nils; Frank, Jonathan H.

Fundamental chemistry in heterogeneous catalysis is increasingly explored using operando techniques in order to address the pressure gap between ultrahigh vacuum studies and practical operating pressures. Because most operando experiments focus on the surface and surface-bound species, there is a knowledge gap concerning the near-surface gas phase and the fundamental information that the properties of this region convey about catalytic mechanisms. We demonstrate in situ visualization and measurement of gas-phase species and temperature distributions in operando catalysis experiments using complementary near-surface optical and mass spectrometry techniques. The partial oxidation of methanol over a silver catalyst demonstrates the value of these diagnostic techniques at 600 Torr (800 mbar) pressure and temperatures from 150 to 410 °C. Planar laser-induced fluorescence provides two-dimensional images of the formaldehyde product distribution that show the development of the boundary layer above the catalyst under different flow conditions. Raman scattering imaging provides measurements of a wide range of major species, such as methanol, oxygen, nitrogen, formaldehyde, and water vapor. Near-surface molecular beam mass spectrometry enables simultaneous detection of all species using a gas sampling probe. Detection of gas-phase free radicals, such as CH3 and CH3O, and of minor products, such as acetaldehyde, dimethyl ether, and methyl formate, provides insights into catalytic mechanisms of the partial oxidation of methanol. The combination of these techniques provides a detailed picture of the coupling between the gas phase and surface in heterogeneous catalysis and enables parametric studies under different operating conditions, which will enhance our ability to constrain microkinetic models of heterogeneous catalysis.

EMPIRE-PIC: A performance portable unstructured particle-in-cell code

Communications in Computational Physics

Bettencourt, Matthew T.; Brown, Dominic A.S.; Cartwright, Keith L.; Cyr, Eric C.; Glusa, Christian; Lin, Paul T.; Moore, Stan G.; Mcgregor, Duncan A.O.; Pawlowski, Roger; Phillips, Edward; Roberts, Nathan V.; Wright, Steven A.; Maheswaran, Satheesh; Jones, John P.; Jarvis, Stephen A.

In this paper we introduce EMPIRE-PIC, a finite element method particle-in-cell (FEM-PIC) application developed at Sandia National Laboratories. The code has been developed in C++ using the Trilinos library and the Kokkos Performance Portability Framework to enable running on multiple modern compute architectures while only requiring maintenance of a single codebase. EMPIRE-PIC is capable of solving both electrostatic and electromagnetic problems in two and three dimensions to second-order accuracy in space and time. In this paper we validate the code against three benchmark problems: a simple electron orbit, an electrostatic Langmuir wave, and a transverse electromagnetic wave propagating through a plasma. We demonstrate the performance of EMPIRE-PIC on four different architectures: Intel Haswell CPUs, Intel's Xeon Phi Knights Landing, Arm ThunderX2 CPUs, and NVIDIA Tesla V100 GPUs attached to IBM POWER9 processors. This analysis demonstrates scalability of the code up to more than two thousand GPUs, and greater than one hundred thousand CPUs.

Photothermal alternative to device fabrication using atomic precision advanced manufacturing techniques

Journal of Micro/Nanopatterning, Materials and Metrology

Katzenmeyer, Aaron M.; Dmitrovic, Sanja; Baczewski, Andrew D.; Campbell, Quinn; Bussmann, Ezra; Lu, Tzu M.; Anderson, Evan M.; Schmucker, Scott W.; Ivie, Jeffrey A.; Campbell, Deanna M.; Ward, Daniel R.; Scrymgeour, David; Wang, George T.; Misra, Shashank

The attachment of dopant precursor molecules to depassivated areas of hydrogen-terminated silicon templated with a scanning tunneling microscope (STM) has been used to create electronic devices with subnanometer precision, typically for quantum physics experiments. This process, which we call atomic precision advanced manufacturing (APAM), dopes silicon beyond the solid-solubility limit and produces electrical and optical characteristics that may also be useful for microelectronic and plasmonic applications. However, scanned probe lithography lacks the throughput required to develop more sophisticated applications. Here, we demonstrate and characterize an APAM device workflow where scanned probe lithography of the atomic layer resist has been replaced by photolithography. An ultraviolet laser is shown to locally and controllably heat silicon above the temperature required for hydrogen depassivation on a nanosecond timescale, a process resistant to under- and overexposure. STM images indicate a narrow range of energy density where the surface is both depassivated and undamaged. Modeling that accounts for photothermal heating and the subsequent hydrogen desorption kinetics suggests that the silicon surface temperatures reached in our patterning process exceed those required for hydrogen removal in temperature-programmed desorption experiments. A phosphorus-doped van der Pauw structure made by sequentially photodepassivating a predefined area and then exposing it to phosphine is found to have a similar mobility and higher carrier density compared with devices patterned by STM. Lastly, it is also demonstrated that photodepassivation and precursor exposure steps may be performed concomitantly, a potential route to enabling APAM outside of ultrahigh vacuum.

A second benchmarking exercise on estimating extreme environmental conditions: Methodology & baseline results

Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE

Mackay, Ed; Haselsteiner, Andreas F.; Coe, Ryan G.; Manuel, Lance

Estimating extreme environmental conditions remains a key challenge in the design of offshore structures. This paper describes an exercise for benchmarking methods for estimating extreme environmental conditions, which follows on from an initial benchmarking exercise introduced at OMAE 2019. In this second exercise, we address the problem of estimating extreme metocean conditions in a variable and changing climate. The study makes use of several very long datasets from a global climate model, including a 165-year historical run, a 700-year pre-industrial control run, which represents a quasi-steady-state climate, and several runs under various future emissions scenarios. The availability of the long datasets allows for an in-depth analysis of the uncertainties in the estimated extreme conditions and an attribution of the relative importance of uncertainties resulting from modelling choices, natural climate variability, and potential future changes to the climate. This paper outlines the methodology for the second collaborative benchmarking exercise and presents baseline results for the selected datasets.

Deep learning denoising applied to regional distance seismic data in Utah

Bulletin of the Seismological Society of America

Tibi, Rigobert; Hammond, Patrick; Brogan, Ronald; Young, Christopher J.; Koper, Keith

Seismic waveform data are generally contaminated by noise from various sources. Suppressing this noise effectively so that the remaining signal of interest can be successfully exploited remains a fundamental problem for the seismological community. To date, the most common noise suppression methods have been based on frequency filtering. These methods, however, are less effective when the signal of interest and noise share similar frequency bands. Inspired by source separation studies in the field of music information retrieval (Jansson et al., 2017) and a recent study in seismology (Zhu et al., 2019), we implemented a seismic denoising method that uses a trained deep convolutional neural network (CNN) model to decompose an input waveform into a signal of interest and noise. In our approach, the CNN provides a signal mask and a noise mask for an input signal. The short-time Fourier transform (STFT) of the estimated signal is obtained by multiplying the signal mask with the STFT of the input signal. To build and test the denoiser, we used carefully compiled signal and noise datasets of seismograms recorded by the University of Utah Seismograph Stations network. Results of test runs involving more than 9000 constructed waveforms suggest that on average the denoiser improves the signal-to-noise ratios (SNRs) by ∼5 dB, and that most of the recovered signal waveforms have high similarity with respect to the target waveforms (average correlation coefficient of ∼0.80) and suffer little distortion. Application to real data suggests that our denoiser achieves on average a factor of up to ∼2–5 improvement in SNR over band-pass filtering and can suppress many types of noise that band-pass filtering cannot. For individual waveforms, the improvement can be as high as ∼15 dB.
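
The masking step is compact to express; a minimal sketch, where the `model` callable is a hypothetical stand-in for the trained CNN and the sampling rate and window length are assumptions:

    import numpy as np
    from scipy.signal import stft, istft

    def denoise(waveform, model, fs=40.0, nperseg=256):
        """Mask-based denoising of one seismogram: `model` maps the input
        STFT magnitude to a signal mask in [0, 1] on the same grid."""
        f, t, Z = stft(waveform, fs=fs, nperseg=nperseg)
        signal_mask = model(np.abs(Z))          # CNN-predicted signal mask
        _, clean = istft(signal_mask * Z, fs=fs, nperseg=nperseg)
        return clean

Because the mask multiplies the complex STFT, the original phase is retained, which is consistent with the low waveform distortion reported above.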

A Case Study on Pathogen Transport, Deposition, Evaporation and Transmission: Linking High-Fidelity Computational Fluid Dynamics Simulations to Probability of Infection

International Journal of Computational Fluid Dynamics

Domino, Stefan P.

A high-fidelity, low-Mach computational fluid dynamics simulation tool that includes evaporating droplets and variable-density turbulent flow coupling is well-suited to ascertain transmission probability and supports risk mitigation methods development for airborne infectious diseases such as COVID-19. A multi-physics large-eddy simulation-based paradigm is used to explore droplet and aerosol pathogen transport from a synthetic cough emanating from a kneeling humanoid. For an outdoor configuration that mimics the recent open-space social distance strategy of San Francisco, maximum primary droplet deposition distances are shown to approach 8.1 m in a moderate wind configuration with the aerosol plume transported in excess of 15 m. In quiescent conditions, the aerosol plume extends to approximately 4 m before the emanating pulsed jet becomes neutrally buoyant. A dose–response model, which is based on previous SARS coronavirus (SARS-CoV) data, is exercised on the high-fidelity aerosol transport database to establish relative risk at eighteen virtual receptor probe locations.
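
The final step typically uses an exponential dose-response form, written generically here (the paper's SARS-CoV-based parameterization is not reproduced): for an inhaled dose d at a receptor and a pathogen-specific infectivity constant k,

    P_{\mathrm{infection}} = 1 - e^{-d/k},

so the transported aerosol concentrations from the simulation feed directly into a relative-risk estimate at each virtual receptor probe.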

On the use of MBPE to mitigate corrupted data in radar applications

Proceedings of SPIE - The International Society for Optical Engineering

Maio, Brianna; Dawood, Muhammed; Loui, Hung

An algorithm is developed based on Edmund K. Miller's Model-Based Parameter Estimation (MBPE) technique to mitigate the effects of missing or corrupted data in random regions of wideband linear frequency modulated (LFM) radar signals. Two methods of applying MBPE in the spectral/frequency domain are presented that operate on either the full complex data or separated magnitude/phase data, respectively. The final algorithm iteratively applies MBPE using the latter approach to regenerate results in the corrupted regions of a windowed LFM signal until the difference is minimized relative to uncorrupted data. Several sets of simulations were conducted across many randomized gap parameters, and the impacts on the impulse response (IPR) are summarized. Conditions where the algorithm successfully improved the IPR for a single target are provided. The algorithm's effectiveness on multiple targets, especially when the corrupted regions are relatively large compared to the overall bandwidth of the signal, is also explored.

Greedy Fiedler spectral partitioning for data-driven discrete exterior calculus

CEUR Workshop Proceedings

Huang, Andy; Trask, Nathaniel A.; Brissette, Christopher; Hu, Xiaozhe

The data-driven discrete exterior calculus (DDEC) structure provides a novel machine learning architecture for discovering structure-preserving models which govern data, allowing, for example, machine learning of reduced-order models for complex continuum-scale physical systems. In this work, we present a Greedy Fiedler Spectral (GFS) partitioning method to obtain a chain complex structure to support DDEC models, incorporating synthetic data obtained from high-fidelity solutions to partial differential equations. We provide justification for the effectiveness of the resulting chain complex and demonstrate a DDEC model trained on it for Darcy flow on a heterogeneous domain.
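
The core spectral step is a single Fiedler-vector bisection; a minimal SciPy sketch of the standard technique (our illustration, not the GFS code itself):

    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from scipy.sparse.linalg import eigsh

    def fiedler_bisect(adjacency):
        """Split a graph in two by the sign of its Fiedler vector, i.e. the
        eigenvector of the graph Laplacian with the second-smallest
        eigenvalue (adequate for modest graph sizes)."""
        L = laplacian(adjacency.astype(float))
        vals, vecs = eigsh(L, k=2, which='SM')
        fiedler = vecs[:, np.argsort(vals)[1]]
        return fiedler >= 0                    # boolean partition labels

A greedy variant would apply such bisections repeatedly, choosing at each step the cut that best serves the target chain complex structure.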

Phenomenology-informed techniques for machine learning with measured and synthetic SAR imagery

Proceedings of SPIE - The International Society for Optical Engineering

Walker, Christopher; Foulk, James W.; Erteza, Ireena; Bray, Brian

Phenomenology-Informed (PI) Machine Learning is introduced to address the unique challenges faced when applying modern machine-learning object recognition techniques to the SAR domain. PI-ML includes a collection of data normalization and augmentation techniques inspired by successful SAR ATR algorithms designed to bridge the gap between simulated and real-world SAR data for use in training Convolutional Neural Networks (CNNs) that perform well in the low-noise, feature-dense space of camera-based imagery. The efficacy of PI-ML will be evaluated using ResNet, EfficientNet, and other networks, using both traditional training techniques and all-SAR transfer learning.

Postprocessing techniques for gradient percolation predictions on the square lattice

Physical Review E

Tencer, John T.; Forsberg, Kelsey M.

In this work, we revisit the classic problem of site percolation on a regular square lattice. In particular, we investigate the effect of quantization bias errors on percolation threshold predictions for large probability gradients and propose a mitigation strategy. We demonstrate through extensive computational experiments that the assumption of a linear relationship between probability gradient and percolation threshold used in previous investigations is invalid. Moreover, we demonstrate that, due to skewness in the distribution of occupation probabilities visited, the average does not converge monotonically to the true percolation threshold. We identify several alternative metrics which do exhibit monotonic (albeit not linear) convergence and document their observed convergence rates.
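
For readers new to the setting, a single site-percolation realization with a spanning check takes only a few lines (a minimal sketch of the standard construction, not the authors' gradient-percolation code):

    import numpy as np
    from scipy.ndimage import label

    def spans(p, n, rng):
        """One site-percolation realization on an n-by-n square lattice:
        True if an occupied (4-connected) cluster joins top and bottom rows."""
        occupied = rng.random((n, n)) < p
        labels, _ = label(occupied)
        common = np.intersect1d(labels[0], labels[-1])
        return bool((common > 0).any())

    rng = np.random.default_rng(0)
    # Spanning fraction near the known threshold p_c ≈ 0.5927
    print(np.mean([spans(0.59, 128, rng) for _ in range(100)]))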

Exploring characteristics of neural network architecture computation for enabling SAR ATR

Proceedings of SPIE - The International Society for Optical Engineering

Melzer, Ryan; Severa, William M.; Plagge, Mark; Vineyard, Craig M.

Neural network approaches have periodically been explored in the pursuit of high-performing SAR ATR solutions. With deep neural networks (DNNs) now offering many state-of-the-art solutions to computer vision tasks, neural networks are once again being revisited for ATR processing. Here, we characterize and explore a suite of neural network architectural topologies. In doing so, we assess how different architectural approaches impact performance and consider the associated computational costs. This includes characterizing network depth, width, scale, and connectivity patterns, as well as convolution layer optimizations. We have explored a suite of architectural topologies applied to both the canonical MSTAR dataset and the more operationally realistic Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset. The latter pairs high-fidelity computational models of targets with actual measured SAR data. Effectively, this dataset offers the ability to train a DNN on simulated data and test the network performance on measured data. Not only does our in-depth architecture topology analysis offer insight into how different architectural approaches impact performance, but we have also trained DNNs attaining state-of-the-art performance on both datasets. Furthermore, beyond just accuracy, we also assess how efficiently an accelerator architecture executes these neural networks. Specifically, using an analytical assessment tool, we forecast energy and latency for an edge-TPU-like architecture. Taken together, this tradespace exploration offers insight into the interplay of accuracy, energy, and latency for executing these networks.

Development of Quantum Interconnects (QuICs) for Next-Generation Information Technologies

PRX Quantum

Davids, Paul

Just as "classical"information technology rests on a foundation built of interconnected information-processing systems, quantum information technology (QIT) must do the same. A critical component of such systems is the "interconnect,"a device or process that allows transfer of information between disparate physical media, for example, semiconductor electronics, individual atoms, light pulses in optical fiber, or microwave fields. While interconnects have been well engineered for decades in the realm of classical information technology, quantum interconnects (QuICs) present special challenges, as they must allow the transfer of fragile quantum states between different physical parts or degrees of freedom of the system. The diversity of QIT platforms (superconducting, atomic, solid-state color center, optical, etc.) that will form a "quantum internet"poses additional challenges. As quantum systems scale to larger size, the quantum interconnect bottleneck is imminent, and is emerging as a grand challenge for QIT. For these reasons, it is the position of the community represented by participants of the NSF workshop on "Quantum Interconnects"that accelerating QuIC research is crucial for sustained development of a national quantum science and technology program. Given the diversity of QIT platforms, materials used, applications, and infrastructure required, a convergent research program including partnership between academia, industry, and national laboratories is required.

Dynamic Programming Method to Optimally Select Power Distribution System Reliability Upgrades

IEEE Open Access Journal of Power and Energy

Raja, S.; Arguello, Bryan; Pierre, Brian J.

This paper presents a novel dynamic programming (DP) technique for the determination of optimal investment decisions to improve power distribution system reliability metrics. This model is designed to select the optimal small-scale investments to protect an electrical distribution system from disruptions. The objective is to minimize distribution system reliability metrics: System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI). The primary input to this optimization model is years of recent utility historical outage data. The DP optimization technique is compared and validated against an equivalent mixed integer linear program (MILP). Through testing on synthetic and real datasets, both approaches are verified to yield equally optimal solutions. Efficiency profiles of each approach indicate that the DP algorithm is more efficient when considering wide budget ranges or a larger outage history, while the MILP model more efficiently handles larger distribution systems. The model is tested with utility data from a distribution system operator in the U.S. Results demonstrate a significant improvement in SAIDI and SAIFI metrics with the optimal small-scale investments.
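
The budget-constrained selection at the heart of such a model resembles a 0/1 knapsack, which DP solves exactly; the sketch below is a toy illustration with hypothetical costs and per-upgrade SAIDI reductions, not the paper's joint SAIDI/SAIFI formulation:

    def best_upgrades(candidates, budget):
        """candidates: list of (integer cost, expected SAIDI reduction).
        Returns the maximum total SAIDI reduction achievable within budget."""
        best = [0.0] * (budget + 1)
        for cost, benefit in candidates:
            for b in range(budget, cost - 1, -1):   # reverse: each upgrade used once
                best[b] = max(best[b], best[b - cost] + benefit)
        return best[budget]

    print(best_upgrades([(3, 1.2), (5, 2.0), (4, 1.6)], 8))  # -> 3.2

One pass of this table also yields the solution for every budget up to the maximum, which is consistent with the observation above that the DP is efficient across wide budget ranges.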

Modelling Airborne Transmission and Ventilation Impacts of a COVID-19 Outbreak in a Restaurant in Guangzhou, China

International Journal of Computational Fluid Dynamics

Ho, Clifford K.

Computational fluid dynamics (CFD) modelling was performed to simulate spatial and temporal airborne pathogen concentrations during an observed COVID-19 outbreak in a restaurant in Guangzhou, China. The reported seating configuration, overlap durations, room ventilation, layout, and dimensions were modelled in the CFD simulations to determine relative exposures and probabilities of infection. Results showed that the trends in the simulated probabilities of infection were consistent with the observed rates of infection at each of the tables surrounding the index patient. Alternative configurations that investigated different boundary conditions and ventilation conditions were also simulated. Increasing the fresh-air percentage to 10%, 50%, and 100% of the supply air reduced the accumulated pathogen mass in the room by an average of ∼30%, ∼70%, and ∼80%, respectively, over 73 min. The probability of infection was reduced by ∼10%, 40%, and 50%, respectively. Highlights:
- Computational fluid dynamics (CFD) models used to simulate pathogen concentrations
- Infection model developed using spatial and temporal CFD results
- Simulating spatial variability was important to match observed infection rates
- Recirculation increased exposures and probability of infection
- Increased fresh-air ventilation decreased exposures and probability of infection

More Details

Mechanical Response of Castlegate Sandstone under Hydrostatic Cyclic Loading

Geofluids

Kibikas, William M.; Bauer, Stephen J.

The stress history of rocks in the subsurface affects their mechanical and petrophysical properties. Rocks can often experience repeated cycles of loading and unloading due to fluid pressure fluctuations, which will lead to different mechanical behavior from static conditions. This is of importance for several geophysical and industrial applications, for example, wastewater injection and reservoir storage wells, which generate repeated stress perturbations. Laboratory experiments were conducted with Castlegate sandstone to observe the effects of different cyclic pressure loading conditions on a common reservoir analogue. Each sample was hydrostatically loaded in a triaxial cell to a low effective confining pressure, and either pore pressure or confining pressure was cycled at different rates over the course of a few weeks. Fluid permeability was measured during initial loading and periodically between stress cycles. Samples that undergo cyclic loading experience significantly more inelastic (nonrecoverable) strain compared to samples tested without cyclic hydrostatic loading. Permeability decreases rapidly for all tests during the first few days of testing, but the decrease and variability of permeability after this depend upon the loading conditions of each test. Cycling conditions do affect the mechanical behavior; the elastic moduli decrease with the increasing loading rate and stress cycling. The degree of volumetric strain induced by stress cycles is the major control on permeability change in the sandstones, with less compaction leading to more variation from measurement to measurement. The data indicate that cyclic loading degrades permeability and porosity more than static conditions over a similar period, but the petrophysical properties are dictated more by the hydrostatic loading rate rather than the total length of time stress cycling is imposed.

More Details

Assessment of particle candidates for falling particle receiver applications through irradiance and thermal cycling

Proceedings of the ASME 2021 15th International Conference on Energy Sustainability, ES 2021

Schroeder, Nathaniel R.; Albrecht, Kevin

Falling particle receiver (FPR) systems are a rapidly developing technology for concentrating solar power applications. Solid particles are used as both the heat transfer fluid and the system thermal energy storage media. Through direct irradiation of the solid particles, the flux and temperature limitations of tube-bundle receivers can be overcome, leading to higher operating temperatures and energy conversion efficiencies. Candidate particles for FPR systems must be resistant to changes in optical properties during long-term exposure to high temperatures and thermal cycling under highly concentrated solar irradiance. Five candidate particles were tested using simulated solar flux cycling and tube furnace thermal aging: CARBOBEAD HSP 40/70, CARBOBEAD CP 40/100, and three novel particles, CARBOBEAD MAX HD 35, CARBOBEAD HD 350, and WanLi Diamond Black. Each particle candidate was exposed for 10,000 cycles (simulating the exposure of a 30-year lifetime) using a shutter to attenuate the solar simulator flux. Feedback from a pyrometer temperature measurement of the irradiated particle surface was used to control the maximum temperatures of 775 °C and 975 °C. Particle solar-weighted absorptivity and emissivity were measured at 2,000-cycle intervals. Particle thermal degradation was also studied by heating particles to 800 °C, 900 °C, and 1000 °C for 300 hours in a tube furnace purged with bottled unpurified air, with particle absorptivity and emissivity measured at 100-hour intervals. Measurements taken after irradiance cycling and thermal aging were compared to measurements taken from as-received particles. WanLi Diamond Black particles had the highest initial solar-weighted absorptance, 96%, but degraded by up to 4% in irradiance cycling and 6% in thermal aging. CARBOBEAD HSP 40/70 particles, currently in use in the prototype FPR at the National Solar Thermal Test Facility, had an initial solar absorptance of 95%, with up to a 1% drop after irradiance cycling and a 4% drop after 1000 °C thermal aging.

More Details

Terrestrial heat repository for months of storage (THERMS): A novel radial thermocline system

Proceedings of the ASME 2021 15th International Conference on Energy Sustainability, ES 2021

Ho, Clifford K.; Gerstle, Walter; Christodoulou, Athena

This paper describes a terrestrial thermocline storage system composed of inexpensive rock, gravel, and/or sand-like materials to store high-temperature heat for days to months. The present system seeks to overcome past challenges of thermocline storage (cost and performance) by utilizing a confined radial-based thermocline storage system that can better control the flow and temperature distribution in a bed of porous materials with one or more layers or zones of different particle sizes, materials, and injection/extraction wells. Air is used as the heat-transfer fluid, and the storage bed can be heated or "trickle charged" by flowing hot air through multiple wells during periods of low electricity demand using electrical heating or heat from a solar thermal plant. This terrestrial-based storage system can provide low-cost, large-capacity energy storage for both high- (∼400-800°C) and low- (∼100-400°C) temperature applications. Bench-scale experiments were conducted, and computational fluid dynamics (CFD) simulations were performed to verify models and improve understanding of relevant features and processes that impact the performance of the radial thermocline storage system. Sensitivity studies were performed using the CFD model to investigate the impact of the air flow rate, porosity, particle thermal conductivity, and air-to-particle heat-transfer coefficient on temperature profiles. A preliminary technoeconomic analysis was also performed to estimate the levelized cost of storage for different storage durations and discharging scenarios.

More Details

Development and validation of radiant heat systems to test RAM packages under non-uniform thermal environments

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Mendoza, Hector; Gill, Walter; Figueroa Faria, Victor G.; Sanborn, Scott E.

Certification of radioactive material (RAM) packages for storage and transportation requires multiple tiers of testing that simulate accident conditions in order to assure safety. One of these key testing aspects focuses on container response to thermal insults when a package includes materials that decompose, combust, or change phase between -40 °C and 800 °C. Thermal insult for RAM packages during testing can be imposed from a direct pool fire, but it can also be imposed using a furnace or a radiant heat system. Depending on variables such as scale, heating rates, desired environment, intended diagnostics, and cost, each of the methods possesses its own advantages and disadvantages. While a direct fire can be the closest method to represent a plausible insult, incorporating comprehensive diagnostics in a controlled fire test can pose various challenges due to the nature of a fire. Radiant heat setups can instead be used to impose a comparable heat flux on a test specimen in a controlled manner that allows more comprehensive diagnostics. With radiant heat setups, however, challenges can arise when attempting to impose desired non-uniform heat fluxes that would account for specimen orientation and position in a simulated accident scenario. This work describes the development, implementation, and validation of a series of techniques used by Sandia National Laboratories to create prescribed non-uniform thermal environments using radiant heat sources for RAM packages as large as a 55-gallon drum.

More Details

Detection and localization of objects hidden in fog

Proceedings of SPIE - The International Society for Optical Engineering

Bentz, Brian Z.; Foulk, James W.; Glen, Andrew G.; Pattyn, Christian A.; Redman, Brian J.; Martinez-Sanchez, Andres M.; Westlake, Karl; Hastings, Ryan L.; Webb, Kevin J.; Wright, Jeremy B.

Degraded visual environments like fog pose a major challenge to safety and security because light is scattered by tiny particles. We show that by interpreting the scattered light it is possible to detect, localize, and characterize objects normally hidden in fog. First, a computationally efficient light transport model is presented that accounts for the light reflected and blocked by an opaque object. Then, statistical detection is demonstrated for a specified false alarm rate using the Neyman-Pearson lemma. Finally, object localization and characterization are implemented using the maximum likelihood estimate. These capabilities are being tested at the Sandia National Laboratories Fog Chamber Facility.
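
As background on the detection step, a Neyman-Pearson test thresholds a likelihood ratio at a level fixed by the specified false alarm rate. The sketch below shows the idea for a scalar Gaussian measurement; the distributions are hypothetical stand-ins for the paper's light transport model:

import numpy as np
from scipy.stats import norm

# Minimal Neyman-Pearson detection sketch (not the paper's model): declare
# "object present" when the measurement exceeds a threshold chosen so the
# false-alarm rate under H0 equals the specified value.
rng = np.random.default_rng(0)
mu0, mu1, sigma = 0.0, 1.0, 1.0      # hypothetical means under H0 / H1
pfa = 0.01                           # specified false-alarm rate

# For equal-variance Gaussians the likelihood-ratio test reduces to
# thresholding the measurement; the threshold comes from the H0 tail.
tau = norm.ppf(1 - pfa, loc=mu0, scale=sigma)

x = rng.normal(mu1, sigma, 10000)    # samples with an object present (H1)
print("threshold:", tau)
print("empirical detection probability:", np.mean(x > tau))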

More Details

30 cm horizontal drop of a surrogate 17x17 PWR fuel assembly

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Kalinina, Elena A.; Ammerman, Douglas; Grey, Carissa A.; Flores, Gregg; Lujan, Lucas; Saltzstein, Sylvia J.; Michel, Danielle

The 30 cm drop is the remaining NRC normal conditions of transport (NCT) regulatory requirement (10 CFR 71.71) for which there are no data on the response of spent fuel. While obtaining data on the spent fuel is not a direct requirement, it allows for quantifying the risk of fuel breakage resulting from a cask drop from a height of 30 cm or less. Because a full-scale cask and impact limiters are very expensive, three consecutive drop tests were conducted to obtain strains on a full-scale surrogate 17x17 PWR assembly. The first step was a 30 cm drop of a 1/3-scale cask loaded with dummy assemblies. The second step was a 30 cm drop test of a full-scale dummy assembly. The third step was a 30 cm drop of a full-scale surrogate assembly. The results of this final test are presented in this paper. The test was conducted in May 2020. The acceleration pulses on the surrogate assembly were in good agreement with the expected pulses derived from steps 1 and 2. This confirmed that during the 30 cm drop the surrogate assembly experienced the same conditions as it would have if it had been dropped in a full-scale cask with impact limiters. The surrogate assembly was instrumented with 27 strain gauges. Pressure paper was inserted between the rods within the two long and two short spacer grid spans in order to register the pressure in case of rod-to-rod contact. The maximum observed peak strain on the surrogate assembly was 1,724 microstrain at the bottom end of the assembly. The pressure paper sheets from the two short spans were blank. The pressure paper sheets from the two long spans, except a few middle ones, showed marks indicating rod-to-rod contact. The maximum estimated contact pressure was 4,100 psi. The longitudinal bending stress corresponding to the maximum observed strain value (calculated from the stress-strain curve for low burnup cladding) was 22,230 psi. Both values are significantly below the yield strength of the cladding. The major conclusion is that the fuel rods will maintain their integrity following a 30 cm drop inside a transportation cask.

More Details

EXPLORING VITAL AREA IDENTIFICATION USING SYSTEMS-THEORETIC PROCESS ANALYSIS

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Sandt, Emily; Clark, Andrew J.; Williams, Adam D.; Cohn, Brian; Osborn, Douglas; Aldemir, Tunc

Vital Area Identification (VAI) is an important element in securing nuclear facilities, including the range of recently proposed advanced reactors (ARs). As ARs continue to develop and progress toward licensure, it will be necessary to ensure that safety analysis methods are compatible with the new reactor designs. These reactors feature inherently passive safety systems that drastically reduce the number of active components whose failures need to be considered as basic events in a Level 1 probabilistic risk assessment (PRA). Instead, ARs rely on natural processes for their safety, which may be difficult to capture through the use of fault trees (FTs), making it subsequently difficult to determine the effects of lost equipment when completing a traditional VAI analysis. Traditional VAI methodology incorporates FTs from Level 1 PRA as a substantial portion of the effort to identify candidate vital area sets. The outcome of VAI is a selected set of areas deemed vital, which must be protected in order to prevent radiological sabotage. An alternative methodology is proposed to inform the VAI process and the selection of vital areas: Systems-Theoretic Process Analysis (STPA). STPA is a systems-based, top-down approach that analyzes a system as a hierarchical control structure composed of components (both those that are controlled and their controllers) and control actions taken by or imposed upon those components. The control structure is then analyzed based on several situational parameters, including a time component, to produce a list of scenarios that may lead to system losses. A case study is presented to demonstrate how STPA can be used to inform VAI for ARs.

More Details

INTEGRATED SAFETY AND SECURITY ANALYSIS OF NUCLEAR POWER PLANTS USING DYNAMIC EVENT TREES

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Cohn, Brian; Haskin, Troy C.; Noel, Todd; Cardoni, Jeffrey; Osborn, Douglas; Aldemir, Tunc

Nuclear security relies on the method of vital area identification (VAI) to identify the sabotage target locations within a nuclear power plant (NPP) that need to be protected. The VAI methodology uses fault trees (FTs) and event trees (ETs) to identify locations in the NPP that contain vital systems, structures, or components. However, the traditional FT/ET process cannot fully capture the dynamics that occur following NPP sabotage or during mitigating actions. A methodology is presented which examines the consequences of sabotage to NPP systems using the dynamic probabilistic risk assessment approach to explore these dynamics. A force-on-force computer code determines the timing and extent of damage to NPP systems, and a reactor response code models the effects of this damage on the reactor. These two codes are connected using the novel leading simulator/trailing simulator (LS/TS) methodology. A case study is created using the LS/TS methodology to model an adversary attack on an NPP. This case study models uncertainties in an adversary attack and in the response to determine whether reactor core damage would occur and, if so, the time to core damage and the extent of core damage.

More Details

TEM Studies of Segregation in a Ge–Sb–Te Alloy During Heating

Springer Proceedings in Materials

Singh, Manish K.; Ghosh, Chanchal; Kotula, Paul G.; Bakan, Gokhan; Silva, Helena; Carter, C.B.

Phase-change materials are important for optical and electronic computing memory. Ge–Sb–Te (GST) is one of the important phase-change materials and has been studied extensively for fast, reversible, and non-volatile electronic phase-change memory. GST exhibits structural transformations from amorphous to metastable fcc at ~150 °C and from fcc to hcp at ~300 °C. Investigating the structural, microstructural, and microchemical changes with high temporal resolution during heating is crucial to gain insight into the changes that materials undergo during phase transformations. The as-deposited GST film has an amorphous island morphology, which transforms to the metastable fcc phase at ~130 °C. The second phase transformation, from fcc to hexagonal, is observed at ~170 °C. While the as-deposited amorphous islands show a homogeneous distribution of Ge, Sb, and Te, the island boundaries become Ge-rich after heating. Morphological and structural evolutions were captured during heating inside an aberration-corrected environmental TEM equipped with a high-speed camera under low-dose conditions to minimize beam-induced changes in the samples. Microchemical studies were carried out employing the ChemiSTEM technique in probe-corrected mode with a monochromated beam.

More Details

Exploring the value of nodes with multicommunity membership for classification with graph convolutional neural networks

Information (Switzerland)

Hopwood, Michael W.; Pho, Phuong; Mantzaris, Alexander V.

Sampling is an important step in the machine learning process because it prioritizes samples that help the model best summarize the important concepts required for the task at hand. The process of determining the best sampling method has rarely been studied in the context of graph neural networks. In this paper, we evaluate multiple sampling methods (i.e., ascending and descending) that sample based on different definitions of centrality (i.e., VoteRank, PageRank, degree) to observe their relation with network topology. We find that no sampling method is superior across all network topologies. Additionally, we find situations where ascending sampling provides better classification scores, demonstrating the strength of weak ties. Two strategies are then created to predict the best sampling method: one that observes the homogeneous connectivity of the nodes, and one that observes the network topology. With both strategies, we are able to consistently identify the best sampling direction.
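
A minimal sketch of centrality-ranked sampling of the kind described, using networkx on a toy graph; the paper's datasets and GNN training loop are not reproduced:

import networkx as nx

# Rank nodes by a centrality score, then draw ascending (low-centrality
# first) or descending (high-centrality first) samples. Graph and sample
# size are hypothetical.
G = nx.karate_club_graph()
scores = nx.pagerank(G)                      # degree or nx.voterank are alternatives
ranked = sorted(G.nodes, key=scores.get)     # ascending centrality order

k = 10
ascending_sample = ranked[:k]                # "weak ties" end of the ranking
descending_sample = ranked[-k:][::-1]        # hubs first
print(ascending_sample)
print(descending_sample)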

More Details

PROBABILITY DISTRIBUTION FUNCTIONS OF THE NUMBER OF SCATTERING COLLISIONS IN ELECTRON SLOWING DOWN

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Franke, Brian C.; Prinja, Anil K.

The probability distribution of the number of collisions experienced by electrons slowing down below a threshold energy is investigated to understand the impact of the statistical distribution of energy losses on the computational efficiency of Monte Carlo simulations. A theoretical model based on an exponentially peaked differential cross section, with parameters that reproduce the exact stopping power and straggling at a fixed energy, is shown to yield a Poisson distribution for the collision number distribution. However, simulations with realistic energy-loss physics, including both inelastic and bremsstrahlung energy-loss interactions, reveal significant departures from the Poisson distribution. In particular, low collision numbers are more prominent when true cross sections are employed, while a Poisson distribution constructed with the exact variance-to-mean ratio is found to be unrealistically peaked. Detailed numerical investigations show that collisions with large energy losses, although infrequent, are statistically important in electron slowing down.
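
For intuition, the collision-number distribution can be sampled directly: draw per-collision energy losses until the energy falls below the cutoff and count the collisions. This toy Monte Carlo assumes exponentially distributed losses, the idealization for which the count (shifted by one) is exactly Poisson; all parameters are hypothetical:

import numpy as np

# Toy slowing-down Monte Carlo with i.i.d. exponential energy losses.
# Realistic cross sections (inelastic + bremsstrahlung) deviate from this.
rng = np.random.default_rng(1)
E0, E_cut, mean_loss = 100.0, 10.0, 2.0   # hypothetical energies (arb. units)

counts = []
for _ in range(5000):
    E, n = E0, 0
    while E > E_cut:
        E -= rng.exponential(mean_loss)   # exponentially distributed loss
        n += 1
    counts.append(n)

shifted = np.array(counts) - 1            # Poisson((E0 - E_cut)/mean_loss)
print("mean:", shifted.mean())
print("variance-to-mean ratio (~1 for Poisson):", shifted.var() / shifted.mean())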

More Details

Proctor: A Semi-Supervised Performance Anomaly Diagnosis Framework for Production HPC Systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Aksar, Burak; Zhang, Yijia; Ates, Emre; Schwaller, Benjamin; Aaziz, Omar R.; Leung, Vitus J.; Brandt, James M.; Egele, Manuel; Coskun, Ayse K.

Performance variation diagnosis in High-Performance Computing (HPC) systems is a challenging problem due to the size and complexity of the systems. Application performance variation leads to premature termination of jobs, decreased energy efficiency, or wasted computing resources. Manual root-cause analysis of performance variation based on system telemetry has become an increasingly time-intensive process as it relies on human experts and the size of telemetry data has grown. Recent methods use supervised machine learning models to automatically diagnose previously encountered performance anomalies in compute nodes. However, supervised machine learning models require large labeled data sets for training. This labeled data requirement is restrictive for many real-world application domains, including HPC systems, because collecting labeled data is challenging and time-consuming, especially considering anomalies that occur only sparsely. This paper proposes a novel semi-supervised framework that diagnoses previously encountered performance anomalies in HPC systems using a limited number of labeled data points, which is more suitable for production system deployment. Our framework first learns the characteristics of performance anomalies by using historical telemetry data in an unsupervised fashion. We then leverage supervised classifiers to identify anomaly types. While most semi-supervised approaches do not typically use anomalous samples, our framework takes advantage of a few labeled anomalous samples to classify anomaly types. We evaluate our framework on a production HPC system and on a testbed HPC cluster. We show that our proposed framework achieves a 60% F1-score on average, outperforming state-of-the-art supervised methods by 11%, and maintains an average anomaly miss rate of 0.06%.
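
The pipeline pattern (not the authors' code) can be sketched as an unsupervised representation learned from plentiful unlabeled telemetry, followed by a classifier trained on a few labeled windows; all data below are synthetic stand-ins:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Semi-supervised pattern: unsupervised stage on unlabeled telemetry,
# supervised anomaly-type classifier on a small labeled set (toy data).
rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(5000, 50))        # stand-in telemetry features
labeled_X = rng.normal(size=(40, 50))          # few labeled windows
labeled_y = rng.integers(0, 3, size=40)        # anomaly types 0/1/2 (toy)

rep = PCA(n_components=10).fit(unlabeled)      # unsupervised representation
clf = RandomForestClassifier(random_state=0).fit(rep.transform(labeled_X),
                                                 labeled_y)
print(clf.predict(rep.transform(labeled_X[:5])))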

More Details

The Palo Verde water cycle model (PVWCM): development of an integrated multi-physics and economics model for effective water management

American Society of Mechanical Engineers, Power Division (Publication) POWER

Middleton, Bobby D.; Brady, Patrick V.; Brown, Jeffrey A.; Lawles, Serafina T.

Water management has become critical for thermoelectric power generation in the US. Increasing demand for scarce water resources for domestic, agricultural, and industrial use affects water availability for power plants. In particular, the population in the Southwestern US is growing while water resources are over-stressed. The engineering and management teams at the Palo Verde Generating Station (PV) in the Sonoran Desert have long understood this problem and began a partnership with Sandia National Laboratories in 2017 to develop a long-term water strategy for PV. As part of this program, Sandia and Palo Verde staff have developed a comprehensive software tool that models all aspects of the PV (plant cooling) water cycle. The software tool, the Palo Verde Water Cycle Model (PVWCM), tracks water operations from influent to the plant through evaporation in one of the nine cooling towers or one of the eight evaporation ponds. The PVWCM has been developed using a process called System Dynamics and allows scenario comparison for various plant operating strategies.

More Details

The Sandia National Laboratories natural circulation cooler

American Society of Mechanical Engineers, Power Division (Publication) POWER

Middleton, Bobby D.; Brady, Patrick V.; Lawles, Serafina

Sandia National Laboratories (SNL) is developing a cooling technology concept, the Sandia National Laboratories Natural Circulation Cooler (SNLNCC), that has the potential to greatly improve the economic viability of hybrid cooling for power plants. The SNLNCC is a patented technology that holds promise for improved dry heat rejection capabilities when compared to currently available technologies. The cooler itself is a dry heat rejection device, but it is conceptualized here as a heat exchanger used in conjunction with a wet cooling tower, creating a hybrid cooling system for a thermoelectric power plant. The SNLNCC seeks to improve on currently available technologies by replacing the two-phase refrigerant currently used with either a supercritical fluid such as supercritical CO2 (sCO2) or a zeotropic mixture of refrigerants. In both cases, the heat rejected by the water to the SNLNCC would be transferred over a range of temperatures, instead of at a single temperature as in a thermosyphon. This has the potential to improve the economics of dry heat rejection in three ways: decreasing the minimum temperature to which the water can be cooled, increasing the temperature to which air can be heated, and increasing the fraction of the year during which dry cooling is economically viable. This paper describes the experimental basis and the current state of the SNLNCC.

More Details

Hyperspectral Image Target Detection Using Deep Ensembles for Robust Uncertainty Quantification

Conference Record - Asilomar Conference on Signals, Systems and Computers

Sahay, Rajeev; Ries, Daniel; Zollweg, Joshua; Brinton, Christopher G.

Deep learning (DL) has been widely proposed for target detection in hyperspectral image (HSI) data. Yet, standard DL models produce point estimates at inference time with no associated measure of uncertainty, which is vital in high-consequence HSI applications. In this work, we develop an uncertainty quantification (UQ) framework using deep ensemble (DE) learning, which builds upon the successes of DL-based HSI target detection while simultaneously providing UQ metrics. Specifically, we train an ensemble of convolutional deep learning detection models using one spectral prototype at a particular time of day and atmospheric condition. We find that our proposed framework is capable of accurate target detection in additional atmospheric conditions and times of day despite not being exposed to them during training. Furthermore, in comparison to Bayesian neural networks, another DL-based UQ approach, we find that DEs provide increased target detection performance while achieving comparable probabilities of detection at constant false alarm rates.
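
A stripped-down version of the deep-ensemble recipe, with small fully connected networks and synthetic features standing in for the convolutional detectors and hyperspectral pixels used in the paper:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Deep-ensemble UQ sketch: train the same network with different random
# seeds, use the mean predicted probability as the detection score and the
# ensemble spread as an uncertainty estimate. Data are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # toy target label

ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed).fit(X, y) for seed in range(5)]
probs = np.stack([m.predict_proba(X[:5])[:, 1] for m in ensemble])
print("mean detection score:", probs.mean(axis=0))
print("ensemble std (uncertainty):", probs.std(axis=0))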

More Details

PERFORMANCE OF ITERATIVE NETWORK UNCERTAINTY QUANTIFICATION FOR MULTICOMPONENT SYSTEM QUALIFICATION

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Rojas, Edward; Tencer, John T.

In order to impact design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages of the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, as full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods preclude the need for time-consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component/subassembly validation data, which is valuable for qualification. This is accomplished by taking advantage of the fact that many engineered systems are inherently modular, being composed of a hierarchy of components and subassemblies that are individually modified or replaced to define new system designs. We leverage this hierarchical structure to enable rapid model development and the incorporation of uncertainty quantification and rigorous sensitivity analysis earlier in the design process. The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models which exchange stochastic solution information at component boundaries. We utilize Jacobi iteration with Anderson acceleration to converge stochastic representations of system-level quantities of interest through successive evaluations of component or subassembly forward problems. We publish our open-source tools for uncertainty propagation in networks, noting that these tools are extensible and can be used with any simulation tool (including arbitrary surrogate modeling tools) through the construction of a simple Python interface class. Additional interface classes for a variety of simulation tools are currently under active development. The performance of the uncertainty quantification method is determined by the number of iterations needed to achieve a desired level of accuracy. The performance of these networks for simple canonical systems is investigated from both a heat transfer and a solid mechanics perspective; the models are examined with thermal and mechanical Dirichlet- and Neumann-type boundary conditions separately imposed, and the impact of varying governing equations and boundary condition type on the performance of the networks is analyzed. The form of the boundary conditions is observed to have a large impact on the convergence rate, with Neumann-type boundary conditions corresponding to significant performance degradation compared to Dirichlet boundary conditions. Nonmonotonicity is observed in the solution convergence in some cases.
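
A skeletal version of the iterative coupling: plain block Jacobi (the paper adds Anderson acceleration and exchanges stochastic rather than deterministic information) on a 1-D conduction problem split into two "components" that communicate only through their shared interface. Sizes and boundary values are hypothetical:

import numpy as np

# Two component sub-systems of a discrete 1-D Laplacian solve locally each
# sweep, using the neighbor's previous interface data, until the coupled
# solution converges to the monolithic solve.
n = 10                                          # unknowns per component
N = 2 * n
A = (np.diag(2.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1))            # discrete Laplacian (Dirichlet)
b = np.zeros(N); b[0] = 100.0                   # hot left boundary folded into RHS

x = np.zeros(N)
A1, A2 = A[:n, :n], A[n:, n:]                   # local component systems
for it in range(1000):
    x_old = x.copy()
    x[:n] = np.linalg.solve(A1, b[:n] - A[:n, n:] @ x_old[n:])
    x[n:] = np.linalg.solve(A2, b[n:] - A[n:, :n] @ x_old[:n])
    if np.linalg.norm(x - x_old) < 1e-9:
        break
print("iterations:", it + 1, " max error vs direct solve:",
      np.abs(x - np.linalg.solve(A, b)).max())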

More Details

Ultrathin epitaxial NbN superconducting films with high upper critical field grown at low temperature

Materials Research Letters

Lu, Ping

Ultrathin (5–50 nm) epitaxial superconducting niobium nitride (NbN) films were grown on AlN-buffered c-plane Al2O3 by an industrial-scale physical vapor deposition technique at 400°C. Both X-ray diffraction and scanning electron microscopy analyses show high crystallinity of the (111)-oriented NbN films, with a narrow full-width-at-half-maximum of the rocking curve down to 0.030°. The lattice constant decreases with decreasing NbN layer thickness, suggesting lattice strain for films with thicknesses below 20 nm. The superconducting transition temperature, the transition width, the upper critical field, the irreversibility line, and the coherence length are closely correlated to the film thickness. IMPACT STATEMENT: This work realized high-quality ultrathin epitaxial NbN films via an industry-scale PVD technology at a low substrate temperature, opening up new opportunities for quantum devices.

More Details

A MULTILAYER NETWORK APPROACH TO ASSESSING THE IMPACT OF HUMAN PERFORMANCE SHAPING FACTORS ON SECURITY FOR NUCLEAR POWER PLANTS

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Williams, Adam D.; Fleming, Elizabeth S.

Multilayered networks (MLNs), when integrated with traditional task analyses, offer a model-based approach to describe human performance in nuclear power plant security. MLNs demonstrate the interconnected links between security-related roles, security operating procedures, and technical components within a security system. However, when used in isolation, MLNs and task analyses may not fully reveal the impacts humans have within a security system. Thus, the Systems Context Lenses were developed to enhance design for, and analysis of, desired complex system behaviors, like security at Nuclear Power Plants (NPPs). The Systems Context Lenses integrate systems engineering concepts and human factors considerations to describe how human actors interact within (and across) the system design, operational environment, and sociotechnical context. Through application of the Systems Context Lenses, critical Performance Shaping Factors (PSFs) influencing human performance can be identified and used to analytically connect human actions with technical and environmental resources in an MLN. This paper summarizes the benefits of a tiered-lens approach through a use case of a multilayered network model of NPP security, including a demonstration of how NPP security performance can be improved by more robustly incorporating varying human, institutional, and broader socio-technical interactions.

More Details

Near-field and far-field sampling of aerosol plumes to evaluate particulate emission rates from a falling particle receiver during on-sun testing

Proceedings of the ASME 2021 15th International Conference on Energy Sustainability, ES 2021

Glen, Andrew G.; Dexheimer, Darielle N.; Sanchez, Andres L.; Ho, Clifford K.; China, Swarup; Mei, Fan; Lata, Nurun N.

High-temperature falling particle receivers are being investigated for next-generation concentrating solar power applications. Small sand-like particles are released into an open-cavity receiver and are irradiated by concentrated sunlight from a field of heliostats. The particles are heated to temperatures over 700 °C and can be stored to produce heat for electricity generation or industrial applications when needed. As the particles fall through the receiver, particles and particulate fragments in the form of aerosolized dust can be emitted from the aperture, which can lower thermal efficiency, increase costs of particle replacement, and pose a particulate matter (PM) inhalation risk. This paper describes sampling methods that were deployed during on-sun tests to record near-field (several meters) and far-field (tens to hundreds of meters) concentrations of aerosol particles within emitted plumes. The objective was to quantify the particulate emission rates and loss from the falling particle receiver in relation to OSHA and EPA National Ambient Air Quality Standards (NAAQS). Near-field instrumentation placed on the platform in proximity to the receiver aperture included several real-time aerosol size distribution and concentration measurement techniques, including TSI Aerodynamic Particle Sizers (APS), TSI DustTraks, Handix Portable Optical Particle Spectrometers (POPS), Alphasense Optical Particle Counters (OPC), TSI Condensation Particle Counters (CPC), cascade particle impactors, 3D-printed prototype tipping buckets, and meteorological instrumentation. Far-field particle sampling techniques utilized multiple tethered balloons located upwind and downwind of the particle receiver to measure the advected plume concentrations using a suite of airborne aerosol and meteorological instruments including POPS, CPCs, OPCs, and cascade impactors. The combined aerosol size distribution for all these instruments spanned particle sizes from 0.02 to 500 μm. Results showed a strong influence of wind direction on particle emissions and concentration, with preliminary results showing representative concentrations below both the OSHA and NAAQS standards.

More Details

SODIUM FILTER PERFORMANCE IN THE NASCORD DATABASE

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Mohmand, Jamal A.; Clark, Andrew J.

Sodium-cooled Fast Reactors (SFRs) have an extensive operational history that can be leveraged to accelerate the licensing process for advanced reactor designs. Sandia National Laboratories has reconstituted the United States SFR data from the Centralized Reliability Data Organization (CREDO) into a new database called the Sodium System Component Reliability Database (NaSCoRD). The NaSCoRD database and others like it will help reduce parametric uncertainties encountered in probabilistic risk analysis (PRA) models for advanced non-light water reactor technologies. This paper is an extension of previous work at Sandia National Laboratories that analyzed pump data; here, the failure rates of filters/strainers are investigated. NaSCoRD contains unique records of 147 filters/strainers that have operated in Experimental Breeder Reactor II, the Fast Flux Test Facility, and test loops including those operated by both Westinghouse and the Energy Technology Engineering Center. This paper presents filter failure rates for various conditions allowable from the CREDO data that has been recovered under NaSCoRD. The current filter reliability estimates are presented in comparison to estimates provided in historical studies. The impacts of the suggested corrections from the Idaho National Laboratory report, Generic Component Failure Data Base for Light Water and Liquid Sodium Reactor PRAs, and of various prior distributions on these reliability estimates are also presented. The paper also briefly describes potential improvements to the NaSCoRD database.

More Details

Development and testing of a 20 kW moving packed-bed particle-to-sCO2 heat exchanger and test facility

Proceedings of the ASME 2021 15th International Conference on Energy Sustainability, ES 2021

Albrecht, Kevin; Laubscher, Hendrik F.; Carlson, Matthew D.; Ho, Clifford K.

This paper describes the development of a facility for evaluating the performance of small-scale particle-to-sCO2 heat exchangers, which includes an isobaric sCO2 flow loop and an electrically heated particle flow loop. The particle flow loop is capable of delivering up to 60 kW of heat at a temperature of 600°C and a flow rate of 0.4 kg/s. The loop was developed to facilitate long-duration off-sun testing of small prototype heat exchangers to produce model validation data at steady-state operating conditions. Lessons learned on instrumentation, control, and system integration from prior testing of larger heat exchangers with solar thermal input were used to guide the design of the test facility. In addition, the development and testing of a novel 20-kWt moving packed-bed particle-to-sCO2 heat exchanger using the integrated flow loops is reported. The prototype heat exchanger implements many novel features for increasing thermal performance and reducing pressure drop, including integral porting of the sCO2 flow, unique bond/braze manufacturing, narrow plate spacing, and a pure counter-flow arrangement. The experimental data collected for the prototype heat exchanger were compared to model predictions to verify the sizing, thermal performance, and pressure drop; these models will be extended to multi-megawatt heat exchanger designs in the future.

More Details

ANALYTIC FORMULA FOR THE DIFFERENCE OF THE CIRCUMRADIUS AND ORTHORADIUS OF A WEIGHTED TRIANGLE

Proceedings of the 29th International Meshing Roundtable, IMR 2021

Hummel, Michelle H.

Understanding and quantifying the effects of vertex insertion, perturbation, and weight allocation is useful for mesh generation and optimization. For weighted primal-dual meshes, the sensitivity of the orthoradius to mesh variations is especially important. To this end, this paper presents an analytic formula for the difference between the circumradius and orthoradius of a weighted triangle in terms of edge lengths and point weights, under certain weight and edge assumptions. Current literature [1] offers a loose upper bound on this difference, but as far as we know this is the first formula presented in terms of edge lengths and point weights. A formula in these terms is beneficial because these are fundamental quantities, enabling a more immediate determination of how the perturbation of a point location or weight affects this difference. We apply this result to the VoroCrust algorithm to obtain the same quality guarantees under looser sampling conditions.
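
The paper's weighted formula is not reproduced here; for orientation, the classical (unweighted) circumradius already follows from edge lengths alone via Heron's formula, R = abc/(4K):

import math

# Circumradius of a triangle from its edge lengths a, b, c:
# K is the area from Heron's formula, and R = abc / (4K).
def circumradius(a, b, c):
    s = 0.5 * (a + b + c)                            # semi-perimeter
    K = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's area
    return a * b * c / (4.0 * K)

print(circumradius(3.0, 4.0, 5.0))  # right triangle: R = hypotenuse/2 = 2.5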

More Details

Correlating incident heat flux and source temperature to meet ASTM E1529 requirements for RAM packaging components thermal testing

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Baird, Austin R.; Gill, Walter; Mendoza, Hector; Figueroa Faria, Victor G.

Often in fire resistance testing of packaging vessels and other components, both the heat source temperature and the incident heat flux on a test specimen need to be measured and correlated. Standards such as ASTM E1529 require a specified temperature range from the heat source and a specified heat flux on the surface of the test specimen, and other standards have similar requirements. The geometry of the test environment and specimen may make heat flux measurements using traditional instruments (directional flame thermometers (DFTs) and water-cooled radiometers) difficult to implement. Orientation of the test specimen with respect to the thermal environment is also important to ensure that the heat flux on the surface of the test specimen is properly measured. Other important factors in the flux measurement include the thermal mass and surface emissivity of the test specimen. This paper describes the development of a cylindrical calorimeter using water-cooled wide-angle Schmidt-Boelter gauges to measure the incident heat flux for a vessel exposed to a radiant heat source. The calorimeter is designed to be modular, supporting multiple configurations while meeting emissivity and thermal mass requirements via a variable thermal mass. The results of the incident heat flux and source temperature measurements, along with effective/apparent emissivity calculations, are discussed.

More Details

Scalable3-BO: Big data meets HPC - A scalable asynchronous parallel high-dimensional Bayesian optimization framework on supercomputers

Proceedings of the ASME Design Engineering Technical Conference

Foulk, James W.

Bayesian optimization (BO) is a flexible and powerful framework that is suitable for computationally expensive simulation-based applications and guarantees statistical convergence to the global optimum. While it remains one of the most popular optimization methods, its capability is hindered by the size of the data, the dimensionality of the considered problem, and the nature of sequential optimization. These scalability issues are intertwined with each other and must be tackled simultaneously. In this work, we propose the Scalable3-BO framework, which employs a sparse GP as the underlying surrogate model to cope with Big Data and is equipped with a random embedding to efficiently optimize high-dimensional problems with low effective dimensionality. The Scalable3-BO framework is further equipped with an asynchronous parallelization feature, which fully exploits the computational resources on HPC within a computational budget. As a result, the proposed Scalable3-BO framework is scalable in three independent respects: with respect to data size, dimensionality, and computational resources on HPC. The goal of this work is to push the frontiers of BO beyond its well-known scalability issues and minimize the wall-clock waiting time for optimizing high-dimensional, computationally expensive applications. We demonstrate the capability of Scalable3-BO with 1 million data points, 10,000-dimensional problems, and 20 concurrent workers in an HPC environment.
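
A serial toy version of the random-embedding idea (the paper's sparse GP and asynchronous HPC machinery are omitted): Bayesian optimization runs in a low-dimensional space z and evaluates the high-dimensional objective at x = Az. The objective, dimensions, and settings below are hypothetical:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
D, d = 1000, 2                        # ambient and effective dimensionality
A = rng.normal(size=(D, d))           # random embedding matrix

def f_highdim(x):                     # hypothetical objective: only two of the
    return (x[3] - 0.5) ** 2 + (x[7] + 0.2) ** 2   # 1000 coordinates matter

def f_embedded(z):                    # optimize in the low-dim space
    return f_highdim(np.clip(A @ z, -1, 1))

Z = rng.uniform(-1, 1, size=(5, d))   # initial design
y = np.array([f_embedded(z) for z in Z])
for _ in range(25):
    gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)
    cand = rng.uniform(-1, 1, size=(500, d))
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu                # expected improvement acquisition
    ei = imp * norm.cdf(imp / (sd + 1e-9)) + sd * norm.pdf(imp / (sd + 1e-9))
    z_next = cand[np.argmax(ei)]
    Z = np.vstack([Z, z_next]); y = np.append(y, f_embedded(z_next))
print("best value found:", y.min())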

More Details

Optimizing Distributed Load Balancing for Workloads with Time-Varying Imbalance

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Lifflander, Jonathan J.; Slattengren, Nicole L.; Pebay, Philippe P.; Miller, Philip; Rizzi, Francesco; Bettencourt, Matthew T.

This paper explores dynamic load balancing algorithms used by asynchronous many-task (AMT), or 'task-based', programming models to optimize task placement for scientific applications with dynamic workload imbalances. AMT programming models use overdecomposition of the computational domain. Overdecomposition provides a natural mechanism for domain developers to expose concurrency and break their computational domain into pieces that can be remapped to different hardware. This paper explores fully distributed load balancing strategies that have shown great promise for exascale-level computing but are challenging to reason about theoretically and to implement effectively. We present a novel theoretical analysis of a gossip-based load balancing protocol and use it to build an efficient implementation with fast convergence rates and high load balancing quality. We demonstrate our algorithm in a next-generation plasma physics application (EMPIRE) that induces time-varying workload imbalance due to spatial non-uniformity in particle density across the domain. Our highly scalable, novel load balancing algorithm achieves over a 3x speedup (particle work) compared to a bulk-synchronous MPI implementation without load balancing.
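
The flavor of a gossip protocol can be shown in a few lines: each rank repeatedly averages its load with a randomly chosen peer, and the spread of loads contracts toward the global mean that informs migration decisions. This toy averages abstract load values only; a real balancer migrates actual tasks (and the paper's protocol and analysis differ in detail):

import random

random.seed(0)
loads = [random.uniform(0, 10) for _ in range(64)]   # hypothetical work units
target = sum(loads) / len(loads)

for round_ in range(10):
    for i in range(len(loads)):
        j = random.randrange(len(loads))             # random gossip partner
        loads[i] = loads[j] = 0.5 * (loads[i] + loads[j])  # pairwise averaging
    spread = max(loads) - min(loads)
    print(f"round {round_}: spread {spread:.4f} (target mean {target:.4f})")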

More Details

Construction of Differentially Private Empirical Distributions from a Low-Order Marginals Set Through Solving Linear Equations with l2 Regularization

Lecture Notes in Networks and Systems

Eugenio, Evercita C.; Liu, Fang

We introduce a new algorithm, Construction of dIfferentially Private Empirical Distributions from a low-order marginals set tHrough solving linear Equations with l2 Regularization (CIPHER), that produces differentially private empirical joint distributions from a set of low-order marginals. CIPHER is conceptually simple and requires no more than decomposing joint probabilities via basic probability rules to construct a linear equation set and subsequently solving the equations. Compared to the full-dimensional histogram (FDH) sanitization, CIPHER has drastically lower requirements on computational storage and memory, which is practically attractive especially considering that the high-order signals preserved by the FDH sanitization are likely just sample randomness and rarely of interest. Our experiments demonstrate that CIPHER outperforms the multiplicative weighting exponential mechanism in preserving original information and has similar or superior cost-normalized utility to FDH sanitization at the same privacy budget.
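
The core linear-algebra step can be illustrated on the smallest possible case, two binary variables with one-way marginals: the constraint matrix encodes basic probability rules and the l2 (ridge) term regularizes the underdetermined solve. This sketch omits the differentially private noise addition:

import numpy as np

# Reconstruct a 2x2 joint distribution from its one-way marginals by
# ridge-regularized least squares. Columns are (p00, p01, p10, p11).
A = np.array([[1, 1, 0, 0],    # P(X=0) = p00 + p01
              [0, 0, 1, 1],    # P(X=1) = p10 + p11
              [1, 0, 1, 0],    # P(Y=0) = p00 + p10
              [0, 1, 0, 1],    # P(Y=1) = p01 + p11
              [1, 1, 1, 1]])   # probabilities sum to one
m = np.array([0.6, 0.4, 0.7, 0.3, 1.0])   # (possibly noisy) marginals

lam = 1e-3                                 # l2 regularization strength
p = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ m)
print(p.reshape(2, 2))  # min-norm-consistent joint: ~[[0.40, 0.20], [0.30, 0.10]]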

More Details

A Dynamic Mode Decomposition Scheme to Analyze Power Quality Events

IEEE Access

Wilches-Bernal, Felipe; Reno, Matthew J.; Hernandez-Alvidrez, Javier

This paper presents a new method for detecting power quality disturbances, such as faults. The method is based on the dynamic mode decomposition (DMD), a data-driven method to estimate linear dynamics whose eigenvalues and eigenvectors approximate those of the Koopman operator. The proposed method uses the real part of the main eigenvalue estimated by the DMD as the key indicator that a power quality event has occurred. The paper shows how the proposed method can be used to detect events using current and voltage signals to distinguish different faults. Because the proposed method is window-based, the effect that the window size has on the performance of the approach is analyzed. In addition, a study on the effect that noise has on the proposed approach is presented.
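
A bare-bones illustration of the indicator (window length, delay embedding, and the synthetic sag are hypothetical, not the paper's settings): DMD is fit to a sliding window of a delay-embedded waveform, and the real part of the dominant continuous-time eigenvalue departs from zero when a disturbance enters the window:

import numpy as np

def leading_eig_real(window, dt, delay=8):
    # Delay-embedded snapshot matrices X -> X'
    H = np.array([window[i:i + delay] for i in range(len(window) - delay)]).T
    X, Xp = H[:, :-1], H[:, 1:]
    A = Xp @ np.linalg.pinv(X)                 # best-fit linear operator
    lead = max(np.linalg.eigvals(A), key=abs)  # dominant discrete eigenvalue
    return np.log(abs(lead)) / dt              # continuous-time real part

t = np.linspace(0, 0.2, 2000)
dt = t[1] - t[0]
v = np.sin(2 * np.pi * 60 * t)                 # 60 Hz voltage waveform
v[t > 0.1] *= 0.4                              # synthetic voltage sag
for start in range(0, 1800, 300):              # slide a 200-sample window
    lam_re = leading_eig_real(v[start:start + 200], dt)
    print(f"window at t={t[start]:.3f}s: Re(lambda)={lam_re:+.2f}")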

More Details

A Survey of Traveling Wave Protection Schemes in Electric Power Systems

IEEE Access

Wilches-Bernal, Felipe; Bidram, Ali; Reno, Matthew J.; Hernandez-Alvidrez, Javier; Barba, Pedro; Reimer, Benjamin; Carr, Christopher C.; Lavrova, Olga

As a result of the increased penetration of inverter-based generation such as wind and solar, the dynamics of the grid are being modified. These modifications may threaten the stability of the power system since the dynamics of these devices are completely different from those of rotating generators. Protection schemes need to evolve with the changes in the grid to successfully deliver their objectives of maintaining safe and reliable grid operations. This paper explores the theory of traveling waves and how they can be used to enable fast protection mechanisms. It surveys signal processing methods for extracting information from power system signals following a disturbance. The paper also presents a literature review of traveling wave-based protection methods at the transmission and distribution levels of the grid and for AC and DC configurations. The paper then discusses simulation tools that help design and implement protection schemes. A discussion of the anticipated evolution of protection mechanisms in response to the challenges facing the grid is also presented.

More Details

A tailored convolutional neural network for nonlinear manifold learning of computational physics data using unstructured spatial discretizations

SIAM Journal on Scientific Computing

Tencer, John T.; Potter, Kevin M.

We propose a nonlinear manifold learning technique based on deep convolutional autoencoders that is appropriate for model order reduction of physical systems in complex geometries. Convolutional neural networks have proven to be highly advantageous for compressing data arising from systems demonstrating a slowly decaying Kolmogorov n-width. However, these networks are restricted to data on structured meshes. Unstructured meshes are often required for performing analyses of real systems with complex geometry. Our custom graph convolution operators, based on the available differential operators for a given spatial discretization, effectively extend the application space of deep convolutional autoencoders to systems with arbitrarily complex geometry that are typically discretized using unstructured meshes. We propose sets of convolution operators based on the spatial derivative operators for the underlying spatial discretization, making the method particularly well suited to data arising from the solution of partial differential equations. We demonstrate the method using examples from heat transfer and fluid mechanics and show better than an order of magnitude improvement in accuracy over linear methods.
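
A minimal sketch of the underlying idea, building a convolution-like operator from mesh connectivity rather than a pixel grid; the "mesh" here is a toy chain graph and the weights are fixed rather than learned:

import numpy as np
import scipy.sparse as sp

# Assemble a graph Laplacian from edge connectivity, then apply one
# convolution-like layer combining identity and Laplacian stencils.
edges = [(i, i + 1) for i in range(99)]           # hypothetical mesh edges
rows = [e[0] for e in edges] + [e[1] for e in edges]
cols = [e[1] for e in edges] + [e[0] for e in edges]
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(100, 100)).tocsr()
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # graph Laplacian

def graph_conv(x, w0, w1):
    """One 'layer': learned weights would replace the fixed w0, w1 here."""
    return np.tanh(w0 * x + w1 * (L @ x))

x = np.random.default_rng(0).normal(size=100)     # nodal field (e.g., temperature)
print(graph_conv(x, w0=0.8, w1=-0.1)[:5])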

More Details

A Minimally Supervised Event Detection Method

Lecture Notes in Networks and Systems

Hoffman, Matthew; Bussell, Sammy J.; Brown, Nathanael J.K.

Solving classification problems with machine learning often entails laborious manual labeling of test data, requiring valuable time from a subject matter expert (SME). This process can be even more challenging when each sample is multidimensional. In the case of an anomaly detection system, a standard two-class problem, the dataset is likely imbalanced with few anomalous observations and many “normal” observations (e.g., credit card fraud detection). We propose a unique methodology that quickly identifies individual samples for SME tagging while automatically classifying commonly occurring samples as normal. In order to facilitate such a process, the relationships among the dimensions (or features) must be easily understood by both the SME and system architects such that tuning of the system can be readily achieved. The resulting system demonstrates how combining human knowledge with machine learning can create an interpretable classification system with robust performance.

More Details

Materials compatibility concerns for hydrogen blended into natural gas

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Ronevich, Joseph; San Marchi, Chris

Hydrogen additions to natural gas are being considered around the globe as a means to utilize existing infrastructure to distribute hydrogen. Hydrogen is known to enhance fatigue crack growth and reduce fracture resistance of structural steels used for pressure vessels, piping, and pipelines. Most research has focused on high-pressure hydrogen environments for applications of storage (>100 MPa) and delivery (10-20 MPa) in the context of hydrogen fuel cell vehicles, which typically store hydrogen onboard at a pressure of 70 MPa. In applications of blending hydrogen into natural gas, a wide range of hydrogen contents are being considered, typically in the range of 2-20%. In natural gas infrastructure, the pressure differs depending on location in the system (i.e., transmission systems are relatively high pressure compared to low-pressure distribution systems), thus the anticipated partial pressure of hydrogen can be less than an atmosphere or more than 10 MPa. In this report, it is shown that low partial pressure hydrogen has a very strong effect on fatigue and fracture behavior of infrastructure steels. While it is acknowledged that materials compatibility with hydrogen will be important for systems operating with high stresses, the effects of hydrogen do not seem to be a significant threat for systems operating at low pressure as in distribution infrastructure. In any case, system operators considering the addition of hydrogen to their network must carefully consider the structural performance of their system and the significant effects of hydrogen on structural integrity, as fatigue and fracture properties of all steels in the natural gas infrastructure will be degraded by hydrogen, even for partial pressure of hydrogen less than 0.1 MPa.

More Details

Fire-induced pressure response and failure of 3013 containers

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Mendoza, Hector; Gill, Walter; Baird, Austin R.; Figueroa Faria, Victor G.; Hensel, Steve; Sanborn, Scott E.

Several Department of Energy (DOE) facilities have nuclear or hazardous materials stored in robust, welded, stainless-steel containers with undetermined fire-induced pressure response behaviors. Lack of test data related to fire exposure requires conservative safety analysis assumptions for container response at these facilities. This conservatism can in turn result in the implementation of challenging operational restrictions with costly nuclear safety controls. To help address this issue for sites that store DOE 3013 stainless-steel containers, a series of five tests were undertaken at Sandia National Laboratories. The goal of this test series was to obtain the response behavior for various configurations of the DOE 3013 containers when exposed to various fire conditions. Key parameters measured in the test series included identification of failure-specific characteristics such as pressure, temperature, and leak/burst failure type. This paper describes the development and execution of the test series performed to identify these failure-specific characteristics. Work completed to define the test configurations, payload compositions, thermal insults, and experimental setups are discussed. Test results are presented along with corresponding discussions for each test.

More Details

Using Machine Learning to Predict Bilingual Language Proficiency from Reaction Time Priming Data

Proceedings of the 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021

Matzen, Laura E.; Ting, Christina; Stites, Mallory C.

Studies of bilingual language processing typically assign participants to groups based on their language proficiency and average across participants in order to compare the two groups. This approach loses much of the nuance and individual differences that could be important for furthering theories of bilingual language comprehension. In this study, we present a novel use of machine learning (ML) to develop a predictive model of language proficiency based on behavioral data collected in a priming task. The model achieved 75% accuracy in predicting which participants were proficient in both Spanish and English. Our results indicate that ML can be a useful tool for characterizing and studying individual differences.

More Details

Comparison of pyrometry and thermography for thermal analysis of thermite reactions

Applied Optics

Woodruff, Connor; Dean, Steven W.; Pantoya, Michelle L.

This study examines the thermal behavior of a laser-ignited thermite composed of aluminum and bismuth trioxide. Temperature data were collected during the reaction using a four-color pyrometer and a high-speed color camera modified for thermography. The two diagnostics were arranged to collect data simultaneously, with similar fields of view and similar data acquisition rates, so that the two techniques could be directly compared. Results show that at the initial and final stages of the reaction, a lower signal-to-noise ratio affects the accuracy of the measured temperatures. Both diagnostics captured the same trends in transient thermal behavior, but the average temperatures measured with thermography were about 750 K higher than those from the pyrometer. This difference was attributed to the lower dynamic range of the thermography camera’s image sensor, which could not resolve cooler temperatures in the field of view as well as the photomultiplier tube sensors in the pyrometer. Overall, while the camera could not accurately capture the average temperature of a scene, its ability to capture peak temperatures and spatial data makes it the preferred method for tracking thermal behavior in thermite reactions.
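
For readers unfamiliar with ratio pyrometry, the following minimal sketch (a two-color reduction of the four-color principle, assuming graybody emission in the Wien limit) shows how a temperature falls out of an intensity ratio. The wavelengths and intensities are illustrative, not values from the paper.

    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m*K

    def two_color_temperature(i1, i2, lam1, lam2):
        """Ratio pyrometry in the Wien limit: for a graybody, emissivity
        cancels in the ratio of intensities at two wavelengths."""
        num = C2 * (1.0 / lam1 - 1.0 / lam2)
        den = 5.0 * np.log(lam2 / lam1) - np.log(i1 / i2)
        return num / den

    # Synthesize Wien-limit intensities at 2500 K and recover the temperature.
    lam1, lam2, T = 500e-9, 700e-9, 2500.0
    wien = lambda lam: lam**-5 * np.exp(-C2 / (lam * T))  # graybody radiance shape
    print(two_color_temperature(wien(lam1), wien(lam2), lam1, lam2))  # ~2500 K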

More Details

Comparison of Surface Phenomena Created by Underground Chemical Explosions in Dry Alluvium and Granite Geology from Fully Polarimetric VideoSAR Data

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

West, Roger D.; Abbott, Robert; Yocky, David A.

Phase I of the Source Physics Experiment (SPE) series involved six underground chemical explosions, all of which were conducted at the same experimental pad. Research from the sixth explosion of the series (SPE-6) demonstrated that polarimetric synthetic aperture radar (PolSAR) is a viable technology for monitoring an underground chemical explosion when the geologic structure is a Cretaceous granitic intrusive. It was shown that a durable signal is measurable by the H/A/alpha polarimetric decomposition parameters. After the SPE-6 experiment, the SPE program moved to the Phase II location, which is composed of dry alluvium geology (DAG). The loss of wavefront energy is greater through dry alluvium than through granite. In this article, we compare the SPE-6 analysis to the second DAG (DAG-2) experiment. We hypothesize that, although the geology at the DAG site is more challenging than at the Phase I location and the DAG-2 experiment had a scaled depth of burial 3.37 times deeper than SPE-6, a durable nonprompt signal is still measurable by a PolSAR sensor. We compare the PolSAR time-series measures from videoSAR frames, from the SPE-6 and DAG-2 experiments, with accelerometer data. We show which PolSAR measures are invariant to the two types of geology and which are geology dependent. We compare a coherent change detection (CCD) map from the DAG-2 experiment with data from a fiber-optic distributed acoustic sensor to show the connection between the spatial extent of coherence loss in CCD maps and spallation caused by the explosion. Finally, we analyze the spatial extent of the PolSAR measures from both explosions.
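
As background on the decomposition the authors rely on, here is a minimal numpy sketch of the standard Cloude-Pottier H/A/alpha parameters computed from a single 3x3 coherency matrix; this is generic textbook material, not the paper's processing chain.

    import numpy as np

    def h_a_alpha(T):
        """Cloude-Pottier decomposition of a 3x3 Hermitian coherency matrix.
        Returns entropy H (log base 3), anisotropy A, and mean alpha (radians)."""
        vals, vecs = np.linalg.eigh(T)            # ascending eigenvalues
        vals = np.clip(vals[::-1], 1e-12, None)   # descending, guard zeros
        vecs = vecs[:, ::-1]
        p = vals / vals.sum()                     # pseudo-probabilities
        H = -np.sum(p * np.log(p) / np.log(3.0))  # polarimetric entropy
        A = (vals[1] - vals[2]) / (vals[1] + vals[2])
        alpha = np.sum(p * np.arccos(np.abs(vecs[0, :])))
        return H, A, alpha

    # A nearly rank-1 coherency matrix gives low entropy (deterministic scatterer).
    print(h_a_alpha(np.diag([1.0, 0.05, 0.02])))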

More Details

Path towards a vertical TFET enabled by atomic precision advanced manufacturing

2021 Silicon Nanoelectronics Workshop, SNW 2021

Lu, Tzu M.; Gao, Xujiao; Anderson, Evan M.; Mendez Granado, Juan P.; Campbell, Deanna M.; Ivie, Jeffrey A.; Schmucker, Scott W.; Grine, Albert; Lu, Ping; Tracy, Lisa A.; Arghavani, Reza; Misra, Shashank

We propose a vertical TFET using atomic precision advanced manufacturing (APAM) to create an abrupt buried n++-doped source. We developed a gate stack that preserves the APAM source to accumulate holes above it, with a goal of band-to-band tunneling (BTBT) perpendicular to the gate – critical for the proposed device. A metal-insulator-semiconductor (MIS) capacitor shows hole accumulation above the APAM source, corroborated by simulation, demonstrating the TFET’s feasibility.

More Details

Narrowband microwave-photonic notch filtering using Brillouin interactions in silicon

Optics InfoBase Conference Papers

Gertler, Shai; Otterstrom, Nils T.; Gehl, Michael; Starbuck, Andrew L.; Dallo, Christina M.; Pomerene, Andrew; Lentine, Anthony L.; Rakich, Peter T.

We present narrowband RF-photonic filters in an integrated silicon platform. Using Brillouin interactions, the filters yield narrowband (∼MHz) filter bandwidths with high signal rejection, and demonstrate tunability over a wide (∼GHz) frequency range.

More Details

Investigation of electrical chatter in bifurcated contact receptacles

Electrical Contacts, Proceedings of the Annual Holm Conference on Electrical Contacts

Zastrow, Benjamin; Flicek, Robert C.; Walczak, Karl; Pacini, Benjamin R.; Johnson, Kelsey; Johnson, Brianna; Schumann, Christopher; Rafeedi, Fadi

Electrical switches are often subjected to shock and vibration environments, which can result in sudden increases in the switch's electrical resistance, referred to as 'chatter'. This paper describes experimental and numerical efforts to investigate the mechanism that causes chatter in a contact pair formed between a cylindrical pin and a bifurcated receptacle. First, the contact pair was instrumented with shakers, accelerometers, laser Doppler vibrometers, a high-speed camera, and a 'chatter tester' that detects fluctuations in the contact's electrical resistance. Chatter tests were performed over a range of excitation amplitudes and frequencies, and high-speed video from the tests suggested that 'bouncing' (i.e., loss of contact) was the primary physical mechanism causing chatter. Structural dynamics models were then developed of the pin, receptacle, and contact pair, and corresponding modal experiments were performed for comparison and model validation. Finally, a high-fidelity solid mechanics model of the contact pair was developed to study the bouncing physics observed in the high-speed videos. Chatter event statistics (e.g., mean chatter event duration) were used to compare the chatter behavior recorded during testing to the behavior simulated in the high-fidelity model, and this comparison suggested that the same bouncing mechanism is the cause of chatter in both scenarios.

More Details

Is the Testing Effect Ready to Be Put to Work? Evidence From the Laboratory to the Classroom

Translational Issues in Psychological Science

Trumbo, Michael C.S.; Mcdaniel, Mark A.; Hodge, Gordon K.; Jones, Aaron; Matzen, Laura E.; Kittinger, Liza I.; Kittinger, Robert; Clark, Vincent P.

The testing effect refers to the benefits to retention that result from structuring learning activities in the form of a test. As educators consider implementing test-enhanced learning paradigms in real classroom environments, we think it is critical to consider how an array of factors affecting test-enhanced learning in laboratory studies bear on test-enhanced learning in real-world classroom environments. This review discusses the degree to which test feedback, test format (of formative tests), number of tests, level of the test questions, timing of tests (relative to initial learning), and retention duration have import for testing effects in ecologically valid contexts (e.g., classroom studies). Attention is also devoted to characteristics of much laboratory testing-effect research that may limit translation to classroom environments, such as the complexity of the material being learned, the value of the testing effect relative to other generative learning activities in classrooms, an educational orientation that favors criterial tests focused on transfer of learning, and online instructional modalities. We consider how student-centric variables present in the classroom (e.g., cognitive abilities, motivation) may bear on the effects of testing-effect techniques implemented in the classroom. We conclude that the testing effect is a robust phenomenon that benefits a wide variety of learners in a broad array of learning domains. Still, studies are needed to compare the benefit of testing to other learning strategies, to further characterize how individual differences relate to testing benefits, and to examine whether testing benefits learners at advanced levels.

More Details

Distributed Inference with Sparse and Quantized Communication

IEEE Transactions on Signal Processing

Mitra, Aritra; Richards, John A.; Bagchi, Saurabh; Sundaram, Shreyas

We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state, and aim to uniquely identify this state from a finite set of hypotheses. We focus on scenarios where communication between agents is costly, and takes place over channels with finite bandwidth. To reduce the frequency of communication, we develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis. Building on this principle, we design a trigger condition under which an agent broadcasts only those components of its belief vector that have adequate innovation, to only those neighbors that require such information. We prove that our rule guarantees convergence to the true state exponentially fast almost surely despite sparse communication, and that it has the potential to significantly reduce information flow from uninformative agents to informative agents. Next, to deal with finite-precision communication channels, we propose a distributed learning rule that leverages the idea of adaptive quantization. We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just 1 bit to encode its belief on each hypothesis. For both our proposed algorithms, we rigorously characterize the trade-offs between communication-efficiency and the learning rate.
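
A minimal sketch of the "diffusing low beliefs" idea follows, assuming it is realized as a local Bayesian update combined with an entrywise minimum over neighbors' latest broadcast beliefs and renormalization; the event-triggering condition and adaptive quantization from the paper are omitted.

    import numpy as np

    def min_rule_update(beliefs, neighbor_beliefs, likelihoods):
        """One round of a min-rule belief update (hedged sketch, not the
        paper's exact rule).
        beliefs: (m,) agent's current belief over m hypotheses
        neighbor_beliefs: (k, m) latest beliefs received from k neighbors
        likelihoods: (m,) likelihood of the new private signal per hypothesis
        """
        bayes = beliefs * likelihoods
        bayes = bayes / bayes.sum()                     # local Bayesian update
        stacked = np.vstack([bayes, neighbor_beliefs])
        pooled = stacked.min(axis=0)                    # diffuse low beliefs
        return pooled / pooled.sum()

    # False hypotheses are suppressed as soon as any neighbor finds them unlikely.
    b = min_rule_update(np.array([0.5, 0.5]),
                        np.array([[0.9, 0.1]]),
                        np.array([0.8, 0.2]))
    print(b)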

More Details

INCREMENTAL INTERVAL ASSIGNMENT BY INTEGER LINEAR ALGEBRA

Proceedings of the 29th International Meshing Roundtable, IMR 2021

Mitchell, Scott A.

Interval Assignment (IA) is the problem of selecting the number of mesh edges (intervals) for each curve for conforming quad and hex meshing. The vector of intervals x is fundamentally integer-valued, yet many approaches perform floating-point optimization and convert a floating-point solution into an integer solution. We avoid such steps: we start integer, stay integer. Incremental Interval Assignment (IIA) uses integer linear algebra (Hermite normal form) to find an initial solution to the matrix equation Ax = b satisfying the meshing constraints. Solving for reduced row echelon form provides integer vectors spanning the nullspace of A. We add vectors from the nullspace to improve the initial solution. Compared to floating-point optimization approaches, IIA is faster and always produces an integer solution. The potential drawback is that there is no theoretical guarantee that the solution is optimal, but in practice we achieve solutions close to the user goals. The software is freely available.
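
The sketch below illustrates the "start integer, stay integer" idea on a toy constraint system using sympy. The matrix, constraints, and helper are hypothetical stand-ins, not the IIA code, and sympy's rational nullspace is scaled to integers rather than derived from the Hermite normal form.

    from sympy import Matrix, lcm

    # Toy meshing constraints: on a T-junction the intervals on two sub-curves
    # must sum to the intervals on the opposite curve (x1 + x2 - x3 = 0), and
    # a paving loop needs an even total (x1 + x2 + x3 - 2*x4 = 0).
    A = Matrix([[1, 1, -1,  0],
                [1, 1,  1, -2]])
    b = Matrix([0, 0])

    def integer_nullspace(A):
        """Integer vectors spanning the nullspace: clear denominators of the
        rational basis that reduced row echelon form provides."""
        basis = []
        for v in A.nullspace():
            den = lcm([term.q for term in v])  # lcm of entry denominators
            basis.append((v * den).applyfunc(int))
        return basis

    x = Matrix([1, 1, 2, 2])       # an initial integer solution: A*x == b
    assert A * x == b
    for v in integer_nullspace(A):
        print("nullspace direction:", v.T)
    # Adding any integer combination of these directions preserves A*x == b,
    # so the solver can step toward the user's goal intervals without ever
    # leaving the integer lattice.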

More Details

Deep learning denoising applied to regional distance seismic data in Utah

Bulletin of the Seismological Society of America

Tibi, Rigobert; Hammond, Patrick; Brogan, Ronald; Young, Christopher J.; Koper, Keith

Seismic waveform data are generally contaminated by noise from various sources. Suppressing this noise effectively so that the remaining signal of interest can be successfully exploited remains a fundamental problem for the seismological community. To date, the most common noise suppression methods have been based on frequency filtering. These methods, however, are less effective when the signal of interest and noise share similar frequency bands. Inspired by source separation studies in the field of music information retrieval (Jansson et al., 2017) and a recent study in seismology (Zhu et al., 2019), we implemented a seismic denoising method that uses a trained deep convolutional neural network (CNN) model to decompose an input waveform into a signal of interest and noise. In our approach, the CNN provides a signal mask and a noise mask for an input signal. The short-time Fourier transform (STFT) of the estimated signal is obtained by multiplying the signal mask with the STFT of the input signal. To build and test the denoiser, we used carefully compiled signal and noise datasets of seismograms recorded by the University of Utah Seismograph Stations network. Results of test runs involving more than 9000 constructed waveforms suggest that on average the denoiser improves the signal-to-noise ratios (SNRs) by ∼5 dB, and that most of the recovered signal waveforms have high similarity with respect to the target waveforms (average correlation coefficient of ∼0.80) and suffer little distortion. Application to real data suggests that our denoiser achieves on average a factor of up to ∼2–5 improvement in SNR over band-pass filtering and can suppress many types of noise that band-pass filtering cannot. For individual waveforms, the improvement can be as high as ∼15 dB.
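
The masking recipe itself is easy to state in code. This scipy-based sketch, with a placeholder mask standing in for the trained CNN's output, reproduces only the "multiply the signal mask with the STFT of the input" step described above.

    import numpy as np
    from scipy.signal import stft, istft

    def apply_signal_mask(waveform, signal_mask, fs=40.0, nperseg=256):
        """Denoise by masking the STFT: estimated-signal STFT =
        (predicted signal mask) * (input STFT). Here signal_mask stands
        in for the output of a trained model, not reproduced in this sketch."""
        f, t, Z = stft(waveform, fs=fs, nperseg=nperseg)
        _, denoised = istft(signal_mask * Z, fs=fs, nperseg=nperseg)
        return denoised

    # Demo with a placeholder mask that keeps only content below 5 Hz.
    fs = 40.0
    x = np.random.randn(4000)
    f, t, Z = stft(x, fs=fs, nperseg=256)
    mask = (f < 5.0)[:, None] * np.ones((1, Z.shape[1]))
    y = apply_signal_mask(x, mask, fs=fs)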

More Details

Deep Reinforcement Learning for Online Distribution Power System Cybersecurity Protection

2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, SmartGridComm 2021

Bailey, Tyson; Johnson, Jay; Levin, Drew

The sophistication and regularity of power system cybersecurity attacks have grown in the last decade, leading researchers to investigate new, cyber-resilient tools to help grid operators defend their networks and power systems. One promising approach is to apply recent advances in deep reinforcement learning (DRL) to aid grid operators in making real-time changes to power system equipment to counteract malicious actions. While multiple transmission studies have been conducted in the past, in this work we investigate the possibility of defending distribution power systems using a DRL agent that has control of a collection of utility-owned distributed energy resources (DER). A game board using a modified version of the IEEE 13-bus model was simulated in OpenDSS to train the DRL agent and compare its performance to a random agent, a greedy agent, and human players. Both the DRL agent and the greedy approach performed well, suggesting that a greedy approach is appropriate for computationally tractable system configurations and that a DRL agent is a viable path forward for systems of increased complexity. This work paves the way for multi-player distribution system control games designed to defend the power grid under a sophisticated cyber-attack.

More Details

Fracture Formation in Layered Synthetic Rocks with Oriented Mineral Fabric under Mixed Mode I and II Loading Conditions

55th U.S. Rock Mechanics / Geomechanics Symposium 2021

Jiang, Liyang; Yoon, Hongkyu; Bobet, Antonio; Pyrak-Nolte, Laura J.

Anisotropy in the mechanical properties of rock is often attributed to bedding and mineral texture. Here, we use 3D-printed synthetic rock to show that, in addition to bedding layers, mineral fabric orientation governs sample strength, surface roughness, and fracture path in mixed mode I and II three-point bending (3PB) tests. Arrester (horizontal layering) and short-transverse (vertical layering) samples were printed with different notch locations to compare pure mode I induced fractures to mixed mode I and II fracturing. For a given sample type, the location of the notch affected the intensity of mode II loading, and thus the peak failure load and fracture path. When notches were printed at the same location, crack propagation, peak failure load, and fracture surface roughness were found to depend on both the layer and mineral fabric orientations. The uniqueness of the induced fracture path and roughness is a potential method for assessing the orientation and relative bonding strengths of minerals in a rock. With this information, we will be able to predict isotropic or anisotropic flow rates through fractures, which is vital to induced fracturing, geothermal energy production, and CO2 sequestration.

More Details

Aero-Optical Distortions of Turbulent Boundary Layers: DNS up to Mach 8

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Miller, Nathan E.; Guildenbecher, Daniel; Lynch, Kyle P.

The character of aero-optical distortions produced by turbulence is investigated for subsonic, supersonic, and hypersonic boundary layers. Data from four Direct Numerical Simulations (DNS) of boundary layers with nominal Mach numbers ranging from 0.5 to 8 are used. The DNS data for the subsonic and supersonic boundary layers are of flow over flat plates. The two hypersonic boundary layers are both from flows with a Mach 8 inlet condition, one of which is flow over a flat plate while the other is a boundary layer on a sharp cone. Density fields from these datasets are converted to index-of-refraction fields, which are integrated along an expected beam path to determine the effective Optical Path Lengths that a beam would experience while passing through the refractions of the turbulent field. By then accounting for the mean path length and tip/tilt issues related to bulk boundary layer effects, the distribution of Optical Path Differences (OPDs) is determined. Comparisons of the root mean squares of the OPDs (OPD_rms) are made to an existing model. The OPD_rms values determined from the subsonic and supersonic data were found to match the existing model well. As could be expected, the hypersonic data do not match as well due to assumptions, such as the Strong Reynolds Analogy, that were made in the derivation of the model. Until now, the model had never been compared to flows with Mach numbers as high as those included herein or to flow over a sharp cone geometry.
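
For context, a minimal sketch of the OPD computation the abstract describes, assuming the Gladstone-Dale relation n = 1 + K_GD*rho for air and a straight beam path; piston (mean) and tip/tilt removal follow the usual aero-optics convention. The array shapes and constant are illustrative, not the paper's setup.

    import numpy as np

    K_GD = 2.27e-4  # approximate Gladstone-Dale constant for air, m^3/kg

    def opd_rms(rho, dz, x):
        """OPD_rms from a density field rho(z, x) sampled along the beam
        (axis 0). Integrates n = 1 + K_GD*rho along the path, removes the
        aperture mean (piston) and a best-fit line (tip/tilt), and returns
        the root mean square of the residual optical path differences."""
        opl = (1.0 + K_GD * rho).sum(axis=0) * dz   # optical path length
        opd = opl - opl.mean()                      # remove piston
        tilt = np.polyfit(x, opd, 1)                # remove tip/tilt
        opd -= np.polyval(tilt, x)
        return np.sqrt(np.mean(opd**2))

    # Toy field: weak random density fluctuations over a 1 cm aperture.
    x = np.linspace(0.0, 0.01, 64)
    rho = 1.0 + 1e-3 * np.random.randn(100, 64)
    print(opd_rms(rho, dz=1e-4, x=x))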

More Details

Detecting Communities and Attributing Purpose to Human Mobility Data

Proceedings - Winter Simulation Conference

John, Esther W.L.; Cauthen, Katherine R.; Brown, Nathanael J.K.; Nozick, Linda

Many individuals' mobility is characterized by strong patterns of regular movement and is influenced by social relationships. Social networks are also often organized into overlapping communities which are associated in time or space. We develop a model that can generate the structure of a social network and attribute purpose to individuals' movements, based solely on records of individuals' locations over time. This model distinguishes the attributed purpose of check-ins based on temporal and spatial patterns in check-in data. Because no location-based social network dataset with authoritative ground truth exists to test our entire model, we generate large-scale datasets containing social networks and individual check-in data to test it. We find that our model reliably assigns community purpose to social check-in data and is robust over a variety of different situations.

More Details

Development of a Spatially Filtered Wavefront Sensor as an Aero-Optical Measurement Technique

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Butler, Luke; Gordeyev, Stanislav; Lynch, Kyle P.; Guildenbecher, Daniel

This paper validates the concept of a spatially filtered wavefront sensor, which uses a convergent-divergent beam to reduce sensitivity to aero-optical distortions near the focal point while retaining sensitivity at large beam diameters. This sensor was used to perform wavefront measurements in a cavity flow test section. The focal point was traversed to various spanwise locations across the test section, and the overall OPD_rms levels and aperture-averaged spectra of wavefronts were computed. It was demonstrated that the sensor was able to effectively suppress the stronger aero-optical signal from the cavity flow and recover the aero-optical signal from the boundary layer when the focal point was placed inside the shear region of the cavity flow. To model these measured quantities, additional collimated beam wavefronts were taken at various subsonic speeds in a wind tunnel test section with two turbulent boundary layers, and then in the cavity flow test section, where the signal from the cavity was dominant. The results from the experimental model agree with the measured convergent-divergent beam results, confirming that the spatial filtering properties of the proposed sensor are due to attenuating effects at small apertures.

More Details

Evaluation of Microhole drilling technology for geothermal exploration, assessment, and monitoring

Transactions - Geothermal Resources Council

Su, Jiann-Cherng; Mazumdar, Anirban; Buerger, Stephen P.; Foris, Adam J.; Faircloth, Brian

The well-documented promise of microholes has not yet been realized. A fundamental issue is that delivering high weight-on-bit (WOB), high-torque rotational horsepower to a conventional drill bit does not scale down to the hole sizes necessary to achieve the envisioned cost savings. Prior work has focused on miniaturizing the various systems used in conventional drilling technologies, such as motors, steering systems, mud handling and logging tools, and coiled tubing drilling units. As smaller diameters are targeted for these low-WOB drilling technologies, several associated sets of challenges arise. For example, energy transfer efficiency in small-diameter percussive hammers differs from that in conventional hammers, and finding adequate methods of generating rotation at the bit is also more difficult. A low weight-on-bit microhole drilling system was proposed, conceived, and tested on a limited scale. The utility of a microhole was quantified using flow analyses to establish bounds for usable microholes. Two low weight-on-bit rock reduction techniques were evaluated and developed: a low technology readiness level concept, the laser-assisted mechanical drill, and a modified commercial percussive hammer. Supporting equipment, including downhole rotation and a drill string twist reaction tool, was developed to enable wireline deployment of a drilling assembly. Although the various subsystems were tested and shown to work well individually in a laboratory environment, there is still room for improvement before the microhole drilling system is ready to be deployed. Ruggedizing the various components will be key, as will adding capacity to the conveyance system for pullback and deployment.

More Details

Validation of multi-frame piv image interrogation algorithms in the spectral domain

AIAA Scitech 2021 Forum

Beresh, Steven J.; Neal, Douglas R.; Sciacchitano, Andrea

Multi-frame correlation algorithms for time-resolved PIV have been shown in previous studies to reduce noise and error levels in comparison with conventional two-frame correlations. However, none of these prior efforts tested the accuracy of the algorithms in spectral space. Even should a multi-frame algorithm reduce the error of vector computations summed over an entire data set, this does not imply that these improvements are observed at all frequencies. The present study examines the accuracy of velocity spectra in comparison with simultaneous hot-wire data. Results indicate that the high-frequency content of the spectrum is very sensitive to choice of the interrogation algorithm and may not return an accurate response. A top-hat-weighted sliding sum-of-correlation is contaminated by high-frequency ringing whereas Gaussian weighting is indistinguishable from a low-pass filtering effect. Some evidence suggests the pyramid correlation modestly increases bandwidth of the measurement at high frequencies. The apparent benefits of multi-frame interrogation algorithms may be limited in their ability to reveal additional spectral content of the flow.

More Details

Electrical conduction and polarization of silica-based capacitors under electro-thermal poling

Annual Report - Conference on Electrical Insulation and Dielectric Phenomena, CEIDP

Nieves-Sanabria, Cesar; Wilke, Rudeger; Bishop, Sean R.; Lanagan, Michael T.; Clem, Paul

Electrical conduction in silica-based capacitors under a combined intermediate electric field and temperature (2.5-10 kV/mm, 50-300 °C) is dominated by localized motion of high-mobility ions such as sodium. Thermally stimulated polarization and depolarization current (TSPC/TSDC) characterization was carried out on poled fused silica and AF32 glass samples. Two relaxation mechanisms were found during the depolarization step, and an anomalous response for the second TSDC peak was observed. Absorption current measurements were performed on the glass samples, and a time-dependent response was observed under different electro-thermal conditions. It was found that at low temperature (T = 175 °C) and short times, the current follows a linear behavior (I ∝ V), while at high temperature (T = 250 °C) the current follows I ∝ V^0.5. The TSPC/TSDC and absorption current measurement results led to the conclusion that (1) Poole-Frenkel emission dominates conduction at high temperatures and longer times and that (2) ionic blockage and/or H+/H3O+ injection are responsible for the observed anomalous current response.
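
The reported I ∝ V versus I ∝ V^0.5 distinction amounts to a power-law exponent, which can be estimated from a log-log fit. The sketch below uses synthetic data and is not the authors' analysis.

    import numpy as np

    def conduction_exponent(voltage, current):
        """Fit I = c * V**n on log-log axes; n ~ 1 suggests ohmic transport,
        while n ~ 0.5 matches the square-root dependence reported at 250 C."""
        n, _ = np.polyfit(np.log(voltage), np.log(current), 1)
        return n

    V = np.array([1e3, 2e3, 4e3, 8e3])
    I = 1e-9 * np.sqrt(V)            # synthetic data following I ~ V**0.5
    print(f"fitted exponent n = {conduction_exponent(V, I):.2f}")  # ~0.50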

More Details

Scaling of Reflected Shock Bifurcation at High Incident Mach Number

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Daniel, Kyle A.; Lynch, Kyle P.; Downing, Charley R.; Wagner, Justin L.

Measurements of bifurcated reflected shocks over a wide range of incident shock Mach numbers, 2.9 < Ms < 9.4, are carried out in Sandia’s high temperature shock tube. The size of the non-uniform flow region associated with the bifurcation is measured using high-speed schlieren imaging. Measurements of the bifurcation height are compared to historical data from the literature. A correlation for the bifurcation height from Petersen et al. [1] is examined and found to overestimate the bifurcation height for Ms > 6. An improved correlation is introduced that can predict the bifurcation height over the range 2.15 < Ms < 9.4. The time required for the non-uniform flow region to pass over a stationary sensor is also examined. A non-dimensional time related to the induced velocity behind the shock and the distance to the endwall is introduced. This non-dimensional time collapses the data and yields a new correlation that predicts the temporal duration of the bifurcation.
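
A small illustration, assuming the induced velocity is taken from the standard one-dimensional moving-shock relation and the non-dimensional time is formed as passage time times induced velocity over endwall distance; all numbers below are made up, and the paper's exact correlation is not reproduced.

    def induced_velocity(Ms, a1, gamma=1.4):
        """Post-shock (induced) gas velocity behind a moving normal shock,
        from the standard 1-D relation u_p = (2*a1/(gamma+1))*(Ms - 1/Ms)."""
        return 2.0 * a1 / (gamma + 1.0) * (Ms - 1.0 / Ms)

    # Hypothetical nondimensionalization in the spirit of the abstract:
    # scale the sensor dwell time by induced velocity over endwall distance.
    Ms, a1, L = 6.0, 347.0, 0.5      # shock Mach number, m/s, m (made up)
    u_p = induced_velocity(Ms, a1)
    t_passage = 2.0e-4               # s, a made-up sensor dwell time
    t_star = t_passage * u_p / L
    print(f"u_p = {u_p:.0f} m/s, t* = {t_star:.3f}")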

More Details

Configurable Microgrid Modelling with Multiple Distributed Energy Resources for Dynamic System Analysis

IEEE Power and Energy Society General Meeting

Darbali-Zamora, Rachid; Wilches-Bernal, Felipe; Naughton, Brian

As renewable energy sources become more dominant in electric grids, particularly in microgrids, new approaches for designing, operating, and controlling these systems are required. The integration of renewable energy devices such as photovoltaics and wind turbines requires system design considerations to mitigate potential power quality issues caused by highly variable generation. Power system simulations play an important role in understanding the stability and performance of electrical power systems. This paper discusses the modeling of the Global Laboratory for Energy Asset Management and Manufacturing (GLEAMM) microgrid integrated with the Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) test site, providing a dynamic simulation model for power flow and transient stability analysis. A description of the system as well as the dynamic models is presented.

More Details

Analysis and optimization of a closed loop geothermal system in hot rock reservoirs

Transactions - Geothermal Resources Council

Vasyliv, Yaroslav V.; Bran Anleu, Gabriela A.; Kucala, Alec; Subia, Samuel R.; Martinez, Mario J.

Recent advances in drilling technology, especially horizontal drilling, have prompted a renewed interest in the use of closed-loop geothermal energy extraction systems. Deeply placed closed loops in hot wet or dry rock reservoirs offer the potential to exploit the vast thermal energy in the subsurface. To better understand the potential and limitations of recovering thermal and mechanical energy from closed-loop geothermal systems (CLGS), a collaborative study is underway to investigate an array of system configurations, working fluids, geothermal reservoir characteristics, operational periods, and heat transfer enhancements (Parisi et al., 2021; White et al., 2021). This paper presents numerical results for the heat exchange between a closed-loop system (single U-tube) circulating water as the working fluid in a hot rock reservoir. The characteristics of the reservoir are based on the Frontier Observatory for Research in Geothermal Energy (FORGE) site, near Milford, Utah. To determine optimal system configurations, a mechanical (electrical) objective function is defined for a bounded optimization study over a specified design space. The objective function includes a surface plant thermal-to-mechanical energy conversion factor, pump work, and a drilling capital energy cost. To complement the optimization results, detailed parametric studies are also performed. The numerical model is built using the Sandia National Laboratories (SNL) massively parallel Sierra computational framework, while the optimization and parametric studies are driven using the SNL Dakota software package. Together, the optimization and parametric studies presented in this paper will help assess the impact of CLGS parameters (e.g., flow rate, tubing length and diameter, insulation length) on CLGS performance and optimal energy recovery.

More Details

Machine learning methods for estimating down-hole depth of cut

Transactions - Geothermal Resources Council

Sacks, Jacob; Choi, Kevin; Bruss, Kathryn; Su, Jiann-Cherng; Buerger, Stephen P.; Mazumdar, Anirban; Boots, Byron

Depth of cut (DOC) refers to the depth a bit penetrates into the rock during drilling and is an important quantity for estimating drilling performance. In general, DOC is determined by dividing the rate of penetration (ROP) by the rotational speed. Surface-based sensors at the top of the drill string are used to determine both ROP and rotational speed. However, ROP measurements using top-hole sensors are noisy and often require taking a derivative; filtering reduces the update rate, and both top-hole linear and angular velocity can be delayed relative to downhole behavior. In this work, we describe recent progress toward estimating ROP and DOC using downhole sensing. We assume downhole measurements of torque, weight-on-bit (WOB), and rotational speed, and anticipate that these measurements are physically realizable. Our hypothesis is that these measurements can provide more rapid and accurate measures of drilling performance. We examine a range of machine learning techniques for estimating ROP and DOC based on this local sensing paradigm. We show that machine learning can provide rapid and accurate performance when evaluated on experimental data taken from Sandia's Hard Rock Drilling Facility. These results have the potential to enable better drilling assessment, improved control, and extended component lifetimes.
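
As a hedged illustration of the local-sensing paradigm (not the authors' models or data), the sketch below trains a generic regressor to map downhole torque, WOB, and rotational speed to depth of cut on synthetic data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical feature layout; the paper's dataset and models differ.
    # Columns: downhole torque (N*m), weight-on-bit (N), rotational speed (rev/s)
    rng = np.random.default_rng(0)
    X = rng.uniform([10, 1e3, 1], [100, 2e4, 5], size=(2000, 3))
    doc = 1e-4 * X[:, 1] / X[:, 2] + rng.normal(0, 1e-2, 2000)  # synthetic target

    X_train, X_test, y_train, y_test = train_test_split(X, doc, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))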

More Details

Lost circulation in a hydrothermally cemented Basin-fill reservoir: Don A. Campbell Geothermal field, Nevada

Transactions - Geothermal Resources Council

Winn, Carmen; Dobson, Patrick; Ulrich, Craig; Kneafsey, Timothy; Lowry, Thomas S.; Akerley, John; Delwiche, Ben; Samuel, Abraham; Bauer, Stephen J.

Significant costs can result from losing circulation of drilling fluids in geothermal drilling. This paper is the second of four case studies of geothermal fields operated by Ormat Technologies, directed at forming a comprehensive strategy to characterize and address lost circulation in varying conditions. It examines the geologic context of, and common responses to, lost circulation in the loosely consolidated, shallow sedimentary reservoir of the Don A. Campbell geothermal field. The Don A. Campbell Geothermal Field lies in the southwest portion of Gabbs Valley, Nevada, along the eastern margin of the Central Walker Lane shear zone. The reservoir here is shallow and primarily in the basin fill, which is hydrothermally altered along fault zones. Wells in this reservoir are highly productive (250-315 L/s) with moderate temperatures (120-125 °C) and were drilled to an average depth of ~1500 ft (450 m). Lost circulation is frequently reported beginning at depths of about 800 ft, slightly shallower than the average casing shoe depth of 900-1000 ft (275-305 m). Reports of lost circulation frequently coincide with drilling through silicified basin fill. Strategies to address lost circulation differ above and below the cased interval; bentonite chips were used at shallow depths and aerated, gelled drilling fluids were used in the production intervals. Further study of this and other areas will contribute to a systematic understanding of lost circulation mitigation strategies informed by geologic context.

More Details

Advanced analytics of rig parameter data using rock reduction model constraints for improved drilling performance

Transactions - Geothermal Resources Council

Raymond, David W.; Foris, Adam J.; Norton, Jaiden; Mclennan, John

Drill rig parameter measurements are routinely used during deep well construction to monitor and guide drilling conditions for improved performance and reduced costs. While insightful into the drilling process, these measurements are of reduced value without a standard to aid in data evaluation and decision making. A method is demonstrated whereby rock reduction model constraints are used to interpret drilling response parameters; the method could be applied in real time to improve decision-making in the field and to further discern technology performance during post-drilling evaluations. Drill rig parameter data were acquired by drilling contractor Frontier Drilling and evaluated for two wells drilled at the DOE-sponsored Utah Frontier Observatory for Research in Geothermal Energy (FORGE) site. The subject wells are: 1) FORGE 16A(78)-32, a directional well with vertical depth to a kick-off point at 5892 ft and a 65-degree tangent to a measured depth of 10987 ft, and 2) FORGE 56-32, a vertical monitoring well to a measured depth of 9145 ft. Drilling parameters are evaluated using laboratory-validated rock reduction models for predicting the phenomenological response of drag bits (Detournay and Defourny, 1992) along with other model constraints in computational algorithms. The method is used to evaluate overall bit performance, develop rock strength approximations, determine bit aggressiveness, characterize frictional energy losses, evaluate bit wear rates, and detect the presence of drillstring vibrations contributing to bit failure; comparisons are made to observations of bit wear and damage. Analyses are also presented that correlate performance to bit run cost drivers, providing guidance on the relative tradeoff between bit penetration rate and life. The method presented has applicability to the development of advanced analytics on future geothermal wells using real-time electronic data recording for improved performance and reduced drilling costs.
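
For orientation, the E-S analysis underlying drag-bit response models of this kind can be sketched as follows, assuming the commonly used definitions of specific energy and drilling strength (E = 2T/(a^2*d), S = w/(a*d), with d the depth of cut per revolution); the numbers are illustrative only and this is not the paper's algorithm.

    def e_s_point(torque, wob, doc, bit_radius):
        """Specific energy E and drilling strength S for a drag bit, per the
        phenomenological model of Detournay & Defourny (1992). Points moving
        along the friction line E = E0 + mu*gamma*S indicate growing
        frictional losses and bit wear."""
        E = 2.0 * torque / (bit_radius**2 * doc)   # Pa
        S = wob / (bit_radius * doc)               # Pa
        return E, S

    # Made-up field numbers for illustration only:
    E, S = e_s_point(torque=2000.0, wob=50e3, doc=2e-3, bit_radius=0.108)
    print(f"E = {E/1e6:.1f} MPa, S = {S/1e6:.1f} MPa")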

More Details