Publications

Using Computation Effectively for Scalable Poisson Tensor Factorization: Comparing Methods beyond Computational Efficiency

2021 IEEE High Performance Extreme Computing Conference, HPEC 2021

Myers, Jeremy M.; Dunlavy, Daniel D.

Poisson Tensor Factorization (PTF) is an important method for analyzing patterns and relationships in multiway count data. In this work, we consider several algorithms for computing a low-rank PTF of tensors with sparse count data values via maximum likelihood estimation. Such an approach reduces to solving a nonlinear, non-convex optimization problem, which can leverage considerable parallel computation due to the structure of the problem. However, since the maximum likelihood estimator corresponds to the global minimizer of this optimization problem, it is important to consider how effective methods are both at leveraging this inherent parallelism and at computing a good approximation to the global minimizer. We present comparisons of multiple methods for PTF that illustrate the tradeoffs between computational efficiency and accurate computation of the maximum likelihood estimator, using synthetic and real-world data tensors to demonstrate some of the challenges that arise when choosing a method for a given tensor.
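
For reference, the optimization problem alluded to here has a standard form in the Poisson CP factorization literature (shown in generic notation; the paper's exact formulation is not reproduced in this abstract):

```latex
\min_{\boldsymbol{\lambda}, A, B, C \ge 0}
  f(\boldsymbol{\lambda}, A, B, C)
    = \sum_{i,j,k} \bigl( m_{ijk} - x_{ijk} \log m_{ijk} \bigr),
\qquad
m_{ijk} = \sum_{r=1}^{R} \lambda_r \, a_{ir} \, b_{jr} \, c_{kr}
```

The log term together with the nonnegativity constraints makes the problem non-convex, which is why different methods with comparable per-iteration cost can converge to different local minimizers.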

Gate Set Tomography

Quantum

Nielsen, Erik N.; Gamble, John K.; Rudinger, Kenneth M.; Scholten, Travis; Young, Kevin C.; Blume-Kohout, Robin J.

Gate set tomography (GST) is a protocol for detailed, predictive characterization of logic operations (gates) on quantum computing processors. Early versions of GST emerged around 2012-13, and since then it has been refined, demonstrated, and used in a large number of experiments. This paper presents the foundations of GST in comprehensive detail. The most important feature of GST, compared to older state and process tomography protocols, is that it is calibration-free. GST does not rely on pre-calibrated state preparations and measurements. Instead, it characterizes all the operations in a gate set simultaneously and self-consistently, relative to each other. Long sequence GST can estimate gates with very high precision and efficiency, achieving Heisenberg scaling in regimes of practical interest. In this paper, we cover GST’s intellectual history, the techniques and experiments used to achieve its intended purpose, data analysis, gauge freedom and fixing, error bars, and the interpretation of gauge-fixed estimates of gate sets. Our focus is fundamental mathematical aspects of GST, rather than implementation details, but we touch on some of the foundational algorithmic tricks used in the pyGSTi implementation.

Using Monitoring Data to Improve HPC Performance via Network-Data-Driven Allocation

2021 IEEE High Performance Extreme Computing Conference, HPEC 2021

Zhang, Yijia; Aksar, Burak; Aaziz, Omar R.; Schwaller, Benjamin S.; Brandt, James M.; Leung, Vitus J.; Egele, Manuel; Coskun, Ayse K.

On high-performance computing (HPC) systems, job allocation strategies control the placement of a job among available nodes. Because placement changes a job's communication performance, allocation can significantly affect the execution times of many HPC applications. Existing allocation strategies typically make decisions based on resource limits, network topology, communication patterns, etc. However, system network performance at runtime is seldom consulted during allocation, even though it significantly affects job execution times. In this work, we demonstrate how monitoring data can improve HPC system performance by proposing a Network-Data-Driven (NeDD) job allocation framework, which monitors the network performance of an HPC system at runtime and allocates resources based on both network performance and job characteristics. NeDD characterizes system network performance by collecting network traffic statistics on each router link, and it characterizes a job's sensitivity to network congestion by collecting Message Passing Interface (MPI) statistics. During allocation, NeDD pairs network-sensitive (network-insensitive) jobs with nodes whose parent routers have low (high) network traffic. Through experiments on a large HPC system, we demonstrate that NeDD reduces the execution time of parallel applications by 11% on average and by up to 34%.
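
A rough sketch of the pairing rule described in this abstract (the data structures, names, and one-job-per-node simplification are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical sketch of NeDD-style pairing: jobs ranked by measured MPI
# network sensitivity are matched to nodes ranked by their parent routers'
# observed traffic, so the most sensitive jobs get the quietest routers.
def nedd_allocate(jobs, nodes):
    """jobs: [(job_id, mpi_sensitivity)]; nodes: [(node_id, router_traffic)]."""
    jobs_ranked = sorted(jobs, key=lambda j: j[1], reverse=True)  # most sensitive first
    nodes_ranked = sorted(nodes, key=lambda n: n[1])              # least congested first
    return {job_id: node_id
            for (job_id, _), (node_id, _) in zip(jobs_ranked, nodes_ranked)}

allocation = nedd_allocate(
    jobs=[("lammps", 0.9), ("hacc", 0.2)],
    nodes=[("n101", 0.7), ("n102", 0.1)],
)
# -> {"lammps": "n102", "hacc": "n101"}
```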

The marine and hydrokinetic toolkit (MHKiT) for data quality control and analysis

Proceedings of the European Wave and Tidal Energy Conference

Olson, Sterling S.; Fao, Rebecca; Coe, Ryan G.; Ruehl, Kelley M.; Driscoll, Frederick; Gunawan, Budi G.; Lansing, Carina; Ivanov, Hristo

The ability to handle data is critical at all stages of marine energy development. The Marine and Hydrokinetic Toolkit (MHKiT) is an open-source marine energy software package that includes modules for ingesting, quality controlling, processing, visualizing, and managing data. MHKiT-Python and MHKiT-MATLAB provide robust and verified functions needed by the marine energy community to standardize data processing. Calculations and visualizations adhere to International Electrotechnical Commission technical specifications and other guidelines. A resource assessment of National Data Buoy Center buoy 46050 near PACWAVE is performed using MHKiT, and we discuss comparisons to the resource assessment performed by Dunkle et al. (2020).
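
To illustrate the kind of calculation such toolkits standardize, a wave-resource computation from spectral moments can be sketched in plain NumPy (this follows standard spectral definitions and is not MHKiT's actual API):

```python
import numpy as np

def spectral_moment(f, S, n):
    """n-th spectral moment m_n = integral of f^n * S(f) df (trapezoidal rule)."""
    y = f**n * S
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(f))

def wave_resource(f, S):
    """Significant wave height [m] and energy period [s] from a spectrum S(f) [m^2/Hz]."""
    m0 = spectral_moment(f, S, 0)
    Hm0 = 4.0 * np.sqrt(m0)                # significant wave height, Hm0 = 4*sqrt(m0)
    Te = spectral_moment(f, S, -1) / m0    # energy period, Te = m_-1 / m0
    return Hm0, Te

f = np.linspace(0.05, 0.5, 200)            # frequency grid [Hz]
S = np.exp(-((f - 0.1) / 0.02) ** 2)       # synthetic spectral density
print(wave_resource(f, S))
```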

Advertising DNS Protocol Use to Mitigate DDoS Attacks

Proceedings - International Conference on Network Protocols, ICNP

Davis, Jacob D.; Deccio, Casey

The Domain Name System (DNS) has been frequently abused for distributed denial-of-service (DDoS) attacks and cache poisoning because it relies on the User Datagram Protocol (UDP). Since UDP is connectionless, it is trivial for an attacker to spoof the source of a DNS query or response. While other mechanisms, such as the Transmission Control Protocol (TCP) and DNS Cookies, provide secure transport and identity management, there is currently no method for a client to state that it uses only a given protocol. This paper presents a new method to allow protocol enforcement: DNS Protocol Advertisement Records (DPAR). Advertisement records allow Internet Protocol (IP) address subnets to post a public record in the reverse DNS zone stating which DNS mechanisms are used by their clients. DNS servers may then look up this record and require a client to use the stated mechanism, in turn preventing an attacker from sending spoofed messages over UDP. In this paper, we define the specification for DNS Protocol Advertisement Records, the considerations that were made, and comparisons to alternative approaches. We additionally estimate the effectiveness of advertisements in preventing DDoS attacks and the expected burden on DNS servers.
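
A client-side sketch of the advertisement lookup using dnspython (the "_dpar" label and the TXT payload format are hypothetical, since DPAR is a proposal rather than a deployed standard):

```python
# Hypothetical sketch: look up a DNS Protocol Advertisement Record for a
# client address by querying the reverse DNS zone. The "_dpar" label and the
# "proto=..." TXT payload are illustrative assumptions.
import dns.resolver
import dns.reversename

def advertised_protocols(client_ip):
    rev_name = dns.reversename.from_address(client_ip)  # 192.0.2.10 -> 10.2.0.192.in-addr.arpa.
    qname = "_dpar." + str(rev_name)
    try:
        answer = dns.resolver.resolve(qname, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no advertisement posted: server falls back to allowing UDP
    for rdata in answer:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("proto="):
            return txt.split("=", 1)[1].split(",")  # e.g., ["tcp", "cookies"]
    return None
```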

Monte-Carlo modeling and design of a high-resolution hyperspectral computed tomography system with a multi-material patterned anode for material identification applications

Proceedings of SPIE - The International Society for Optical Engineering

Dalton, Gabriella D.; Laros, James H.; Clifford, Joshua M.; Kemp, Emily K.; Limpanukorn, Ben L.; Jimenez, Edward S.

Industrial and security communities leverage x-ray computed tomography for several applications in non-destructive evaluation, such as material detection and metrology. Many of these applications ultimately reach a limit because most x-ray systems have a nonlinear mathematical operator due to the Bremsstrahlung radiation emitted from the x-ray source. This work proposes a design for a multi-metal patterned anode coupled with a hyperspectral X-ray detector to improve spatial resolution, absorption signal, and overall data quality for various quantitative applications. The union of a multi-metal patterned anode x-ray source with an energy-resolved photon-counting detector permits the generation and detection of a preferential set of X-ray energy peaks. When photons near these peaks are detected, while photons outside this neighborhood are rejected, the overall quality of the image is improved by linearizing the operator that defines the image formation. Additionally, the effective X-ray focal spot size allows for further improvement of the image quality by increasing resolution. Previous works used machine learning techniques to analyze the hyperspectral computed tomography signal and reliably identify and discriminate a wide range of materials based on a material's composition; improving data quality through a multi-material patterned anode will further enhance these identification and classification methods. This work presents initial investigations of a multi-metal patterned anode along with a hyperspectral detector using a general-purpose Monte Carlo particle transport code known as PHITS, version 3.24. If successful, these results will have tremendous impact on several nondestructive evaluation applications in industry, security, and medicine.

Analysis of ALD Dielectric Leakage in Bulk GaN MOS Devices

2021 IEEE 8th Workshop on Wide Bandgap Power Devices and Applications, WiPDA 2021 - Proceedings

Glaser, Caleb E.; Binder, Andrew T.; Yates, Luke Y.; Allerman, A.A.; Feezell, Daniel F.; Kaplar, Robert K.

This study analyzes the ability of various processing techniques to reduce leakage current in vertical GaN MOS devices. Careful analysis is required to determine suitable gate dielectric materials for vertical GaN MOSFET devices, since these materials are largely responsible for setting the threshold voltage, reducing gate leakage, and determining semiconductor/dielectric interface traps. SiO2, Al2O3, and HfO2 films were deposited by Atomic Layer Deposition (ALD) and subjected to treatments nominally identical to those in a vertical GaN MOSFET fabrication sequence. This work determines mechanisms for reducing gate leakage by reducing surface contaminants and interface traps using pre-deposition cleans, elevated-temperature depositions, and post-deposition anneals. Breakdown measurements indicate that ALD Al2O3 is an ideal candidate for a MOSFET gate dielectric, with a breakdown electric field near 7.5 MV/cm and no high-temperature annealing required to increase breakdown strength. SiO2 ALD films treated with a post-deposition anneal at 850 °C for 30 minutes show a significant reduction in leakage current while maintaining breakdown at 5.5 MV/cm. HfO2 films show breakdown nominally identical to annealed SiO2 films, but with significantly higher leakage. Additionally, HfO2 films show more sensitivity to high-temperature annealing, suggesting that more research into surface cleans is necessary to improve these films for MOSFET gate applications.

Introducing PRIMRE’s MRE Software Knowledge Hub (February 2021)

Proceedings of the European Wave and Tidal Energy Conference

Ruehl, Kelley M.; Topper, Mathew B.R.; Faltas, Mina A.; Lansing, Carina; Weers, Jon; Driscoll, Frederick

This paper focuses on the role of the Marine Renewable Energy (MRE) Software Knowledge Hub on the Portal and Repository for Information on Marine Renewable Energy (PRIMRE). The MRE Software Knowledge Hub provides online services for MRE software users and developers, and seeks to develop assessments and recommendations for improving MRE software in the future. Two online software discovery platforms, known as the Code Hub and the Code Catalog, are provided. The Code Hub is a collection of open-source MRE software that includes a landing page with search functionality, linked to files hosted on the MRE Code Hub GitHub organization. The Code Catalog is a searchable online platform for discovering useful (open-source or commercial) software packages, tools, codes, and other software products. To gather information about the existing MRE software landscape, a software survey is being performed, the preliminary results of which are presented herein. Initially, the data collected in the MRE software survey will be used to populate the MRE Software Knowledge Hub on PRIMRE, and future work will use data from the survey to perform a gap analysis and develop a vision for future software development. Additionally, as one of PRIMRE's roles is to support the development of MRE software among project partners, a body of knowledge relating to best practices has been gathered. An early draft of new guidance developed from this knowledge is presented.

DC Bus Collection of Type-4 Wind Turbine Farms with Phasing Control to Minimize Energy Storage

IET Conference Proceedings

Weaver, Wayne W.; Wilson, David G.; Robinett, Rush D.; Young, Joseph

Typical Type-4 wind turbines use DC-link inverters to couple the electrical machine to the power grid. Each wind turbine has two power conversion steps; therefore, an N-turbine farm will have 2N power converters. This work presents a DC bus collection system for a Type-4 wind farm that reduces the overall required number of converters and minimizes the energy storage system (ESS) requirements. This approach requires one conversion step per turbine, one converter for the ESS, and a single grid-coupling converter, which leads to N + 2 converters for the wind farm (for example, 52 rather than 100 converters for a 50-turbine farm) and thus significant cost savings. However, one of the trade-offs of a DC collection system is the need for increased energy storage to filter power variations and maintain power quality to the grid. This paper presents a novel approach to an effective DC bus collection system design. The DC collection for the wind farm implements a power phasing control method between turbines that filters power variations and improves power quality while minimizing the need for added energy storage hardware. The phasing control takes advantage of a novel power packet network concept with nonlinear power flow control design techniques that guarantee both stability and enhanced dynamic performance. This paper presents the theoretical design of the DC collection and phasing control. To demonstrate the efficacy of this approach, detailed numerical simulation examples are presented.

Etched and Regrown Vertical GaN Junction Barrier Schottky Diodes

2021 IEEE 8th Workshop on Wide Bandgap Power Devices and Applications, WiPDA 2021 - Proceedings

Binder, Andrew B.; Pickrell, Gregory P.; Allerman, A.A.; Dickerson, Jeramy R.; Yates, Luke Y.; Steinfeldt, Jeffrey A.; Glaser, Caleb E.; Crawford, Mary H.; Armstrong, Andrew A.; Sharps, Paul; Kaplar, Robert K.

This work provides the first demonstration of vertical GaN Junction Barrier Schottky (JBS) rectifiers fabricated by etch and regrowth of p-GaN. A reverse blocking voltage near 1500 V was achieved at 1 mA reverse leakage, with a sub-1 V turn-on voltage and a specific on-resistance of 10 mΩ-cm2. This result is compared to other JBS devices reported in the literature, and our device demonstrates the lowest leakage slope at high reverse bias. A large initial leakage current is present near zero bias, which is attributed to a combination of inadequate etch-damage removal and passivation-induced leakage current.

An Isolated Bidirectional DC-DC Converter with High Voltage Conversion Ratio and Reduced Output Current Ripple

2021 IEEE 8th Workshop on Wide Bandgap Power Devices and Applications, WiPDA 2021 - Proceedings

Zhang, Zhining; Hu, Boxue; Zhang, Yue; Wang, Jin; Mueller, Jacob M.; Garcia Rodriguez, Luciano A.; Ray, Anindya; Atcitty, Stanley A.

This paper presents an isolated bidirectional dc/dc converter for battery energy storage applications. The two main features of the proposed circuit topology are a high voltage-conversion ratio and reduced battery current ripple. The primary-side circuit is a quasi-switched-capacitor circuit with reduced voltage stress on the switching devices and a 3:1 voltage step-down ratio, which reduces the turns ratio of the transformer to 6:1:1. The secondary-side circuit operates in an interleaved manner by utilizing the split magnetizing inductance of the transformer, which not only helps increase the step-down ratio but also reduces the battery current ripple. Similar to the dual-active-bridge circuit, phase-shift control is implemented to regulate the operating power of the circuit. A 1-kW, 300-kHz, 380-420 V/20-33 V GaN-based circuit prototype is currently under fabrication, and preliminary test results are presented.

Modeling and predicting power from a WEC array

Oceans Conference Record (IEEE)

Coe, Ryan G.; Bacelli, Giorgio B.; Gaebele, Daniel; Cotten, Alfred; Mcnatt, Cameron; Wilson, David G.; Weaver, Wayne; Kasper, Jeremy L.; Khalil, Mohammad K.; Dallman, Ann R.

This study presents a numerical model of a wave energy converter (WEC) array. The model will be used in subsequent work to study the ability of data assimilation to support power prediction from WEC arrays and WEC array design. Here, we focus on the design, modeling, and control of the WEC array. A case study is performed for a small remote Alaskan town. Using an efficient method for modeling the linear interactions within a homogeneous array, we produce a model and predictionless feedback controllers for the devices within the array. The model is applied to study the effects of spectral wave forecast errors on power output. The results of this analysis show that the power performance of the WEC array is most strongly affected by errors in predicting the spectral period, but that reductions in performance can realistically be limited to less than 10% given typical data-assimilation-based spectral forecasting accuracy levels.

RingIR AG-4000 Testing

Glen, Andrew G.; Mayes, Cathryn M.

The AG-4000 detector can identify gas-phase species using molecular fingerprinting and has potential application for SARS-CoV-2 detection in near real time. As part of the development process, Sandia will utilize the biological aerosol test bed deployed at the Aerosol Complex to evaluate the penetration of MS2 bacteriophage aerosol through the RingIR system. The objective of this project is to provide experimentally derived measurements of the RingIR AG-4000's penetration efficiency, including an external exhaust filter for mitigating exhaust aerosol, with MS2 bacteriophage operating as a biological surrogate for the SARS-CoV-2 virus.

Union: A Unified HW-SW Co-Design Ecosystem in MLIR for Evaluating Tensor Operations on Spatial Accelerators

Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT

Jeong, Geonhwa; Kestor, Gokcen; Chatarasi, Prasanth; Parashar, Angshuman; Tsai, Po A.; Rajamanickam, Sivasankaran R.; Gioiosa, Roberto; Krishna, Tushar

To meet the extreme compute demands of deep learning across commercial and scientific applications, dataflow accelerators are becoming increasingly popular. While these “domain-specific” accelerators are not fully programmable like CPUs and GPUs, they retain varying levels of flexibility with respect to data orchestration, i.e., dataflow and tiling optimizations to enhance efficiency. Several challenges arise when designing new algorithms and mapping approaches to execute those algorithms for a target problem on new hardware, and previous works have addressed these challenges individually. To address them as a whole, we present Union, a HW-SW co-design ecosystem for spatial accelerators built within the popular MLIR compiler infrastructure. Our framework allows exploring different algorithms and their mappings on several accelerator cost models. Union also includes a plug-and-play library of accelerator cost models and mappers that can easily be extended. The algorithms and accelerator cost models are connected via a novel mapping abstraction that captures the map space of spatial accelerators, which can be systematically pruned based on constraints from the hardware, workload, and mapper. We demonstrate the value of Union for the community with several case studies that examine offloading different tensor operations (CONV/GEMM/Tensor Contraction) on diverse accelerator architectures using different mapping schemes.

Computational Optimization of Mechanical Energy Transduction (COMET) Toolkit

IEEE International Ultrasonics Symposium, IUS

Kohtanen, Eetu; Sugino, Christopher; Allam, Ahmed; El-Kady, I.

Ultrasonic transducers can be leveraged to transmit power and data through metallic enclosures, such as Faraday cages, for which standard electromagnetic methods are infeasible. The design of these systems features a number of variables that must be carefully tuned for optimal data and power transfer rate and efficiency. The objective of this work is to present COMET (Computational Optimization of Mechanical Energy Transduction), a toolkit that streamlines the design process and analysis of such transducer systems. The toolkit features flexible tools for introducing an arbitrary number of backing/bonding layers, material libraries, parameter sweeps, and optimization.

Bandwidth Enhancement Strategies for Acoustic Data Transmission by Piezoelectric Transduction

IEEE International Ultrasonics Symposium, IUS

Gerbe, Romain; Ruzzene, Massimo; Sugino, Christopher; Erturk, Alper; Steinfeldt, Jeffrey A.; Oxandale, Samuel W.; Reinke, Charles M.; El-Kady, I.

Several applications, such as underwater vehicles or waste containers, require the ability to transfer data from transducers enclosed by metallic structures. In these cases, Faraday shielding makes electromagnetic transmission highly inefficient, and suggests the employment of ultrasonic transmission as a promising alternative. While ultrasonic data transmission by piezoelectric transduction provides a practical solution, the amplitude of the transmitted signal strongly depends on acoustic resonances of the transmission line, which limits the bandwidth over which signals are sent and the rate of data transmission. The objective of this work is to investigate piezoelectric acoustic transducer configurations that enable data transmission at a relatively constant amplitude over large frequency bands. This is achieved through structural modifications of the transmission line, which includes layering of the transducers, as well as the introduction of electric circuits connected to both transmitting and receiving transducers. Both strategies lead to strong enhancements in the available bandwidth and show promising directions for the design of effective acoustic transmission across metallic barriers.

Detachable Dry-Coupled Ultrasonic Power Transfer Through Metallic Enclosures

IEEE International Ultrasonics Symposium, IUS

Allam, Ahmed; Patel, Herit; Sugino, Christopher; Arrington, Christian L.; St John, Christopher S.; Steinfeldt, Jeffrey A.; Erturk, Alper; El-Kady, I.

Ultrasonic waves can be used to transfer power and data to electronic devices in sealed metallic enclosures. Two piezoelectric transducers are used to transmit and receive elastic waves that propagate through the metal. For efficient power transfer, both transducers are typically bonded to the metal or coupled with a gel, which limits device portability. We present an ultrasonic power transfer system with a detachable transmitter that uses a dry elastic layer and a magnetic joint for efficient coupling. We show that the system can deliver more than 2 W of power to an electric load at 50% efficiency.

Advanced analytics of rig parameter data using rock reduction model constraints for improved drilling performance

Transactions - Geothermal Resources Council

Raymond, David W.; Foris, Adam J.; Norton, Jaiden; Mclennan, John

Drill rig parameter measurements are routinely used during deep well construction to monitor and guide drilling conditions for improved performance and reduced costs. While insightful into the drilling process, these measurements are of reduced value without a standard to aid data evaluation and decision making. A method is demonstrated whereby rock reduction model constraints are used to interpret drilling response parameters; the method could be applied in real time to improve decision-making in the field and to further discern technology performance during post-drilling evaluations. Drill rig parameter data were acquired by drilling contractor Frontier Drilling and evaluated for two wells drilled at the DOE-sponsored site, the Utah Frontier Observatory for Research in Geothermal Energy (FORGE). The subject wells are: 1) FORGE 16A(78)-32, a directional well with a vertical depth to a kick-off point at 5892 ft and a 65-degree tangent to a measured depth of 10987 ft, and 2) FORGE 56-32, a vertical monitoring well to a measured depth of 9145 ft. Drilling parameters are evaluated using laboratory-validated rock reduction models for predicting the phenomenological response of drag bits (Detournay and Defourny, 1992), along with other model constraints, in computational algorithms. The method is used to evaluate overall bit performance, develop rock strength approximations, determine bit aggressiveness, characterize frictional energy losses, evaluate bit wear rates, and detect the presence of drillstring vibrations contributing to bit failure; comparisons are made to observations of bit wear and damage. Analyses are also presented that correlate performance to bit run cost drivers to provide guidance on the relative tradeoff between bit penetration rate and life. The method presented has applicability to the development of advanced analytics on future geothermal wells using real-time electronic data recording for improved performance and reduced drilling costs.

Scoping and concept design of a WEC for autonomous power

Oceans Conference Record (IEEE)

Korde, Umesh A.; Gish, L.A.; Bacelli, Giorgio B.; Coe, Ryan G.

This paper reports results from an ongoing investigation of potential ways to utilize small wave energy devices that can be transported in, and deployed from, torpedo tubes. The devices are designed to perform designated ocean measurement operations and thus need to convert enough energy to power onboard sensors, while storing any excess energy to support vehicle recharging operations. This paper examines a traditional tubular oscillating water column device, with particular interest in designs that optimize power converted from shorter wind-sea waves. A two-step design procedure is investigated, wherein a more approximate two-degree-of-freedom model is first used to identify the relative dimensions (of device elements) that optimize power conversion from relative oscillations between the device elements. A more rigorous mathematical model based on the hydrodynamics of oscillating pressure distributions within solid oscillators is then used to provide the hydrodynamic coefficients, forces, and flow rates for the device. These results provide a quick but rigorous way to estimate the energy conversion performance of the device in various wave climates, while enabling more accurate design of the power takeoff and energy storage systems.

Sandia 7uPCX critical experiments exploring the effects of fuel-to-water ratio variations

Transactions of the American Nuclear Society

Laros, James H.; Harms, Gary A.; Campbell, Rafe C.; Hanson, Christina B.

The Sandia Critical Experiments (SCX) Program provides a specialized facility for performing water-moderated and -reflected critical experiments with UO2 fuel rod arrays. A history of safe reactor operations and flexibility in reactor core configuration has resulted in the completion of several benchmark critical experiment evaluations that are published in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook; the LEU-COMP-THERM-078 and LEU-COMP-THERM-080 evaluations from the handbook provide similar cases for reference. The set of experiments described here was performed using the Seven Percent Critical Experiment (7uPCX) fuel to measure the effects of decreasing the fuel-to-water volume ratio on the critical array size. This was accomplished by using fuel loading patterns to effectively increase the pitch of the fuel arrays in the assembly. The fuel rod pitch variations provided assembly configurations that ranged from strongly undermoderated to slightly overmoderated.

Low-Communication Asynchronous Distributed Generalized Canonical Polyadic Tensor Decomposition

2021 IEEE High Performance Extreme Computing Conference, HPEC 2021

Lewis, Cannada L.; Phipps, Eric T.

In this work, we show that reduced-communication algorithms for distributed stochastic gradient descent improve the time per epoch and strong scaling of the Generalized Canonical Polyadic (GCP) tensor decomposition, but at a cost: achieving convergence becomes more difficult. The implementation, based on MPI, shows that while one-sided algorithms offer a path to asynchronous execution, the performance benefits of an optimized allreduce are difficult to beat.
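
The synchronous allreduce baseline that the one-sided variants are measured against can be sketched with mpi4py (a generic distributed SGD step, not the paper's GCP implementation):

```python
# Each rank computes a gradient from its local tensor partition, gradients
# are summed across all ranks, and every rank applies the same averaged
# update. The local gradient function is a placeholder supplied by the caller.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def sgd_step(factors, local_gradient, lr=1e-3):
    grad = local_gradient(factors)                  # from local data only
    global_grad = np.empty_like(grad)
    comm.Allreduce(grad, global_grad, op=MPI.SUM)   # blocking; all ranks synchronize
    factors -= lr * global_grad / comm.Get_size()
    return factors

theta = np.zeros(10)
for _ in range(100):
    theta = sgd_step(theta, lambda th: th - 1.0, lr=0.1)  # toy quadratic objective
```

One-sided variants replace the blocking Allreduce with put/get-style accumulation, gaining asynchrony at the price of applying updates computed from stale factor values, which is the convergence difficulty noted above.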

StressBench: A Configurable Full System Network and I/O Benchmark Framework

2021 IEEE High Performance Extreme Computing Conference, HPEC 2021

Chester, Dean G.; Groves, Taylor; Hammond, Simon D.; Law, Tim; Wright, Steven A.; Smedley-Stevenson, Richard; Fahmy, Suhaib A.; Mudalidge, Gihan R.; Jarvis, Stephen A.

We present StressBench, a network benchmarking framework written for testing MPI operations and file I/O concurrently. It is designed specifically to execute MPI communication and file access patterns that are representative of real-world scientific applications. Existing tools consider either worst-case congestion with small abstract patterns or peak performance with simplistic patterns. StressBench allows for a richer study of congestion by allowing orchestration of network load scenarios that are representative of those typically seen at HPC centres, something that is difficult to achieve with existing tools. We demonstrate the versatility of the framework from micro-benchmarks through to finely controlled congested runs across a cluster. Validation of the results using four proxy application communication schemes within StressBench against parent applications shows a maximum difference of 15%. Using the I/O modeling capabilities of StressBench, we are able to quantify the impact of file I/O on application traffic, showing how the framework can be used in procurement and performance studies.

Lost circulation in a hydrothermally cemented basin-fill reservoir: Don A. Campbell Geothermal Field, Nevada

Transactions - Geothermal Resources Council

Winn, Carmen L.; Dobson, Patrick; Ulrich, Craig; Kneafsey, Timothy; Lowry, Thomas S.; Akerley, John; Delwiche, Ben; Samuel, Abraham; Bauer, Stephen

Significant costs can result from losing circulation of drilling fluids in geothermal drilling. This paper, the second of four case studies of geothermal fields operated by Ormat Technologies directed at forming a comprehensive strategy to characterize and address lost circulation in varying conditions, examines the geologic context of, and common responses to, lost circulation in the loosely consolidated, shallow sedimentary reservoir of the Don A. Campbell geothermal field. The Don A. Campbell Geothermal Field lies in the SW portion of Gabbs Valley in Nevada, along the eastern margin of the Central Walker Lane shear zone. The reservoir here is shallow and primarily in the basin fill, which is hydrothermally altered along fault zones. Wells in this reservoir are highly productive (250-315 L/s) with moderate temperatures (120-125 °C) and were drilled to an average depth of ~1500 ft (450 m). Lost circulation is frequently reported beginning at depths of about 800 ft, slightly shallower than the average casing shoe depth of 900-1000 ft (275-305 m). Reports of lost circulation frequently coincide with drilling through silicified basin fill. Strategies to address lost circulation differ above and below the cased interval; bentonite chips were used at shallow depths, and aerated, gelled drilling fluids were used in the production intervals. Further study of this and other areas will contribute to developing a systematic understanding of geologic-context-informed lost-circulation mitigation strategies.

Leveraging Resilience Metrics to Support Security System Analysis

2021 IEEE International Symposium on Technologies for Homeland Security, HST 2021 (Virtual)

Caskey, Susan A.; Gunda, Thushara G.; Wingo, Jamie; Williams, Adam D.

Resilience has been defined as a priority for US critical infrastructure. This paper presents a process for incorporating resiliency-derived metrics into security system evaluations. To support this analysis, we used a multi-layer network model (MLN) reflecting the defined security system of a hypothetical nuclear power plant to determine what metrics would be useful in understanding a system's ability to absorb perturbation (i.e., system resilience). We defined measures focusing on the system's criticality, rapidity, diversity, and confidence at each network layer, along simulated adversary paths, and for the system as a whole as a basis for understanding the system's resilience. For this hypothetical system, our metrics indicated the importance of physical infrastructure to overall system criticality, the relative confidence of physical sensors, and the lack of diversity in assessment activities (i.e., dependence on human evaluations). Refined model design and data outputs will enable more nuanced evaluations of temporal, geospatial, and human behavior considerations. Future studies can also extend these methodologies to capture the respond and recover aspects of resilience, further supporting the protection of critical infrastructure.

A Peek into the DNS Cookie Jar: An Analysis of DNS Cookie Use

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Davis, Jacob D.; Deccio, Casey

Parameterized Pseudo-Differential Operators for Graph Convolutional Neural Networks

Proceedings of the IEEE International Conference on Computer Vision

Potter, Kevin M.; Smith, Matthew C.; Perera, Shehan R.; Sleder, Steven R.; Tencer, John T.

We present a novel graph convolutional layer that is conceptually simple, fast, and provides high accuracy with reduced overfitting. Based on pseudo-differential operators, our layer operates on graphs with relative position information available for each pair of connected nodes. Our layer represents a generalization of parameterized differential operators (previously shown effective for shape correspondence, image segmentation, and dimensionality reduction tasks) to a larger class of graphs. We evaluate our method on a variety of supervised learning tasks, including 2D graph classification using the MNIST and CIFAR-100 datasets and 3D node correspondence using the FAUST dataset. We also introduce a superpixel graph version of the lesion classification task using the ISIC 2016 challenge dataset and evaluate our layer versus other state-of-the-art graph convolutional network architectures. The new layer outperforms multiple recent architectures on graph classification tasks using the MNIST and CIFAR-100 superpixel datasets. For the ISIC dataset, we outperform all other graph neural networks examined as well as all of the submissions to the original ISIC challenge despite the best of those models having more than 200 times as many parameters as our model.

CSPlib - A Software Toolkit for the Analysis of Dynamical Systems and Chemical Kinetic Models

Diaz-Ibarra, Oscar H.; Kim, Kyungjoo K.; Safta, Cosmin S.; Najm, H.N.

CSPlib is an open-source software library for analyzing general ordinary differential equation (ODE) systems and detailed chemical kinetic ODE systems. It relies on the computational singular perturbation (CSP) method for the analysis of these systems. The software provides support for: general ODE models (gODE model class) for computing source terms and Jacobians for a generic ODE system; a TChem model (ChemElemODETChem model class) for computing the source term, Jacobian, other necessary chemical reaction data, and the rates of progress for a homogeneous batch reactor using an elementary-step detailed chemical kinetic reaction mechanism, relying on the TChem [2] library; a set of functions to compute essential elements of CSP analysis (Kernel class), including the eigensolution of the Jacobian matrix, CSP basis vectors and co-vectors, time scales (reciprocals of the magnitudes of the Jacobian eigenvalues), mode amplitudes, CSP pointers, and the number of exhausted modes, relying on the Tines library; a set of functions to compute the eigensolution of the Jacobian matrix using the Tines library GPU eigensolver; and a set of functions to compute CSP indices (Index class), including participation indices and both slow and fast importance indices.
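
The Kernel-class quantities listed above can be illustrated generically with NumPy (this mirrors the math described in the abstract, not CSPlib's actual interface):

```python
# Generic CSP-style kernel quantities for an ODE system dy/dt = g(y):
# eigendecomposition of the Jacobian, time scales, and mode amplitudes.
import numpy as np

def csp_kernel(jacobian, source):
    evals, a = np.linalg.eig(jacobian)   # right eigenvectors are columns of a
    b = np.linalg.inv(a)                 # rows of b are the CSP co-vectors
    order = np.argsort(-np.abs(evals))   # fastest modes first
    evals, a, b = evals[order], a[:, order], b[order, :]
    tau = 1.0 / np.abs(evals)            # time scales (reciprocal eigenvalue magnitudes)
    amplitudes = b @ source              # mode amplitudes f^i = b^i . g
    return tau, amplitudes, a, b
```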

Rechargeable alkaline zinc–manganese oxide batteries for grid storage: Mechanisms, challenges and developments

Materials Science and Engineering R: Reports

Lim, Matthew B.; Lambert, Timothy N.; Chalamala, Babu C.

Rechargeable alkaline Zn–MnO2 (RAM) batteries are a promising candidate for grid-scale energy storage owing to their high theoretical energy density rivaling lithium-ion systems (∼400 Wh/L), relatively safe aqueous electrolyte, established supply chain, and projected costs below $100/kWh at scale. In practice, however, many fundamental chemical and physical processes at both electrodes make it difficult to achieve commercially competitive energy density and cycle life. This review presents a detailed and timely analysis of the constituent materials, current commercial status, electrode processes, and performance-limiting factors of RAM batteries. We also examine recently reported strategies in RAM and related systems to address these issues through additives and modifications to the electrode materials and electrolyte, special ion-selective separators and/or coatings, and unconventional cycling protocols. We conclude with a critical summary of these developments and discussion of how future studies should be focused toward the goal of energy-dense, scalable, and cost-effective RAM systems.

Dakota and Pyomo for Closed and Open Box Controller Gain Tuning

Proceedings of the IEEE Conference on Decision and Control

Williams, Kyle R.; Wilbanks, James J.; Schlossman, Rachel S.; Kozlowski, David M.; Parish, Julie M.

Pyomo and Dakota are openly available software packages developed by Sandia National Labs. In this tutorial, methods for automating the optimization of controller parameters for a nonlinear cart-pole system are presented. Two approaches are described and demonstrated on the cart-pole example problem for tuning a linear quadratic regulator and a partial feedback linearization controller. First, the problem is formulated as a pseudospectral optimization problem under an open-box methodology utilizing Pyomo, where the plant model is fully known to the optimizer. In the second approach, a closed-box (black-box) methodology utilizing Dakota in concert with a MATLAB or Simulink plant model is discussed, where the plant model is unknown to the optimizer. A comparison of the two approaches gives the end user the advantages and shortcomings of each method so they can pick the right tool for their problem. We find that complex system models and objectives are easily incorporated in the Dakota-based approach with minimal setup time, while the Pyomo-based approach provides rapid solutions once the system model has been developed.
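
As a concrete instance of the first tuning target, the LQR gain for a cart-pole linearized about its upright equilibrium can be computed in a few lines (a textbook linearization with arbitrary parameter values, not code from the tutorial):

```python
# State: [cart position, cart velocity, pole angle, pole angular rate].
# M = cart mass, m = pole mass, l = pole length, g = gravity (SI units).
import numpy as np
from scipy.linalg import solve_continuous_are

M, m, l, g = 1.0, 0.1, 0.5, 9.81
A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0], [1 / M], [0], [-1 / (M * l)]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])   # state weights: the parameters one would tune
R = np.array([[1.0]])                  # control-effort weight

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # feedback law u = -K x
print(K)
```

A closed-box tool like Dakota would treat the map from (Q, R) to a simulated closed-loop cost as a black box, while an open-box Pyomo formulation exposes the dynamics to the optimizer directly.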

USING DEEP NEURAL NETWORKS TO PREDICT MATERIAL TYPES IN CONDITIONAL POINT SAMPLING APPLIED TO MARKOVIAN MIXTURE MODELS

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Davis, Warren L.; Olson, Aaron J.; Popoola, Gabriel A.; Bolintineanu, Dan S.; Rodgers, Theron R.; Vu, Emily

Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1-D and 3-D calculations for binary mixtures with Markovian mixing statistics. In theory, CoPS has the capacity to be accurate for material structures beyond just those with Markovian statistics. However, realizing this capability will require development of conditional probability functions (CPFs) that are based, not on explicit Markovian properties, but rather on latent properties extracted from material structures. Here, we describe a first step towards extracting these properties by developing CPFs using deep neural networks (DNNs). Our new approach lays the groundwork for enabling accurate transport on many classes of stochastic media. We train DNNs on ternary stochastic media with Markovian mixing statistics and compare their CPF predictions to those made by existing CoPS CPFs, which are derived based on Markovian mixing properties. We find that the DNN CPF predictions usually outperform the existing approximate CPF predictions, but with wider variance. In addition, even when trained on only one material volume realization, the DNN CPFs are shown to make accurate predictions on other realizations that have the same internal mixing behavior. We show that it is possible to form a useful CoPS CPF by using a DNN to extract correlation properties from realizations of stochastically mixed media, thus establishing a foundation for creating CPFs for mixtures other than those with Markovian mixing, where it may not be possible to derive an accurate analytical CPF.
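
A minimal sketch of what such a learned CPF might look like (the feature encoding, network sizes, and training loop are illustrative assumptions, not the authors' architecture):

```python
# An MLP mapping local context features (materials and distances of nearby,
# previously sampled points) to a categorical distribution over material
# types, standing in for a learned conditional probability function (CPF).
import torch
import torch.nn as nn

N_MATERIALS = 3        # ternary mixture, as in the paper's experiments
N_NEIGHBORS = 4        # assumed number of conditioning points
IN_DIM = N_NEIGHBORS * (N_MATERIALS + 1)   # one-hot material + distance per point

model = nn.Sequential(
    nn.Linear(IN_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_MATERIALS),            # logits; softmax is inside the loss
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, labels):
    """One optimizer step on a batch of (context features, observed material)."""
    opt.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    opt.step()
    return loss.item()

features = torch.randn(32, IN_DIM)               # synthetic batch for illustration
labels = torch.randint(0, N_MATERIALS, (32,))
print(train_step(features, labels))
```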

Adaptive, Cyber-Physical Special Protection Schemes to Defend the Electric Grid Against Predictable and Unpredictable Disturbances

2021 Resilience Week, RWS 2021 - Proceedings

Hossain-McKenzie, Shamina S.; Calzada, Daniel A.; Goes, Christopher E.; Jacobs, Nicholas J.; Summers, Adam; Davis, Katherine; Li, Hanyue; Mao, Zeyu; Overbye, Thomas; Shetye, Komal

Special protection schemes (SPSs) safeguard the grid by detecting predefined abnormal conditions and deploying predefined corrective actions. Utilities leverage SPSs to maintain stability, acceptable voltages, and loading limits during disturbances. However, traditional SPSs cannot defend against unpredictable disturbances. Events such as cyber attacks, extreme weather, and electromagnetic pulses have unpredictable trajectories and require adaptive response. Therefore, we propose a harmonized automatic relay mitigation of nefarious intentional events (HARMONIE)-SPS that learns system conditions, mitigates cyber-physical consequences, and preserves grid operation during both predictable and unpredictable disturbances. In this paper, we define the HARMONIE-SPS approach, detail progress on its development, and provide initial results using a WSCC 9-bus system.

User-Centric System Fault Identification Using IO500 Benchmark

Proceedings of PDSW 2021: IEEE/ACM 6th International Parallel Data Systems Workshop, Held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis

Liem, Radita; Povaliaiev, Dmytro; Lofstead, Gerald F.; Kunkel, Julian; Terboven, Christian

I/O performance in a multi-user environment is difficult to predict. Users do not know what I/O performance to expect when running and tuning applications. We propose to use the IO500 benchmark as a way to guide user expectations of their application's performance and to aid in identifying root causes of I/O problems that might come from the system. Our experiments describe how we manage user expectations with IO500 and provide a mechanism for system fault identification. This work also provides information about the tail-latency problem that needs to be addressed and granular information about the impact of I/O technique choices (POSIX and MPI-IO).

SCTuner: An Autotuner Addressing Dynamic I/O Needs on Supercomputer I/O Subsystems

Proceedings of PDSW 2021: IEEE/ACM 6th International Parallel Data Systems Workshop, Held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis

Tang, Houjun; Xie, Bing; Byna, Suren; Carns, Philip; Koziol, Quincey; Kannan, Sudarsun; Lofstead, Gerald F.; Oral, Sarp

In high-performance computing (HPC), scientific applications often manage a massive amount of data using I/O libraries. These libraries provide convenient data model abstractions, help ensure data portability, and, most important, empower end users to improve I/O performance by tuning configurations across multiple layers of the HPC I/O stack. We propose SCTuner, an autotuner integrated within the I/O library itself to dynamically tune both the I/O library and the underlying I/O stack at application runtime. To this end, we introduce a statistical benchmarking method to profile the behaviors of individual supercomputer I/O subsystems with varied configurations across I/O layers. We use the benchmarking results as the built-in knowledge in SCTuner, implement an I/O pattern extractor, and plan to implement an online performance tuner as the SCTuner runtime. We conducted a benchmarking analysis on the Summit supercomputer and its GPFS file system Alpine. The preliminary results show that our method can effectively extract the consistent I/O behaviors of the target system under production load, building the base for I/O autotuning at application runtime.

CONDITIONAL POINT SAMPLING IMPLEMENTATION FOR THE GPU

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Kersting, Luke J.; Olson, Aaron J.; Bossler, Kerry B.

Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1D and 3D simulations implemented for the CPU in Python. However, it is increasingly important that modern, production-level transport codes like CoPS be adapted for use on next-generation computing architectures. In this project, we describe the creation of a fast and accurate variant of CoPS implemented for the GPU in C++. As an initial test, we performed a code-to-code verification using single-history cohorts, which showed that the GPU implementation matched the original CPU implementation to within statistical uncertainty, while improving the speed by over a factor of 4000. We then tested the GPU implementation for cohorts up to size 64 and compared three variants of CoPS based on how the particle histories are grouped into cohorts: successive, simultaneous, and a successive-simultaneous hybrid. We examined the accuracy-efficiency tradeoff of each variant for 9 different benchmarks, measuring the reflectance and transmittance in a cubic geometry with reflecting boundary conditions on the four non-transmissive or reflective faces. Successive cohorts were found to be far more accurate than simultaneous cohorts for both reflectance (4.3 times) and transmittance (5.9 times), although simultaneous cohorts run more than twice as fast as successive cohorts, especially for larger cohorts. The hybrid cohorts demonstrated speed and accuracy behavior most similar to that of simultaneous cohorts. Overall, successive cohorts were found to be more suitable for the GPU due to their greater accuracy and reproducibility, although simultaneous and hybrid cohorts present an enticing prospect for future research.

Resilience-based performance measures for next-generation systems security engineering

Proceedings - International Carnahan Conference on Security Technology

Williams, Adam D.; Adams, Thomas A.; Wingo, Jamie; Birch, Gabriel C.; Caskey, Susan A.; Fleming Lindsley, Elizabeth S.; Gunda, Thushara G.

Performance measures commonly used in systems security engineering tend to be static, linear, and have limited utility in addressing challenges to security performance from increasingly complex risk environments, adversary innovation, and disruptive technologies. Leveraging key concepts from resilience science offers an opportunity to advance next-generation systems security engineering to better describe the complexities, dynamism, and non-linearity observed in security performance—particularly in response to these challenges. This article introduces a multilayer network model and modified Continuous Time Markov Chain model that explicitly captures interdependencies in systems security engineering. The results and insights from a multilayer network model of security for a hypothetical nuclear power plant introduce how network-based metrics can incorporate resilience concepts into performance metrics for next generation systems security engineering.

Exploration of multifidelity UQ sampling strategies for computer network applications

International Journal for Uncertainty Quantification

Geraci, Gianluca G.; Crussell, Jonathan C.; Swiler, Laura P.; Debusschere, Bert D.

Network modeling is a powerful tool to enable rapid analysis of complex systems that can be challenging to study directly using physical testing. Two approaches are considered: emulation and simulation. The former runs real software on virtualized hardware, while the latter mimics the behavior of network components and their interactions in software. Although emulation provides an accurate representation of physical networks, this approach alone cannot guarantee the characterization of the system under realistic operative conditions. Operative conditions for physical networks are often characterized by intrinsic variability (payload size, packet latency, etc.) or a lack of precise knowledge regarding the network configuration (bandwidth, delays, etc.); therefore uncertainty quantification (UQ) strategies should be also employed. UQ strategies require multiple evaluations of the system with a number of evaluation instances that roughly increases with the problem dimensionality, i.e., the number of uncertain parameters. It follows that a typical UQ workflow for network modeling based on emulation can easily become unattainable due to its prohibitive computational cost. In this paper, a multifidelity sampling approach is discussed and applied to network modeling problems. The main idea is to optimally fuse information coming from simulations, which are a low-fidelity version of the emulation problem of interest, in order to decrease the estimator variance. By reducing the estimator variance in a sampling approach it is usually possible to obtain more reliable statistics and therefore a more reliable system characterization. Several network problems of increasing difficulty are presented. For each of them, the performance of the multifidelity estimator is compared with respect to the single fidelity counterpart, namely, Monte Carlo sampling. For all the test problems studied in this work, the multifidelity estimator demonstrated an increased efficiency with respect to MC.
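
A representative two-fidelity control-variate estimator of the kind discussed (standard in the multifidelity literature; the notation is generic rather than taken from the paper):

```latex
\hat{Q}^{\mathrm{MF}}
  = \frac{1}{N} \sum_{i=1}^{N} Q_{\mathrm{HF}}(\xi_i)
  + \alpha \left( \frac{1}{M} \sum_{j=1}^{M} Q_{\mathrm{LF}}(\xi_j)
                - \frac{1}{N} \sum_{i=1}^{N} Q_{\mathrm{LF}}(\xi_i) \right),
  \qquad M \gg N
```

With the variance-optimal weight \alpha = \rho \sigma_{\mathrm{HF}} / \sigma_{\mathrm{LF}}, the gain over plain Monte Carlo grows with the correlation \rho between high-fidelity (emulation) and low-fidelity (simulation) outputs, which is why fusing cheap simulations reduces the estimator variance at fixed cost.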

Applying Utility's Advanced Grid Technologies to Improve Resiliency of a Critical Load

2021 Resilience Week, RWS 2021 - Proceedings

Vartanian, Charlie; Koplin, Clay; Kudrna, Trever; Clark, Waylon T.; Borneo, Daniel R.; Kolln, Jaime; Huang, Daisy; Tuffner, Frank; Panwar, Mayank; Stewart, Emma; Khair, Lauren

The US DOE Office of Electricity Energy Storage Program's joint R&D work with the Cordova Electric Cooperative (CEC) has deployed several advanced grid technologies that are providing benefits today to Cordova, Alaska's electricity users. Advanced grid technologies deployed through DOE co-funded R&D include a 1 MW Battery Energy Storage System (BESS) and enhanced monitoring, including Phasor Measurement Units (PMUs), to help better understand the operational impacts of the added BESS. This paper highlights key accomplishments to date in deploying and using advanced grid technologies, and then outlines the next phase of work that will use these technologies to implement an operating scheme to reconfigure the utility's distribution system and utility resources, including the BESS, to provide emergency back-up power to a critical load: the Cordova Community Medical Center (CCMC). The paper also includes additional insights on the use of utility resources to support critical loads via case study examples from the National Rural Electric Cooperative Association (NRECA).

THEORY AND GENERATION METHODS FOR N-ARY STOCHASTIC MIXTURES WITH MARKOVIAN MIXING STATISTICS

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Olson, Aaron J.; Pautz, Shawn D.; Bolintineanu, Dan S.; Vu, Emily

Work on radiation transport in stochastic media has tended to focus on binary mixing with Markovian mixing statistics. However, although some real-world applications involve only two materials, others involve three or more. Therefore, we seek to provide a foundation for ongoing theoretical and numerical work with “N-ary” stochastic media composed of discrete material phases with spatially homogeneous Markovian mixing statistics. To accomplish this goal, we first describe a set of parameters and relationships that are useful for characterizing such media. In doing so, we make a noteworthy observation: media that are frequently called Poisson media comprise only a subset of those that have Markovian mixing statistics. Since the concept of correlation length (as it has been used in the stochastic media transport literature) and the hyperplane realization generation method are both tied to the Poisson property of the media, we argue that not all media with Markovian mixing statistics have a correlation length in this sense or are realizable with the traditional hyperplane generation method. Second, we describe methods for generating realizations of N-ary media with Markovian mixing. We generalize the chord- and hyperplane-based sampling methods from binary to N-ary mixing and propose a novel recursive hyperplane method that can generate a broader class of material structures than the traditional, non-recursive hyperplane method. Finally, we perform numerical studies that validate the proposed N-ary relationships and generation methods, in which statistical quantities observed from realizations of ternary and quaternary media are shown to agree with predicted values.
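
The chord-based generation method generalized here can be sketched in one dimension (an illustrative implementation; the mean chord lengths and material transition probabilities are assumed inputs, and the paper's exact parameterization may differ):

```python
# 1-D chord-based sampling of an N-ary stochastic medium realization:
# alternate between drawing an exponentially distributed chord length for
# the current material and drawing the next material from a Markov chain.
import numpy as np

rng = np.random.default_rng()

def sample_realization(length, mean_chords, transition, start_probs):
    """Return [(material, x_start, x_end), ...] covering [0, length)."""
    x = 0.0
    mat = rng.choice(len(mean_chords), p=start_probs)
    segments = []
    while x < length:
        chord = rng.exponential(mean_chords[mat])
        segments.append((mat, x, min(x + chord, length)))
        x += chord
        mat = rng.choice(len(mean_chords), p=transition[mat])  # next material
    return segments

# Ternary example; zero diagonal means a chord never continues into itself.
mean_chords = [0.3, 0.5, 1.0]
transition = np.array([[0.0, 0.5, 0.5],
                       [0.5, 0.0, 0.5],
                       [0.5, 0.5, 0.0]])
print(sample_realization(5.0, mean_chords, transition, [1 / 3, 1 / 3, 1 / 3]))
```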

More Details

BENCHMARK COMPARISONS OF MONTE CARLO ALGORITHMS FOR ONE-DIMENSIONAL N-ARY STOCHASTIC MEDIA

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M&C 2021

Vu, Emily H.; Brantley, Patrick S.; Olson, Aaron J.; Kiedrowski, Brian C.

We extend the Monte Carlo Chord Length Sampling (CLS) and Local Realization Preserving (LRP) algorithms to the N-ary stochastic medium case using two recently developed uniform and volume fraction models that follow a Markov-chain process for N-ary problems in one-dimensional, Markovian-mixed media. We use the Lawrence Livermore National Laboratory Mercury Monte Carlo particle transport code to compute CLS and LRP reflection and transmission leakage values and material scalar flux distributions for one-dimensional, Markovian-mixed quaternary stochastic media based on the two N-ary stochastic medium models. We conduct accuracy comparisons against benchmark results produced with the Sandia National Laboratories PlaybookMC stochastic media transport research code. We show that CLS and LRP produce exact results for purely absorbing N-ary stochastic medium problems and find that LRP is generally more accurate than CLS for problems with scattering.

More Details

COMPUTATION OF SOBOL' INDICES USING EMBEDDED VARIANCE DECONVOLUTION

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M&C 2021

Petticrew, James M.; Olson, Aaron J.

Sobol' sensitivity indices (SI) provide robust and accurate measures of how much uncertainty in output quantities is caused by different uncertain input parameters. These indices allow analysts to prioritize future work to either reduce or better quantify the effects of the most important uncertain parameters. One of the most common approaches to computing SI requires Monte Carlo (MC) sampling of uncertain parameters and full physics code runs to compute the response for each of these samples. In the case that the physics code is an MC radiation transport code, this traditional approach to computing SI presents a workflow in which the MC transport calculation must be sufficiently resolved for each MC uncertain parameter sample. This process can be prohibitively expensive, especially since thousands or more particle histories are often required for each of thousands of uncertain parameter samples. We propose a process for computing SI in which only a few MC radiation transport histories are simulated before sampling new uncertain parameter values. We use Embedded Variance Deconvolution (EVADE) to parse the desired parametric variance from the MC transport variance on each uncertain parameter sample. To provide a relevant benchmark, we propose a new radiation transport benchmark problem and derive analytic solutions for its outputs, including SI. The new EVADE-based approach is found to exhibit MC convergence behavior and to be at least an order of magnitude more precise than the traditional approach, at the same computational cost, for several SI on our test problem.
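
For context, the traditional sampling-based computation the paper improves upon can be sketched with a standard pick-freeze estimator of a first-order Sobol' index (a generic textbook form, not the EVADE method itself; names are illustrative):

```python
import numpy as np

def first_order_sobol(f, n, dim, i, rng=np.random.default_rng(0)):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index S_i.

    f: model mapping an (n, dim) array of uniform [0, 1) inputs to n outputs
    """
    a = rng.random((n, dim))
    b = rng.random((n, dim))
    c = b.copy()
    c[:, i] = a[:, i]  # the two designs share ONLY coordinate i
    ya, yc = f(a), f(c)
    # Cov(Y, Y^i) estimates the variance explained by input i alone
    return np.cov(ya, yc)[0, 1] / np.var(ya, ddof=1)

# Toy check: for f = 2*x0 + x1 with uniform inputs, S_0 should be near 0.8
print(first_order_sobol(lambda x: 2 * x[:, 0] + x[:, 1], 100_000, 2, 0))
```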

More Details

Risk-averse control of fractional diffusion with uncertain exponent

SIAM Journal on Control and Optimization

Kouri, Drew P.; Antil, Harbir; Pfefferer, Johannes

In this paper, we introduce and analyze a new class of optimal control problems constrained by elliptic equations with uncertain fractional exponents. We utilize risk measures to formulate the resulting optimization problem. We develop a functional analytic framework, study the existence of solutions, and rigorously derive the first-order optimality conditions. Additionally, we employ a sample-based approximation for the uncertain exponent and the finite element method to discretize in space. We prove the rate of convergence for the optimal risk-neutral controls when using quadrature approximation for the uncertain exponent and conclude with illustrative examples.
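
Schematically, the problem class has the following form (notation illustrative; the paper's precise function spaces and constraints are more detailed):

```latex
% Schematic statement (illustrative notation, not the paper's exact one):
\min_{z \in Z_{\mathrm{ad}}} \ \mathcal{R}\big[ J\big( u(\xi) \big) \big]
  + \frac{\alpha}{2} \, \| z \|^{2}
\quad \text{subject to} \quad
(-\Delta)^{s(\xi)} \, u(\xi) = f + z \ \text{ in } \Omega ,
```

where $\mathcal{R}$ is a risk measure over the random fractional exponent $s(\xi)$; the risk-neutral case corresponds to $\mathcal{R} = \mathbb{E}$.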

More Details

Towards Improving Container Security by Preventing Runtime Escapes

Proceedings - 2021 IEEE Secure Development Conference, SecDev 2021

Reeves, Michael J.; Tian, Dave J.; Bianchi, Antonio; Celik, Z.B.

Container escapes enable the adversary to execute code on the host from inside an isolated container. These high severity escape vulnerabilities originate from three sources: (1) container profile misconfigurations, (2) Linux kernel bugs, and (3) container runtime vulnerabilities. While the first two cases have been studied in the literature, no works have investigated the impact of container runtime vulnerabilities. In this paper, to fill this gap, we study 59 CVEs for 11 different container runtimes. As a result of our study, we found that five of the 11 runtimes had nine publicly available PoC container escape exploits covering 13 CVEs. Our further analysis revealed all nine exploits are the result of a host component leaked into the container. We apply a user namespace container defense to prevent the adversary from leveraging leaked host components and demonstrate that the defense stops seven of the nine container escape exploits.

More Details

Detecting False Data Injection Attacks to Battery State Estimation Using Cumulative Sum Algorithm

2021 North American Power Symposium, NAPS 2021

Obrien, Victoria; Trevizan, Rodrigo D.; Rao, Vittal S.

Estimated parameters in Battery Energy Storage Systems (BESSs) may be vulnerable to cyber-attacks such as False Data Injection Attacks (FDIAs). FDIAs, which typically evade bad data detectors, could damage or degrade BESSs. This paper investigates methods to detect small-magnitude FDIAs using battery equivalent circuit models, an Extended Kalman Filter (EKF), and a Cumulative Sum (CUSUM) algorithm. A priori error residual data estimated by the EKF were used in the CUSUM algorithm to find the lowest detectable FDIA for this battery equivalent model. The algorithm described in this paper was able to detect attacks as low as 1 mV with no false positives. The CUSUM algorithm was compared to a chi-squared-based FDIA detector and, in this study, was found to detect attacks of smaller magnitudes than the conventional chi-squared detector.
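
A minimal two-sided CUSUM over EKF innovation residuals, of the kind the abstract describes, might look as follows (thresholds, drift, and variable names are illustrative, not the paper's tuned values):

```python
def cusum_detect(residuals, drift, threshold):
    """Two-sided CUSUM change detector over a stream of EKF residuals.

    drift:     slack subtracted each step (tolerated residual level)
    threshold: alarm level for the accumulated statistic
    Returns the index of the first alarm, or None if no alarm fires.
    """
    g_pos = g_neg = 0.0
    for k, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - drift)   # accumulate positive bias
        g_neg = max(0.0, g_neg - r - drift)   # accumulate negative bias
        if g_pos > threshold or g_neg > threshold:
            return k
    return None
```

Because the statistic accumulates small persistent biases over time, it can flag low-magnitude injections (here, on the order of 1 mV) that a per-sample chi-squared test would miss.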

More Details

A Co-Simulation Approach to Modeling Electric Vehicle Impacts on Distribution Feeders during Resilience Events

2021 Resilience Week, RWS 2021 - Proceedings

Haines, Thad; Garcia, Brooke M.; Vining, William F.; Lave, Matthew S.

This paper describes a co-simulation environment used to investigate how high penetrations of electric vehicles (EVs) impact a distribution feeder during a resilience event. As EV adoption and EV supply equipment (EVSE) technology advance, possible impacts to the electric grid increase. Additionally, as weather-related resilience events become more common, the need to understand possible challenges associated with EV charging during such events becomes more important. Software designed to simulate vehicle travel patterns, EV charging characteristics, and the associated electric demand can be integrated with power system software using co-simulation to provide more realistic results. The work in progress described here will simulate varying EV loading and location over time to provide insights about EVSE characteristics for maximum benefit and allow for general sizing of possible microgrids to supply EVs and critical loads.

More Details

Recovering Power Factor Control Settings of Solar PV Inverters from Net Load Data

2021 North American Power Symposium, NAPS 2021

Talkington, Samuel; Grijalva, Santiago; Reno, Matthew J.; Azzolini, Joseph A.

Advanced solar PV inverter control settings may not be reported to utilities or may be changed without notice. This paper develops an estimation method for determining a fixed power factor control setting of a behind-the-meter (BTM) solar PV smart inverter. The estimation is achieved using linear regression methods with historical net load advanced metering infrastructure (AMI) data. Notably, the BTM PV power factor setting may be unknown or uncertain to a distribution engineer, and cannot be trivially estimated from the historical AMI data due to the influence of the native load on the measurements. To solve this, we use a simple percentile-based approach for filtering the measurements. A physics-based linear sensitivity model is then used to determine the fixed power factor control setting from the sensitivity in the complex power plane. This sensitivity parameter characterizes the control setting hidden in the aggregate data. We compare several loss functions, and verify the models developed by conducting experiments on 250 datasets based on real smart meter data. The data are augmented with synthetic quasi-static time-series (QSTS) simulations of BTM PV that simulate utility-observed aggregate measurements at the load. The simulations demonstrate that the reactive power sensitivity of a BTM PV smart inverter can be recovered efficiently from the net load data after applying the filtering approach.
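
A simplified sketch of this pipeline, assuming hypothetical array names and reducing the paper's sensitivity model and loss-function comparison to a plain least-squares fit:

```python
import numpy as np

def estimate_power_factor(p_net, q_net, pct=10):
    """Estimate a fixed PV power factor from net-load AMI data (sketch).

    Keeps the lowest-percentile net active power samples, where PV output
    dominates the native load, then fits the sensitivity dQ/dP by least squares.
    """
    mask = p_net <= np.percentile(p_net, pct)          # high-PV-output intervals
    slope = np.polyfit(p_net[mask], q_net[mask], 1)[0]  # sensitivity dQ/dP
    # For a fixed-PF inverter, Q = P * tan(theta), so the slope ~ tan(theta)
    return np.cos(np.arctan(slope))
```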

More Details

A Numerical Method for Fault Location in DC Systems Using Traveling Waves

2021 North American Power Symposium, NAPS 2021

Paruthiyil, Sajay K.; Montoya, Rudy; Bidram, Ali; Reno, Matthew J.

Due to the existence of DC-DC converters, fast-tripping fault location in DC power systems is of particular importance to ensure the reliable operation of DC systems. Traveling wave (TW) protection is one of the promising approaches to accommodate fast detection and location of faults in DC systems. This paper proposes a numerical approach for a DC system fault location using the concept of TWs. The proposed approach is based on multiresolution analysis to calculate the TW signal's wavelet coefficients for different frequency ranges, and then, the Parseval theorem is used to calculate the energy of wavelet coefficients. A curve-fitting approach is used to find the best curve that fits the Parseval energy as a function of fault location for a set of curve-fitting datapoints. The identified Parseval energy curves are then utilized to estimate the fault location when a new fault is applied on a DC cable. A DC test system simulated in PSCAD/EMTDC is used to verify the performance of the proposed fault location algorithm.
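
The per-band Parseval energy computation can be sketched with a generic wavelet package (PyWavelets here; the wavelet family, decomposition level, and the polynomial fit are assumptions, not the paper's settings):

```python
import numpy as np
import pywt

def parseval_energies(signal, wavelet="db4", levels=5):
    """Per-band energy of a traveling-wave record via wavelet decomposition.

    Parseval's theorem relates signal energy to the summed squares of the
    wavelet coefficients within each frequency band.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    return [float(np.sum(c ** 2)) for c in coeffs]

# Curve-fitting step (sketch): fit band energy vs. known fault distance on
# training records, then invert the fitted curve to locate a new fault, e.g.:
# poly = np.polyfit(train_distances,
#                   [parseval_energies(s)[1] for s in train_waves], 3)
```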

More Details

Maximum Power Point Tracking and Voltage Control in a Solar-PV based DC Microgrid Using Simulink

2021 North American Power Symposium, NAPS 2021

Miyagishima, Frank; Augustine, Sijo; Lavrova, Olga; Nademi, Hamed; Ranade, Satish; Reno, Matthew J.

This paper discusses a solar photovoltaic (PV) DC microgrid system consisting of a PV array, a battery, DC-DC converters, and a load, where all these elements are simulated in the MATLAB/Simulink environment. The design and testing cover the functions of a boost converter and a bidirectional converter, and how they work together to maintain stable control of the DC bus voltage and the system's energy management. Furthermore, the boost converter operates under Maximum Power Point Tracking (MPPT) settings to maximize the power that the PV array can output. The control algorithm can successfully maintain the output power of the PV array at its maximum point and can respond well to changes in input irradiance. This is shown in detail in the results section.
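
For illustration, one common MPPT strategy that such a boost-converter controller can implement is perturb-and-observe; the abstract does not state which algorithm is used, so the sketch below is generic, with hypothetical names:

```python
def perturb_and_observe(v_meas, p_meas, state, step=0.5):
    """One iteration of perturb-and-observe MPPT (illustrative sketch).

    state holds the previous voltage/power samples and the voltage reference
    sent to the boost converter. Moves the operating point in the direction
    that increased the measured PV power.
    """
    dv, dp = v_meas - state["v_prev"], p_meas - state["p_prev"]
    if dp != 0:
        # If power rose, keep perturbing the same way; otherwise reverse
        direction = 1 if (dp > 0) == (dv > 0) else -1
        state["v_ref"] += direction * step
    state["v_prev"], state["p_prev"] = v_meas, p_meas
    return state["v_ref"]

# Usage: state = {"v_prev": 0.0, "p_prev": 0.0, "v_ref": 30.0}, then call
# perturb_and_observe(...) once per control period with fresh measurements.
```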

More Details

Comments on rendering synthetic aperture radar (SAR) images

Proceedings of SPIE - The International Society for Optical Engineering

Doerry, Armin

Once Synthetic Aperture Radar (SAR) images are formed, they typically need to be stored in some file format which might restrict the dynamic range of what can be represented. Thereafter, for exploitation by human observers, the images might need to be displayed in a manner to reveal the subtle scene reflectivity characteristics the observer seeks, which generally requires further manipulation of dynamic range. Proper image scaling, for both storage and for display, to maximize the perceived dynamic range of interest to an observer depends on many factors, and an understanding of underlying data characteristics. While SAR images are typically rendered with grayscale, or at least monochromatic intensity variations, color might also be usefully employed in some cases. We analyze these and other issues pertaining to SAR image scaling, dynamic range, radiometric calibration, and display.

More Details

Topology Identification with Smart Meter Data Using Security Aware Machine Learning

2021 North American Power Symposium, NAPS 2021

Francis, Cody; Rao, Vittal S.; Trevizan, Rodrigo D.

Distribution system topology identification has historically been accomplished by decrypting the information that is received from the smart meters and then running a topology identification algorithm. Unencrypted smart meter data introduces privacy and security issues for utility companies and their customers. This paper introduces security-aware machine learning algorithms to alleviate the privacy and security issues raised by unencrypted smart meter data. The security-aware machine learning algorithms use the information received from the Advanced Metering Infrastructure (AMI) and identify the distribution system's topology without decrypting the AMI data, by using fully homomorphic NTRU and CKKS encryption. The encrypted smart meter data is then used by Linear Discriminant Analysis, Convolutional Neural Network, and Support Vector Machine algorithms to predict the distribution system's real-time topology. This method can leverage noisy voltage magnitude readings from smart meters to accurately identify distribution system reconfiguration between radial topologies during operation under changing loads.

More Details

Velocity-space hybridization of direct simulation monte carlo and a quasi-particle boltzmann solver

Journal of Thermophysics and Heat Transfer

Oblapenko, Georgii; Goldstein, David; Varghese, Philip; Moore, Christopher H.

This paper presents a new method for modeling rarefied gas flows based on hybridization of direct simulation Monte Carlo (DSMC) and discrete velocity method (DVM)-based quasi-particle representations of the velocity distribution function. It is aimed at improving the resolution of the tails of the distribution function (compared with DSMC) and computational efficiency (compared with DVM). Details of the method, such as the collision algorithm and the particle merging scheme, are discussed. The hybrid approach is applied to the study of noise in a Maxwellian distribution, computation of the electron-impact ionization rate coefficient, and numerical simulation of a supersonic Couette flow. The hybrid-based solver is compared with pure DSMC and DVM approaches in terms of accuracy, computational speed, and memory use. It is shown that such a hybrid approach can provide a lower computational cost than a pure DVM approach, while being able to retain accuracy in modeling high-velocity tails of the distribution function. For problems where trace species have a significant impact on the flow physics, the proposed method is shown to be capable of providing better computational efficiency and accuracy compared with standard fixed-weight DSMC.

More Details

Evaluation of Interoperable Distributed Energy Resources to IEEE 1547.1 Using SunSpec Modbus, IEEE 1815, and IEEE 2030.5

IEEE Access

Johnson, Jay

The American distributed energy resource (DER) interconnection standard, IEEE Std. 1547, was updated in 2018 to include standardized interoperability functionality. As state regulators begin ratifying these requirements, all DER - such as photovoltaic (PV) inverters, energy storage systems (ESSs), and synchronous generators - in those jurisdictions must include a standardized SunSpec Modbus, IEEE 2030.5, or IEEE 1815 (DNP3) communication interface. Utilities and authorized third parties will interact with these DER interfaces to read nameplate information, power measurements, and alarms as well as configure the DER settings and grid-support functionality. In 2020, the certification standard IEEE 1547.1 was revised with test procedures for evaluating the IEEE 1547-2018 interoperability requirements. In this work, we present an open-source framework to evaluate DER interoperability. To demonstrate this capability, we used four test devices: a SunSpec DER Simulator with a SunSpec Modbus interface, an EPRI-developed DER simulator with an IEEE 1815 interface, a Kitu Systems DER simulator with an IEEE 2030.5 interface, and an EPRI IEEE 2030.5-to-Modbus converter. By making this test platform openly available, DER vendors can validate their implementations, utilities can spot check communications to DER equipment, certification laboratories can conduct type testing, and research institutions can more easily research DER interoperability and cybersecurity. We indicate several limitations and ambiguities in the communication protocols, information models, and the IEEE 1547.1-2020 test protocol which were exposed in these evaluations in anticipation that the standards-development organizations will address these issues in the future.

More Details

Partitioned Collective Communication

Proceedings of ExaMPI 2021: Workshop on Exascale MPI, Held in conjunction with SC 2021: The International Conference for High Performance Computing, Networking, Storage and Analysis

Holmes, Daniel J.; Skjellum, Anthony; Jaeger, Julien; Grant, Ryan E.; Schafer, Derek; Bangalore, Purushotham V.; Dosanjh, Matthew D.; Bienz, Amanda

Partitioned point-to-point communication and persistent collective communication were both recently standardized in MPI-4.0. Each offers performance and scalability advantages over MPI-3.1-based communication when planned transfers are feasible in an MPI application. Their merger into a generalized, persistent collective communication with partitions is a logical next step, with significant advantages for performance portability. Non-trivial decisions about the syntax and semantics of such operations need to be addressed, including the scope of knowledge of partitioning choices by members of the communicator's group(s). This paper introduces and motivates proposed interfaces for partitioned collective communication. Partitioned collectives will be particularly useful for multithreaded, accelerator-offloaded, and/or hardware-collective-enhanced MPI implementations driving suitable applications, as well as for pipelined collective communication (e.g., partitioned allreduce) with single consumers and producers per MPI process. These operations also provide load imbalance mitigation. Halo exchange codes arising from regular and irregular grid/mesh applications are a key candidate class of applications for this functionality. Generalizations of the lightweight notification procedures MPI_Parrived and MPI_Pready are considered. Generalizations of MPIX_Pbuf_prepare, a procedure proposed for MPI-4.1 for point-to-point partitioned communication, are also considered, shown in the context of supporting ready-mode send semantics for the operations. The option of providing local and incomplete modes for initialization procedures is mentioned (which could also apply to persistent collective operations); these semantics interact with the MPIX_Pbuf_prepare concept and the progress rule. Last, future work is outlined, indicating prerequisites for formal consideration for the MPI-5 standard.

More Details

Faster classification using compression analytics

IEEE International Conference on Data Mining Workshops, ICDMW

Ting, Christina T.; Johnson, Nicholas; Onunkwo, Uzoma O.; Tucker, James D.

Compression analytics have gained recent interest for application in malware classification and digital forensics. This interest is due to the fact that compression analytics rely on measured similarity between byte sequences in datasets without requiring prior feature extraction; in other words, these methods are featureless. Being featureless makes compression analytics particularly appealing for computer security applications, where good static features are either unknown or easy to circumvent by adversaries. However, previous classification methods based on compression analytics relied on algorithms that scaled with the size of each labeled class and the number of classes. In this work, we introduce an approach that, in addition to being featureless, can perform fast and accurate inference that is independent of the size of each labeled class. Our method is based on calculating a representative sample, the Fréchet mean, for each labeled class and using it at inference time. We introduce a greedy algorithm for calculating the Fréchet mean and evaluate its utility for classification across a variety of computer security applications, including authorship attribution of source code, file fragment type detection, and malware classification.
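
For orientation, a compression-based classifier in the spirit described can be reduced to a nearest-prototype rule with the normalized compression distance; the paper's contribution is the Fréchet-mean prototype, whereas the sketch below simply assumes one fixed representative byte string per class:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, prototypes: dict) -> str:
    """Assign the label of the nearest class prototype.

    prototypes maps label -> one representative byte string per class
    (the paper computes a Frechet mean; any fixed representative works here).
    """
    return min(prototypes, key=lambda label: ncd(sample, prototypes[label]))
```

With one prototype per class, inference cost scales with the number of classes rather than the number of labeled examples, which is the speedup the abstract highlights.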

More Details

Spiking Neural Streaming Binary Arithmetic

Proceedings - 2021 International Conference on Rebooting Computing, ICRC 2021

Aimone, James B.; Hill, Aaron J.; Severa, William M.; Vineyard, Craig M.

Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused upon how to make these operations more efficient as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on-and-off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures where these basic operations are not fundamental low-level operations. Instead, these functions require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.

More Details

Impacts of Substrate Thinning on FPGA Performance and Reliability

Conference Proceedings from the International Symposium for Testing and Failure Analysis

Leonhardt, Darin L.; Cannon, Matthew J.; Dodds, Nathaniel A.; Fellows, Matthew W.; Grzybowski, Thomas A.; Haase, Gad S.; Lee, David S.; LeBoeuf, Thomas L.; Rice, William

Global thinning of integrated circuits is a technique that enables backside failure analysis and radiation testing. Prior work also shows increased thresholds for single-event latchup and upset in thinned devices. We present impacts of global thinning on device performance and reliability of 28 nm node field programmable gate arrays (FPGAs). Devices are thinned to values of 50, 10, and 3 microns using a micromachining and polishing method. Lattice damage, in the form of dislocations, extends about 1 micron below the machined surface. The damage layer is removed after polishing with colloidal SiO2 slurry. We create a 2D finite-element model with linear elasticity equations and flip-chip packaged device geometry to show that thinning increases compressive global stress in the Si, while C4 bumps increase stress locally. Measurements of stress using Raman spectroscopy qualitatively agree with our stress model but also reveal the need for more complex structural models to account for nonlinear effects occurring in devices thinned to 3 microns and after temperature cycling to 125 °C. Thermal imaging shows that increased local heating occurs with increased thinning, but the maximum temperature difference across the 3-micron die is less than 2 °C. Ring oscillators (ROs) programmed throughout the FPGA fabric slow by about 0.5% after thinning compared to full-thickness values. Temperature cycling the devices to 125 °C further decreases RO frequency by about 0.5%, which we attribute to stress changes in the Si.

More Details

Cascaded Second Order Optical Nonlinearities in a Dielectric Metasurface

Optics InfoBase Conference Papers

Gennaro, Sylvain D.; Doiron, Chloe F.; Karl, Nicholas J.; Padmanabha Iyer, Prasad P.; Sinclair, Michael B.; Brener, Igal B.

In this work, we analyze the second and third harmonic signal from a dielectric metasurface in conjunction with polarization selection rules to unambiguously demonstrate the occurrence of cascaded second-order nonlinearities.

More Details

Thermal and Loss Characterization of Mechanically Released Whispering Gallery Mode Waveguide Resonators

Optics InfoBase Conference Papers

Robison, Samuel L.; Grine, Alejandro J.; Wood, Michael G.; Serkland, Darwin K.

We present an empirical methodology for thermally characterizing and determining absorption and scattering losses in released ring whispering gallery mode optical resonators. We used the methodology to deduce absorption and scattering contributions in Q = 308,000 silicon nitride resonators coupled to on-chip waveguides.

More Details

Compact, Pull-in-Free Electrostatic MEMS Actuated Tunable Ring Resonator for Optical Multiplexing

Optics InfoBase Conference Papers

Ruyack, Alexander R.; Grine, Alejandro J.; Finnegan, Patrick S.; Serkland, Darwin K.; Robinson, Samuel; Weatherred, Scott E.; Frost, Megan D.; Nordquist, Christopher N.; Wood, Michael G.

We present an optical wavelength division multiplexer enabled by a ring resonator tuned by MEMS electrostatic actuation. Analytical modeling, simulation, and fabrication are discussed, leading to results that show controlled tuning over greater than one free spectral range (FSR).

More Details

Determination of the photoelastic constants of silicon nitride using piezo-optomechanical photonic integrated circuits and laser Doppler vibrometry

Optics InfoBase Conference Papers

Koppa, Matthew A.; Storey, Matthew J.; Dong, Mark; Heim, David; Leenheer, Andrew J.; Zimmermann, Matthew; Laros, James H.; Gilbert, Gerald; Englund, Dirk; Eichenfield, Matthew S.

We measure the photoelastic constants of piezo-optomechanical photonic integrated circuits incorporating specially formulated, silicon-depleted silicon nitride thin films, using a laser Doppler vibrometer to calibrate the strain produced by the integrated piezoelectric actuators.

More Details

A Comparative Study of SiC JFET Super-Cascode Topologies

2021 IEEE Energy Conversion Congress and Exposition, ECCE 2021 - Proceedings

Gill, Lee G.; Garcia Rodriguez, Luciano A.; Mueller, Jacob M.; Neely, Jason C.

In spite of several advantages of SiC JFETs over enhancement-mode SiC MOSFETs, the intrinsic normally-ON characteristic of JFETs can be undesirable for many industrial power conversion applications due to the negative turn-OFF voltage requirement. This has prevented normally-ON JFETs from being widely accepted in industry. However, a cascode configuration, which uses a low-voltage (LV) Si MOSFET, can be used to enable normally-OFF behavior, making this approach an attractive solution for utilizing the benefits of SiC JFETs. For medium- and high-voltage applications that require a larger blocking voltage than the rating of each JFET, additional devices can be connected in series to increase the overall blocking voltage capability, creating a super-cascode configuration. This paper provides a review of several super-cascode topology variations and presents a comprehensive comparative study, evaluating similarities and differences in operating principles, equivalent circuits, and design considerations and limitations.

More Details

CHARACTERIZING HUMAN PERFORMANCE: DETECTING TARGETS AT HIGH FALSE ALARM RATES

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Speed, Ann S.; Wheeler, Jason W.; Russell, John L.; Oppel, Fred; Sanchez, Danielle; Silva, Austin R.; Chavez, Anna

The prevalence effect is the observation that, in visual search tasks, as the signal (target) to noise (non-target) ratio becomes smaller, humans are more likely to miss the target when it does occur. Studied extensively in the basic literature [e.g., 1, 2], this effect has implications for real-world settings such as security guards monitoring physical facilities for attacks. Importantly, what seems to drive the effect is the development of a response bias based on learned sensitivity to the statistical likelihood of a target [e.g., 3-5]. This paper presents results from two experiments aimed at understanding how target prevalence impacts individuals' ability to detect a target on the 1,000th trial of a series of 1,000 trials. The first experiment employed the traditional prevalence effect paradigm, which involves search for a perfect capital letter T amidst imperfect Ts. In a between-subjects design, our subjects experienced target prevalence rates of 50/50, 1/10, 1/100, or 1/1000. In all conditions, the final trial was always a target. The second (ongoing) experiment replicates this design using a notional physical facility in a mod/sim environment. This simulation enables triggering different intrusion detection sensors by simulated characters and events (e.g., people, animals, weather). In this experiment, subjects viewed 1,000 “alarm” events and were asked to characterize each as either a nuisance alarm (e.g., set off by an animal) or an attack. As with the basic visual search study, the final trial was always an attack.

More Details

Impact of Load Allocation and High Penetration PV Modeling on QSTS-Based Curtailment Studies

IEEE Power and Energy Society General Meeting

Azzolini, Joseph A.; Reno, Matthew J.

The rising penetration levels of photovoltaic (PV) systems within distribution networks have driven considerable interest in the implementation of advanced inverter functions, like autonomous Volt-Var control, to provide grid support in response to adverse conditions. Quasi-static time-series (QSTS) analyses are increasingly being utilized to evaluate advanced inverter functions for their potential benefits to the grid and to quantify the magnitude of PV power curtailment they may induce. However, these analyses require additional modeling efforts to appropriately capture the time-varying behavior of circuit elements like loads and PV systems. The contribution of this paper is to study QSTS-based curtailment evaluations with different load allocation and PV modeling practices under a variety of assumptions and data limitations. A total of 24 combinations of PV and load modeling scenarios were tested on a realistic test circuit with 1,379 loads and 701 PV systems. The results revealed that the average annual curtailment varied from the baseline value of 0.47% by an absolute difference of +0.55% to -0.43% based on the modeling scenario.

More Details

Sage Advice? The Impacts of Explanations for Machine Learning Models on Human Decision-Making in Spam Detection

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Stites, Mallory C.; Nyre-Yu, Megan N.; Moss, Blake C.; Smutz, Charles S.; Smith, Michael R.

The impact of machine learning (ML) explanations and different attributes of explanations on human performance was investigated in a simulated spam detection task. Participants decided whether the metadata presented about an email indicated that it was spam or benign. The task was completed with the aid of a ML model. The ML model’s prediction was displayed on every trial. The inclusion of an explanation and, if an explanation was presented, attributes of the explanation were manipulated within subjects: the number of model input features (3, 7) and visualization of feature importance values (graph, table), as was trial type (i.e., hit, false alarm). Overall model accuracy (50% vs 88%) was manipulated between subjects, and user trust in the model was measured as an individual difference metric. Results suggest that a user’s trust in the model had the largest impact on the decision process. The users showed better performance with a more accurate model, but no differences in accuracy based on number of input features or visualization condition. Rather, users were more likely to detect false alarms made by the more accurate model; they were also more likely to comply with a model “miss” when more model explanation was provided. Finally, response times were longer in individuals reporting low model trust, especially when they did not comply with the model’s prediction. Our findings suggest that the factors impacting the efficacy of ML explanations depends, minimally, on the task, the overall model accuracy, the likelihood of different model errors, and user trust.

More Details

High-Al-content heterostructures and devices

Semiconductors and Semimetals

Kaplar, Robert K.; Baca, A.G.; Douglas, Erica A.; Klein, Brianna A.; Allerman, A.A.; Crawford, Mary H.; Reza, Shahed R.

Ultra-wide-bandgap aluminum gallium nitride (AlGaN) possesses several material properties that make it attractive for use in a variety of applications. This chapter focuses on power switching and radio-frequency (RF) devices based on Al-rich AlGaN heterostructures. The relevant figures of merit for both power switching and RF devices are discussed as motivation for the use of AlGaN heterostructures in such applications. The key physical parameters impacting these figures of merit include critical electric field, channel mobility, channel carrier density, and carrier saturation velocity, and the factors influencing these and the trade-offs between them are discussed. Surveys of both power switching and RF devices are given and their performance is described including in special operating regimes such as at high temperatures. Challenges to be overcome, such as the formation of low-resistivity Ohmic contacts, are presented. Finally, an overview of processing-related challenges, especially related to surfaces and interfaces, concludes the chapter.

More Details

Deep Conservation: A Latent-Dynamics Model for Exact Satisfaction of Physical Conservation Laws

35th AAAI Conference on Artificial Intelligence, AAAI 2021

Lee, Kookjin L.; Carlberg, Kevin T.

This work proposes an approach for latent-dynamics learning that exactly enforces physical conservation laws. The method comprises two steps. First, the method computes a low-dimensional embedding of the high-dimensional dynamical-system state using deep convolutional autoencoders. This defines a low-dimensional nonlinear manifold on which the state is subsequently enforced to evolve. Second, the method defines a latent-dynamics model that associates with the solution to a constrained optimization problem. Here, the objective function is defined as the sum of squares of conservation-law violations over control volumes within a finite-volume discretization of the problem; nonlinear equality constraints explicitly enforce conservation over prescribed subdomains of the problem. Under modest conditions, the resulting dynamics model guarantees that the time-evolution of the latent state exactly satisfies conservation laws over the prescribed subdomains.

More Details

Motion measurement impact on synthetic aperture radar (SAR) geolocation

Proceedings of SPIE - The International Society for Optical Engineering

Doerry, Armin; Bickel, Douglas L.

Often a crucial exploitation of a Synthetic Aperture Radar (SAR) image requires accurate and precise knowledge of its geolocation, or at least the geolocation of a feature of interest in the image. However, SAR, like all radar modes of operation, makes its measurements relative to its own location or position. Consequently, it is crucial to understand how the radar's own position and motion impact the ability to geolocate a feature in the SAR image. Furthermore, the accuracy and precision of navigation aids like GPS directly impact the quality of the geolocation solution.

More Details

Investigation of post-injection strategies for diesel engine Catalyst Heating Operation using a vapor-liquid-equilibrium-based spray model

Journal of Supercritical Fluids

Perini, Federico; Busch, Stephen B.; Reitz, Rolf D.

Most multidimensional engine simulations spend much time solving for non-equilibrium spray dynamics (atomization, collision, vaporization). However, their accuracy is limited by significant grid dependency and the need for extensive calibration. This is critical for modeling cold-start diesel fuel post injections, which occur at low temperatures and pressures, far from typical model validation ranges. At the same time, resolving micron-scale spray phenomena would render full Eulerian multiphase calculations prohibitive. In this study, an improved phase-equilibrium-based approach was implemented and assessed for simulating diesel catalyst heating operation strategies. A phase equilibrium solver based on the model by Yue and Reitz [1] was implemented: a fully multiphase CFD solver is employed with an engineering-size engine grid, and fuel injection is modeled using the standard Lagrangian parcels approach. Mass and energy from the liquid parcels are released to the Eulerian multiphase mixture according to an equilibrium-based liquid jet model. An improved phase equilibrium solver was developed to handle large real-gas mixtures such as those from accurate chemical kinetics mechanisms. The liquid-jet model was improved such that momentum transfer to the Eulerian solver better reproduces the physical spray jet structure. Validation of liquid/vapor penetration predictions showed that the model yields accurate results with very limited tuning and low sensitivity to the few calibration constants. In-cylinder simulations of diesel catalyst heating operation strategies showed that capturing spray structure is paramount when short, transient injection pulses and low temperatures are present. Furthermore, the equilibrium phase (EP) model provides improved predictions of post-injection spray structure and ignitability, while conventional spray modeling does not capture the increase of liquid penetration during the expansion stroke. Finally, the only important EP model calibration constant, Cliq, does not affect momentum transfer, but it changes the local charge cooling distribution through the local energy transfer, which makes it a candidate for additional research. The results confirm that non-equilibrium spray processes do not need to be resolved in engineering simulations of high-pressure diesel sprays.

More Details

AMNESIA RADIUS VERSIONS OF CONDITIONAL POINT SAMPLING FOR RADIATION TRANSPORT IN 1D STOCHASTIC MEDIA

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M&C 2021

Vu, Emily V.; Olson, Aaron J.

Conditional Point Sampling (CoPS) is a newly developed Monte Carlo method for computing radiation transport quantities in stochastic media. The algorithm involves a growing list of point-wise material designations during simulation that causes potentially unbounded increases in memory and runtime, making the calculation of probability density functions (PDFs) computationally expensive. In this work, we adapt CoPS by omitting material points from persistent storage if they are within a user-defined “amnesia radius” of neighboring material points already defined within a realization. We conduct numerical studies to investigate trade-offs between accuracy, required computer memory, and computation time. We demonstrate CoPS's ability to produce accurate mean leakage results and PDFs of leakage results while improving memory and runtime through use of an amnesia radius. We show that a limit on required computer memory per cohort of histories and average runtime per history is imposed as a function of a non-zero amnesia radius. We find that, for the benchmark set investigated, using an amnesia radius of ra = 0.01 introduces minimal error (a 0.006 increase in CoPS3PO root mean squared relative error) while improving memory and runtime by an order of magnitude for a cohort size of 100.

More Details

Response effects due to polygonal representation of pores in porous media thermal models

Proceedings of the 2021 ASME Verification and Validation Symposium, VVS 2021

Irick, Kevin W.; Fathi, Nima

Physics models of engineering systems, such as thermal, structural, and fluid models, often incorporate a geometric aspect such that the model resembles the shape of the true system it represents. However, the physical domain of the model is only a geometric representation of the true system, where geometric features are often simplified for convenience in model construction and to avoid added computational expense when running simulations. The process of simplifying or neglecting different aspects of the system geometry is sometimes referred to as "defeaturing." Typically, modelers will choose to remove small features from the system model, such as fillets, holes, and fasteners. This simplification process can introduce inherent error into the computational model. A similar event can even take place when a computational mesh is generated, where smooth, curved features are represented by jagged, sharp geometries. The geometric representation and feature fidelity in a model can play a significant role in a corresponding simulation's computational solution. In this paper, a porous material system, represented by a single porous unit cell, is considered. The system of interest is a two-dimensional square cell with a centered circular pore, ranging in porosity from 1% to 78%. The circular pore was represented geometrically by a series of regular polygons with the number of sides ranging from 3 to 100. The system response quantity under investigation was the dimensionless effective thermal conductivity, k∗, of the porous unit cell. The results show significant change in the resulting k∗ value depending on the number of polygon sides used to represent the circular pore. In order to mitigate the convolution of discretization error with this type of model form error, a series of five systematically refined meshes was used for each pore representation. Using the finite element method (FEM), the heat equation was solved numerically across the porous unit cell domain. Code verification was performed using the Method of Manufactured Solutions (MMS) to assess the order of accuracy of the implemented FEM. Likewise, solution verification was performed to estimate the numerical uncertainty due to discretization in the problem of interest. Specifically, a modern grid convergence index (GCI) approach was employed to estimate the numerical uncertainty on the systematically refined meshes. The results of the analyses presented in this paper illustrate the importance of understanding the effects of geometric representation in engineering models and can help to predict some model form error introduced by the model geometry.
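
For reference, the classic three-mesh GCI (the paper applies a modern variant, so this is the textbook Roache form, not necessarily the authors' exact estimator) computes an observed order of accuracy and an uncertainty band as:

```latex
% f_1, f_2, f_3: solutions on fine, medium, and coarse meshes with constant
% refinement ratio r; F_s is a safety factor, commonly taken as 1.25.
p = \frac{\ln\!\left( \dfrac{f_3 - f_2}{f_2 - f_1} \right)}{\ln r},
\qquad
\mathrm{GCI}_{\mathrm{fine}} = \frac{F_s}{r^{p} - 1} \left| \frac{f_2 - f_1}{f_1} \right| .
```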

More Details

Evaluating the Impact of Algorithm Confidence Ratings on Human Decision Making in Visual Search

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Jones, Aaron P.; Trumbo, Michael C.; Matzen, Laura E.; Stites, Mallory C.; Howell, Breannan C.; Divis, Kristin; Gastelum, Zoe N.

As the ability to collect and store data grows, so does the need to efficiently analyze that data. As human-machine teams that use machine learning (ML) algorithms as a way to inform human decision-making grow in popularity, it becomes increasingly critical to understand the optimal methods of implementing algorithm-assisted search. In order to better understand how algorithm confidence values associated with object identification can influence participant accuracy and response times during a visual search task, we compared models that provided appropriate confidence, random confidence, and no confidence, as well as a model biased toward overconfidence and a model biased toward underconfidence. Results indicate that randomized confidence is likely harmful to performance, while non-random confidence values are likely better than no confidence value for maintaining accuracy over time. Providing participants with appropriate confidence values did not seem to benefit performance any more than providing participants with under- or overconfident models.

More Details

An asymptotically compatible approach for Neumann-type boundary condition on nonlocal problems

ESAIM: Mathematical Modelling and Numerical Analysis

You, Huaiqian; Lu, Xin Y.; Trask, Nathaniel A.; Yu, Yue

In this paper we consider 2D nonlocal diffusion models with a finite nonlocal horizon parameter δ characterizing the range of nonlocal interactions, and consider the treatment of Neumann-like boundary conditions that have proven challenging for discretizations of nonlocal models. We propose a new generalization of classical local Neumann conditions by converting the local flux to a correction term in the nonlocal model, which provides an estimate for the nonlocal interactions of each point with points outside the domain. While existing 2D nonlocal flux boundary conditions have been shown to exhibit at most first order convergence to the local counter part as δ → 0, the proposed Neumann-type boundary formulation recovers the local case as O(δ2) in the L∞(ω) norm, which is optimal considering the O(δ2) convergence of the nonlocal equation to its local limit away from the boundary. We analyze the application of this new boundary treatment to the nonlocal diffusion problem, and present conditions under which the solution of the nonlocal boundary value problem converges to the solution of the corresponding local Neumann problem as the horizon is reduced. To demonstrate the applicability of this nonlocal flux boundary condition to more complicated scenarios, we extend the approach to less regular domains, numerically verifying that we preserve second-order convergence for non-convex domains with corners. Based on the new formulation for nonlocal boundary condition, we develop an asymptotically compatible meshfree discretization, obtaining a solution to the nonlocal diffusion equation with mixed boundary conditions that converges with O(δ2) convergence.
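
For readers unfamiliar with the setting, a generic nonlocal diffusion operator with horizon δ has the following schematic form (kernel and scaling conventions vary across the literature, so this is illustrative rather than the paper's exact definition):

```latex
\mathcal{L}_{\delta} u(x)
  = \int_{B_{\delta}(x)} \big( u(y) - u(x) \big)\, \gamma_{\delta}(x, y)\, \mathrm{d}y ,
\qquad
\mathcal{L}_{\delta} u \;\to\; \Delta u
  \ \text{ as } \ \delta \to 0 \ \text{ (with } O(\delta^{2}) \text{ error in the interior)} ,
```

where $\gamma_{\delta}$ is a suitably scaled kernel supported on the ball $B_{\delta}(x)$; the boundary treatment discussed above is what preserves this interior rate up to the boundary.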

More Details

The Multiple Instance Learning Gaussian Process Probit Model

Proceedings of Machine Learning Research

Wang, Fulton W.; Pinar, Ali P.

In the Multiple Instance Learning (MIL) scenario, the training data consists of instances grouped into bags. Bag labels specify whether each bag contains at least one positive instance, but instance labels are not observed. Recently, Haußmann et al [10] tackled the MIL instance label prediction task by introducing the Multiple Instance Learning Gaussian Process Logistic (MIL-GP-Logistic) model, an adaptation of the Gaussian Process Logistic Classification model that inherits its uncertainty quantification and flexibility. Notably, they give a fast mean-field variational inference procedure. However, due to their use of the logit link, they do not maximize the variational inference ELBO objective directly, but rather a lower bound on it. This approximation, as we show, hurts predictive performance. In this work, we propose the Multiple Instance Learning Gaussian Process Probit (MIL-GP-Probit) model, an adaptation of the Gaussian Process Probit Classification model to solve the MIL instance label prediction problem. Leveraging the analytical tractability of the probit link, we give a variational inference procedure based on variable augmentation that maximizes the ELBO objective directly. Applying it, we show MIL-GP-Probit is more calibrated than MIL-GP-Logistic on all 20 datasets of the benchmark 20 Newsgroups dataset collection, and achieves higher AUC than MIL-GP-Logistic on an additional 51 out of 59 datasets. Finally, we show how the probit formulation enables principled bag label predictions and a Gibbs sampling scheme. This is the first exact inference scheme for any Bayesian model for the MIL scenario.
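
Under the standard MIL assumption (a bag is positive iff at least one of its instances is positive), a probit instance model induces a bag likelihood of the following common form; whether MIL-GP-Probit uses exactly this factorization is not stated in the abstract:

```latex
% \Phi is the standard normal CDF (probit link) and f the latent GP;
% instances within bag b are treated as conditionally independent.
p\left( y_b = 1 \mid f \right)
  = 1 - \prod_{i \in b} \left( 1 - \Phi\big( f(x_i) \big) \right).
```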

More Details

HIERARCHICAL PARALLELISM FOR TRANSIENT SOLID MECHANICS SIMULATIONS

World Congress in Computational Mechanics and ECCOMAS Congress

Littlewood, David J.; Jones, Reese E.; Laros, James H.; Plews, Julia A.; Hetmaniuk, Ulrich L.; Lifflander, Jonathan

Software development for high-performance scientific computing continues to evolve in response to increased parallelism and the advent of on-node accelerators, in particular GPUs. While these hardware advancements have the potential to significantly reduce turnaround times, they also present implementation and design challenges for engineering codes. We investigate the use of two strategies to mitigate these challenges: the Kokkos library for performance portability across disparate architectures, and the DARMA/vt library for asynchronous many-task scheduling. We investigate the application of Kokkos within the NimbleSM finite element code and the LAMÉ constitutive model library. We explore the performance of DARMA/vt applied to NimbleSM contact mechanics algorithms. Software engineering strategies are discussed, followed by performance analyses of relevant solid mechanics simulations which demonstrate the promise of Kokkos and DARMA/vt for accelerated engineering simulators.

More Details

Computing potential of the mean force profiles for ion permeation through channelrhodopsin Chimera, C1C2

Methods in Molecular Biology

Rempe, Susan R.; Priest, Chad; Vangordon, Monika R.; Rempe, Caroline; Stevens, Mark J.; Rick, Steve

Umbrella sampling, coupled with a weighted histogram analysis method (US-WHAM), can be used to construct potentials of mean force (PMFs) for studying the complex ion permeation pathways of membrane transport proteins. Despite the widespread use of US-WHAM, obtaining a physically meaningful PMF can be challenging. Here, we provide a protocol to resolve that issue. Then, we apply that protocol to compute a meaningful PMF for sodium ion permeation through channelrhodopsin chimera, C1C2, for illustration.
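
The standard WHAM equations solved self-consistently in such a workflow are:

```latex
% K umbrella windows with bias potentials w_k(\xi), binned counts n_k(\xi),
% and N_k total samples in window k; iterate the two equations to convergence.
P(\xi) = \frac{\sum_{k=1}^{K} n_k(\xi)}
              {\sum_{k=1}^{K} N_k \, e^{\beta \left[ F_k - w_k(\xi) \right]}},
\qquad
e^{-\beta F_k} = \sum_{\xi} P(\xi)\, e^{-\beta w_k(\xi)} ,
```

after which the PMF follows as $-k_{\mathrm{B}} T \ln P(\xi)$.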

More Details

Fracture Formation in Layered Synthetic Rocks with Oriented Mineral Fabric under Mixed Mode I and II Loading Conditions

55th U.S. Rock Mechanics / Geomechanics Symposium 2021

Jiang, Liyang; Yoon, Hongkyu Y.; Bobet, Antonio; Pyrak-Nolte, Laura J.

Anisotropy in the mechanical properties of rock is often attributed to bedding and mineral texture. Here, we use 3D-printed synthetic rock to show that, in addition to bedding layers, mineral fabric orientation governs sample strength, surface roughness, and fracture path under mixed mode I and II three-point bending (3PB) tests. Arrester (horizontal layering) and short-transverse (vertical layering) samples were printed with different notch locations to compare pure mode I induced fractures to mixed mode I and II fracturing. For a given sample type, the location of the notch affected the intensity of mode II loading, and thus the peak failure load and fracture path. When notches were printed at the same location, crack propagation, peak failure load, and fracture surface roughness were found to depend on both the layer and mineral fabric orientations. The uniqueness of the induced fracture path and roughness is a potential method for assessing the orientation and relative bonding strengths of minerals in a rock. With this information, we will be able to predict isotropic or anisotropic flow rates through fractures, which is vital to induced fracturing, geothermal energy production, and CO2 sequestration.

More Details

Solving Inverse Problems for Process-Structure Linkages Using Asynchronous Parallel Bayesian Optimization

Minerals, Metals and Materials Series

Laros, James H.; Wildey, Timothy M.

Process-structure linkage is one of the most important topics in materials science because virtually all information related to a material, including its manufacturing process, lies in the microstructure itself. Therefore, to learn more about the process, one must start by thoroughly examining the microstructure. This gives rise to inverse problems in the context of process-structure linkages, which attempt to identify the processes that were used to manufacture a given microstructure. In this work, we present an inverse problem for structure-process linkages, which we solve using asynchronous parallel Bayesian optimization to exploit parallel computing resources. We demonstrate the effectiveness of the method using a kinetic Monte Carlo model for grain growth simulation.

More Details

Optimizing Power System Stability Margins after Wide-Area Emergencies

IEEE Power and Energy Society General Meeting

Hoffman, Matthew J.; Arguello, Bryan A.; Pierre, Brian J.; Guttromson, Ross G.; Nelson, April M.

Severe, wide-area power system emergencies are rare but highly impactful. Such emergencies are likely to move the system well outside normal operating conditions. Appropriate remedial operation plans are unlikely to exist, and visibility into system stability is limited. Inspired by the literature on Transient Stability Constrained Optimal Power Flow and Emergency Control, we propose a stability-incentivized dynamic control optimization formulation. The formulation is designed to safely bring the system to an operating state with better operational and stability margins, reduced transmission line overlimits, and better power quality. Our use case demonstrates proof of concept that coordinated wide-area control has the potential to significantly improve power system state following a severe emergency.

More Details

Etched-and-Regrown GaN pn-Diodes with 1600 V Blocking Voltage

IEEE Journal of the Electron Devices Society

Armstrong, Andrew A.; Allerman, A.A.; Pickrell, Gregory P.; Crawford, Mary H.; Glaser, Caleb E.; Smith, Trevor S.

Etched-and-regrown GaN pn-diodes capable of high breakdown voltage (1610 V), low reverse current leakage (1 nA = 6 μA/cm² at 1250 V), excellent forward characteristics (ideality factor 1.6), and low specific on-resistance (1.1 mΩ·cm²) were realized by mitigating plasma etch-related defects at the regrown interface. Epitaxial n-GaN layers grown by metal-organic chemical vapor deposition on free-standing GaN substrates were etched using inductively coupled plasma (ICP) etching, and we demonstrate that a slow reactive ion etch (RIE) prior to p-GaN regrowth dramatically increases diode electrical performance compared to wet chemical surface treatments. Etched-and-regrown diodes without a junction termination extension (JTE) were characterized to compare diode performance using the post-ICP RIE method with prior studies of other post-ICP treatments. Then, etched-and-regrown diodes using the post-ICP RIE etch steps prior to regrowth were fabricated with a multi-step JTE to demonstrate kV-class operation.

More Details

Simultaneous 10 kHz three-dimensional CH2O and tomographic PIV measurements in a lifted partially-premixed jet flame

Proceedings of the Combustion Institute

Zhou, Bo; Li, Tao; Frank, Jonathan H.; Dreizler, Andreas; Bohm, Benjamin

High-speed, three-dimensional (3D) scalar-velocity field measurements were demonstrated in a lifted partially-premixed dimethyl-ether/air jet flame using simultaneous laser-induced fluorescence (LIF) of formaldehyde and tomographic particle image velocimetry (TPIV). The 3D LIF measurements were conducted by raster scanning the laser beam from a 100 kHz pulse-burst laser across the probe volume using an acousto-optic deflector. The volumetric reconstruction of the LIF signal from ten parallel planes provides quasi-instantaneous 3D LIF measurements that are synchronized with 10 kHz TPIV measurements. The temporally resolved formaldehyde-LIF and velocity field data were employed to analyze Lagrangian particle trajectories and displacement speeds at the base of the lifted flame. The particle trajectories revealed flow structures that are difficult to observe in an Eulerian reference frame. Positive and negative displacement speeds were observed at the formaldehyde-LIF surfaces at the inner and outer regions of the jet flame with a maximum displacement speed of approximately eight times the laminar flame speed of a stoichiometric dimethyl-ether/air mixture.

More Details

Data-Driven Incident Detection in Power Distribution Systems

IEEE Power and Energy Society General Meeting

Aguiar, Nayara; Trevizan, Rodrigo D.; Gupta, Vijay; Chalamala, Babu C.; Byrne, Raymond H.

In a power distribution network with energy storage systems (ESS) and advanced controls, traditional monitoring and protection schemes are not well suited for detecting anomalies such as malfunction of controllable devices. In this work, we propose a data-driven technique for the detection of incidents relevant to the operation of ESS in distribution grids. This approach leverages the causal relationship observed among sensor data streams, and does not require prior knowledge of the system model or parameters. Our methodology includes a data augmentation step which allows for the detection of incidents even when sensing is scarce. The effectiveness of our technique is illustrated through case studies which consider active power dispatch and reactive power control of ESS.

More Details

Near-Surface Imaging of the Multicomponent Gas Phase above a Silver Catalyst during Partial Oxidation of Methanol

ACS Catalysis

Zhou, Bo; Huang, Erxiong H.; Almeida, Raybel A.; Gurses, Sadi; Ungar, Alexander; Zetterberg, Johan; Kulkarni, Ambarish; Kronawitter, Coleman X.; Osborn, David L.; Hansen, Nils H.; Frank, Jonathan H.

Fundamental chemistry in heterogeneous catalysis is increasingly explored using operando techniques in order to address the pressure gap between ultrahigh vacuum studies and practical operating pressures. Because most operando experiments focus on the surface and surface-bound species, there is a knowledge gap concerning the near-surface gas phase and the fundamental information that the properties of this region convey about catalytic mechanisms. We demonstrate in situ visualization and measurement of gas-phase species and temperature distributions in operando catalysis experiments using complementary near-surface optical and mass spectrometry techniques. The partial oxidation of methanol over a silver catalyst demonstrates the value of these diagnostic techniques at 600 Torr (800 mbar) pressure and temperatures from 150 to 410 °C. Planar laser-induced fluorescence provides two-dimensional images of the formaldehyde product distribution that show the development of the boundary layer above the catalyst under different flow conditions. Raman scattering imaging provides measurements of a wide range of major species, such as methanol, oxygen, nitrogen, formaldehyde, and water vapor. Near-surface molecular beam mass spectrometry enables simultaneous detection of all species using a gas sampling probe. Detection of gas-phase free radicals, such as CH3 and CH3O, and of minor products, such as acetaldehyde, dimethyl ether, and methyl formate, provides insights into catalytic mechanisms of the partial oxidation of methanol. The combination of these techniques provides a detailed picture of the coupling between the gas phase and surface in heterogeneous catalysis and enables parametric studies under different operating conditions, which will enhance our ability to constrain microkinetic models of heterogeneous catalysis.

More Details

EMPIRE-PIC: A performance portable unstructured particle-in-cell code

Communications in Computational Physics

Bettencourt, Matthew T.; Brown, Dominic A.S.; Cartwright, Keith L.; Cyr, Eric C.; Glusa, Christian A.; Lin, Paul T.; Moore, Stan G.; Mcgregor, Duncan A.O.; Pawlowski, Roger P.; Phillips, Edward G.; Roberts, Nathan V.; Wright, Steven A.; Maheswaran, Satheesh; Jones, John P.; Jarvis, Stephen A.

In this paper we introduce EMPIRE-PIC, a finite element method particle-in-cell (FEM-PIC) application developed at Sandia National Laboratories. The code has been developed in C++ using the Trilinos library and the Kokkos Performance Portability Framework to enable running on multiple modern compute architectures while only requiring maintenance of a single codebase. EMPIRE-PIC is capable of solving both electrostatic and electromagnetic problems in two and three dimensions to second-order accuracy in space and time. In this paper we validate the code against three benchmark problems: a simple electron orbit, an electrostatic Langmuir wave, and a transverse electromagnetic wave propagating through a plasma. We demonstrate the performance of EMPIRE-PIC on four different architectures: Intel Haswell CPUs, Intel's Xeon Phi Knights Landing, Arm ThunderX2 CPUs, and NVIDIA Tesla V100 GPUs attached to IBM POWER9 processors. This analysis demonstrates scalability of the code up to more than two thousand GPUs and more than one hundred thousand CPUs.

More Details

Photothermal alternative to device fabrication using atomic precision advanced manufacturing techniques

Journal of Micro/Nanopatterning, Materials and Metrology

Katzenmeyer, Aaron M.; Dmitrovic, Sanja; Baczewski, Andrew D.; Campbell, Quinn C.; Bussmann, Ezra B.; Lu, Tzu-Ming L.; Anderson, Evan M.; Schmucker, Scott W.; Ivie, Jeffrey A.; Campbell, DeAnna M.; Ward, Daniel R.; Scrymgeour, David S.; Wang, George T.; Misra, Shashank M.

The attachment of dopant precursor molecules to depassivated areas of hydrogen-terminated silicon templated with a scanning tunneling microscope (STM) has been used to create electronic devices with subnanometer precision, typically for quantum physics experiments. This process, which we call atomic precision advanced manufacturing (APAM), dopes silicon beyond the solid-solubility limit and produces electrical and optical characteristics that may also be useful for microelectronic and plasmonic applications. However, scanned probe lithography lacks the throughput required to develop more sophisticated applications. Here, we demonstrate and characterize an APAM device workflow where scanned probe lithography of the atomic layer resist has been replaced by photolithography. An ultraviolet laser is shown to locally and controllably heat silicon above the temperature required for hydrogen depassivation on a nanosecond timescale, a process resistant to under- and overexposure. STM images indicate a narrow range of energy density where the surface is both depassivated and undamaged. Modeling that accounts for photothermal heating and the subsequent hydrogen desorption kinetics suggests that the silicon surface temperatures reached in our patterning process exceed those required for hydrogen removal in temperature-programmed desorption experiments. A phosphorus-doped van der Pauw structure made by sequentially photodepassivating a predefined area and then exposing it to phosphine is found to have a similar mobility and higher carrier density compared with devices patterned by STM. Lastly, it is also demonstrated that photodepassivation and precursor exposure steps may be performed concomitantly, a potential route to enabling APAM outside of ultrahigh vacuum.

More Details

A second benchmarking exercise on estimating extreme environmental conditions: Methodology & baseline results

Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE

Mackay, Ed; Haselsteiner, Andreas F.; Coe, Ryan G.; Manuel, Lance

Estimating extreme environmental conditions remains a key challenge in the design of offshore structures. This paper describes an exercise for benchmarking methods for extreme environmental conditions, which follows on from an initial benchmarking exercise introduced at OMAE 2019. In this second exercise, we address the problem of estimating extreme metocean conditions in a variable and changing climate. The study makes use of several very long datasets from a global climate model, including a 165-year historical run, a 700-year pre-industrial control run, which represents a quasi-steady state climate, and several runs under various future emissions scenarios. The availability of the long datasets allows for an in-depth analysis of the uncertainties in the estimated extreme conditions and an attribution of the relative importance of uncertainties resulting from modelling choices, natural climate variability, and potential future changes to the climate. This paper outlines the methodology for the second collaborative benchmarking exercise and presents baseline results for the selected datasets.
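
To make the baseline task concrete, here is a minimal sketch, assuming synthetic data and scipy's genextreme as the model (the benchmark's prescribed methods and datasets are not reproduced): fit a generalized extreme value (GEV) distribution to annual maxima and read off a 100-year return level.

```python
# Hypothetical sketch: fit a GEV distribution to annual-maximum significant
# wave heights and estimate a 100-year return level. The synthetic data stand
# in for climate-model output; genextreme is one common modelling choice.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=9.0, scale=1.2,
                               size=165, random_state=rng)  # 165 "years"

# Fit GEV parameters by maximum likelihood.
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the annual-max law.
T = 100.0
return_level = genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
print(f"estimated 100-year return level: {return_level:.2f} m")
```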

More Details

A Case Study on Pathogen Transport, Deposition, Evaporation and Transmission: Linking High-Fidelity Computational Fluid Dynamics Simulations to Probability of Infection

International Journal of Computational Fluid Dynamics

Domino, Stefan P.

A high-fidelity, low-Mach computational fluid dynamics simulation tool that includes evaporating droplets and variable-density turbulent flow coupling is well-suited to ascertain transmission probability and supports risk mitigation methods development for airborne infectious diseases such as COVID-19. A multi-physics large-eddy simulation-based paradigm is used to explore droplet and aerosol pathogen transport from a synthetic cough emanating from a kneeling humanoid. For an outdoor configuration that mimics the recent open-space social distance strategy of San Francisco, maximum primary droplet deposition distances are shown to approach 8.1 m in a moderate wind configuration with the aerosol plume transported in excess of 15 m. In quiescent conditions, the aerosol plume extends to approximately 4 m before the emanating pulsed jet becomes neutrally buoyant. A dose–response model, which is based on previous SARS coronavirus (SARS-CoV) data, is exercised on the high-fidelity aerosol transport database to establish relative risk at eighteen virtual receptor probe locations.
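
For intuition about the final step, a minimal sketch of an exponential dose-response calculation of the kind referenced above; the infectivity constant k and the dose values are illustrative assumptions, not numbers from the paper.

```python
# Exponential dose-response sketch: probability of infection given an
# inhaled dose d is P = 1 - exp(-d / k). All values are assumptions.
import numpy as np

k = 410.0                                  # illustrative infectivity constant
inhaled_dose = np.array([0.5, 5.0, 50.0])  # doses at three receptor probes

p_infection = 1.0 - np.exp(-inhaled_dose / k)
print(p_infection)                         # relative risk across probes
```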

More Details

Machine learning application for permeability estimation of three-dimensional rock images

CEUR Workshop Proceedings

Yoon, Hongkyu Y.; Melander, Darryl J.; Verzi, Stephen J.

Estimation of permeability in porous media is fundamental to understanding coupled multi-physics processes critical to various geoscience and environmental applications. Recent emerging machine learning methods with physics-based constraints and/or physical properties can provide a new means to improve computational efficiency while improving machine learning-based prediction by accounting for physical information during training. Here we first used three-dimensional (3D) real rock images to estimate permeability of fractured and porous media using 3D convolutional neural networks (CNNs) coupled with physics-informed pore topology characteristics (e.g., porosity, surface area, connectivity) during the training stage. Training data of permeability were generated using lattice Boltzmann simulations of segmented real rock 3D images. Our preliminary results show that neural network architecture and usage of physical properties strongly impact the accuracy of permeability predictions. In the future we can adjust our methodology to other rock types by choosing the appropriate architecture and proper physical properties, and optimizing the hyperparameters.
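
A hedged sketch, in PyTorch, of the general idea described above: a small 3D CNN whose learned volumetric features are concatenated with scalar pore-topology descriptors before a permeability regression head. Layer sizes, the PermeabilityCNN name, and the input shapes are assumptions, not the paper's architecture.

```python
# Minimal 3D CNN with physics-informed scalar inputs (porosity, surface
# area, connectivity) concatenated before the regression head.
import torch
import torch.nn as nn

class PermeabilityCNN(nn.Module):
    def __init__(self, n_phys: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # -> (B, 16, 1, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Linear(16 + n_phys, 32), nn.ReLU(),
            nn.Linear(32, 1),                 # e.g. log-permeability target
        )

    def forward(self, volume, phys):
        feats = self.conv(volume).flatten(1)            # (B, 16)
        return self.head(torch.cat([feats, phys], 1))   # (B, 1)

model = PermeabilityCNN()
volume = torch.rand(2, 1, 64, 64, 64)   # batch of segmented 3D rock images
phys = torch.rand(2, 3)                 # porosity, surface area, connectivity
print(model(volume, phys).shape)        # torch.Size([2, 1])
```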

More Details

On the use of MBPE to mitigate corrupted data in radar applications

Proceedings of SPIE - The International Society for Optical Engineering

Maio, Brianna N.; Dawood, Muhammed; Loui, Hung L.

An algorithm is developed based on Edmund K. Miller's Model-Based Parameter Estimation (MBPE) technique to mitigate the effects of missing or corrupted data in random regions of wideband linear frequency modulated (LFM) radar signals. Two methods of applying MBPE in the spectral/frequency domain are presented that operate on either the full complex data or separated magnitude/phase data, respectively. The final algorithm iteratively applies MBPE using the latter approach to regenerate results in the corrupted regions of a windowed LFM signal until the difference is minimized relative to uncorrupted data. Several sets of simulations were conducted across many randomized gap parameters, and the impulse response (IPR) impacts are summarized. Conditions where the algorithm successfully improved the IPR for a single target are provided. The algorithm's effectiveness on multiple targets, especially when the corrupted regions are relatively large compared to the overall bandwidth of the signal, is also explored.
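
A loose sketch of the separated magnitude/phase workflow on synthetic data; note that MBPE proper fits rational-function (pole/residue) models, so the simple polynomial fits below are stand-ins chosen only to illustrate the regenerate-the-gap flow.

```python
# Fit models to magnitude and unwrapped phase of the uncorrupted spectral
# samples, then regenerate the corrupted bins from the models.
import numpy as np

n = 256
f = np.linspace(0.0, 1.0, n)
true = np.exp(1j * 2 * np.pi * (2 * f + 5 * f**2))  # slowly swept LFM-like tone
observed = true.copy()
good = np.ones(n, dtype=bool)
good[100:108] = False
observed[~good] = 0.0                               # corrupted region

# Separated magnitude/phase models, fit on uncorrupted samples only.
mag = np.abs(observed[good])
phase = np.unwrap(np.angle(observed[good]))
mag_fit = np.polynomial.Polynomial.fit(f[good], mag, deg=3)
phase_fit = np.polynomial.Polynomial.fit(f[good], phase, deg=4)

restored = observed.copy()
restored[~good] = mag_fit(f[~good]) * np.exp(1j * phase_fit(f[~good]))
print(np.max(np.abs(restored - true)))              # small residual in the gap
```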

More Details

Greedy Fiedler spectral partitioning for data-driven discrete exterior calculus

CEUR Workshop Proceedings

Huang, Andy H.; Trask, Nathaniel A.; Brissette, Christopher; Hu, Xiaozhe

The data-driven discrete exterior calculus (DDEC) structure provides a novel machine learning architecture for discovering structure-preserving models which govern data, allowing for example machine learning of reduced-order models for complex continuum-scale physical systems. In this work, we present a Greedy Fiedler Spectral (GFS) partitioning method to obtain a chain complex structure to support DDEC models, incorporating synthetic data obtained from high-fidelity solutions to partial differential equations. We provide justification for the effectiveness of the resulting chain complex and demonstrate a DDEC model trained for Darcy flow on a heterogeneous domain.
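
For readers unfamiliar with the building block named in the title, a minimal sketch of Fiedler-vector spectral partitioning on a stand-in grid graph; the greedy chain-complex construction of the paper is not reproduced.

```python
# Split a graph by the sign of the second-smallest Laplacian eigenvector
# (the Fiedler vector). Dense eigensolve is fine at this toy size.
import numpy as np
import networkx as nx

G = nx.grid_2d_graph(8, 8)                 # stand-in mesh-like graph
L = nx.laplacian_matrix(G).astype(float)   # sparse combinatorial Laplacian

vals, vecs = np.linalg.eigh(L.toarray())   # eigenvalues in ascending order
fiedler = vecs[:, 1]                       # second eigenvector

part = fiedler >= 0                        # sign cut -> two subdomains
print(part.sum(), (~part).sum())           # roughly balanced partition
```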

More Details

Phenomenology-informed techniques for machine learning with measured and synthetic SAR imagery

Proceedings of SPIE - The International Society for Optical Engineering

Walker, Christopher W.; Laros, James H.; Erteza, Ireena A.; Bray, Brian K.

Phenomenology-Informed (PI) Machine Learning is introduced to address the unique challenges faced when applying modern machine-learning object recognition techniques to the SAR domain. PI-ML includes a collection of data normalization and augmentation techniques inspired by successful SAR ATR algorithms designed to bridge the gap between simulated and real-world SAR data for use in training Convolutional Neural Networks (CNNs) that perform well in the low-noise, feature-dense space of camera-based imagery. The efficacy of PI-ML will be evaluated using ResNet, EfficientNet, and other networks, using both traditional training techniques and all-SAR transfer learning.

More Details

Extended-short-wavelength infrared AlInAsSb and InPAsSb detectors on InAs

Proceedings of SPIE - The International Society for Optical Engineering

Klem, John F.; Olesberg, Jonathon T.; Hawkins, Samuel D.; Weiner, P.H.; Deitz, Julia D.; Kadlec, C.N.; Shaner, Eric A.; Coon, Wesley T.

We have fabricated and characterized AlInAsSb- and InPAsSb-absorber nBn infrared detectors with 200 K cutoff wavelengths from 2.55 to 3.25 μm. Minority-carrier lifetimes determined by microwave reflectance measurements were 0.2-1.0 μs in doped n-type absorber materials. Devices having 4 μm thick absorbers exhibited sharp cutoff at wavelengths of 2.9 μm or longer and softer cutoff at shorter wavelengths. Top-illuminated devices with n+ InAs window/contact layers had external quantum efficiencies of 40-50% without anti-reflection coating at 50 mV reverse bias and wavelengths slightly shorter than cutoff. Despite the shallow-etch mesa nBn design, perimeter currents contributed significantly to the 200 K dark current. Dark currents for InPAsSb devices were lower than for AlInAsSb devices with similar cutoff wavelengths. For unoptimized InPAsSb devices with 2.55 μm cutoff, 200 K areal and perimeter dark current densities at -0.2 V bias in devices of various sizes were approximately 1×10^-7 A/cm2 and 1.4×10^-8 A/cm, respectively.

More Details

Risk-Informed Access Delay Timeline Development

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Osborn, Douglas M.; Brooks, Dusty M.; Thompson, Andrew D.

The U.S. Department of Energy Office of Nuclear Energy’s Light Water Reactor Sustainability Program is developing a new method to modernize how access delay timelines are developed and utilized in physical security system evaluations. This new method utilizes Bayesian methods to combine subject matter expert judgement and small performance test datasets in a consistent and defensible way. It will also allow a more holistic view of delay performance that provides distributions of task times and task success probabilities to account for tasks that, if failed, would result in failure of the attack. This paper describes the methodology and its application to an example problem, demonstrating that it can be applied to access delay timeline analyses to summarize delay performance using subjective and objective information.
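
A minimal sketch of the combination step, assuming a Beta prior as the encoding of expert judgement and a conjugate Beta-Binomial update; the prior parameters and test counts are illustrative assumptions.

```python
# Combine SME judgement (Beta prior on a task success probability) with a
# small performance-test dataset via the conjugate Beta-Binomial update.
from scipy.stats import beta

# SME judgement: task succeeds "around 80% of the time" -> Beta(8, 2) prior.
a_prior, b_prior = 8.0, 2.0

# Small performance test: 4 successes in 6 trials (illustrative).
successes, trials = 4, 6

a_post = a_prior + successes
b_post = b_prior + (trials - successes)
posterior = beta(a_post, b_post)

print(f"posterior mean: {posterior.mean():.3f}")
print(f"90% credible interval: {posterior.interval(0.90)}")
```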

More Details

Postprocessing techniques for gradient percolation predictions on the square lattice

Physical Review E

Tencer, John T.; Forsberg, Kelsey A.

In this work, we revisit the classic problem of site percolation on a regular square lattice. In particular, we investigate the effect of quantization bias errors on percolation threshold predictions for large probability gradients and propose a mitigation strategy. We demonstrate through extensive computational experiments that the assumption of a linear relationship between probability gradient and percolation threshold used in previous investigations is invalid. Moreover, we demonstrate that, due to skewness in the distribution of occupation probabilities visited, the average does not converge monotonically to the true percolation threshold. We identify several alternative metrics which do exhibit monotonic (albeit not linear) convergence and document their observed convergence rates.
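
As background, a minimal sketch of the underlying model, uniform site percolation with a spanning-cluster test; the gradient-percolation estimators and bias corrections studied in the paper are not reproduced.

```python
# Occupy sites with probability p and test for a top-to-bottom spanning
# cluster on the square lattice (4-connectivity, the default for label).
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(1)

def spans(p: float, n: int = 128) -> bool:
    occupied = rng.random((n, n)) < p
    labels, _ = label(occupied)            # connected-cluster labels
    top, bottom = np.unique(labels[0]), np.unique(labels[-1])
    common = np.intersect1d(top, bottom)
    return bool(np.any(common > 0))        # label 0 is the empty phase

for p in (0.55, 0.59, 0.63):               # threshold is near 0.5927
    print(p, spans(p))
```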

More Details

Exploring characteristics of neural network architecture computation for enabling SAR ATR

Proceedings of SPIE - The International Society for Optical Engineering

Melzer, Ryan D.; Severa, William M.; Plagge, Mark P.; Vineyard, Craig M.

Neural network approaches have periodically been explored in the pursuit of high-performing SAR ATR solutions. With deep neural networks (DNNs) now offering many state-of-the-art solutions to computer vision tasks, neural networks are once again being revisited for ATR processing. Here, we characterize and explore a suite of neural network architectural topologies. In doing so, we assess how different architectural approaches impact performance and consider the associated computational costs. This includes characterizing network depth, width, scale, connectivity patterns, as well as convolution layer optimizations. We have explored a suite of architectural topologies applied to both the canonical MSTAR dataset, as well as the more operationally realistic Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset. The latter pairs high-fidelity computational models of targets with actual measured SAR data. Effectively, this dataset offers the ability to train a DNN on simulated data and test the network performance on measured data. Not only does our in-depth architecture topology analysis offer insight into how different architectural approaches impact performance, but we have also trained DNNs attaining state-of-the-art performance on both datasets. Furthermore, beyond just accuracy, we also assess how efficiently an accelerator architecture executes these neural networks. Specifically, using an analytical assessment tool, we forecast energy and latency for an edge-TPU-like architecture. Taken together, this tradespace exploration offers insight into the interplay of accuracy, energy, and latency for executing these networks.

More Details

Development of Quantum Interconnects (QuICs) for Next-Generation Information Technologies

PRX Quantum

Davids, Paul D.

Just as "classical"information technology rests on a foundation built of interconnected information-processing systems, quantum information technology (QIT) must do the same. A critical component of such systems is the "interconnect,"a device or process that allows transfer of information between disparate physical media, for example, semiconductor electronics, individual atoms, light pulses in optical fiber, or microwave fields. While interconnects have been well engineered for decades in the realm of classical information technology, quantum interconnects (QuICs) present special challenges, as they must allow the transfer of fragile quantum states between different physical parts or degrees of freedom of the system. The diversity of QIT platforms (superconducting, atomic, solid-state color center, optical, etc.) that will form a "quantum internet"poses additional challenges. As quantum systems scale to larger size, the quantum interconnect bottleneck is imminent, and is emerging as a grand challenge for QIT. For these reasons, it is the position of the community represented by participants of the NSF workshop on "Quantum Interconnects"that accelerating QuIC research is crucial for sustained development of a national quantum science and technology program. Given the diversity of QIT platforms, materials used, applications, and infrastructure required, a convergent research program including partnership between academia, industry, and national laboratories is required.

More Details

Dynamic Programming Method to Optimally Select Power Distribution System Reliability Upgrades

IEEE Open Access Journal of Power and Energy

Raja, S.; Arguello, Bryan A.; Pierre, Brian J.

This paper presents a novel dynamic programming (DP) technique for the determination of optimal investment decisions to improve power distribution system reliability metrics. This model is designed to select the optimal small-scale investments to protect an electrical distribution system from disruptions. The objective is to minimize distribution system reliability metrics: System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI). The primary input to this optimization model is years of recent utility historical outage data. The DP optimization technique is compared and validated against an equivalent mixed integer linear program (MILP). Through testing on synthetic and real datasets, both approaches are verified to yield equally optimal solutions. Efficiency profiles of each approach indicate that the DP algorithm is more efficient when considering wide budget ranges or a larger outage history, while the MILP model more efficiently handles larger distribution systems. The model is tested with utility data from a distribution system operator in the U.S. Results demonstrate a significant improvement in SAIDI and SAIFI metrics with the optimal small-scale investments.
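
An illustrative sketch in the spirit of the paper's DP: a 0/1 knapsack recursion that picks upgrades under an integer budget to maximize total SAIDI reduction. The costs, benefit values, and single-metric objective are made-up simplifications of the paper's formulation.

```python
# 0/1 knapsack DP over budget: best[b] = max SAIDI reduction with budget b.
def select_upgrades(costs, saidi_reduction, budget):
    n = len(costs)
    best = [0.0] * (budget + 1)
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):  # reverse: 0/1 semantics
            cand = best[b - costs[i]] + saidi_reduction[i]
            if cand > best[b]:
                best[b] = cand
                choice[i][b] = True
    # Trace back the chosen upgrade set.
    picked, b = [], budget
    for i in reversed(range(n)):
        if choice[i][b]:
            picked.append(i)
            b -= costs[i]
    return best[budget], picked

red, picked = select_upgrades([3, 5, 4, 2], [1.8, 2.9, 2.5, 0.9], budget=8)
print(red, picked)   # 4.7, upgrades [1, 0]
```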

More Details

Correlating incident heat flux and source temperature to meet ASTM E1529 requirements for RAM packaging components thermal testing

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Baird, Austin R.; Gill, Walt; Mendoza, Hector M.; Figueroa Faria, Victor G.

Often in fire resistance testing of packaging vessels and other components, both the heat source temperature and the incident heat flux on a test specimen need to be measured and correlated. Standards such as ASTM E1529 require a specified temperature range from the heat source and a specified heat flux on the surface of the test specimen. There are other standards that have similar requirements. The geometry of the test environment and specimen may make heat flux measurements using traditional instruments (directional flame thermometers (DFTs) and water-cooled radiometers) difficult to implement. Orientation of the test specimen with respect to the thermal environment is also important to ensure that the heat flux on the surface of the test specimen is properly measured. Other important factors in the flux measurement include the thermal mass and surface emissivity of the test specimen. This paper describes the development of a cylindrical calorimeter using water-cooled wide-angle Schmidt-Boelter gauges to measure the incident heat flux for a vessel exposed to a radiant heat source. The calorimeter is designed to be modular, supporting multiple configurations while meeting emissivity and thermal mass requirements via a variable thermal mass. The results of the incident heat flux and source temperature measurements, along with effective/apparent emissivity calculations, are discussed.

More Details

Fire-induced pressure response and failure of 3013 containers

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Mendoza, Hector M.; Gill, Walt; Baird, Austin R.; Figueroa Faria, Victor G.; Hensel, Steve; Sanborn, Scott E.

Several Department of Energy (DOE) facilities have nuclear or hazardous materials stored in robust, welded, stainless-steel containers with undetermined fire-induced pressure response behaviors. Lack of test data related to fire exposure requires conservative safety analysis assumptions for container response at these facilities. This conservatism can in turn result in the implementation of challenging operational restrictions with costly nuclear safety controls. To help address this issue for sites that store DOE 3013 stainless-steel containers, a series of five tests were undertaken at Sandia National Laboratories. The goal of this test series was to obtain the response behavior for various configurations of the DOE 3013 containers when exposed to various fire conditions. Key parameters measured in the test series included identification of failure-specific characteristics such as pressure, temperature, and leak/burst failure type. This paper describes the development and execution of the test series performed to identify these failure-specific characteristics. Work completed to define the test configurations, payload compositions, thermal insults, and experimental setups are discussed. Test results are presented along with corresponding discussions for each test.

More Details

Modelling Airborne Transmission and Ventilation Impacts of a COVID-19 Outbreak in a Restaurant in Guangzhou, China

International Journal of Computational Fluid Dynamics

Ho, Clifford K.

Computational fluid dynamics (CFD) modelling was performed to simulate spatial and temporal airborne pathogen concentrations during an observed COVID-19 outbreak in a restaurant in Guangzhou, China. The reported seating configuration, overlap durations, room ventilation, layout, and dimensions were modelled in the CFD simulations to determine relative exposures and probabilities of infection. Results showed that the trends in the simulated probabilities of infection were consistent with the observed rates of infection at each of the tables surrounding the index patient. Alternative configurations that investigated different boundary conditions and ventilation conditions were also simulated. Increasing the fresh-air percentage to 10%, 50%, and 100% of the supply air reduced the accumulated pathogen mass in the room by an average of ∼30%, ∼70%, and ∼80%, respectively, over 73 min. The probability of infection was reduced by ∼10%, 40%, and 50%, respectively. Highlights: computational fluid dynamics (CFD) models were used to simulate pathogen concentrations; an infection model was developed using spatial and temporal CFD results; simulating spatial variability was important to match observed infection rates; recirculation increased exposures and probability of infection; increased fresh-air ventilation decreased exposures and probability of infection.

More Details

Mechanical Response of Castlegate Sandstone under Hydrostatic Cyclic Loading

Geofluids

Kibikas, William M.; Bauer, Stephen J.

The stress history of rocks in the subsurface affects their mechanical and petrophysical properties. Rocks can often experience repeated cycles of loading and unloading due to fluid pressure fluctuations, which will lead to different mechanical behavior from static conditions. This is of importance for several geophysical and industrial applications, for example, wastewater injection and reservoir storage wells, which generate repeated stress perturbations. Laboratory experiments were conducted with Castlegate sandstone to observe the effects of different cyclic pressure loading conditions on a common reservoir analogue. Each sample was hydrostatically loaded in a triaxial cell to a low effective confining pressure, and either pore pressure or confining pressure was cycled at different rates over the course of a few weeks. Fluid permeability was measured during initial loading and periodically between stress cycles. Samples that undergo cyclic loading experience significantly more inelastic (nonrecoverable) strain compared to samples tested without cyclic hydrostatic loading. Permeability decreases rapidly for all tests during the first few days of testing, but the decrease and variability of permeability after this depend upon the loading conditions of each test. Cycling conditions do affect the mechanical behavior; the elastic moduli decrease with the increasing loading rate and stress cycling. The degree of volumetric strain induced by stress cycles is the major control on permeability change in the sandstones, with less compaction leading to more variation from measurement to measurement. The data indicate that cyclic loading degrades permeability and porosity more than static conditions over a similar period, but the petrophysical properties are dictated more by the hydrostatic loading rate rather than the total length of time stress cycling is imposed.

More Details

Detection and localization of objects hidden in fog

Proceedings of SPIE - The International Society for Optical Engineering

Bentz, Brian Z.; Laros, James H.; Glen, Andrew G.; Pattyn, Christian A.; Redman, Brian J.; Martinez-Sanchez, Andres M.; Westlake, Karl W.; Hastings, Ryan L.; Webb, Kevin J.; Wright, Jeremy B.

Degraded visual environments like fog pose a major challenge to safety and security because light is scattered by tiny particles. We show that by interpreting the scattered light it is possible to detect, localize, and characterize objects normally hidden in fog. First, a computationally efficient light transport model is presented that accounts for the light reflected and blocked by an opaque object. Then, statistical detection is demonstrated for a specified false alarm rate using the Neyman-Pearson lemma. Finally, object localization and characterization are implemented using the maximum likelihood estimate. These capabilities are being tested at the Sandia National Laboratories Fog Chamber Facility.
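
To illustrate the detection step, a sketch of a Neyman-Pearson threshold test for a scalar Gaussian signal-in-noise model; the model and its parameters are assumptions standing in for the paper's fog light-transport model.

```python
# For Gaussian noise, the Neyman-Pearson test reduces to thresholding the
# measurement, with the threshold set by the specified false alarm rate.
import numpy as np
from scipy.stats import norm

sigma = 1.0          # noise standard deviation (assumption)
signal = 2.5         # mean shift when an object is present (assumption)
p_fa = 1e-3          # specified false alarm rate

# Threshold giving P(x > tau | no object) = p_fa under H0 ~ N(0, sigma^2).
tau = norm.isf(p_fa, loc=0.0, scale=sigma)

# Resulting detection probability under H1 ~ N(signal, sigma^2).
p_d = norm.sf(tau, loc=signal, scale=sigma)
print(f"threshold {tau:.3f}, detection probability {p_d:.3f}")
```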

More Details

EXPLORING VITAL AREA IDENTIFICATION USING SYSTEMS-THEORETIC PROCESS ANALYSIS

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Sandt, Emily S.; Clark, Andrew; Williams, Adam D.; Cohn, Brian C.; Osborn, Douglas M.; Aldemir, Tunc

Vital Area Identification (VAI) is an important element in securing nuclear facilities, including the range of recently proposed advanced reactors (AR). As ARs continue to develop and progress to licensure status, it will be necessary to ensure that safety analysis methods are compatible with the new reactor designs. These reactors tout inherently passive safety systems that drastically reduce the number of active components whose failures need to be considered as basic events in a Level 1 probabilistic risk assessment (PRA). Instead, ARs rely on natural processes for their safety, which may be difficult to capture through the use of fault trees (FTs) and subsequently difficult to determine the effects of lost equipment when completing a traditional VAI analysis. Traditional VAI methodology incorporates FTs from Level 1 PRA as a substantial portion of the effort to identify candidate vital area sets. The outcome of VAI is a selected set of areas deemed vital which must be protected in order to prevent radiological sabotage. An alternative methodology is proposed to inform the VAI process and selection of vital areas: Systems-Theoretic Process Analysis (STPA). STPA is a systems-based, top-down approach which analyzes a system as a hierarchical control structure composed of components (both those that are controlled and their controllers) and controlled actions taken by/acted upon those components. The control structure is then analyzed based on several situational parameters, including a time component, to produce a list of scenarios which may lead to system losses. A case study is presented to demonstrate how STPA can be used to inform VAI for ARs.

More Details

INTEGRATED SAFETY AND SECURITY ANALYSIS OF NUCLEAR POWER PLANTS USING DYNAMIC EVENT TREES

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Cohn, Brian C.; Haskin, Troy C.; Noel, Todd G.; Cardoni, Jeffrey N.; Osborn, Douglas M.; Aldemir, Tunc

Nuclear security relies on the method of vital area identification (VAI) to inform the sabotage target locations within a nuclear power plant (NPP) that need to be protected. The VAI methodology uses fault trees (FTs) and event trees (ETs) to identify locations in the NPP that contain vital systems, structures, or components. However, the traditional FT/ET process cannot fully capture the dynamics occurring following NPP sabotage or of mitigating actions. A methodology is presented which examines the consequences of sabotage to NPP systems using the dynamic probabilistic risk assessment approach to explore these dynamics. A force-on-force computer code determines the timing and extent of damage to NPP systems and a reactor response code models the effects of this damage on the reactor. These two codes are connected using the novel leading simulator/trailing simulator (LS/TS) methodology. A case study is created using the LS/TS methodology to model an adversary attack on an NPP. This case study models uncertainties in an adversary attack and in the response to determine whether reactor core damage would occur and, if so, the time to core damage and the extent of core damage.

More Details

TEM Studies of Segregation in a Ge–Sb–Te Alloy During Heating

Springer Proceedings in Materials

Singh, Manish K.; Ghosh, Chanchal; Kotula, Paul G.; Bakan, Gokhan; Silva, Helena; Carter, Clive B.

Phase-change materials are important for optical and electronic computing memory. Ge–Sb–Te (GST) is one of the important phase-change materials and has been studied extensively for fast, reversible, and non-volatile electronic phase-change memory. GST exhibits structural transformations from amorphous to metastable fcc at ~150 ℃ and from fcc to hcp at ~300 ℃. The investigation of the structural, microstructural, and microchemical changes with high temporal resolution during heating is crucial to gain insights into the changes that materials undergo during phase transformations. The as-deposited GST film has an amorphous island morphology, which transforms to the metastable fcc phase at ~130 ℃. The second phase transformation, from fcc to hexagonal, is observed at ~170 ℃. While the as-deposited amorphous islands show a homogeneous distribution of Ge, Sb, and Te, the island boundaries become Ge-rich after heating. Morphological and structural evolutions were captured during heating inside an aberration-corrected environmental TEM equipped with a high-speed camera under low-dose conditions to minimize beam-induced changes in the samples. Microchemical studies were carried out employing the ChemiSTEM technique in probe-corrected mode with a monochromated beam.

More Details

Exploring the value of nodes with multicommunity membership for classification with graph convolutional neural networks

Information (Switzerland)

Hopwood, Michael W.; Pho, Phuong; Mantzaris, Alexander V.

Sampling is an important step in the machine learning process because it prioritizes samples that help the model best summarize the important concepts required for the task at hand. The process of determining the best sampling method has rarely been studied in the context of graph neural networks. In this paper, we evaluate multiple sampling methods (i.e., ascending and descending) that sample based on different definitions of centrality (i.e., VoteRank, PageRank, degree) to observe their relation to network topology. We find that no sampling method is superior across all network topologies. Additionally, we find situations where ascending sampling provides better classification scores, showing the strength of weak ties. Two strategies are then created to predict the best sampling method, one that observes the homogeneous connectivity of the nodes, and one that observes the network topology. In both methods, we are able to evaluate the best sampling direction consistently.
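
A minimal sketch of the sampling comparison, assuming networkx and its built-in centrality rankings (PageRank shown; degree and VoteRank are similar); the downstream graph convolutional classifier is omitted.

```python
# Rank nodes by a centrality measure and take the top of the ranking
# (descending) or the bottom (ascending) as the training sample.
import networkx as nx

G = nx.karate_club_graph()
k = 10  # sample size (assumption)

scores = nx.pagerank(G)  # or: dict(G.degree())
ranked = sorted(scores, key=scores.get, reverse=True)

descending_sample = ranked[:k]    # most central nodes
ascending_sample = ranked[-k:]    # least central nodes ("weak ties")
print(descending_sample)
print(ascending_sample)
```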

More Details

PROBABILITY DISTRIBUTION FUNCTIONS OF THE NUMBER OF SCATTERING COLLISIONS IN ELECTRON SLOWING DOWN

Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021

Franke, Brian C.; Prinja, Anil K.

The probability distribution of the number of collisions experienced by electrons slowing down below a threshold energy is investigated to understand the impact of the statistical distribution of energy losses on the computational efficiency of Monte Carlo simulations. A theoretical model based on an exponentially peaked differential cross section with parameters that reproduce the exact stopping power and straggling at a fixed energy is shown to yield a Poisson distribution for the collision number distribution. However, simulations with realistic energy-loss physics, including both inelastic and bremsstrahlung energy-loss interactions, reveal significant departures from the Poisson distribution. In particular, low collision numbers are more prominent when true cross sections are employed, while a Poisson distribution constructed with the exact variance-to-mean ratio is found to be unrealistically peaked. Detailed numerical investigations show that collisions with large energy losses, although infrequent, are statistically important in electron slowing down.
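
An illustrative Monte Carlo of the studied quantity, assuming a toy exponential energy-loss law; for this model the variance-to-mean ratio of the collision count comes out near one, consistent with the Poisson behavior derived for the exponentially peaked cross section, while realistic straggling physics departs from it.

```python
# Count collisions until an electron slows below a threshold energy.
# The exponential loss law and all parameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
E0, E_cut = 1.0e3, 1.0e2      # initial and threshold energies (assumption)
mean_loss = 25.0              # mean energy loss per collision (assumption)

def collisions_to_stop() -> int:
    E, n = E0, 0
    while E > E_cut:
        E -= rng.exponential(mean_loss)   # sampled loss per collision
        n += 1
    return n

counts = np.array([collisions_to_stop() for _ in range(20000)])
# Variance-to-mean ratio near 1 indicates Poisson-like collision counts.
print(counts.mean(), counts.var() / counts.mean())
```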

More Details

Proctor: A Semi-Supervised Performance Anomaly Diagnosis Framework for Production HPC Systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Aksar, Burak; Zhang, Yijia; Ates, Emre; Schwaller, Benjamin S.; Aaziz, Omar R.; Leung, Vitus J.; Brandt, James M.; Egele, Manuel; Coskun, Ayse K.

Performance variation diagnosis in High-Performance Computing (HPC) systems is a challenging problem due to the size and complexity of the systems. Application performance variation leads to premature termination of jobs, decreased energy efficiency, or wasted computing resources. Manual root-cause analysis of performance variation based on system telemetry has become an increasingly time-intensive process as it relies on human experts and the size of telemetry data has grown. Recent methods use supervised machine learning models to automatically diagnose previously encountered performance anomalies in compute nodes. However, supervised machine learning models require large labeled data sets for training. This labeled data requirement is restrictive for many real-world application domains, including HPC systems, because collecting labeled data is challenging and time-consuming, especially considering anomalies that sparsely occur. This paper proposes a novel semi-supervised framework that diagnoses previously encountered performance anomalies in HPC systems using a limited number of labeled data points, which is more suitable for production system deployment. Our framework first learns performance anomalies’ characteristics by using historical telemetry data in an unsupervised fashion. In the following process, we leverage supervised classifiers to identify anomaly types. While most semi-supervised approaches do not typically use anomalous samples, our framework takes advantage of a few labeled anomalous samples to classify anomaly types. We evaluate our framework on a production HPC system and on a testbed HPC cluster. We show that our proposed framework achieves 60% F1-score on average, outperforming state-of-the-art supervised methods by 11%, and maintains an average 0.06% anomaly miss rate.
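
A conceptual sketch of the two-stage recipe, with PCA standing in for the unsupervised representation step and a random forest for the supervised anomaly-type classifier; the data, feature dimensions, and model choices are assumptions, not the framework's.

```python
# Stage 1: unsupervised representation from a large unlabeled telemetry pool.
# Stage 2: supervised anomaly-type classifier on a few labeled windows.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
telemetry = rng.normal(size=(5000, 64))   # unlabeled telemetry windows
labeled_x = rng.normal(size=(40, 64))     # a few labeled windows
labeled_y = rng.integers(0, 3, size=40)   # anomaly-type labels

encoder = PCA(n_components=8).fit(telemetry)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(encoder.transform(labeled_x), labeled_y)

new_window = rng.normal(size=(1, 64))
print(clf.predict(encoder.transform(new_window)))
```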

More Details

The Palo Verde Water Cycle Model (PVWCM): development of an integrated multi-physics and economics model for effective water management

American Society of Mechanical Engineers, Power Division (Publication) POWER

Middleton, Bobby M.; Brady, Patrick V.; Brown, Jeffrey A.; Lawles, Serafina T.

Water management has become critical for thermoelectric power generation in the US. Increasing demand for scarce water resources for domestic, agricultural, and industrial use affects water availability for power plants. In particular, the population in the Southwestern part of the US is growing and water resources are over-stressed. The engineering and management teams at the Palo Verde Generating Station (PV) in the Sonoran Desert have long understood this problem and began a partnership with Sandia National Laboratories in 2017 to develop a long-term water strategy for PV. As part of this program, Sandia and Palo Verde staff have developed a comprehensive software tool that models all aspects of the PV (plant cooling) water cycle. The software tool, the Palo Verde Water Cycle Model (PVWCM), tracks water operations from influent to the plant through evaporation in one of the nine cooling towers or one of the eight evaporation ponds. The PVWCM has been developed using a process called System Dynamics and allows scenario comparison for various plant operating strategies.

More Details

The Sandia National Laboratories natural circulation cooler

American Society of Mechanical Engineers, Power Division (Publication) POWER

Middleton, Bobby M.; Brady, Patrick V.; Lawles, Serafina

Sandia National Laboratories (SNL) is developing a cooling technology concept, the Sandia National Laboratories Natural Circulation Cooler (SNLNCC), that has the potential to greatly improve the economic viability of hybrid cooling for power plants. The SNLNCC is a patented technology that holds promise for improved dry heat rejection capabilities when compared to currently available technologies. The cooler itself is a dry heat rejection device, but is conceptualized here as a heat exchanger used in conjunction with a wet cooling tower, creating a hybrid cooling system for a thermoelectric power plant. The SNLNCC seeks to improve on currently available technologies by replacing the two-phase refrigerant currently used with either a supercritical fluid such as supercritical CO2 (sCO2) or a zeotropic mixture of refrigerants. In both cases, the heat being rejected by the water to the SNLNCC would be transferred over a range of temperatures, instead of at a single temperature as it is in a thermosyphon. This has the potential to improve the economics of dry heat rejection performance in three ways: decreasing the minimum temperature to which the water can be cooled, increasing the temperature to which air can be heated, and increasing the fraction of the year during which dry cooling is economically viable. This paper describes the experimental basis and the current state of the SNLNCC.

More Details

Hyperspectral Image Target Detection Using Deep Ensembles for Robust Uncertainty Quantification

Conference Record - Asilomar Conference on Signals, Systems and Computers

Sahay, Rajeev S.; Ries, Daniel R.; Zollweg, Joshua D.; Brinton, Christopher G.

Deep learning (DL) has been widely proposed for target detection in hyperspectral image (HSI) data. Yet, standard DL models produce point estimates at inference time, with no associated measure of uncertainty, which is vital in high-consequence HSI applications. In this work, we develop an uncertainty quantification (UQ) framework using deep ensemble (DE) learning, which builds upon the successes of DL-based HSI target detection, while simultaneously providing UQ metrics. Specifically, we train an ensemble of convolutional deep learning detection models using one spectral prototype at a particular time of day and atmospheric condition. We find that our proposed framework is capable of accurate target detection in additional atmospheric conditions and times of day despite not being exposed to them during training. Furthermore, in comparison to Bayesian Neural Networks, another DL based UQ approach, we find that DEs provide increased target detection performance while achieving comparable probabilities of detection at constant false alarm rates.
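
A minimal deep-ensemble sketch: train several independently initialized models and use the spread of their predictions as the uncertainty metric. sklearn MLPs on synthetic data stand in for the paper's convolutional HSI detectors.

```python
# Deep ensemble: independently seeded models; prediction spread gives UQ.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))               # stand-in spectra
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # stand-in target labels

ensemble = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    .fit(X, y)
    for seed in range(5)
]

x_new = rng.normal(size=(1, 30))
probs = np.array([m.predict_proba(x_new)[0, 1] for m in ensemble])
print(f"detection score {probs.mean():.3f} +/- {probs.std():.3f}")
```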

More Details

PERFORMANCE OF ITERATIVE NETWORK UNCERTAINTY QUANTIFICATION FOR MULTICOMPONENT SYSTEM QUALIFICATION

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Rojas, Edward; Tencer, John T.

In order to impact design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages in the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, as full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods preclude the need for time-consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component/subassembly validation data, which is valuable for qualification. This is accomplished by taking advantage of the fact that many engineered systems are inherently modular, being comprised of a hierarchy of components and subassemblies which are individually modified or replaced to define new system designs. We leverage this hierarchical structure to enable rapid model development and the incorporation of uncertainty quantification and rigorous sensitivity analysis earlier in the design process. The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models which exchange stochastic solution information at component boundaries. We utilize Jacobi iteration with Anderson acceleration to converge stochastic representations of system-level quantities of interest through successive evaluations of component or subassembly forward problems. We publish our open-source tools for uncertainty propagation in networks, remarking that these tools are extensible and can be used with any simulation tool (including arbitrary surrogate modeling tools) through the construction of a simple Python interface class. Additional interface classes for a variety of simulation tools are currently under active development. The performance of the uncertainty quantification method is determined by the number of iterations needed to achieve a desired level of accuracy. Performance of these networks for simple canonical systems from both a heat transfer and a solid mechanics perspective is investigated; the models are examined with thermal and mechanical Dirichlet- and Neumann-type boundary conditions separately imposed, and the impact of varying governing equations and boundary condition type on the performance of the networks is analyzed. The form of the boundary conditions is observed to have a large impact on the convergence rate, with Neumann-type boundary conditions corresponding to significant performance degradation compared to the Dirichlet boundary conditions. Nonmonotonicity is observed in the solution convergence in some cases.
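
A toy version of the iterative scheme, assuming two one-dimensional steady heat-conduction components coupled at a single interface; plain relaxed Jacobi iteration is shown, whereas the method above adds Anderson acceleration and stochastic solution representations.

```python
# Two rod "components" exchange interface data until their fluxes balance.
T_left, T_right = 100.0, 0.0   # fixed outer Dirichlet temperatures
k1, L1 = 1.0, 1.0              # conductivity and length of component 1
k2, L2 = 4.0, 1.0              # conductivity and length of component 2

T_int = 50.0                   # initial guess for interface temperature
omega = 0.5                    # relaxation factor for the interface update
for it in range(200):
    q1 = k1 * (T_left - T_int) / L1    # flux into interface from rod 1
    q2 = k2 * (T_int - T_right) / L2   # flux out of interface into rod 2
    residual = q1 - q2                 # mismatch driven to zero
    if abs(residual) < 1e-10:
        break
    T_int += omega * residual / (k1 / L1 + k2 / L2)

print(f"converged interface temperature: {T_int:.2f} after {it} iterations")
```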

More Details

A Process to Colorize and Assess Visualizations of Noisy X-Ray Computed Tomography Hyperspectral Data of Materials with Similar Spectral Signatures

2021 IEEE Nuclear Science Symposium and Medical Imaging Conference Record, NSS/MIC 2021 and 28th International Symposium on Room-Temperature Semiconductor Detectors, RTSD 2022

Clifford, Joshua M.; Kemp, Emily K.; Limpanukorn, Ben L.; Jimenez, Edward S.

Dimension reduction techniques have frequently been used to summarize information from high-dimensional hyperspectral data, usually done in an effort to classify or visualize the materials contained in the hyperspectral image. The main challenge in applying these techniques to Hyperspectral Computed Tomography (HCT) data is that if the materials in the field of view are of similar composition then it can be difficult for a visualization of the hyperspectral image to differentiate between the materials. We propose novel alternative methods of preprocessing and summarizing HCT data in a single colorized image and novel measures to assess desired qualities in the resultant colored image, such as the contrast between different materials and the consistency of color within the same object. Proposed processes in this work include a new majority-voting method for multi-level thresholding, binary erosion, median filters, PAM clustering for grouping pixels into objects (of homogeneous materials), mean/median assignment along the spectral dimension for representing the underlying signature, UMAP or GLMs to assign colors, and quantitative coloring assessment with the developed measures. Strengths and weaknesses of various combinations of methods are discussed. These results have the potential to create more robust material identification methods from HCT data that has wide use in industrial, medical, and security-based applications for detection and quantification, including visualization methods to assist with rapid human interpretability of these complex hyperspectral signatures.

More Details

Ultrathin epitaxial NbN superconducting films with high upper critical field grown at low temperature

Materials Research Letters

Lu, Ping L.

Ultrathin (5–50 nm) epitaxial superconducting niobium nitride (NbN) films were grown on AlN-buffered c-plane Al2O3 by an industrial scale physical vapor deposition technique at 400°C. Both X-ray diffraction and scanning electron microscopy analysis show high crystallinity of the (111)-oriented NbN films, with a narrow full-width-at-half-maximum of the rocking curve down to 0.030°. The lattice constant decreases with decreasing NbN layer thickness, suggesting lattice strain for films with thicknesses below 20 nm. The superconducting transition temperature, the transition width, the upper critical field, the irreversibility line, and the coherence length are closely correlated to the film thickness. IMPACT STATEMENT: This work realized high quality ultrathin epitaxial NbN films by an industry-scale PVD technology at low substrate temperature, which opens up new opportunities for quantum devices.

More Details

A MULTILAYER NETWORK APPROACH TO ASSESSING THE IMPACT OF HUMAN PERFORMANCE SHAPING FACTORS ON SECURITY FOR NUCLEAR POWER PLANTS

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Williams, Adam D.; Fleming Lindsley, Elizabeth S.

Multilayered networks (MLNs), when integrated with traditional task analyses, offer a model-based approach to describe human performance in nuclear power plant security. MLNs demonstrate the interconnected links between security-related roles, security operating procedures, and technical components within a security system. However, when used in isolation, MLNs and task analyses may not fully reveal the impacts humans have within a security system. Thus, the Systems Context Lenses were developed to enhance design for and analysis of desired complex system behaviors, like security at Nuclear Power Plants (NPPs). The Systems Context Lenses integrate systems engineering concepts and human factors considerations to describe how human actors interact within (and across) the system design, operational environment, and sociotechnical context. Through application of the Systems Context Lenses, critical Performance Shaping Factors (PSFs) influencing human performance can be identified and used to analytically connect human actions with technical and environmental resources in an MLN. This paper summarizes the benefit of a tiered-lens approach on a use case of a multilayered network model of NPP security, including demonstrating how NPP security performance can be improved by more robustly incorporating varying human, institutional, and broader socio-technical interactions.

More Details

Near-field and far-field sampling of aerosol plumes to evaluate particulate emission rates from a falling particle receiver during on-sun testing

Proceedings of the ASME 2021 15th International Conference on Energy Sustainability, ES 2021

Glen, Andrew G.; Dexheimer, Darielle D.; Sanchez, A.L.; Ho, Clifford K.; China, Swarup; Mei, Fan; Lata, Nurun N.

High-temperature falling particle receivers are being investigated for next-generation concentrating solar power applications. Small sand-like particles are released into an open-cavity receiver and are irradiated by concentrated sunlight from a field of heliostats. The particles are heated to temperatures over 700 °C and can be stored to produce heat for electricity generation or industrial applications when needed. As the particles fall through the receiver, particles and particulate fragments in the form of aerosolized dust can be emitted from the aperture, which can lower thermal efficiency, increase costs of particle replacement, and pose a particulate matter (PM) inhalation risk. This paper describes sampling methods that were deployed during on-sun tests to record near-field (several meters) and far-field (tens to hundreds of meters) concentrations of aerosol particles within emitted plumes. The objective was to quantify the particulate emission rates and loss from the falling particle receiver in relation to OSHA and EPA National Ambient Air Quality Standards (NAAQS). Near-field instrumentation placed on the platform in proximity to the receiver aperture included several real-time aerosol size distribution and concentration measurement techniques, including TSI Aerodynamic Particle Sizers (APS), TSI DustTraks, Handix Portable Optical Particle Spectrometers (POPS), Alphasense Optical Particle Counters (OPC), TSI Condensation Particle Counters (CPC), Cascade Particle Impactors, 3D-printed prototype tipping buckets, and meteorological instrumentation. Far-field particle sampling techniques utilized multiple tethered balloons located upwind and downwind of the particle receiver to measure the advected plume concentrations using a suite of airborne aerosol and meteorological instruments including POPS, CPCs, OPCs, and cascade impactors. The combined aerosol size distribution for all these instruments spanned particle sizes from 0.02 μm to 500 μm. Results showed a strong influence of wind direction on particle emissions and concentration, with preliminary results showing representative concentrations below both the OSHA and NAAQS standards.

More Details

SODIUM FILTER PERFORMANCE IN THE NASCORD DATABASE

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Mohmand, Jamal A.; Clark, Andrew

Sodium-cooled Fast Reactors (SFRs) have an extensive operational history that can be leveraged to accelerate the licensing process for advanced reactor designs. Sandia National Laboratories has reconstituted the United States SFR data from the Centralized Reliability Data Organization (CREDO) into a new database called the Sodium System Component Reliability Database (NaSCoRD). The NaSCoRD database and others like it will help reduce parametric uncertainties encountered in probabilistic risk analysis (PRA) models for advanced non-light water reactor technologies. This paper is an extension of previous work done at Sandia National Laboratories which analyzed pump data. This paper investigates the failure rates of filters/strainers. NaSCoRD contains unique records of 147 filters/strainers that have operated in Experimental Breeder Reactor II, Fast Flux Test Facility, and test loops including those operated by both Westinghouse and the Energy Technology Engineering Center. This paper presents filter failure rates for various conditions allowable from the CREDO data that has been recovered under NaSCoRD. The current filter reliability estimates are presented in comparison to estimates provided in historical studies. The impacts of the suggested corrections from the Idaho National Laboratory report, Generic Component Failure Data Base for Light Water and Liquid Sodium Reactor PRAs, and various prior distributions on these reliability estimates are also presented. The paper also briefly describes the potential improvement of the NaSCoRD database.

More Details

ANALYTIC FORMULA FOR THE DIFFERENCE OF THE CIRCUMRADIUS AND ORTHORADIUS OF A WEIGHTED TRIANGLE

Proceedings of the 29th International Meshing Roundtable, IMR 2021

Hummel, Michelle H.

Understanding and quantifying the effects of vertex insertion, perturbation, and weight allocation is useful for mesh generation and optimization. For weighted primal-dual meshes, the sensitivity of the orthoradius to mesh variations is especially important. To this end, this paper presents an analytic formula for the difference between the circumradius and orthoradius of a weighted triangle in terms of edge lengths and point weights under certain weight and edge assumptions. Current literature [1] offers a loose upper bound on this difference, but as far as we know this is the first formula presented in terms of edge lengths and point weights. A formula in these terms is beneficial because edge lengths and point weights are fundamental quantities, enabling a more immediate determination of how the perturbation of a point location or weight affects this difference. We apply this result to the VoroCrust algorithm to obtain the same quality guarantees under looser sampling conditions.

More Details

Terrestrial heat repository for months of storage (THERMS): A novel radial thermocline system

Proceedings of the ASME 2021 15th International Conference on Energy Sustainability, ES 2021

Ho, Clifford K.; Gerstle, Walter

This paper describes a terrestrial thermocline storage system composed of inexpensive rock, gravel, and/or sand-like materials to store high-temperature heat for days to months. The present system seeks to overcome past challenges of thermocline storage (cost and performance) by utilizing a confined radial-based thermocline storage system that can better control the flow and temperature distribution in a bed of porous materials with one or more layers or zones of different particle sizes, materials, and injection/extraction wells. Air is used as the heat-transfer fluid, and the storage bed can be heated or "trickle charged" by flowing hot air through multiple wells during periods of low electricity demand using electrical heating or heat from a solar thermal plant. This terrestrial-based storage system can provide low-cost, large-capacity energy storage for both high- (∼400-800°C) and low- (∼100-400°C) temperature applications. Bench-scale experiments were conducted, and computational fluid dynamics (CFD) simulations were performed to verify models and improve understanding of relevant features and processes that impact the performance of the radial thermocline storage system. Sensitivity studies were performed using the CFD model to investigate the impact of the air flow rate, porosity, particle thermal conductivity, and air-to-particle heat-transfer coefficient on temperature profiles. A preliminary technoeconomic analysis was also performed to estimate the levelized cost of storage for different storage durations and discharging scenarios.

More Details

Scalable3-BO: Big data meets HPC - A scalable asynchronous parallel high-dimensional Bayesian optimization framework on supercomputers

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.

Bayesian optimization (BO) is a flexible and powerful framework that is suitable for computationally expensive simulation-based applications and guarantees statistical convergence to the global optimum. While BO remains one of the most popular optimization methods, its capability is hindered by the size of data, the dimensionality of the considered problem, and the nature of sequential optimization. These scalability issues are intertwined with each other and must be tackled simultaneously. In this work, we propose the Scalable3-BO framework, which employs sparse GP as the underlying surrogate model to cope with Big Data and is equipped with a random embedding to efficiently optimize high-dimensional problems with low effective dimensionality. The Scalable3-BO framework is further augmented with an asynchronous parallelization feature, which fully exploits the computational resources on HPC within a computational budget. As a result, the proposed Scalable3-BO framework is scalable from three independent perspectives: with respect to data size, dimensionality, and computational resources on HPC. The goal of this work is to push the frontiers of BO beyond its well-known scalability issues and minimize the wall-clock waiting time for optimizing high-dimensional computationally expensive applications. We demonstrate the capability of Scalable3-BO with 1 million data points on 10,000-dimensional problems, using 20 concurrent workers in an HPC environment.
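
As a hedged sketch of the random-embedding idea such a framework relies on (optimize in a low-dimensional space z and map to the ambient space via a random matrix, in the style of REMBO), the toy below replaces the BO loop with random search; all names and dimensions are illustrative assumptions, not the Scalable3-BO implementation.

    # Toy illustration of optimization via random embedding (REMBO-style):
    # search a low-dimensional space z and map to the ambient space with a
    # random matrix. Random search stands in for the BO acquisition loop.
    import numpy as np

    D, d = 10_000, 10                      # ambient and effective dimensionality
    rng = np.random.default_rng(0)
    important = rng.choice(D, size=d, replace=False)

    def f(x):                              # toy objective with low effective dim.
        return np.sum((x[important] - 0.5) ** 2)

    B = rng.normal(size=(D, d))            # random embedding matrix

    def embed(z):                          # low-dim point -> ambient box [-1, 1]^D
        return np.clip(B @ z, -1.0, 1.0)

    best_y = np.inf
    for _ in range(200):                   # stand-in for the BO loop
        z = rng.uniform(-1.0, 1.0, size=d)
        best_y = min(best_y, f(embed(z)))
    print(best_y)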

More Details

Optimizing Distributed Load Balancing for Workloads with Time-Varying Imbalance

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Lifflander, Jonathan; Slattengren, Nicole S.; Pebay, Philippe P.; Miller, Phil; Rizzi, Francesco N.; Bettencourt, Matthew T.

This paper explores dynamic load balancing algorithms used by asynchronous many-task (AMT), or 'task-based', programming models to optimize task placement for scientific applications with dynamic workload imbalances. AMT programming models use overdecomposition of the computational domain. Overdecomposition provides a natural mechanism for domain developers to expose concurrency and break their computational domain into pieces that can be remapped to different hardware. This paper explores fully distributed load balancing strategies that have shown great promise for exascale-level computing but are challenging to reason about theoretically and to implement effectively. We present a novel theoretical analysis of a gossip-based load balancing protocol and use it to build an efficient implementation with fast convergence rates and high load balancing quality. We demonstrate our algorithm in a next-generation plasma physics application (EMPIRE) that induces time-varying workload imbalance due to spatial non-uniformity in particle density across the domain. Our highly scalable, novel load balancing algorithm achieves over a 3x speedup (particle work) compared to a bulk-synchronous MPI implementation without load balancing.

More Details

Construction of Differentially Private Empirical Distributions from a Low-Order Marginals Set Through Solving Linear Equations with l2 Regularization

Lecture Notes in Networks and Systems

Eugenio, Evercita; Liu, Fang

We introduce a new algorithm, Construction of dIfferentially Private Empirical Distributions from a low-order marginals set tHrough solving linear Equations with l2 Regularization (CIPHER), that produces differentially private empirical joint distributions from a set of low-order marginals. CIPHER is conceptually simple and requires no more than decomposing joint probabilities via basic probability rules to construct a linear equation set and subsequently solving the equations. Compared to the full-dimensional histogram (FDH) sanitization, CIPHER has drastically lower requirements on computational storage and memory, which is practically attractive especially considering that the high-order signals preserved by the FDH sanitization are likely just sample randomness and rarely of interest. Our experiments demonstrate that CIPHER outperforms the multiplicative weighting exponential mechanism in preserving original information and has similar or superior cost-normalized utility to FDH sanitization at the same privacy budget.
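
A minimal sketch of the core linear-algebraic step is shown below: marginal constraints are stacked into a linear system over the joint-probability cells and solved with an l2 (ridge) penalty. The two-binary-variable setup and the regularization strength are illustrative assumptions, not the authors' code.

    # Recover a joint distribution from low-order marginals by solving a linear
    # system with l2 regularization; toy two-binary-variable example, not CIPHER code.
    import numpy as np

    # Unknown joint p = [p00, p01, p10, p11]; each row of A sums the joint cells
    # contributing to one (possibly noise-sanitized) marginal entry.
    A = np.array([[1, 1, 0, 0],    # P(X=0)
                  [0, 0, 1, 1],    # P(X=1)
                  [1, 0, 1, 0],    # P(Y=0)
                  [0, 1, 0, 1],    # P(Y=1)
                  [1, 1, 1, 1]],   # total probability
                 dtype=float)
    m = np.array([0.6, 0.4, 0.7, 0.3, 1.0])   # marginal values (e.g., after DP noise)

    lam = 1e-3                                 # l2 regularization strength (assumed)
    p = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ m)
    p = np.clip(p, 0.0, None)
    p /= p.sum()                               # project back to a distribution
    print(p)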

More Details

A Dynamic Mode Decomposition Scheme to Analyze Power Quality Events

IEEE Access

Wilches-Bernal, Felipe; Reno, Matthew J.; Hernandez Alvidrez, Javier H.

This paper presents a new method for detecting power quality disturbances, such as faults. The method is based on the dynamic mode decomposition (DMD), a data-driven method for estimating linear dynamics whose eigenvalues and eigenvectors approximate those of the Koopman operator. The proposed method uses the real part of the main eigenvalue estimated by the DMD as the key indicator that a power quality event has occurred. The paper shows how the proposed method can be used to detect events using current and voltage signals to distinguish different faults. Because the proposed method is window-based, the effect that the window size has on the performance of the approach is analyzed. In addition, a study on the effect that noise has on the proposed approach is presented.
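
The sketch below illustrates the kind of indicator the abstract describes: compute DMD eigenvalues on a window of voltage/current samples and monitor the real part of the continuous-time eigenvalues, which stays near zero for steady sinusoids and departs from zero when the window spans a disturbance. The sampling rate, rank, and synthetic sag are assumptions.

    # DMD-based event indicator on a data window; the sampling rate, rank,
    # and synthetic voltage sag are assumptions for illustration.
    import numpy as np

    def dmd_eigs(window, r=2):
        """Eigenvalues of the reduced DMD operator for one (channels x samples) window."""
        X, Y = window[:, :-1], window[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        return np.linalg.eigvals(U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s))

    dt = 1.0 / 3840.0                           # assumed sampling interval (s)
    t = np.arange(0.0, 0.5, dt)
    sig = np.vstack([np.sin(2*np.pi*60*t), np.cos(2*np.pi*60*t)])
    sig[:, t.size // 2:] *= 0.4                 # synthetic voltage-sag "event"

    steady = dmd_eigs(sig[:, 500:700])
    event = dmd_eigs(sig[:, t.size//2 - 100 : t.size//2 + 100])
    # Real part of the continuous-time eigenvalue: ~0 for steady sinusoids,
    # departing from 0 when the window spans the disturbance.
    print(np.log(steady).real.max() / dt, np.log(event).real.max() / dt)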

More Details

A Survey of Traveling Wave Protection Schemes in Electric Power Systems

IEEE Access

Wilches-Bernal, Felipe; Bidram, Ali; Reno, Matthew J.; Hernandez Alvidrez, Javier H.; Barba, Pedro; Reimer, Benjamin; Carr, Christopher C.; Lavrova, Olga A.

As a result of the increase in penetration of inverter-based generation such as wind and solar, the dynamics of the grid are being modified. These modifications may threaten the stability of the power system since the dynamics of these devices are completely different from those of rotating generators. Protection schemes need to evolve with the changes in the grid to successfully deliver their objectives of maintaining safe and reliable grid operations. This paper explores the theory of traveling waves and how they can be used to enable fast protection mechanisms. It surveys signal processing methods used to extract information from power system signals following a disturbance. The paper also presents a literature review of traveling wave-based protection methods at the transmission and distribution levels of the grid and for AC and DC configurations. The paper then discusses simulation tools to help design and implement protection schemes. A discussion of the anticipated evolution of protection mechanisms in response to the challenges facing the grid is also presented.

More Details

A tailored convolutional neural network for nonlinear manifold learning of computational physics data using unstructured spatial discretizations

SIAM Journal on Scientific Computing

Tencer, John T.; Potter, Kevin M.

We propose a nonlinear manifold learning technique based on deep convolutional autoencoders that is appropriate for model order reduction of physical systems in complex geometries. Convolutional neural networks have proven to be highly advantageous for compressing data arising from systems demonstrating a slow-decaying Kolmogorov n-width. However, these networks are restricted to data on structured meshes. Unstructured meshes are often required for performing analyses of real systems with complex geometry. Our custom graph convolution operators based on the available differential operators for a given spatial discretization effectively extend the application space of deep convolutional autoencoders to systems with arbitrarily complex geometry that are typically discretized using unstructured meshes. We propose sets of convolution operators based on the spatial derivative operators for the underlying spatial discretization, making the method particularly well suited to data arising from the solution of partial differential equations. We demonstrate the method using examples from heat transfer and fluid mechanics and show better than an order of magnitude improvement in accuracy over linear methods.

More Details

Materials compatibility concerns for hydrogen blended into natural gas

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Ronevich, Joseph A.; San Marchi, Christopher W.

Hydrogen additions to natural gas are being considered around the globe as a means to utilize existing infrastructure to distribute hydrogen. Hydrogen is known to enhance fatigue crack growth and reduce fracture resistance of structural steels used for pressure vessels, piping, and pipelines. Most research has focused on high-pressure hydrogen environments for applications of storage (>100 MPa) and delivery (10-20 MPa) in the context of hydrogen fuel cell vehicles, which typically store hydrogen onboard at a pressure of 70 MPa. In applications of blending hydrogen into natural gas, a wide range of hydrogen contents is being considered, typically in the range of 2-20%. In natural gas infrastructure, the pressure differs depending on location in the system (i.e., transmission systems are relatively high pressure compared to low-pressure distribution systems), thus the anticipated partial pressure of hydrogen can be less than an atmosphere or more than 10 MPa. In this report, it is shown that low-partial-pressure hydrogen has a very strong effect on the fatigue and fracture behavior of infrastructure steels. While it is acknowledged that materials compatibility with hydrogen will be important for systems operating with high stresses, the effects of hydrogen do not seem to be a significant threat for systems operating at low pressure as in distribution infrastructure. In any case, system operators considering the addition of hydrogen to their network must carefully consider the structural performance of their system and the significant effects of hydrogen on structural integrity, as the fatigue and fracture properties of all steels in the natural gas infrastructure will be degraded by hydrogen, even at hydrogen partial pressures below 0.1 MPa.
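
A worked example of the partial-pressure arithmetic behind these statements (Dalton's law: the hydrogen partial pressure is the blend fraction times total pressure); the operating pressures chosen for the two system types are illustrative.

    # Dalton's law: hydrogen partial pressure = blend fraction * total pressure.
    # The operating pressures below are illustrative for the two system types.
    for system, p_total in [("distribution", 0.4), ("transmission", 7.0)]:   # MPa
        for frac in (0.02, 0.20):
            print(f"{system}: {frac:.0%} H2 at {p_total} MPa -> {frac * p_total:.3f} MPa H2")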

More Details

Fire-induced pressure response and failure of 3013 containers

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Mendoza, Hector M.; Gill, Walt; Baird, Austin R.; Figueroa Faria, Victor G.; Hensel, Steve; Sanborn, Scott E.

Several Department of Energy (DOE) facilities have nuclear or hazardous materials stored in robust, welded, stainless-steel containers with undetermined fire-induced pressure response behaviors. Lack of test data related to fire exposure requires conservative safety analysis assumptions for container response at these facilities. This conservatism can in turn result in the implementation of challenging operational restrictions with costly nuclear safety controls. To help address this issue for sites that store DOE 3013 stainless-steel containers, a series of five tests were undertaken at Sandia National Laboratories. The goal of this test series was to obtain the response behavior for various configurations of the DOE 3013 containers when exposed to various fire conditions. Key parameters measured in the test series included identification of failure-specific characteristics such as pressure, temperature, and leak/burst failure type. This paper describes the development and execution of the test series performed to identify these failure-specific characteristics. Work completed to define the test configurations, payload compositions, thermal insults, and experimental setups are discussed. Test results are presented along with corresponding discussions for each test.

More Details

Development and validation of radiant heat systems to test ram packages under non-uniform thermal environments

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Mendoza, Hector M.; Gill, Walt; Figueroa Faria, Victor G.; Sanborn, Scott E.

Certification of radioactive material (RAM) packages for storage and transportation requires multiple tiers of testing that simulate accident conditions in order to assure safety. One of these key testing aspects focuses on container response to thermal insults when a package includes materials that decompose, combust, or change phase between -40 °C and 800 °C. Thermal insult for RAM packages during testing can be imposed from a direct pool fire, but it can also be imposed using a furnace or a radiant heat system. Depending on variables such as scale, heating rates, desired environment, intended diagnostics, and cost, each of these methods possesses its own advantages and disadvantages. While a direct fire can be the closest method to represent a plausible insult, incorporating comprehensive diagnostics in a controlled fire test can pose various challenges due to the nature of a fire. Radiant heat setups can instead be used to impose a comparable heat flux on a test specimen in a controlled manner that allows more comprehensive diagnostics. With radiant heat setups, however, challenges can arise when attempting to impose desired non-uniform heat fluxes that would account for specimen orientation and position in a simulated accident scenario. This work describes the development, implementation, and validation of a series of techniques used by Sandia National Laboratories to create prescribed non-uniform thermal environments using radiant heat sources for RAM packages as large as a 55-gallon drum.

More Details

Using Machine Learning to Predict Bilingual Language Proficiency from Reaction Time Priming Data

Proceedings of the 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021

Matzen, Laura E.; Ting, Christina T.; Stites, Mallory C.

Studies of bilingual language processing typically assign participants to groups based on their language proficiency and average across participants in order to compare the two groups. This approach loses much of the nuance and individual differences that could be important for furthering theories of bilingual language comprehension. In this study, we present a novel use of machine learning (ML) to develop a predictive model of language proficiency based on behavioral data collected in a priming task. The model achieved 75% accuracy in predicting which participants were proficient in both Spanish and English. Our results indicate that ML can be a useful tool for characterizing and studying individual differences.
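
The sketch below shows the general shape of such a pipeline: cross-validated classification of proficiency groups from reaction-time features. The synthetic priming-effect feature and the choice of a random forest are assumptions for illustration, not the authors' model.

    # Cross-validated classification of proficiency from reaction-time features;
    # the synthetic priming effect and the random forest are illustrative choices.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 120                                       # participants (assumed)
    proficient = rng.random(n) < 0.5
    # Priming effect: unrelated-prime RT minus related-prime RT (ms), assumed
    # larger on average for proficient bilinguals; plus a generic RT feature.
    priming = np.where(proficient, 35.0, 10.0) + rng.normal(0, 25, n)
    mean_rt = rng.normal(650, 80, n)
    X = np.column_stack([priming, mean_rt])

    acc = cross_val_score(RandomForestClassifier(random_state=0), X, proficient, cv=5)
    print(f"cross-validated accuracy: {acc.mean():.2f}")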

More Details

Comparison of pyrometry and thermography for thermal analysis of thermite reactions

Applied Optics

Woodruff, Connor; Dean, Steven W.; Pantoya, Michelle L.

This study examines the thermal behavior of a laser-ignited thermite composed of aluminum and bismuth trioxide. Temperature data were collected during the reaction using a four-color pyrometer and a high-speed color camera modified for thermography. The two diagnostics were arranged to collect data simultaneously, with similar fields of view and similar data acquisition rates, so that the two techniques could be directly compared. Results show that at the initial and final stages of the reaction, a lower signal-to-noise ratio affects the accuracy of the measured temperatures. Both diagnostics captured the same trends in transient thermal behavior, but the average temperatures measured with thermography were about 750 K higher than those from the pyrometer. This difference was attributed to the lower dynamic range of the thermography camera's image sensor, which was unable to resolve cooler temperatures in the field of view as well as the photomultiplier tube sensors in the pyrometer could. Overall, while the camera could not accurately capture the average temperature of a scene, its ability to capture peak temperatures and spatial data makes it the preferred method for tracking thermal behavior in thermite reactions.
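
For context, the sketch below shows the standard two-color ratio-pyrometry calculation under a gray-body Wien approximation, which is the principle that multi-color pyrometers build on; the filter wavelengths and temperature are assumed values, not the instrument settings used in the paper.

    # Two-color ratio pyrometry under a gray-body Wien approximation; the filter
    # wavelengths and temperature are assumed values for illustration.
    import numpy as np

    C2 = 1.4388e-2                      # second radiation constant (m*K)
    lam1, lam2 = 700e-9, 900e-9         # assumed filter wavelengths (m)

    def wien_intensity(lam, T):
        return lam ** -5 * np.exp(-C2 / (lam * T))

    def ratio_temperature(I1, I2):
        # Invert the Wien-approximation intensity ratio for temperature.
        return C2 * (1/lam2 - 1/lam1) / (np.log(I1 / I2) - 5 * np.log(lam2 / lam1))

    T_true = 2500.0                     # K, representative combustion temperature
    I1, I2 = wien_intensity(lam1, T_true), wien_intensity(lam2, T_true)
    print(ratio_temperature(I1, I2))    # recovers ~2500 K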

More Details

Comparison of Surface Phenomena Created by Underground Chemical Explosions in Dry Alluvium and Granite Geology from Fully Polarimetric VideoSAR Data

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

West, Roger D.; Abbott, Robert A.; Yocky, David A.

Phase I of the Source Physics Experiment (SPE) series involved six underground chemical explosions, all of which were conducted at the same experimental pad. Research from the sixth explosion of the series (SPE-6) demonstrated that polarimetric synthetic aperture radar (PolSAR) is a viable technology for monitoring an underground chemical explosion when the geologic structure is Cretaceous granitic intrusive. It was shown that a durable signal is measurable by the H/A/alpha polarimetric decomposition parameters. After the SPE-6 experiment, the SPE program moved to the Phase II location, which is composed of dry alluvium geology (DAG). The loss of wavefront energy is greater through dry alluvium than through granite. In this article, we compare the SPE-6 analysis to the second DAG (DAG-2) experiment. We hypothesize that despite the geology at the DAG site being more challenging than at the Phase I location, combined with the DAG-2 experiment having a 3.37 times deeper scaled depth of burial than the SPE-6, a durable nonprompt signal is still measurable by a PolSAR sensor. We compare the PolSAR time-series measures from videoSAR frames, from the SPE-6 and DAG-2 experiments, with accelerometer data. We show which PolSAR measures are invariant to the two types of geology and which are geology dependent. We compare a coherent change detection (CCD) map from the DAG-2 experiment with the data from a fiber-optic distributed acoustic sensor to show the connection between the spatial extent of coherence loss in CCD maps and spallation caused by the explosion. Finally, we also analyze the spatial extent of the PolSAR measures from both explosions.

More Details

Path towards a vertical TFET enabled by atomic precision advanced manufacturing

2021 Silicon Nanoelectronics Workshop, SNW 2021

Lu, Tzu-Ming L.; Gao, Xujiao G.; Anderson, Evan M.; Mendez Granado, Juan P.; Campbell, DeAnna M.; Ivie, Jeffrey A.; Schmucker, Scott W.; Grine, Albert D.; Lu, Ping L.; Tracy, Lisa A.; Arghavani, Reza A.; Misra, Shashank M.

We propose a vertical TFET using atomic precision advanced manufacturing (APAM) to create an abrupt buried n++-doped source. We developed a gate stack that preserves the APAM source to accumulate holes above it, with a goal of band-to-band tunneling (BTBT) perpendicular to the gate, which is critical for the proposed device. A metal-insulator-semiconductor (MIS) capacitor shows hole accumulation above the APAM source, corroborated by simulation, demonstrating the TFET's feasibility.

More Details

Narrowband microwave-photonic notch filtering using Brillouin interactions in silicon

Optics InfoBase Conference Papers

Gertler, Shai; Otterstrom, Nils T.; Gehl, M.; Starbuck, Andrew L.; Dallo, Christina M.; Pomerene, Andrew P.; Lentine, Anthony L.; Rakich, Peter T.

We present narrowband RF-photonic filters in an integrated silicon platform. Using Brillouin interactions, the filters achieve narrow (∼MHz) bandwidths with high signal rejection and demonstrate tunability over a wide (∼GHz) frequency range.

More Details

Investigation of electrical chatter in bifurcated contact receptacles

Electrical Contacts, Proceedings of the Annual Holm Conference on Electrical Contacts

Zastrow, Benjamin G.; Flicek, Robert C.; Walczak, Karl A.; Pacini, Benjamin R.; Johnson, Kelsey M.; Johnson, Brianna; Schumann, Christopher; Rafeedi, Fadi

Electrical switches are often subjected to shock and vibration environments, which can result in sudden increases in the switch's electrical resistance, referred to as 'chatter'. This paper describes experimental and numerical efforts to investigate the mechanism that causes chatter in a contact pair formed between a cylindrical pin and a bifurcated receptacle. First, the contact pair was instrumented with shakers, accelerometers, laser Doppler vibrometers, a high-speed camera, and a 'chatter tester' that detects fluctuations in the contact's electrical resistance. Chatter tests were performed over a range of excitation amplitudes and frequencies, and high-speed video from the tests suggested that 'bouncing' (i.e., loss of contact) was the primary physical mechanism causing chatter. Structural dynamics models were then developed of the pin, receptacle, and contact pair, and corresponding modal experiments were performed for comparison and model validation. Finally, a high-fidelity solid mechanics model of the contact pair was developed to study the bouncing physics observed in the high-speed videos. Chatter event statistics (e.g., mean chatter event duration) were used to compare the chatter behavior recorded during testing to the behavior simulated in the high-fidelity model, and this comparison suggested that the same bouncing mechanism is the cause of chatter in both scenarios.

More Details

Is the Testing Effect Ready to Be Put to Work? Evidence From the Laboratory to the Classroom

Translational Issues in Psychological Science

Trumbo, Michael C.; Mcdaniel, Mark A.; Hodge, Gordon K.; Jones, Aaron P.; Matzen, Laura E.; Kittinger, Liza; Kittinger, Robert; Clark, Vincent P.

The testing effect refers to the benefits to retention that result from structuring learning activities in the form of a test. As educators consider implementing test-enhanced learning paradigms in real classroom environments, we think it is critical to consider how an array of factors affecting test-enhanced learning in laboratory studies bear on test-enhanced learning in real-world classroom environments. This review discusses the degree to which test feedback, test format (of formative tests), number of tests, level of the test questions, timing of tests (relative to initial learning), and retention duration have import for testing effects in ecologically valid contexts (e.g., classroom studies). Attention is also devoted to characteristics of much laboratory testing-effect research that may limit translation to classroom environments, such as the complexity of the material being learned, the value of the testing effect relative to other generative learning activities in classrooms, an educational orientation that favors criterial tests focused on transfer of learning, and online instructional modalities. We consider how student-centric variables present in the classroom (e.g., cognitive abilities, motivation) may have bearing on the effects of testing-effect techniques implemented in the classroom. We conclude that the testing effect is a robust phenomenon that benefits a wide variety of learners in a broad array of learning domains. Still, studies are needed to compare the benefit of testing to other learning strategies, to further characterize how individual differences relate to testing benefits, and to examine whether testing benefits learners at advanced levels.

More Details

Distributed Inference with Sparse and Quantized Communication

IEEE Transactions on Signal Processing

Mitra, Aritra; Richards, John R.; Bagchi, Saurabh; Sundaram, Shreyas

We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state, and aim to uniquely identify this state from a finite set of hypotheses. We focus on scenarios where communication between agents is costly, and takes place over channels with finite bandwidth. To reduce the frequency of communication, we develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis. Building on this principle, we design a trigger condition under which an agent broadcasts only those components of its belief vector that have adequate innovation, to only those neighbors that require such information. We prove that our rule guarantees convergence to the true state exponentially fast almost surely despite sparse communication, and that it has the potential to significantly reduce information flow from uninformative agents to informative agents. Next, to deal with finite-precision communication channels, we propose a distributed learning rule that leverages the idea of adaptive quantization. We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just 1 bit to encode its belief on each hypothesis. For both our proposed algorithms, we rigorously characterize the trade-offs between communication-efficiency and the learning rate.
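
A stylized sketch of the belief-diffusion idea is shown below: each agent performs a local Bayesian update, broadcasts only belief components whose change exceeds a threshold, and takes the minimum over its own and neighbors' last-broadcast beliefs to diffuse low beliefs on false hypotheses. The graph, likelihoods, trigger, and update details are simplified stand-ins for the paper's rule, not its exact algorithm.

    # Stylized event-triggered belief diffusion on a line graph of four agents;
    # the trigger and min-rule update are simplified stand-ins for the paper's rule.
    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, n_hyp, true_state = 4, 3, 0
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    lik = rng.uniform(0.2, 0.8, size=(n_agents, n_hyp))   # Bernoulli signal models
    beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)
    last_sent = beliefs.copy()
    threshold = 0.2                        # innovation needed to broadcast (assumed)

    for _ in range(300):
        obs = rng.random(n_agents) < lik[:, true_state]    # private signals
        like = np.where(obs[:, None], lik, 1.0 - lik)
        local = beliefs * like
        local /= local.sum(axis=1, keepdims=True)          # local Bayesian update
        send = np.abs(local - last_sent) > threshold       # event trigger
        last_sent = np.where(send, local, last_sent)
        for i in range(n_agents):          # diffuse low beliefs via a min-rule
            stack = np.vstack([local[i]] + [last_sent[j] for j in neighbors[i]])
            beliefs[i] = stack.min(axis=0)
            beliefs[i] /= beliefs[i].sum()

    print(beliefs.argmax(axis=1))          # typically all agents identify state 0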

More Details

INCREMENTAL INTERVAL ASSIGNMENT BY INTEGER LINEAR ALGEBRA

Proceedings of the 29th International Meshing Roundtable, IMR 2021

Mitchell, Scott A.

Interval Assignment (IA) is the problem of selecting the number of mesh edges (intervals) for each curve for conforming quad and hex meshing. The interval vector x is fundamentally integer-valued, yet many approaches perform floating-point optimization and convert a floating-point solution into an integer solution. We avoid such steps: we start integer and stay integer. Incremental Interval Assignment (IIA) uses integer linear algebra (Hermite normal form) to find an initial solution to the matrix equation Ax = b satisfying the meshing constraints. Solving for reduced row echelon form provides integer vectors spanning the nullspace of A. We add vectors from the nullspace to improve the initial solution. Compared to floating-point optimization approaches, IIA is faster and always produces an integer solution. The potential drawback is that there is no theoretical guarantee that the solution is optimal, but in practice we achieve solutions close to the user goals. The software is freely available.
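
A toy version of the improvement step is sketched below: starting from an integer particular solution of Ax = b (here trivially zero, since b = 0; in general obtained via the Hermite normal form), integer nullspace vectors are added to move toward user goals without violating the constraints. The constraint matrix and goals are illustrative, not the IIA code.

    # Integer particular solution plus integer nullspace improvement on a toy
    # conforming-meshing constraint; matrix and goals are illustrative, not IIA code.
    import numpy as np
    from sympy import Matrix

    A = Matrix([[1, -1, 0],
                [0, 1, -1]])        # x0 = x1 and x1 = x2 (chained equal-interval curves)
    b = Matrix([0, 0])
    goals = np.array([4, 5, 4])     # user-desired interval counts per curve

    x = np.zeros(3, dtype=int)      # particular integer solution (trivial since b = 0)
    # Nullspace vectors; sympy returns rationals in general, integer here ([1, 1, 1]).
    null_vecs = [np.array(v.T.tolist()[0], dtype=int) for v in A.nullspace()]

    for v in null_vecs:             # greedily add integer multiples toward the goals
        step = int(round(np.dot(goals - x, v) / np.dot(v, v)))
        x = x + step * v

    print(x, A * Matrix(x.tolist()) == b)   # [4 4 4], constraints still satisfied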

More Details

Deep learning denoising applied to regional distance seismic data in Utah

Bulletin of the Seismological Society of America

Tibi, Rigobert T.; Hammond, Patrick H.; Brogan, Ronald; Young, Christopher J.; Koper, Keith

Seismic waveform data are generally contaminated by noise from various sources. Suppressing this noise effectively so that the remaining signal of interest can be successfully exploited remains a fundamental problem for the seismological community. To date, the most common noise suppression methods have been based on frequency filtering. These methods, however, are less effective when the signal of interest and noise share similar frequency bands. Inspired by source separation studies in the field of music information retrieval (Jansson et al., 2017) and a recent study in seismology (Zhu et al., 2019), we implemented a seismic denoising method that uses a trained deep convolutional neural network (CNN) model to decompose an input waveform into a signal of interest and noise. In our approach, the CNN provides a signal mask and a noise mask for an input signal. The short-time Fourier transform (STFT) of the estimated signal is obtained by multiplying the signal mask with the STFT of the input signal. To build and test the denoiser, we used carefully compiled signal and noise datasets of seismograms recorded by the University of Utah Seismograph Stations network. Results of test runs involving more than 9000 constructed waveforms suggest that on average the denoiser improves the signal-to-noise ratios (SNRs) by ∼ 5 dB, and that most of the recovered signal waveforms have high similarity with respect to the target waveforms (average correlation coefficient of ∼ 0.80) and suffer little distortion. Application to real data suggests that our denoiser achieves on average a factor of up to ∼ 2–5 improvement in SNR over band-pass filtering and can suppress many types of noise that band-pass filtering cannot. For individual waveforms, the improvement can be as high as ∼ 15 dB.
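
A minimal sketch of the masking step described above, with a fixed band-pass mask standing in for the CNN's predicted signal mask; the synthetic arrival, sampling rate, and STFT parameters are assumptions.

    # Mask-based denoising in the STFT domain; a fixed band mask stands in for
    # the CNN's predicted signal mask, and the synthetic arrival is illustrative.
    import numpy as np
    from scipy.signal import stft, istft

    fs = 100.0                                   # Hz (assumed sampling rate)
    t = np.arange(0, 60, 1 / fs)
    clean = np.sin(2*np.pi*1.5*t) * np.exp(-0.1*(t - 30)**2)   # transient "arrival"
    noisy = clean + 0.5 * np.random.default_rng(0).normal(size=t.size)

    f, tau, Z = stft(noisy, fs=fs, nperseg=256)
    mask = ((f > 0.5) & (f < 3.0)).astype(float)[:, None]      # stand-in signal mask
    _, denoised = istft(Z * mask, fs=fs, nperseg=256)

    def snr(ref, x):
        n = min(ref.size, x.size)
        return 10*np.log10(np.sum(ref[:n]**2) / np.sum((x[:n] - ref[:n])**2))

    print(snr(clean, noisy), "->", snr(clean, denoised))       # SNR gain in dB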

More Details

Deep Reinforcement Learning for Online Distribution Power System Cybersecurity Protection

2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, SmartGridComm 2021

Bailey, Tyson B.; Johnson, Jay; Levin, Drew L.

The sophistication and regularity of power system cybersecurity attacks have been growing in the last decade, leading researchers to investigate new, innovative, cyber-resilient tools to help grid operators defend their networks and power systems. One promising approach is to apply recent advances in deep reinforcement learning (DRL) to aid grid operators in making real-time changes to the power system equipment to counteract malicious actions. While multiple transmission studies have been conducted in the past, in this work we investigate the possibility of defending distribution power systems using a DRL agent that has control of a collection of utility-owned distributed energy resources (DER). A game board using a modified version of the IEEE 13-bus model was simulated using OpenDSS to train the DRL agent and compare its performance to a random agent, a greedy agent, and human players. Both the DRL agent and the greedy approach performed well, suggesting that a greedy approach can be appropriate for computationally tractable system configurations and that a DRL agent is a viable path forward for systems of increased complexity. This work paves the way to creating multi-player distribution system control games that could be designed to defend the power grid under a sophisticated cyber-attack.

More Details

Fracture Formation in Layered Synthetic Rocks with Oriented Mineral Fabric under Mixed Mode I and II Loading Conditions

55th U.S. Rock Mechanics / Geomechanics Symposium 2021

Jiang, Liyang; Yoon, Hongkyu Y.; Bobet, Antonio; Pyrak-Nolte, Laura J.

Anisotropy in the mechanical properties of rock is often attributed to bedding and mineral texture. Here, we use 3D-printed synthetic rock to show that, in addition to bedding layers, mineral fabric orientation governs sample strength, surface roughness, and fracture path under mixed mode I and II three-point bending (3PB) tests. Arrester (horizontal layering) and short-transverse (vertical layering) samples were printed with different notch locations to compare pure mode I induced fractures to mixed mode I and II fracturing. For a given sample type, the location of the notch affected the intensity of mode II loading, and thus affected the peak failure load and fracture path. When notches were printed at the same location, crack propagation, peak failure load, and fracture surface roughness were found to depend on both the layer and mineral fabric orientations. The uniqueness of the induced fracture path and roughness is a potential method for the assessment of the orientation and relative bonding strengths of minerals in a rock. With this information, we will be able to predict isotropic or anisotropic flow rates through fractures, which is vital to induced fracturing, geothermal energy production, and CO2 sequestration.

More Details

Aero-Optical Distortions of Turbulent Boundary Layers: DNS up to Mach 8

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Miller, Nathan M.; Guildenbecher, Daniel R.; Lynch, Kyle P.

The character of aero-optical distortions produced by turbulence is investigated for subsonic, supersonic, and hypersonic boundary layers. Data from four Direct Numerical Simulations (DNS) of boundary layers with nominal Mach numbers ranging from 0.5 to 8 are used. The DNS data for the subsonic and supersonic boundary layers are of flow over flat plates. The two hypersonic boundary layers are both from flows with a Mach 8 inlet condition, one of which is flow over a flat plate while the other is a boundary layer on a sharp cone. Density fields from these datasets are converted to index-of-refraction fields, which are integrated along an expected beam path to determine the effective Optical Path Lengths that a beam would experience while passing through the refractions of the turbulent field. By then accounting for the mean path length and tip/tilt issues related to bulk boundary layer effects, the distribution of Optical Path Differences (OPDs) is determined. Comparisons of the root-mean-squares of the OPDs (OPDrms) are made to an existing model. The OPDrms values determined from the subsonic and supersonic data were found to match the existing model well. As could be expected, the hypersonic data do not match as well due to assumptions, such as the Strong Reynolds Analogy, that were made in the derivation of the model. Until now, the model has never been compared to flows with Mach numbers as high as those included herein or to flow over a sharp cone geometry.
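
The sketch below illustrates the OPD calculation chain described above: density is converted to index of refraction with the Gladstone-Dale relation, integrated along the path to get optical path lengths, and the aperture mean (piston) and tilt are removed before computing OPDrms. The synthetic density field and grid are placeholders for DNS data.

    # Density -> index of refraction (Gladstone-Dale), integrate along the beam,
    # remove piston and tilt, then report OPD_rms. Synthetic field, not DNS data.
    import numpy as np

    K_GD = 2.27e-4                       # Gladstone-Dale constant for air (m^3/kg)
    rng = np.random.default_rng(0)
    nx, nz, dz = 128, 64, 1e-4           # aperture points, path points, step (m)
    rho = 1.2 + 0.05 * rng.normal(size=(nx, nz))   # stand-in density field

    n = 1.0 + K_GD * rho                 # index-of-refraction field
    OPL = n.sum(axis=1) * dz             # optical path length per aperture point
    OPD = OPL - OPL.mean()               # remove piston (aperture mean)
    xap = np.linspace(-1.0, 1.0, nx)
    OPD -= np.polyfit(xap, OPD, 1)[0] * xap        # remove tilt (linear fit)

    print(np.sqrt(np.mean(OPD**2)))      # OPD_rms, the quantity compared to models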

More Details

Development of a Spatially Filtered Wavefront Sensor as an Aero-Optical Measurement Technique

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Butler, Luke; Gordeyev, Stanislav; Lynch, Kyle P.; Guildenbecher, Daniel R.

This paper validates the concept of a spatially filtered wavefront sensor, which uses a convergent-divergent beam to reduce sensitivity to aero-optical distortions near the focal point while retaining sensitivity at large beam diameters. This sensor was used to perform wavefront measurements in a cavity flow test section. The focal point was traversed to various spanwise locations across the test section, and the overall OPDRMS levels and aperture-averaged spectra of wavefronts were computed. It was demonstrated that the sensor was able to effectively suppress the stronger aero-optical signal from the cavity flow and recover the aero-optical signal from the boundary layer when the focal point was placed inside the shear region of the cavity flow. To model these measured quantities, additional collimated beam wavefronts were taken at various subsonic speeds in a wind tunnel test section with two turbulent boundary layers, and then in the cavity flow test section, where the signal from the cavity was dominant. The results from the experimental model agree with the measured convergent-divergent beam results, confirming that the spatial filtering properties of the proposed sensor are due to attenuating effects at small apertures.

More Details

Evaluation of Microhole drilling technology for geothermal exploration, assessment, and monitoring

Transactions - Geothermal Resources Council

Su, Jiann-Cherng S.; Mazumdar, Anirban; Buerger, Stephen B.; Foris, Adam J.; Faircloth, Brian

The well-documented promise of microholes has not yet been realized. A fundamental issue is that delivering high weight-on-bit (WOB), high-torque rotational horsepower to a conventional drill bit does not scale down to the hole sizes necessary to realize the envisioned cost savings. Prior work has focused on miniaturizing the various systems used in conventional drilling technologies, such as motors, steering systems, mud handling and logging tools, and coiled tubing drilling units. As smaller diameters are targeted for these low-WOB drilling technologies, several associated sets of challenges arise. For example, energy transfer efficiency in small-diameter percussive hammers differs from that in conventional hammers. Finding adequate methods of generating rotation at the bit is also more difficult. A low weight-on-bit microhole drilling system was proposed, conceived, and tested on a limited scale. The utility of a microhole was quantified using flow analyses to establish bounds for usable microholes. Two low weight-on-bit rock reduction techniques were evaluated and developed, including a low technology readiness level concept in the laser-assisted mechanical drill and a modified commercial percussive hammer. Supporting equipment, including downhole rotation and a drill string twist reaction tool, was developed to enable wireline deployment of a drilling assembly. Although the various subsystems were tested and shown to work well individually in a laboratory environment, there is still room for improvement before the microhole drilling system is ready to be deployed. Ruggedizing the various components will be key, as will adding capacity to the conveyance system for pullback and deployment.

More Details

Validation of multi-frame piv image interrogation algorithms in the spectral domain

AIAA Scitech 2021 Forum

Beresh, Steven J.; Neal, Douglas R.; Sciacchitano, Andrea

Multi-frame correlation algorithms for time-resolved PIV have been shown in previous studies to reduce noise and error levels in comparison with conventional two-frame correlations. However, none of these prior efforts tested the accuracy of the algorithms in spectral space. Even should a multi-frame algorithm reduce the error of vector computations summed over an entire data set, this does not imply that these improvements are observed at all frequencies. The present study examines the accuracy of velocity spectra in comparison with simultaneous hot-wire data. Results indicate that the high-frequency content of the spectrum is very sensitive to the choice of interrogation algorithm and may not return an accurate response. A top-hat-weighted sliding sum-of-correlation is contaminated by high-frequency ringing, whereas Gaussian weighting is indistinguishable from a low-pass filtering effect. Some evidence suggests the pyramid correlation modestly increases the bandwidth of the measurement at high frequencies. The apparent benefits of multi-frame interrogation algorithms may be limited in their ability to reveal additional spectral content of the flow.
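
For intuition, the sketch below implements a toy 1-D "sliding sum of correlation": correlation planes from neighboring frame pairs are combined with a Gaussian kernel before peak detection, which stabilizes the displacement estimate relative to a single noisy pair. The plane model, noise level, and kernel width are assumptions, not the algorithms tested in the paper.

    # Toy 1-D "sliding sum of correlation": planes from neighboring frame pairs are
    # Gaussian-weighted and summed before peak detection. Noise/kernel are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    true_shift, n_pairs, width = 3, 7, 21

    def corr_plane():
        # Correlation peak at the true displacement plus uniform noise.
        peak = np.exp(-0.5 * ((np.arange(width) - (width//2 + true_shift)) / 1.5)**2)
        return peak + 0.8 * rng.random(width)

    planes = np.array([corr_plane() for _ in range(n_pairs)])
    k = np.arange(n_pairs) - n_pairs // 2
    weights = np.exp(-0.5 * (k / 2.0)**2)          # Gaussian kernel weighting
    summed = (weights[:, None] * planes).sum(axis=0)

    print(planes[n_pairs//2].argmax() - width//2,  # single-pair estimate (noisy)
          summed.argmax() - width//2)              # multi-frame estimate (robust)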

More Details

Electrical conduction and polarization of silica-based capacitors under electro-thermal poling

Annual Report - Conference on Electrical Insulation and Dielectric Phenomena, CEIDP

Nieves-Sanabria, Cesar N.; Wilke, Rudeger H.T.; Bishop, Sean R.; Lanagan, Michael T.; Clem, Paul G.

Electrical conduction in silica-based capacitors under the combined effect of intermediate electric field and temperature (2.5-10 kV/mm, 50-300 °C) is dominated by localized motion of high-mobility ions such as sodium. Thermally stimulated polarization and depolarization current (TSPC/TSDC) characterization was carried out on poled fused silica and AF32 glass samples. Two relaxation mechanisms were found during the depolarization step, and an anomalous response for the second TSDC peak was observed. Absorption current measurements were performed on the glass samples, and a time-dependent response was observed when subjected to different electro-thermal conditions. It was found that at low temperature (T = 175 °C) and short times, the current follows a linear behavior (I ∝ V), while at high temperature (T = 250 °C), the current follows a V^0.5 dependence. TSPC/TSDC and absorption current measurement results led to the conclusion that (1) Poole-Frenkel emission dominates conduction at high temperatures and longer times and that (2) ionic blockage and/or H+/H3O+ injection are responsible for the observed anomalous current response.

More Details

Scaling of Reflected Shock Bifurcation at High Incident Mach Number

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Daniel, Kyle; Lynch, Kyle P.; Downing, Charley R.; Wagner, Justin W.

Measurements of bifurcated reflected shocks over a wide range of incident shock Mach numbers, 2.9 < Ms < 9.4, are carried out in Sandia's high-temperature shock tube. The size of the non-uniform flow region associated with the bifurcation is measured using high-speed schlieren imaging. Measurements of the bifurcation height are compared to historical data from the literature. A correlation for the bifurcation height from Petersen et al. [1] is examined and found to overestimate the bifurcation height for Ms > 6. An improved correlation is introduced that can predict the bifurcation height over the range 2.15 < Ms < 9.4. The time required for the non-uniform flow region to pass over a stationary sensor is also examined. A non-dimensional time related to the induced velocity behind the shock and the distance to the endwall is introduced. This non-dimensional time collapses the data and yields a new correlation that predicts the temporal duration of the bifurcation.

More Details

Configurable Microgrid Modelling with Multiple Distributed Energy Resources for Dynamic System Analysis

IEEE Power and Energy Society General Meeting

Darbali-Zamora, Rachid; Wilches-Bernal, Felipe; Naughton, Brian T.

As renewable energy sources become more dominant in electric grids, particularly in microgrids, new approaches for designing, operating, and controlling these systems are required. The integration of renewable energy devices such as photovoltaics and wind turbines requires system design considerations to mitigate potential power quality issues caused by highly variable generation. Power system simulations play an important role in understanding the stability and performance of electrical power systems. This paper discusses the modeling of the Global Laboratory for Energy Asset Management and Manufacturing (GLEAMM) microgrid integrated with the Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) test site, providing a dynamic simulation model for power flow and transient stability analysis. A description of the system as well as the dynamic models is presented.

More Details

Analysis and optimization of a closed loop geothermal system in hot rock reservoirs

Transactions - Geothermal Resources Council

Vasyliv, Yaroslav V.; Bran Anleu, Gabriela A.; Kucala, Alec K.; Subia, Samuel R.; Martinez, Mario J.

Recent advances in drilling technology, especially horizontal drilling, have prompted a renewed interest in the use of closed loop geothermal energy extraction systems. Deeply placed closed loops in hot wet or dry rock reservoirs offer the potential to exploit the vast thermal energy in the subsurface. To better understand the potential and limitations for recovering thermal and mechanical energy from closed-loop geothermal systems (CLGS), a collaborative study is underway to investigate an array of system configurations, working fluids, geothermal reservoir characteristics, operational periods, and heat transfer enhancements (Parisi et al., 2021; White et al., 2021). This paper presents numerical results for the heat exchange between a closed loop system (single U-tube) circulating water as the working fluid in a hot rock reservoir. The characteristics of the reservoir are based on the Frontier Observatory for Research in Geothermal Energy (FORGE) site near Milford, Utah. To determine optimal system configurations, a mechanical (electrical) objective function is defined for a bounded optimization study over a specified design space. The objective function includes a surface-plant thermal-to-mechanical energy conversion factor, pump work, and a drilling capital cost expressed in energy terms. To complement the optimization results, detailed parametric studies are also performed. The numerical model is built using the Sandia National Laboratories (SNL) massively parallel Sierra computational framework, while the optimization and parametric studies are driven using the SNL Dakota software package. Together, the optimization and parametric studies presented in this paper will help assess the impact of CLGS parameters (e.g., flow rate, tubing length and diameter, insulation length, etc.) on CLGS performance and optimal energy recovery.
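
A hedged sketch of such a bounded design optimization appears below: a toy net-mechanical-energy objective (conversion factor times thermal power, minus pump work and an energy-amortized drilling cost) is maximized over flow rate and tubing diameter with SciPy. All submodels and constants are illustrative placeholders, not the Sierra/Dakota setup.

    # Toy bounded optimization of a net-mechanical-energy objective over flow rate
    # and tubing diameter; all submodels/constants are illustrative placeholders.
    import numpy as np
    from scipy.optimize import minimize

    rho, cp = 1000.0, 4186.0            # water density (kg/m^3), heat capacity (J/kg/K)
    eta = 0.12                          # thermal-to-mechanical conversion factor (assumed)
    drill_cost_W = 5.0e4                # drilling capital cost amortized as power (W)

    def neg_net_power(x):
        mdot, d = x                                  # flow rate (kg/s), tube diameter (m)
        dT = 80.0 * np.exp(-mdot / 20.0)             # toy outlet temperature-rise model (K)
        q_thermal = mdot * cp * dT                   # thermal power extracted (W)
        v = mdot / (rho * np.pi * d**2 / 4.0)        # mean velocity (m/s)
        dp = 0.02 * (5000.0 / d) * 0.5 * rho * v**2  # Darcy-Weisbach, f=0.02, L=5 km
        w_pump = mdot * dp / rho                     # pumping power (W)
        return -(eta * q_thermal - w_pump - drill_cost_W)

    res = minimize(neg_net_power, x0=[20.0, 0.1], bounds=[(1.0, 100.0), (0.05, 0.3)])
    print(res.x, -res.fun)              # optimal (mdot, d) and net mechanical power (W)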

More Details

Machine learning methods for estimating down-hole depth of cut

Transactions - Geothermal Resources Council

Sacks, Jacob; Choi, Kevin; Bruss, Kathryn; Su, Jiann-Cherng S.; Buerger, Stephen B.; Mazumdar, Anirban; Boots, Byron

Depth of cut (DOC) refers to the depth a bit penetrates into the rock during drilling. This is an important quantity for estimating drilling performance. In general, DOC is determined by dividing the rate of penetration (ROP) by the rotational speed. Surface-based sensors at the top of the drill string are used to determine both ROP and rotational speed. However, ROP measurements using top-hole sensors are noisy and often require taking a derivative. Filtering reduces the update rate, and both top-hole linear and angular velocity can be delayed relative to downhole behavior. In this work, we describe recent progress towards estimating ROP and DOC using down-hole sensing. We assume downhole measurements of torque, weight-on-bit (WOB), and rotational speed and anticipate that these measurements are physically realizable. Our hypothesis is that these measurements can provide more rapid and accurate measures of drilling performance. We examine a range of machine learning techniques for estimating ROP and DOC based on this local sensing paradigm. We show how machine learning can provide rapid and accurate performance when evaluated on experimental data taken from Sandia's Hard Rock Drilling Facility. These results have the potential to enable better drilling assessment, improved control, and extended component lifetimes.
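
The sketch below shows the general form of the learning task: regress depth of cut on downhole torque, WOB, and rotational speed with cross-validation. The synthetic data-generating relations and the random-forest choice are assumptions, not the models evaluated in the paper.

    # Regress depth of cut on downhole torque, WOB, and rotary speed with
    # cross-validation; the data-generating relations are synthetic stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 2000
    wob = rng.uniform(2, 20, n)                        # kN
    rpm = rng.uniform(60, 300, n)
    torque = 0.4 * wob + 0.01 * rpm + rng.normal(0, 0.3, n)                # toy coupling
    doc = 0.05 * wob + 0.002 * torque * rpm / 100 + rng.normal(0, 0.02, n) # mm/rev

    X = np.column_stack([torque, wob, rpm])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print(cross_val_score(model, X, doc, cv=5, scoring="r2").mean())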

More Details

Lost circulation in a hydrothermally cemented Basin-fill reservoir: Don A. Campbell Geothermal field, Nevada

Transactions - Geothermal Resources Council

Winn, Carmen L.; Dobson, Patrick; Ulrich, Craig; Kneafsey, Timothy; Lowry, Thomas S.; Akerley, John; Delwiche, Ben; Samuel, Abraham; Bauer, Stephen J.

Significant costs can be related to losing circulation of drilling fluids in geothermal drilling. This paper is the second of four case studies of geothermal fields operated by Ormat Technologies, directed at forming a comprehensive strategy to characterize and address lost circulation in varying conditions, and examines the geologic context of and common responses to lost circulation in the loosely consolidated, shallow sedimentary reservoir of the Don A. Campbell geothermal field. The Don A. Campbell geothermal field is in the SW portion of Gabbs Valley, NV, along the eastern margin of the Central Walker Lane shear zone. The reservoir here is shallow and primarily in the basin fill, which is hydrothermally altered along fault zones. Wells in this reservoir are highly productive (250-315 L/s) with moderate temperatures (120-125 °C) and were drilled to an average depth of ~1500 ft (450 m). Lost circulation is frequently reported beginning at depths of about 800 ft, slightly shallower than the average casing shoe depth of 900-1000 ft (275-305 m). Reports of lost circulation frequently coincide with drilling through silicified basin fill. Strategies to address lost circulation differ above and below the cased interval; bentonite chips were used at shallow depths, and aerated, gelled drilling fluids were used in the production intervals. Further study of this and other areas will contribute to developing a systematic understanding of geologic-context-informed lost circulation mitigation strategies.

More Details

Advanced analytics of rig parameter data using rock reduction model constraints for improved drilling performance

Transactions - Geothermal Resources Council

Raymond, David W.; Foris, Adam J.; Norton, Jaiden; Mclennan, John

Drill rig parameter measurements are routinely used during deep well construction to monitor and guide drilling conditions for improved performance and reduced costs. While insightful into the drilling process, these measurements are of reduced value without a standard to aid in data evaluation and decision making. A method is demonstrated whereby rock reduction model constraints are used to interpret drilling response parameters; the method could be applied in real time to improve decision-making in the field and to further discern technology performance during post-drilling evaluations. Drill rig parameter data were acquired by drilling contractor Frontier Drilling and evaluated for two wells drilled at the DOE-sponsored site, Utah Frontier Observatory for Research in Geothermal Energy (FORGE). The subject wells include: 1) FORGE 16A(78)-32, a directional well with vertical depth to a kick-off point at 5892 ft and a 65-degree tangent to a measured depth of 10987 ft, and 2) FORGE 56-32, a vertical monitoring well to a measured depth of 9145 ft. Drilling parameters are evaluated using laboratory-validated rock reduction models for predicting the phenomenological response of drag bits (Detournay and Defourny, 1992) along with other model constraints in computational algorithms. The method is used to evaluate overall bit performance, develop rock strength approximations, determine bit aggressiveness, characterize frictional energy losses, evaluate bit wear rates, and detect the presence of drillstring vibrations contributing to bit failure; comparisons are made to observations of bit wear and damage. Analyses are also presented to correlate performance to bit run cost drivers to provide guidance on the relative tradeoff between bit penetration rate and life. The method presented has applicability to the development of advanced analytics on future geothermal wells using real-time electronic data recording for improved performance and reduced drilling costs.

More Details

Aero-Optical Measurements of a Mach 8 Boundary Layer

AIAA Aviation and Aeronautics Forum and Exposition, AIAA AVIATION Forum 2021

Lynch, Kyle P.; Spillers, Russell W.; Miller, Nathan M.; Guildenbecher, Daniel R.; Gordeyev, Stanislav

Measurements are presented of the aero-optic distortion produced by a Mach 8 turbulent boundary layer in the Sandia Hypersonic Wind Tunnel. Flat optical inserts installed in the test section walls enabled a double-pass arrangement of a collimated laser beam. The distortion of this beam was imaged by a high-speed Shack-Hartmann sensor at a sampling rate of up to 1 MHz. Analysis is performed using two processing methods to extract the aero-optic distortion from the data. A novel de-aliasing algorithm is proposed to extract convective-only spectra and is demonstrated to correctly quantify the physical spectra even in the case of relatively low sampling rates. The results are compared with an existing theoretical model, and it is shown that this model under-predicts the experimentally measured distortions regardless of the processing method used. Possible explanations for this discrepancy are presented. These results represent, to date, the highest Mach number for which aero-optic boundary layer distortion measurements are available.

More Details

Effects of Convection on Experimental Investigation of Heat Generation during Plastic Deformation

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Hodges, Wyatt L.; Phinney, Leslie M.; Lester, Brian T.; Talamini, Brandon T.; Jones, Amanda

In order to predict material failure accurately, it is critical to have knowledge of deformation physics. Uniquely challenging is the determination of the coefficient β that governs the conversion of plastic work into thermal energy. Here, we examine the heat transfer problem associated with the experimental determination of β in copper and stainless steel. A numerical model of the tensile test sample is used to estimate temperature rises across the mechanical test sample at a variety of convection coefficients, as well as to estimate heat losses to the chamber by conduction and convection. This analysis is performed for stainless steel and copper at multiple environmental conditions. These results are used to examine the relative importance of convection and conduction as heat transfer pathways. The model is additionally used to perform sensitivity analysis on the parameters that will ultimately determine β. These results underscore the importance of accurate determination of convection coefficients and will be used to inform the future design of samples and experiments. Finally, an estimation of the convection coefficient for an example mechanical test chamber is detailed as a point of reference for the modeling results.

More Details

Evaluation of Energy Storage Providing Virtual Transmission Capacity

IEEE Power and Energy Society General Meeting

Nguyen, Tu A.; Byrne, Raymond H.

In this work, we introduce the concept of virtual transmission using large-scale energy storage systems. We also develop an optimization framework to maximize the monetized benefits of energy storage providing virtual transmission in wholesale markets. These benefits often come from relieving congestion on a transmission line, including both a reduction in energy cost for the downstream loads and an increase in production revenue for the upstream generators of the congested line. A case study is conducted using ISO-New England data to demonstrate the framework.
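
A stylized stand-in for such an optimization is sketched below: a small linear program that schedules storage charge/discharge against hourly prices subject to power and energy limits, which is the basic mechanism by which storage monetizes congestion relief. Prices, limits, and efficiency are made-up values, not the authors' formulation.

    # Small LP scheduling storage charge/discharge against hourly prices, the basic
    # mechanism behind congestion-relief value; all numbers are made up.
    import numpy as np
    from scipy.optimize import linprog

    prices = np.array([20.0, 18.0, 25.0, 60.0, 90.0, 45.0])   # $/MWh over 6 hours
    T, P_max, E_max, eff = prices.size, 50.0, 100.0, 0.9

    # Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}] in MW.
    c = np.concatenate([prices, -prices])           # minimize (cost - revenue)
    L = np.tril(np.ones((T, T)))                    # cumulative-sum operator
    A_ub = np.vstack([np.hstack([ L * eff, -L]),    # state of charge <= E_max
                      np.hstack([-L * eff,  L])])   # state of charge >= 0
    b_ub = np.concatenate([np.full(T, E_max), np.zeros(T)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P_max)] * (2 * T))
    print("net revenue: $", -res.fun)
    print("charge:", res.x[:T].round(1), "discharge:", res.x[T:].round(1))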

More Details

Compact, Pull-in-Free Electrostatic MEMS Actuated Tunable Ring Resonator for Optical Multiplexing

Optics InfoBase Conference Papers

Ruyack, Alexander R.; Grine, Alejandro J.; Finnegan, Patrick S.; Serkland, Darwin K.; Robinson, Samuel; Weatherred, Scott E.; Frost, Megan D.; Nordquist, Christopher N.; Wood, Michael G.

We present an optical wavelength division multiplexer enabled by a ring resonator tuned by MEMS electrostatic actuation. Analytical modeling, simulation, and fabrication are discussed, leading to results showing controlled tuning of greater than one free spectral range (FSR).

More Details

Potential of Solid-State Transformers to Improve Grid Resilience

IEEE Power and Energy Society General Meeting

Schoenwald, David A.; Pierre, Brian J.; Munoz-Ramos, Karina M.

A methodology for the design of control systems for wide-area power systems using solid-state transformers (SSTs) as actuators is presented. Due to their ability to isolate the primary side from the secondary side, an SST can limit the propagation of disturbances, such as frequency and voltage deviations, from one side to the other. This paper studies a control strategy based on SSTs deployed in the transmission grid to improve the resilience of power grids to disturbances. The control design is based on an empirical model of an SST that is appropriate for control design in grid-level applications. A simulation example illustrating the improvement provided by an SST in a large-scale power system, via a reduction in load shedding due to severe disturbances, is presented.

More Details

Multivariate Design and Optimization of the AeroMINE Internal Turbine Blade

AIAA Propulsion and Energy Forum, 2021

Krath, Elizabeth H.; Houchens, Brent C.; Marian, David V.; Pol, Suhas U.; Westergaard, Carsten

Multivariate designs using three optimization procedures were performed on a low Reynolds number (order 100,000) turbine blade to maximize lift over drag (L/D). The turbine blade was created to interface with AeroMINE, a novel wind energy harvester that has no external moving parts. To speed up the optimization process, an interpolation-based procedure using the Proper Orthogonal Decomposition (POD) method was used. This method was used in two ways: by itself (POD-i) and as an initial guess to a full-order model (FOM) solution that is truncated before it reaches full convergence (POD-i with truncated FOM). To compare the results and efficiency of these methods, optimization using a FOM was also conducted. It was found that there exists a trade-off between efficiency and optimality. The FOM found the highest L/D of 28.87, while POD-i found an L/D of 16.19 and POD-i with truncated FOM found an L/D of 19.11. Nonetheless, POD-i and POD-i with truncated FOM were 32,302 and 697 times faster than the FOM, respectively.

More Details

Exploring life extension opportunities of high-pressure hydrogen pressure vessels at refueling stations

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Ronevich, Joseph A.; San Marchi, Christopher W.; Brooks, Dusty M.; Emery, John M.; Grimmer, Peter W.; Chant, Eileen; Robert Sims, J.; Belokobylka, Alex; Farese, Dave; Felbaum, John

High pressure Type 2 hoop-wrapped, thick-walled vessels are commonly used at hydrogen refueling stations. Vessels installed at stations circa 2010 are now reaching their design cycle limit and are being retired, which is the motivation for exploring life extension opportunities. The number of design cycles is based on a fatigue life calculation using a fracture mechanics assessment according to ASME Section VIII, Division 3, which assumes each cycle spans the full pressure range identified in the User's Design Specification for a given pressure vessel design; however, assessment of service data reveals that the actual pressure cycles are considerably milder than the design specification assumes. A case study was performed in which in-service pressure cycles were used to re-calculate the design cycles. It was found that less than 1% of the allowable crack extension was consumed when crack growth was assessed using in-service pressure cycles compared to the original design fatigue life from 2010. Additionally, design cycles were assessed on the 2010-era vessels based on design curves from the recently approved ASME Code Case 2938, which were based on fatigue crack growth rate relationships over a broader range of K. Using the Code Case 2938 design curves yielded nearly 2.7 times greater design cycles compared to the 2010 vessel original design basis. The benefits of using in-service pressure cycles to assess the design life and the implications of using the design curves in Code Case 2938 are discussed in detail in this paper.

More Details

PMEMCPY: A simple, lightweight, and portable I/O library for storing data in persistent memory

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Logan, Luke; Lofstead, Gerald F.; Levy, Scott L.; Widener, Patrick W.; Sun, Xian H.; Kougkas, Anthony

Persistent memory (PMEM) devices can achieve comparable performance to DRAM while providing significantly more capacity. This has made the technology compelling as an expansion to main memory. Rethinking PMEM as a storage device can offer a high-performance buffering layer for HPC applications to temporarily but safely store data. However, modern parallel I/O libraries, such as HDF5 and pNetCDF, are complicated and introduce significant software and metadata overheads when persisting data to these storage devices, wasting much of their potential. In this work, we explore the potential of PMEM as storage through pMEMCPY: a simple, lightweight, and portable I/O library for storing data in persistent memory. We demonstrate that our approach is up to 2x faster than other popular parallel I/O libraries under real workloads.
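
The underlying idea of treating PMEM as a thin, low-overhead persistence layer can be sketched in a few lines: map a file on a DAX-mounted PMEM namespace and copy bytes straight into it. The mount path and helper below are hypothetical and do not reflect the actual pMEMCPY API.

```python
# Map a (hypothetically PMEM-backed, DAX-mounted) file and copy bytes into
# it directly; this shows the "memcpy" persistence idea, not the pMEMCPY API.
import mmap
import numpy as np

def persist_array(path, arr):
    """Copy a numpy array into a memory-mapped file and make it durable."""
    data = arr.tobytes()
    with open(path, "wb+") as f:
        f.truncate(len(data))                 # size the file to the payload
        mm = mmap.mmap(f.fileno(), len(data))
        mm[:] = data                          # direct byte copy, no serialization
        mm.flush()                            # msync: force durability
        mm.close()

if __name__ == "__main__":
    persist_array("/mnt/pmem0/demo.bin",      # hypothetical DAX mount point
                  np.arange(1_000_000, dtype=np.float64))
```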

More Details

Space Nuclear Thermal Propulsion Critical Assembly Boron Worth Experiments

Transactions of the American Nuclear Society

Laros, James H.; Lutz, Elijah L.

The Space Nuclear Thermal Propulsion (SNTP) project was an attempt to create a more powerful and more efficient rocket engine utilizing nuclear technologies. As part of this project, a zero-power critical assembly referred to as SNTP-CX was designed and installed at Sandia National Laboratories. The SNTP-CX was a light water moderated particle bed reactor utilizing highly enriched uranium fuel in the form of UC particles. The SNTP-CX performed 142 runs covering numerous experiments from 1989 to 1992. The program was canceled in 1994 as the nation’s priorities shifted. Now these experiments are being evaluated for use as criticality safety benchmarks. Nineteen of the 142 reactor runs were dedicated to a series of experiments to calculate the worth of the boron used in the light water moderator. This series of experiments has been selected for further evaluation as a critical benchmark for the International Criticality Safety Benchmark Evaluation Project (ICSBEP).

More Details

AirNet-SNL: End-to-end training of iterative reconstruction and deep neural network regularization for sparse-data XPCI CT

Optics InfoBase Conference Papers

Lee, Dennis J.; Mulcahy-Stanislawczyk, Johnathan M.; Jimenez, Edward S.; Goodner, Ryan N.; West, Roger D.; Epstein, Collin; Thompson, Kyle R.; Dagel, Amber L.

We present a deep learning image reconstruction method called AirNet-SNL for sparse-view computed tomography. It combines iterative reconstruction and convolutional neural networks with end-to-end training. Our model reduces streak artifacts from filtered back-projection with limited data, and it trains on randomly generated shapes. This work shows promise for generalizing learned image reconstruction.
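
A minimal sketch of the "unrolled iteration plus learned regularizer" pattern is shown below, with a toy forward operator and a placeholder standing in for the trained network; it is not the actual AirNet-SNL architecture.

```python
# Schematic of unrolled "data-consistency step + learned regularizer"
# reconstruction. A is a toy projection operator and denoise() a stand-in
# for the trained CNN; neither reflects the actual AirNet-SNL model.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 48                          # flattened image size, number of rays
A = rng.standard_normal((m, n))        # toy sparse-view forward operator
x_true = rng.random(n)
b = A @ x_true                         # simulated sinogram

def denoise(x):
    # placeholder for the convolutional network trained end to end
    return 0.95 * x + 0.05 * np.median(x)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(50):                    # unrolled iterations
    x = x - step * A.T @ (A @ x - b)   # gradient (data-consistency) step
    x = denoise(x)                     # learned-prior step

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```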

More Details

Understanding uncertainty in geothermal energy development using a formalized performance assessment approach

Transactions - Geothermal Resources Council

Lowry, Thomas S.

For over 50 years, performance assessment (PA) has been used throughout the world to inform decisions concerning the storage and management of radioactive waste. Some of the applications of PA include environmental assessments of nuclear disposal sites, development of methodologies and regulations for the long-term storage of nuclear waste, regulatory assessment for site selection and licensing at the Waste Isolation Pilot Plant and Yucca Mountain, and safety assessments for nuclear reactors. PA begins with asking the following questions: 1) What can happen? 2) How likely is it to happen? 3) What are the consequences when it does happen? and 4) What is the uncertainty of the first three questions? This work presents an approach for applying PA methodologies to geothermal resource evaluation that is adaptable and conformable to all phases of geothermal energy production. It provides a consistent and transparent framework for organizing data and information in a manner that supports decision making and accounts for uncertainties. The process provides a better understanding of the underlying risks that can jeopardize the development and/or performance of a geothermal project and identifies the best pathways for reducing or eliminating those risks. The approach is demonstrated through hypothetical examples of both hydrothermal and enhanced geothermal systems (EGS).

More Details

Decentralized Classification with Assume-Guarantee Planning

IEEE International Conference on Intelligent Robots and Systems

Carr, Steven; Quattrociocchi, Jesse; Bharadwaj, Suda; Spencer, Steven; Parikh, Anup; Young, Carol C.; Buerger, Stephen B.; Wu, Bo; Topcu, Ufuk

We study the problem of decentralized classification conducted over a network of mobile sensors. We model the multiagent classification task as a hypothesis testing problem where each sensor has to almost surely find the true hypothesis from a finite set of candidate hypotheses. Each sensor makes noisy local observations and can also share information on their observations with other mobile sensors in communication range. In order to address the state-space explosion in the multiagent system, we propose a decentralized synthesis procedure that guarantees that each sensor will almost surely converge to the true hypothesis even in the presence of faulty or malicious agents. Additionally, we employ a contract-based synthesis approach that produces trajectories designed to empirically increase information-sharing between mobile sensors in order to converge faster to the true hypothesis. We implement and test the approach on experiments with both physical and simulated hardware to showcase the approach's scalability and viability in real-world systems. Finally, we run a Gazebo/ROS simulated experiment with 12 agents to demonstrate the scalability of our approach in large environments with many agents.

More Details

Towards A Model For The Melt And Flow Of Aluminum Alloys In Fires

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Brown, Alexander B.; Tencer, John T.; Kucala, Alec K.; Pierce, Flint P.; Noble, David R.

Melting and flowing of aluminum alloys is a challenging problem for computational codes. Unlike most common substances, the surface of an aluminum melt exhibits rapid oxidation and elemental migration, and, like a bag filled with water, the melt can remain unruptured in two dimensions while the metal inside flows. Much of the historical work in this area focuses on friction welding and neglects the surface behavior due to the high stress of the application. We are concerned with low-stress melting applications, in which the bag behavior is more relevant. Adapting models and measurements from the literature, we have developed a formulation for the viscous behavior of the melt based on an abstraction of historical measurements, and a construct for the bag behavior. These models are implemented and demonstrated in a 3D level-set multi-phase solver package, SIERRA/Aria. A series of increasingly complex simulation scenarios is illustrated to help verify implementation of the models in conjunction with other required model components such as convection, radiation, gravity, and surface interactions.

More Details

Reusability First: Toward FAIR Workflows

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Wolf, Matthew; Logan, Jeremy; Mehta, Kshitij; Jacobson, Daniel; Cashman, Mikaela; Walker, Angelica M.; Eisenhauer, Greg; Widener, Patrick W.; Cliff, Ashley

The FAIR principles of open science (Findable, Accessible, Interoperable, and Reusable) have had transformative effects on modern large-scale computational science. In particular, they have encouraged more open access to and use of data, an important consideration as collaboration among teams of researchers accelerates and the use of workflows by those teams to solve problems increases. How best to apply the FAIR principles to workflows themselves, and software more generally, is not yet well understood. We argue that the software engineering concept of technical debt management provides a useful guide for application of those principles to workflows, and in particular that it implies reusability should be considered as 'first among equals'. Moreover, our approach recognizes a continuum of reusability where we can make explicit and selectable the tradeoffs required in workflows for both their users and developers. To this end, we propose a new abstraction approach for reusable workflows, with demonstrations for both synthetic workloads and real-world computational biology workflows. Through application of novel systems and tools that are based on this abstraction, these experimental workflows are refactored to rightsize the granularity of workflow components to efficiently fill the gap between end-user simplicity and general customizability. Our work makes it easier to selectively reason about and automate the connections between trade-offs across user and developer concerns when exposing degrees of freedom for reuse. Additionally, by exposing fine-grained reusability abstractions we enable performance optimizations, as we demonstrate on both institutional-scale and leadership-class HPC resources.

More Details

AC-Optimal Power Flow Solutions with Security Constraints from Deep Neural Network Models

Computer Aided Chemical Engineering

Kilwein, Zachary; Boukouvala, Fani; Laird, Carl D.; Castillo, Anya; Blakely, Logan; Eydenberg, Michael S.; Jalving, Jordan H.; Batsch-Smith, Lisa

In power grid operation, optimal power flow (OPF) problems are solved several times per day to find economically optimal generator setpoints that balance given load demands. Ideally, we seek an optimal solution that is also “N-1 secure”, meaning the system can absorb contingency events such as transmission line or generator failure without loss of service. Current practice is to solve the OPF problem and then check a subset of contingencies against heuristic values, resulting in, at best, suboptimal solutions. Unfortunately, online solution of the OPF problem including the full N-1 contingencies (i.e., a two-stage stochastic programming formulation) is intractable for even modest-sized electrical grids. To address this challenge, this work presents an efficient method to embed N-1 security constraints into the solution of the OPF by using Neural Network (NN) models to represent the security boundary. Our approach introduces a novel sampling technique, as well as a tuneable parameter that allows operators to balance the conservativeness of the security model within the OPF problem. Our results show that we are able to solve contingency formulations of larger grids than reported in the literature using non-linear programming (NLP) formulations with embedded NN models to local optimality. Solutions found with the NN constraint require marginally more computational time but are more secure against contingency events.
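
A rough sketch of embedding a neural security boundary as a constraint in a nonlinear program is given below. The network weights are random placeholders standing in for a trained model, and the two-variable "dispatch" problem is invented; the paper's OPF formulation is far more detailed.

```python
# Embedding a neural "security boundary" as an NLP constraint. The weights
# are random placeholders for a trained model; the dispatch problem is toy.
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2 = rng.standard_normal(8)

def nn_insecurity(x):
    """Tiny MLP returning a scalar insecurity score for dispatch x."""
    return W2 @ np.tanh(W1 @ x + b1)

cost = lambda x: 3.0 * x[0] + 5.0 * x[1]                   # generation cost
demand = NonlinearConstraint(lambda x: x[0] + x[1], 10.0, 10.0)
secure = NonlinearConstraint(nn_insecurity, -np.inf, 0.5)  # tuneable threshold

res = minimize(cost, x0=np.array([5.0, 5.0]), method="trust-constr",
               bounds=[(0.0, 10.0)] * 2, constraints=[demand, secure])
print("dispatch:", res.x, "cost:", res.fun)
```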

More Details

Understanding the Effects of DRAM Correctable Error Logging at Scale

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Ferreira, Kurt B.; Levy, Scott L.; Kuhns, Victor G.; Debardeleben, Nathan; Blanchard, Sean

Fault tolerance poses a major challenge for future large-scale systems. Current research on fault tolerance has been principally focused on mitigating the impact of uncorrectable errors: errors that corrupt the state of the machine and require a restart from a known good state. However, correctable errors occur much more frequently than uncorrectable errors and may be even more common on future systems. Although an application can safely continue to execute when correctable errors occur, recovery from a correctable error requires the error to be corrected and, in most cases, information about its occurrence to be logged. The potential performance impact of these recovery activities has not been extensively studied in HPC. In this paper, we use simulation to examine the relationship between recovery from correctable errors and application performance for several important extreme-scale workloads. Our paper contains what is, to the best of our knowledge, the first detailed analysis of the impact of correctable errors on application performance. Our study shows that correctable errors can have a significant impact on application performance for future systems. We also find that although current efforts focus on reducing correctable error rates, reducing the time required to log individual errors may have a greater impact on overheads at scale. Finally, this study outlines the error frequency and duration targets needed to keep correctable-error overheads similar to those of today's systems. This paper provides critical analysis and insight into the overheads of correctable errors and provides practical advice to system administrators and hardware designers in an effort to fine-tune performance to application and system characteristics.
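
The scaling argument can be illustrated with a back-of-the-envelope model; the error rates and per-error logging costs below are invented, not figures from the paper's simulations.

```python
# Back-of-the-envelope model of time lost to recording correctable errors.
# Rates and per-error costs are illustrative, not results from the paper.
def logging_overhead(errors_per_hr, log_time_s, app_time_hr=24.0):
    """Fraction of runtime spent correcting and logging correctable errors."""
    lost_s = errors_per_hr * app_time_hr * log_time_s
    return lost_s / (app_time_hr * 3600.0)

for rate in (10, 100, 1000):           # correctable errors per hour
    for log_t in (0.01, 0.1, 1.0):     # seconds to correct and log one error
        print(f"rate={rate:5d}/hr log={log_t:5.2f}s "
              f"overhead={logging_overhead(rate, log_t):.2%}")
```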

More Details

Backfilling HPC Jobs with a Multimodal-Aware Predictor

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Lamar, Kenneth; Goponenko, Alexander V.; Peterson, Christina; Allan, Benjamin A.; Brandt, James M.; Dechev, Damian

Job scheduling aims to minimize the turnaround time of submitted jobs while catering to the resource constraints of High Performance Computing (HPC) systems. The challenge with scheduling is that it must honor job requirements and priorities while actual job run times are unknown. Although approaches have been proposed that use classification techniques or machine learning to predict job run times for scheduling purposes, these approaches do not provide a technique for reducing underprediction, which has a negative impact on scheduling quality. A common cause of underprediction is that the distribution of the duration for a job class is multimodal, causing the average job duration to fall below the expected duration of longer jobs. In this work, we propose the Top Percent predictor, which uses a hierarchical classification scheme to provide better accuracy for job run time predictions than the user-requested time. Our predictor addresses multimodal job distributions by making a prediction that is higher than a specified percentage of the observed job run times. We integrate the Top Percent predictor into scheduling algorithms and evaluate the performance using schedule quality metrics found in the literature. To accommodate the user policies of HPC systems, we propose priority metrics that account for job flow time, job resource requirements, and job priority. The experiments demonstrate that the Top Percent predictor outperforms related approaches when evaluated using our proposed priority metrics.
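
The underprediction problem and the percentile-style remedy can be illustrated with the sketch below; the paper's hierarchical classification scheme is richer than this single-class example.

```python
# Percentile-style run-time prediction for a single job class. The paper's
# hierarchical classification over job features is richer than this sketch.
import numpy as np

def top_percent_predict(history, percent=90, requested=None):
    """Predict a run time above `percent`% of previously observed times."""
    if len(history) == 0:
        return requested                 # fall back to user-requested time
    pred = float(np.percentile(history, percent))
    return min(pred, requested) if requested is not None else pred

# Bimodal job class: many short debug runs plus some long production runs.
observed = np.concatenate([np.full(50, 5.0), np.full(20, 240.0)])  # minutes
print("mean-based prediction:", observed.mean())       # badly underpredicts
print("top-percent prediction:", top_percent_predict(observed, 90))
```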

More Details

Transient thermal design for athermal bond-line estimates

Proceedings of SPIE - The International Society for Optical Engineering

Gillund, Daniel P.; Kasunic, Keith J.

Transient operating temperatures often allow a lens cell to expand before the lens itself, potentially leading to stresses well in excess of the lens tensile strength. The transients thus affect the calculation of the athermal bond-line thickness, estimates of which have historically been based on thermal equilibrium conditions. In this paper, we present both analytical expressions and finite-element modeling results for thermal-transient bond-line design. Our results show that a cell with a large CTE and a bond thickness based on thermal transients is the best strategy for reducing the tensile stress on the bonded lens over a range of operating temperatures.
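
For context, the equilibrium estimate that such transient analyses must go beyond is the classic athermal bond-thickness relation, often attributed to Bayar. The symbols below follow a common convention and are an assumption on our part, not notation taken from the paper.

```latex
% Classic equilibrium athermal bond-thickness estimate (often attributed
% to Bayar). Symbols are assumed here: r = lens radius; alpha_c, alpha_l,
% alpha_b = CTEs of the cell, lens, and bond, respectively.
\begin{equation}
  t_b = \frac{r \left( \alpha_c - \alpha_l \right)}{\alpha_b - \alpha_c}
\end{equation}
```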

More Details

Monte Carlo modeling and design of a high-resolution hyperspectral computed tomography system with a multi-material patterned anode for material identification applications

Proceedings of SPIE - The International Society for Optical Engineering

Dalton, Gabriella D.; Laros, James H.; Clifford, Joshua M.; Kemp, Emily K.; Limpanukorn, Ben L.; Jimenez, Edward S.

Industrial and security communities leverage x-ray computed tomography for several applications in non-destructive evaluation such as material detection and metrology. Many of these applications ultimately reach a limit as most x-ray systems have a nonlinear mathematical operator due to the Bremsstrahlung radiation emitted from the x-ray source. This work proposes a design of a multi-metal patterned anode coupled with a hyperspectral X-ray detector to improve spatial resolution, absorption signal, and overall data quality for various quantitative applications. The union of a multi-metal patterned anode x-ray source with an energy-resolved photon counting detector permits the generation and detection of a preferential set of X-ray energy peaks. When photons about the peaks are detected, while photons outside this neighborhood are rejected, the overall quality of the image is improved by linearizing the operator that defines the image formation. Additionally, the smaller effective X-ray focal spot size allows for further improvement of the image quality by increasing resolution. Previous works use machine learning techniques to analyze the hyperspectral computed tomography signal and reliably identify and discriminate a wide range of materials based on a material's composition; improving data quality through a multi-material patterned anode will further enhance these identification and classification methods. This work presents initial investigations of a multi-metal patterned anode along with a hyperspectral detector using a general-purpose Monte Carlo particle transport code known as PHITS version 3.24. If successful, these results will have tremendous impact on several nondestructive evaluation applications in industry, security, and medicine.

More Details

Negative Perceptions About the Applicability of Source-to-Source Compilers in HPC: A Literature Review

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Milewicz, Reed M.; Pirkelbauer, Peter; Soundararajan, Prema; Ahmed, Hadia; Skjellum, Tony

A source-to-source (S2S) compiler is a type of translator that accepts the source code of a program written in a programming language as its input and produces an equivalent source code in the same or a different programming language. S2S techniques are commonly used to enable fluent translation between high-level programming languages, to perform large-scale refactoring operations, and to facilitate instrumentation for dynamic analysis. Negative perceptions about S2S's applicability in High Performance Computing (HPC) are studied and evaluated here. This is a first study that brings to light reasons why scientists do not use source-to-source techniques for HPC. The primary audience for this paper consists of those considering S2S technology in their HPC application work.

More Details

WEC Arrays with Power Packet Networks for Efficient Energy Storage and Grid Integration

Oceans Conference Record (IEEE)

Wilson, David G.; Robinett, Rush D.; Weaver, Wayne W.; Glover, Steven F.

This paper develops a power packet network (PPN) for integrating wave energy converter (WEC) arrays into microgrids. First, a simple AC Resistor-Inductor-Capacitor (RLC) circuit operating at a power factor of one is introduced and shown to be a PPN. Next, an AC inverter-based network is analyzed and shown to be a PPN. Then this basic idea is utilized to asynchronously connect a WEC array to an idealized microgrid without additional energy storage. Specifically, N WECs can be physically positioned such that the incoming regular waves will produce an output emulating an N-phase AC system, such that the PPN output power is constant. The final example demonstrates the benefits of utilizing PPN phasing by analyzing a grid-to-substation-to-WEC-array configuration. The numerical simulation results show that for ideal physical WEC buoy phasing of 60 and 120 degrees, the energy storage system (ESS) peak power and energy capacity requirements are minimized.

More Details

Experimental Validation of Crosstalk Minimization in Metallic Barriers with Simultaneous Ultrasonic Power and Data Transfer

IEEE International Ultrasonics Symposium, IUS

Sugino, Christopher; Oxandale, Sam; Allam, Ahmed; Arrington, Christian L.; St John, Christopher S.; Baca, Ehren B.; Steinfeldt, Jeffrey A.; Swift, Stephen H.; Reinke, Charles M.; Erturk, Alper; El-Kady, I.

For systems that require complete metallic enclosures, it is impossible to power and communicate with interior electronics using conventional electromagnetic techniques. Instead, pairs of ultrasonic transducers can be used to send and receive elastic waves through the enclosure, forming an equivalent electrical transmission line that bypasses the Faraday cage effect. These mechanical communication systems introduce the possibility for electromechanical crosstalk between channels on the same barrier, in which receivers output erroneous electrical signals due to ultrasonic guided waves generated by transmitters in adjacent communication channels. To minimize this crosstalk, this work investigates the use of a phononic crystal/metamaterial machined into the barrier via periodic grooving. Barriers with simultaneous ultrasonic power and data transfer are fabricated and tested to measure the effect of grooving on crosstalk between channels.

More Details

Computational Optimization of Mechanical Energy Transduction (COMET) Toolkit

IEEE International Ultrasonics Symposium, IUS

Kohtanen, Eetu; Sugino, Christopher; Allam, Ahmed; El-Kady, I.

Ultrasonic transducers can be leveraged to transmit power and data through metallic enclosures such as Faraday cages for which standard electromagnetic methods are infeasible. The design of these systems features a number of variables that must be carefully tuned for optimal data and power transfer rate and efficiency. The objective of this work is to present a toolkit, COMET (Computational Optimization of Mechanical Energy Transduction), which streamlines the design process and analysis of such transducer systems. The toolkit features flexible tools for introducing an arbitrary number of backing/bonding layers, material libraries, parameter sweeps, and optimization.

More Details

Introducing PRIMRE’s MRE Software Knowledge Hub (February 2021)

Proceedings of the European Wave and Tidal Energy Conference

Ruehl, Kelley M.; Topper, Mathew B.R.; Faltas, Mina A.; Lansing, Carina; Weers, Jon; Driscoll, Frederick

This paper focuses on the role of the Marine Renewable Energy (MRE) Software Knowledge Hub on the Portal and Repository for Information on Marine Renewable Energy (PRIMRE). The MRE Software Knowledge Hub provides online services for MRE software users and developers, and seeks to develop assessments and recommendations for improving MRE software in the future. Online software discovery platforms, known as the Code Hub and the Code Catalog, are provided. The Code Hub is a collection of open-source MRE software that includes a landing page with search functionality, linked to files hosted on the MRE Code Hub GitHub organization. The Code Catalog is a searchable online platform for discovery of useful (open-source or commercial) software packages, tools, codes, and other software products. To gather information about the existing MRE software landscape, a software survey is being performed, the preliminary results of which are presented herein. Initially, the data collected in the MRE software survey will be used to populate the MRE Software Knowledge Hub on PRIMRE, and future work will use data from the survey to perform a gap analysis and develop a vision for future software development. Additionally, as one of PRIMRE’s roles is to support development of MRE software among project partners, a silo of knowledge relating to best practices has been gathered. An early draft of new guidance developed from this knowledge is presented.

More Details

Bandwidth Enhancement Strategies for Acoustic Data Transmission by Piezoelectric Transduction

IEEE International Ultrasonics Symposium, IUS

Gerbe, Romain; Ruzzene, Massimo; Sugino, Christopher; Erturk, Alper; Steinfeldt, Jeffrey A.; Oxandale, Samuel W.; Reinke, Charles M.; El-Kady, I.

Several applications, such as underwater vehicles or waste containers, require the ability to transfer data from transducers enclosed by metallic structures. In these cases, Faraday shielding makes electromagnetic transmission highly inefficient, and suggests the employment of ultrasonic transmission as a promising alternative. While ultrasonic data transmission by piezoelectric transduction provides a practical solution, the amplitude of the transmitted signal strongly depends on acoustic resonances of the transmission line, which limits the bandwidth over which signals are sent and the rate of data transmission. The objective of this work is to investigate piezoelectric acoustic transducer configurations that enable data transmission at a relatively constant amplitude over large frequency bands. This is achieved through structural modifications of the transmission line, which includes layering of the transducers, as well as the introduction of electric circuits connected to both transmitting and receiving transducers. Both strategies lead to strong enhancements in the available bandwidth and show promising directions for the design of effective acoustic transmission across metallic barriers.

More Details

Real-Time Estimation of Microgrid Inertia and Damping Constant

IEEE Access

Tamrakar, Ujjwol; Copp, David A.; Nguyen, Tu A.; Hansen, Timothy M.; Tonkoski, Reinaldo

The displacement of rotational generation and the consequent reduction in system inertia is expected to have major stability and reliability impacts on modern power systems. Fast-frequency support strategies using energy storage systems (ESSs) can be deployed to maintain the inertial response of the system, but information regarding the inertial response of the system is critical for the effective implementation of such control strategies. In this paper, a moving horizon estimation (MHE)-based approach for online estimation of the inertia constant of low-inertia microgrids is presented. The inertia constant was estimated from frequency measurements, obtained locally via the ESS's phase-locked loop, in response to a non-intrusive excitation signal injected by the ESS. The proposed MHE formulation was first tested in a linearized power system model, followed by tests in a modified microgrid benchmark from Cordova, Alaska. Even under moderate measurement noise, the technique was able to estimate the inertia constant of the system well within ±20% of the true value. Estimates provided by the proposed method could be utilized for applications such as fast-frequency support, adaptive protection schemes, and planning and procurement of spinning reserves.
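
The paper's estimator is a moving horizon formulation; as a simpler illustration of the underlying identification problem, the sketch below fits the swing equation to synthetic frequency data by batch least squares. All values are invented.

```python
# Batch least-squares fit of the swing equation
#   2*H*d(df)/dt = dP - D*df
# to synthetic frequency data. The paper uses moving horizon estimation;
# this only illustrates the identification problem. All values invented.
import numpy as np

rng = np.random.default_rng(0)
H_true, D_true, dt = 4.0, 1.2, 0.01
t = np.arange(0.0, 5.0, dt)
dP = 0.1 * (t > 1.0)                        # per-unit step injected by the ESS

df = np.zeros_like(t)                       # per-unit frequency deviation
for k in range(len(t) - 1):                 # simple Euler simulation
    df[k + 1] = df[k] + dt * (dP[k] - D_true * df[k]) / (2.0 * H_true)
df += 1e-5 * rng.standard_normal(df.shape)  # light measurement noise

ddf = np.gradient(df, dt)
A = np.column_stack([ddf, df])              # dP = 2H*ddf + D*df
coef, *_ = np.linalg.lstsq(A, dP, rcond=None)
print(f"H = {coef[0] / 2:.2f} (true {H_true}), D = {coef[1]:.2f} (true {D_true})")
```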

More Details

Detachable Dry-Coupled Ultrasonic Power Transfer Through Metallic Enclosures

IEEE International Ultrasonics Symposium, IUS

Allam, Ahmed; Patel, Herit; Sugino, Christopher; Arrington, Christian L.; St John, Christopher S.; Steinfeldt, Jeffrey A.; Erturk, Alper; El-Kady, I.

Ultrasonic waves can be used to transfer power and data to electronic devices in sealed metallic enclosures. Two piezoelectric transducers are used to transmit and receive elastic waves that propagate through the metal. For efficient power transfer, both transducers are typically bonded to the metal or coupled with a gel, which limits device portability. We present an ultrasonic power transfer system with a detachable transmitter that uses a dry elastic layer and a magnetic joint for efficient coupling. We show that the system can deliver more than 2 W of power to an electric load with 50% efficiency.

More Details

Wideband Acoustic Data Transmission Through Staircase Piezoelectric Transducers

IEEE International Ultrasonics Symposium, IUS

Gerbe, Romain; Ruzzene, Massimo; Sugino, Christopher; Erturk, Alper; Steinfeldt, Jeffrey A.; Oxandale, Samuel W.; Reinke, Charles M.; El-Kady, I.

Ultrasound has been investigated for data communication across enclosed metallic structures affected by Faraday shielding. A typical channel consists of two piezoelectric transducers bonded across the structure, communicating through elastic mechanical waves. The rate of data communication is proportional to the transmission bandwidth, which can be widened by reducing the thickness of the transducers. However, thin transducers become brittle, are difficult to bond, and have a high capacitance that would draw a high electric current from function generators. This work focuses on investigating novel transducer shapes that provide constant transmission across a large bandwidth while maintaining a large enough thickness to avoid brittleness and electrical impedance constraints. The transducers are shaped according to a staircase thickness distribution, whose geometry was designed using an analytical model of its electromechanical behavior formulated for this purpose.

More Details

A Bayesian Machine Learning Framework for Selection of the Strain Gradient Plasticity Multiscale Model

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Tan, Jingye; Maupin, Kathryn A.; Faghihi, Danial

A class of sequential multiscale models investigated in this study consists of discrete dislocation dynamics (DDD) simulations and continuum strain gradient plasticity (SGP) models to simulate the size effect in plastic deformation of metallic micropillars. The high-fidelity DDD explicitly simulates the microstructural (dislocation) interactions. These simulations account for the effect of dislocation densities and their spatial distributions on plastic deformation. The continuum SGP captures the size-dependent plasticity in micropillars using two length parameters. The main challenge in predictive DDD-SGP multiscale modeling is selecting the proper constitutive relations for the SGP model, which is necessitated by the uncertainty in computational prediction due to DDD's microstructural randomness. This contribution addresses these challenges using a Bayesian learning and model selection framework. A family of SGP models with different fidelities and complexities is constructed using various constitutive relation assumptions. The parameters of the SGP models are then learned from a set of training data furnished by the DDD simulations of micropillars. Bayesian learning allows the assessment of the credibility of plastic deformation prediction by characterizing the microstructural variability and the uncertainty in training data. Additionally, the family of the possible SGP models is subjected to a Bayesian model selection to pick the model that adequately explains the DDD training data. The framework proposed in this study enables learning the physics-based multiscale model from uncertain observational data and determining the optimal computational model for predicting complex physical phenomena, i.e., size effect in plastic deformation of micropillars.
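
As a generic illustration of the model-selection step, the sketch below scores a family of models of increasing complexity with BIC, a large-sample proxy for Bayesian evidence. The polynomial family and synthetic data merely stand in for the SGP variants and DDD training data.

```python
# Fit models of increasing complexity and score them with BIC, a
# large-sample proxy for Bayesian evidence. Polynomials and synthetic
# data stand in for the SGP model family and DDD training data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.05 * rng.standard_normal(x.shape)

n = len(y)
for degree in range(1, 6):
    coef = np.polyfit(x, y, degree)
    sigma2 = np.mean((y - np.polyval(coef, x)) ** 2)   # residual variance
    k = degree + 2                      # polynomial coefficients + noise term
    bic = n * np.log(sigma2) + k * np.log(n)
    print(f"degree {degree}: BIC = {bic:7.1f}")        # lowest BIC wins
```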

More Details

Supervisory Optimal Control for Photovoltaics Connected to an Electric Power Grid

IET Conference Proceedings

Young, Joseph; Weaver, Wayne; Wilson, David G.; Robinett, Rush D.

The following research presents an optimal control framework called Oxtimal that facilitates the efficient use and control of photovoltaic (PV) solar arrays. This framework consists of reduced order models (ROM) of photovoltaics and DC connection components connected to an electric power grid (EPG), a discretization of the resulting state equations using an orthogonal spline collocation method (OSCM), and an optimization driver to solve the resulting formulation. Once formulated, the framework is validated using realistic solar profiles and loads from actual residential applications.

More Details

Using particle image velocimetry to determine turbulence model parameters

AIAA Journal

Miller, Nathan M.; Beresh, Steven J.

The primary parameter of a standard k-ϵ model, Cμ, was calculated from stereoscopic particle image velocimetry (PIV) data for a supersonic jet exhausting into a transonic crossflow. This required the determination of turbulent kinetic energy, turbulent eddy viscosity, and turbulent energy dissipation rate. Image interrogation was optimized, with different procedures used for mean strain rates and Reynolds stresses, to produce useful turbulent eddy viscosity fields. The eddy viscosity was calculated by a least-squares fit to all components of the three-dimensional strain-rate tensor that were available from the PIV data. This eliminated artifacts and noise observed when using a single strain component. Local dissipation rates were determined via Kolmogorov’s similarity hypotheses and the second-order structure function. The eddy viscosity and dissipation rates were then combined to determine Cμ. Considerable spatial variation was observed in Cμ, with the highest values found in regions where turbulent kinetic energy was relatively low but where turbulent mixing was important, e.g., along the high-strain jet edges and in the wake region. This suggests that use of a constant Cμ in modeling may lead to poor Reynolds stress predictions at mixing interfaces. A data-driven modeling approach that can predict this spatial variation of Cμ based on known state variables may lead to improved simulation results without the need for calibration.
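
The closure relation behind this calculation is ν_t = C_μ k²/ϵ, so C_μ follows pointwise once the three fields are estimated. The sketch below uses synthetic placeholders for the PIV-derived quantities.

```python
# The k-epsilon closure gives nu_t = C_mu * k**2 / eps, so C_mu follows
# pointwise from the three fields. Synthetic arrays stand in for the
# PIV-derived turbulence quantities here.
import numpy as np

rng = np.random.default_rng(0)
shape = (128, 128)
k = 10.0 * (1.0 + 0.2 * rng.random(shape))      # TKE [m^2/s^2]
eps = 500.0 * (1.0 + 0.2 * rng.random(shape))   # dissipation rate [m^2/s^3]
nu_t = 0.015 * (1.0 + rng.random(shape))        # eddy viscosity [m^2/s]

C_mu = nu_t * eps / k**2                        # rearranged closure
print("C_mu range:", C_mu.min(), C_mu.max())    # standard model value is 0.09
```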

More Details

Detailed measurements of transient two-stage ignition and combustion processes in high-pressure spray flames using simultaneous high-speed formaldehyde PLIF and schlieren imaging

Proceedings of the Combustion Institute

Sim, Hyung S.; Weiss, Lukas; Maes, Noud; Pickett, Lyle M.; Skeen, Scott A.

The low- and high-temperature ignition and combustion processes in a high-pressure spray flame of n-dodecane were investigated using simultaneous 50-kHz formaldehyde (HCHO) planar laser-induced fluorescence (PLIF) and 100-kHz schlieren imaging. PLIF measurements were facilitated through the use of a pulse-burst-mode Nd:YAG laser, and the high-speed HCHO PLIF signal was imaged using a non-intensified CMOS camera with dynamic background emission correction. The experiments were conducted in the Sandia constant-volume preburn vessel equipped with a new Spray A injector. The effects of ambient conditions on the ignition delay times of the two-stage ignition events, HCHO structures, and lift-off length values were examined. Consistent with past studies of traditional Spray A flames, the formation of HCHO was first observed in the jet peripheries where the equivalence ratio (Φ) is expected to be leaner and hotter and then grows in size and in intensity downstream into the jet core where Φ is expected to be richer and colder. The measurements showed that the formation and propagation of HCHO from the leaner to richer region leads to high-temperature ignition events, supporting the identification of a phenomenon called “cool-flame wave propagation” during the transient ignition process. Subsequent high-temperature ignition was found to consume the previously formed HCHO in the jet head, while the formation of HCHO persisted in the fuel-rich zone near the flame base over the entire combustion period.

More Details

Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 6.16 Theory Manual

Dalbey, Keith R.; Eldred, Michael S.; Geraci, Gianluca; Jakeman, John D.; Maupin, Kathryn A.; Monschke, Jason A.; Seidl, Daniel T.; Tran, Anh; Menhorn, Friedrich; Zeng, Xiaoshu

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota’s iterative analysis capabilities.

More Details

What fuel properties enable higher thermal efficiency in spark-ignited engines?

Progress in Energy and Combustion Science

Szybist, James P.; Busch, Stephen B.; Mccormick, Robert L.; Pihl, Josh A.; Splitter, Derek A.; Ratcliff, Matthew A.; Kolodziej, Christopher P.; Storey, John M.E.; Moses-Debusk, Melanie; Vuilleumier, David; Sjoberg, Carl M.; Sluder, C.S.; Rockstroh, Toby; Miles, Paul C.

The Co-Optimization of Fuels and Engines (Co-Optima) initiative from the US Department of Energy aims to co-develop fuels and engines in an effort to maximize energy efficiency and the utilization of renewable fuels. Many of these renewable fuel options have fuel chemistries that are different from those of petroleum-derived fuels. Because practical market fuels need to meet specific fuel-property requirements, a chemistry-agnostic approach to assessing the potential benefits of candidate fuels was developed using the Central Fuel Property Hypothesis (CFPH). The CFPH states that fuel properties are predictive of the performance of the fuel, regardless of the fuel's chemical composition. In order to use this hypothesis to assess the potential of fuel candidates to increase efficiency in spark-ignition (SI) engines, the individual contributions towards efficiency potential in an optimized engine must be quantified in a way that allows the individual fuel properties to be traded off for one another. This review article begins by providing an overview of the historical linkages between fuel properties and engine efficiency, including the two dominant pathways currently being used by vehicle manufacturers to reduce fuel consumption. Then, a thermodynamic-based assessment to quantify how six individual fuel properties can affect efficiency in SI engines is performed: research octane number, octane sensitivity, latent heat of vaporization, laminar flame speed, particulate matter index, and catalyst light-off temperature. The relative effects of each of these fuel properties are combined into a unified merit function that is capable of assessing the fuel property-based efficiency potential of fuels with conventional and unconventional compositions.

More Details

Elastic Depths for Detecting Shape Anomalies in Functional Data

Technometrics

Tucker, James D.; Harris, Trevor; Shand, Lyndsay S.; Bolin, Anthony W.

We propose a new family of depth measures called the elastic depths that can be used to greatly improve shape anomaly detection in functional data. Shape anomalies are functions that have considerably different geometric forms or features from the rest of the data. Identifying them is generally more difficult than identifying magnitude anomalies because shape anomalies are often not distinguishable from the bulk of the data with visualization methods. The proposed elastic depths use the recently developed elastic distances to directly measure the centrality of functions in the amplitude and phase spaces. Measuring shape outlyingness in these spaces provides a rigorous quantification of shape, which gives the elastic depths a strong theoretical and practical advantage over other methods in detecting shape anomalies. A simple boxplot and thresholding method is introduced to identify shape anomalies using the elastic depths. We assess the elastic depths' detection skill on simulated shape outlier scenarios and compare them against popular shape anomaly detectors. Finally, we use hurricane trajectories to demonstrate the elastic depth methodology on manifold-valued functional data.
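
The boxplot-and-threshold step can be sketched as follows, with placeholder depth values standing in for the amplitude and phase elastic depths (whose computation requires the elastic-distance machinery not reproduced here).

```python
# Boxplot thresholding on depth values: flag functions whose depth falls
# below Q1 - 1.5*IQR. `depths` is a placeholder for the amplitude/phase
# elastic depths computed from elastic distances.
import numpy as np

def boxplot_anomalies(depths, whisker=1.5):
    q1, q3 = np.percentile(depths, [25, 75])
    cutoff = q1 - whisker * (q3 - q1)
    return np.where(depths < cutoff)[0]

rng = np.random.default_rng(0)
depths = rng.uniform(0.5, 1.0, size=100)   # typical functions: high depth
depths[[7, 42]] = 0.05                     # shape outliers: very low depth
print("flagged indices:", boxplot_anomalies(depths))
```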

More Details

Parameterized neural ordinary differential equations: Applications to computational physics problems

Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences

Lee, Kookjin L.; Parish, Eric J.

This work proposes an extension of neural ordinary differential equations (NODEs) by introducing an additional set of ODE input parameters to NODEs. This extension allows NODEs to learn multiple dynamics specified by the input parameter instances. Our extension is inspired by the concept of parameterized ODEs, which are widely investigated in computational science and engineering contexts, where characteristics of the governing equations vary over the input parameters. We apply the proposed parameterized NODEs (PNODEs) for learning latent dynamics of complex dynamical processes that arise in computational physics, which is an essential component for enabling rapid numerical simulations for time-critical physics applications. For this, we propose an encoder-decoder-type framework, which models latent dynamics as PNODEs. We demonstrate the effectiveness of PNODEs on benchmark problems from computational physics.
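
A minimal sketch of the parameterized-NODE idea follows: the vector field takes the ODE parameters as an extra input, so a single model covers a family of dynamics. The untrained two-layer network and fixed-step RK4 loop are illustrative only; training by backpropagation through the solver is omitted.

```python
# Parameterized NODE flavor: f(z; mu) concatenates state and parameters so
# one vector field covers a family of dynamics. The two-layer network is
# untrained; learning (backprop through the solver) is omitted.
import numpy as np

rng = np.random.default_rng(0)
dz, dmu, width = 4, 2, 16
W1 = rng.standard_normal((width, dz + dmu)) / np.sqrt(dz + dmu)
W2 = rng.standard_normal((dz, width)) / np.sqrt(width)

def f(z, mu):
    """Parameterized vector field evaluated at state z, parameters mu."""
    return W2 @ np.tanh(W1 @ np.concatenate([z, mu]))

def integrate(z0, mu, dt=0.01, steps=100):
    """Fixed-step classical RK4 roll-out of the PNODE."""
    z = z0
    for _ in range(steps):
        k1 = f(z, mu)
        k2 = f(z + 0.5 * dt * k1, mu)
        k3 = f(z + 0.5 * dt * k2, mu)
        k4 = f(z + dt * k3, mu)
        z = z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return z

z0 = rng.standard_normal(dz)
for mu in (np.array([0.1, 0.0]), np.array([2.0, -1.0])):
    print(mu, "->", integrate(z0, mu))   # same model, different dynamics
```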

More Details

The effects of earth model uncertainty on the inversion of seismic data for seismic source functions

Geophysical Journal International

Poppeliers, Christian P.; Preston, Leiph A.

We use Monte Carlo simulations to explore the effects of earth model uncertainty on the estimation of the seismic source time functions that correspond to the six independent components of the point source seismic moment tensor. Specifically, we invert synthetic data using Green's functions estimated from a suite of earth models that contain stochastic density and seismic wave-speed heterogeneities. We find that the primary effect of earth model uncertainty on the data is that the amplitude of the first-arriving seismic energy is reduced, and that this amplitude reduction is proportional to the magnitude of the stochastic heterogeneities. Also, we find that the amplitude of the estimated seismic source functions can be under- or overestimated, depending on the stochastic earth model used to create the data. This effect is unpredictable: uncertainty in the earth model can introduce biases in the amplitude of the estimated seismic source functions that cannot be anticipated in advance.

More Details

Sodium Fire Collaborative Study Progress -- CNWG Fiscal Year 2020

Laros, James H.; Aoyagi, Mitsuhiro

This report discusses the progress of the collaboration between Sandia National Laboratories (Sandia) and the Japan Atomic Energy Agency (JAEA) on sodium fire research in fiscal year 2020. First, the current sodium pool fire model in MELCOR, which is adapted from the CONTAIN-LMR code, is discussed. The associated sodium fire input requirements are also presented. These input requirements are flexible enough to permit further model development via control functions to enhance the current model without modifying the source code. The theoretical pool fire model improvement developed at Sandia is discussed. A control function model has been developed from this improvement. Then, the validation study of the sodium pool fire model in MELCOR carried out by staff at both Sandia and JAEA is described. To validate this pool fire model with the enhancement, a JAEA sodium pool fire experiment (the F7-1 test) is used. The results of the calculation are discussed, as well as suggestions for further model improvement. Finally, recommendations are made for new MELCOR simulations for the next fiscal year, 2021.

More Details

Hotel Room Computational Fluid Dynamics to Investigate Airborne Pathogen Dispersal Patterns

Rodriguez, Salvador B.

A hotel room unit consisting of a bedroom and bathroom was modelled using computational fluid dynamics (CFD) to investigate airborne pathogen dispersal patterns. The full-scale model includes a ‘typical’ hotel room configuration, furniture, and vents. The air sources and sinks include a bathroom vent, a heating, ventilation, and cooling (HVAC) unit located in the bedroom, and a ½” gap at the bottom of the entry door. In addition, the entry door and window can be opened or closed, as desired. Three key configuration simulations were conducted: 1) both the bathroom vent and HVAC were on, 2) only the HVAC was on, and 3) only the bathroom vent was on. If the HVAC air is from a fresh, clean source, or passes through a high-efficiency filter/UV device, then the first configuration is the safest, as contaminated air is highly reduced. The second configuration is also safe, but does not benefit from the outsourcing of potentially-infected air, such as contaminated air flowing through an ineffective filter. The third configuration should be avoided, as the bathroom vent causes air to flow in from the hallway, which can be of dubious origin. The CFD simulations also showed that recirculation and swirling regions tend to accumulate the largest concentrations of heavier airborne particles, pathogens, dust, etc. These regions are associated with the largest turbulence kinetic energy (TKE) and tend to occur in areas with flow recirculation and in corners. Therefore, TKE presents a reasonable metric to guide the strategic location of pathogen mitigation devices. The simulations show complex flow patterns with distinct upper and lower flow regions, swirling flow, and significant levels of turbulent mixing. These simulations provide intriguing insights that can be applied to help mitigate pathogen aerosol dispersal, generate building design guidelines, and guide the strategic placement of mitigation devices, such as ultraviolet (UV) light, supplemental fans, and filters.

More Details

Rock Fracturing Using High-Pressure Ethylene/Nitrous Oxide Detonations

Grubelich, Mark C.; Venkatesh, Prashanth B.; D'Entremont, James H.; Meyer, Scott E.; Bane, Sally P.M.

The present work investigates high initial pressure detonations of a stoichiometric mixture of ethylene and nitrous oxide (C2H4 + 6N2O) as a method of fracturing rock beneath the ground surface. These tests were conducted at a test site operated by the Energetic Materials Research and Testing Center (EMRTC), Socorro, New Mexico. The volume under the surface used for testing (called the Down Hole Assembly) consists of a 0.438 in. ID x 50 ft. long stainless-steel tube running down from the test site to a well bore which is 3 in. ID x 10 ft. long; the rock in the well bore is exposed to the propagating combustion wave. The testing carried out at Zucrow Laboratories in the smaller, alloy steel combustion vessel provided a scaling of pressures expected in the well bore. The combustion is initiated by energizing an EBW (Exploding Bridge Wire) above the ground surface. The experimental setup accommodates one high pressure (100,000 psia) transducer to measure the pressure peak, placed approximately 5 ft. above the ground surface and 5 ft. downstream of the EBW. The focus of this series of experiments is to investigate the dependence of fracturing of the rock beneath the surface on the initial pressure of the mixture of ethylene and nitrous oxide. Experiments were carried out at initial pressures varying between 125 psia and 300 psia. The transducer recorded elevated pressures, which were 2.3 to 2.6 times in excess of the Chapman-Jouguet (CJ) values. The experimental results are discussed and explained in this report.

More Details

Human Factors Considerations for Automating Microreactors

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Fleming Lindsley, Elizabeth S.; Nyre-Yu, Megan N.; Luxat, David L.

Many microreactor (<10 MW) sites are expected to be remote locations requiring off-grid power or, in some cases, military bases. However, before this new class of nuclear reactor can be fully developed and implemented by designers, an effort must be made to explore the technical issues and provide reasonable assurance to the public regarding health and safety impacts. One issue not yet fully explored is the possible change in role of the operations and support personnel. Due to the passive safety features of microreactors and their low level of nuclear material, microreactor facilities may automate more functions and rely on inherent safety features more than predecessor nuclear power plants. In some instances, human operators may not be located onsite and may instead be operating or monitoring the facility from a remote location. Some designs also call for operators to supervise and control multiple microreactors from the control room. This paper explores issues around reduced staffing of microreactors, highlights the historical safety functions associated with human operators, assesses current licensing requirements for appropriateness to varying levels of personnel support, and describes a recommended regulatory approach for reviewing the impact of reduced staff on the operation of microreactors.

More Details

LAMP Diagnostics at the Point-of-Care: Emerging Trends and Perspectives for the Developer Community

Expert Review of Molecular Diagnostics

Moehling, Taylor J.; Choi, Gihoon; Dugan, Lawrence C.; Salit, Marc; Meagher, Robert M.

More Details
Results 12401–12600 of 96,771