We computationally explore the optical and elastic modes necessary for acoustoelectrically enhanced Brillouin interactions. The large simulated piezoelectric (k² ≈ 6%) and optomechanical (|g0| ≈ 8000 (rad/s)√m) couplings theoretically predict a performance enhancement of several orders of magnitude in Brillouin-based photonic technologies.
Using neural networks to solve variational problems and other scientific machine learning tasks has been limited by a lack of consistency and an inability to exactly integrate expressions involving neural network architectures. We address these limitations by formulating a polynomial-spline network, a novel shallow multilayer perceptron (MLP) architecture that incorporates free-knot B-spline basis functions into a polynomial mixture-of-experts model. Effectively, our architecture performs piecewise polynomial approximation on each cell of a trainable partition of unity while ensuring that the MLP and its derivatives can be integrated exactly, obviating reliance on sampling or quadrature and enabling error-free computation of variational forms. We demonstrate hp-convergence for regression problems at the rates expected from approximation theory and solve elliptic problems in one and two dimensions, with favorable comparisons to adaptive finite elements.
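As a rough illustration of this idea (a minimal 1D sketch under our own assumptions, not the authors' implementation), the architecture can be read as a mixture of local polynomial experts weighted by a B-spline partition of unity over trainable knots; the function names, knot placement, and polynomial degree below are hypothetical.

```python
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear B-spline (hat) functions on the given knots.
    For x inside [knots[0], knots[-1]] they are nonnegative and sum to one,
    i.e. they form a partition of unity."""
    n = len(knots)
    phi = np.zeros((n, len(x)))
    for i in range(n):
        if i > 0:                                   # rising edge on [knots[i-1], knots[i]]
            l, k = knots[i - 1], knots[i]
            phi[i] += np.where((x > l) & (x <= k), (x - l) / (k - l), 0.0)
        else:                                       # clamp at the left boundary
            phi[i] += np.where(x <= knots[0], 1.0, 0.0)
        if i < n - 1:                               # falling edge on [knots[i], knots[i+1]]
            k, r = knots[i], knots[i + 1]
            phi[i] += np.where((x > k) & (x < r), (r - x) / (r - k), 0.0)
    return phi

def spline_poly_net(x, knots, coeffs):
    """Mixture of polynomial experts: f(x) = sum_i phi_i(x) * p_i(x), where p_i is
    a local polynomial with coefficients coeffs[i] (highest degree first)."""
    phi = hat_basis(x, knots)
    experts = np.stack([np.polyval(c, x) for c in coeffs])
    return np.sum(phi * experts, axis=0)

# toy usage: four (notionally trainable) knots and one quadratic expert per knot
x = np.linspace(0.0, 1.0, 200)
knots = np.array([0.0, 0.3, 0.7, 1.0])
coeffs = np.random.default_rng(0).standard_normal((len(knots), 3))
y = spline_poly_net(x, knots, coeffs)
```

Because each expert is a polynomial and each weight is piecewise polynomial, their product integrates in closed form cell by cell, which is the property the abstract highlights for avoiding quadrature.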
The use of containerization technology in high performance computing (HPC) workflows has increased substantially in recent years because it makes workflows much easier to develop and deploy. Although many HPC workflows comprise multiple datasets and multiple applications, they have traditionally been bundled together into one monolithic container. This hinders the ability to trace the thread of execution, preventing scientists from establishing data provenance or reproducing the workflow. To address this problem, we extend the functionality of a popular HPC container runtime, Singularity. We implement both the ability to compose fine-grained containerized workflows and the ability to execute these workflows within the Singularity runtime with automatic metadata collection. Specifically, the new functionality collects a record trail of execution and establishes data provenance. We demonstrate our augmented Singularity with an earth science workflow, SOMOSPIE: the workflow is composed into fine-grained containers, and the collected metadata is used to trace, explain, and reproduce the prediction of soil moisture at fine resolution.
For the model-based control of low-voltage microgrids, state and parameter information is required. Different optimal estimation techniques can be employed for this purpose; however, these techniques require knowledge of the noise covariances (process and measurement noise). Incorrect noise covariance values can deteriorate estimator performance, which in turn can reduce overall controller performance. This paper presents a method to identify noise covariances for voltage dynamics estimation in a microgrid, based on the autocovariance least squares technique. A simulation study of a simplified 100 kVA, 208 V microgrid system in MATLAB/Simulink validates the method. Results show that the estimated covariances are close to the actual values for Gaussian noise, while non-Gaussian noise yields slightly larger errors.
Incorrect modeling of control characteristics for inverter-based resources (IBRs) can affect the accuracy of electric power system studies. In many distribution system contexts, the control settings for behind-the-meter (BTM) IBRs are unknown. This paper presents an efficient method for selecting a small number of time series samples from net load meter data that can be used to reconstruct or classify the control settings of BTM IBRs. Sparse approximation techniques are used to select the time series samples that make the inversion of a matrix of candidate responses as well-conditioned as possible. We verify these methods on 451 actual advanced metering infrastructure (AMI) datasets from loads with BTM IBRs. Selecting 60 samples from 15-minute granularity time series, we recover BTM control characteristics with a mean error of less than 0.2 kVAR.
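As a loose illustration of the selection criterion (not the paper's sparse approximation algorithm), one can greedily choose time samples so that the retained rows of a candidate-response matrix keep its inversion well conditioned; the matrix layout and greedy strategy below are assumptions for the sketch.

```python
import numpy as np

def greedy_select_rows(A, n_samples):
    """Greedily pick rows of the candidate-response matrix A (time samples x
    candidate responses) so that the selected submatrix has the smallest
    achievable condition number. Returns indices of the chosen time samples."""
    chosen = []
    remaining = list(range(A.shape[0]))
    for _ in range(n_samples):
        best_idx, best_cond = None, np.inf
        for i in remaining:
            sub = A[chosen + [i], :]
            c = np.linalg.cond(sub)   # ratio of largest to smallest singular value
            if c < best_cond:
                best_idx, best_cond = i, c
        chosen.append(best_idx)
        remaining.remove(best_idx)
    return chosen

# toy usage: 96 quarter-hour samples, 5 candidate IBR control responses
rng = np.random.default_rng(0)
A = rng.standard_normal((96, 5))
rows = greedy_select_rows(A, n_samples=10)
```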
This paper proposes an implementation of Graph Neural Networks (GNNs) for traveling wave (TW)-based protection schemes in distribution power systems. Simulated faults on the IEEE 34 system are processed using the Karrenbauer transform and the stationary wavelet transform (SWT), and the energy of the resulting signals is calculated using Parseval's theorem. These data are used to train Graph Convolutional Networks (GCNs) to perform fault zone location. Several levels of measurement noise are considered for comparison. The results show strong performance, above 90% for the most developed models, and outline a fast, reliable, asynchronous, and distributed protection scheme for distribution-level networks.
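For reference, graph convolutional networks of this kind typically follow the layer-wise propagation rule of Kipf and Welling (the specific architecture used in the paper may differ), where $\tilde{A} = A + I$ is the adjacency matrix of the feeder graph with self-loops and $\tilde{D}$ its degree matrix:

$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\right)$,

with $H^{(0)}$ holding the per-node wavelet-energy features and $W^{(l)}$ the trainable weights of layer $l$.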
We present a field-deployable microfluidic immunoassay device in response to the need for sensitive, quantitative, and high-throughput protein detection at point-of-need. The portable microfluidic system facilitates eight magnetic bead-based sandwich immunoassays from raw samples in 45 minutes. An innovative bead actuation strategy was incorporated into the system to automate multiple sample process steps with minimal user intervention. The device is capable of quantitative and sensitive protein analysis with a 10 pg/ml detection limit from interleukin 6-spiked human serum samples. We envision the reported device offering ultrasensitive point-of-care immunoassay tests for timely and accurate clinical diagnosis.
This paper presents a run-to-run (R2R) controller for mechanical serial sectioning (MSS). MSS is a destructive material analysis process that repeatedly removes a thin layer of material and images the exposed surface. The images are then used to gain insight into the material properties and often to construct a 3-dimensional reconstruction of the material sample. Currently, an experienced human operator selects the MSS parameters to achieve the desired thickness. The proposed R2R controller automates this process while improving the precision of the material removal. The controller solves an optimization problem designed to minimize the variance of the material removal subject to achieving the expected target removal. This optimization problem was embedded in an R2R framework to provide iterative feedback for disturbance rejection and convergence to the target removal amount. Since an analytic model of the MSS system is unavailable, we adopted a data-driven approach to synthesize our R2R controller from historical data. The proposed R2R controller is demonstrated through simulations. Future work will empirically demonstrate the proposed R2R controller through experiments with a real MSS system.
The penetration of renewable energy resources (RER) and energy storage systems (ESS) into the power grid has accelerated in recent years due to aggressive emission and RER penetration targets. The integrated resource planning (IRP) framework can help ensure long-term resource adequacy while satisfying RER integration and emission reduction targets in a cost-effective and reliable manner. In this paper, we present pIRP (probabilistic Integrated Resource Planning), an open-source Python-based software tool designed for optimal portfolio planning for an RER- and ESS-rich future grid and for addressing the capacity expansion problem. The tool, which is planned for public release, offers ESS and RER modeling capabilities and enhanced uncertainty handling that make it one of the more advanced non-commercial IRP tools currently available. Additionally, the tool is equipped with an intuitive graphical user interface and expansive plotting capabilities. Impacts of uncertainties in the system are captured using Monte Carlo simulations, letting users analyze hundreds of scenarios with detailed scenario reports. A linear programming based architecture is adopted, which ensures sufficiently fast solution times while considering hundreds of scenarios and characterizing profile risks with varying levels of RER and ESS penetration. Results for a test case using data from parts of the Eastern Interconnection are provided to demonstrate the capabilities offered by the tool.
Non-volatile memory arrays require select devices to ensure accurate programming. The one-selector one-resistor (1S1R) array, in which a two-terminal nonlinear select device is placed in series with a resistive memory element, is attractive for high-density data storage; however, the effect of the nonlinear select device on the accuracy of analog in-memory computing has not been explored. This work evaluates the impact of select and memory device properties on the results of analog matrix-vector multiplications. We integrate nonlinear circuit simulations into CrossSim and perform end-to-end neural network inference simulations to study how the select device affects inference accuracy. We propose an adjustment to the input voltage that effectively compensates for the electrical load of the select device. Our results show that for deep residual networks trained on CIFAR-10, a compensation that is uniform across all devices in the system can mitigate these effects over a wide range of values for the select device I-V steepness and memory device on/off ratio. A realistic I-V steepness of 60 mV/dec can yield a CIFAR-10 accuracy within 0.44% of the floating-point accuracy.
Numerical simulations of Greenland and Antarctic ice sheets involve the solution of large-scale highly nonlinear systems of equations on complex shallow geometries. This work is concerned with the construction of Schwarz preconditioners for the solution of the associated tangent problems, which are challenging for solvers mainly because of the strong anisotropy of the meshes and wildly changing boundary conditions that can lead to poorly constrained problems on large portions of the domain. Here, two-level generalized Dryja-Smith-Widlund (GDSW)-type Schwarz preconditioners are applied to different land ice problems, i.e., a velocity problem, a temperature problem, as well as the coupling of the former two problems. We employ the message passing interface (MPI)- parallel implementation of multilevel Schwarz preconditioners provided by the package FROSch (fast and robust Schwarz) from the Trilinos library. The strength of the proposed preconditioner is that it yields out-of-the-box scalable and robust preconditioners for the single physics problems. To the best of our knowledge, this is the first time two-level Schwarz preconditioners have been applied to the ice sheet problem and a scalable preconditioner has been used for the coupled problem. The preconditioner for the coupled problem differs from previous monolithic GDSW preconditioners in the sense that decoupled extension operators are used to compute the values in the interior of the subdomains. Several approaches for improving the performance, such as reuse strategies and shared memory OpenMP parallelization, are explored as well. In our numerical study we target both uniform meshes of varying resolution for the Antarctic ice sheet as well as nonuniform meshes for the Greenland ice sheet. We present several weak and strong scaling studies confirming the robustness of the approach and the parallel scalability of the FROSch implementation. Among the highlights of the numerical results are a weak scaling study for up to 32 K processor cores (8 K MPI ranks and 4 OpenMP threads) and 566 M degrees of freedom for the velocity problem as well as a strong scaling study for up to 4 K processor cores (and MPI ranks) and 68 M degrees of freedom for the coupled problem.
This paper describes how the performance of motion primitive-based planning algorithms can be improved using reinforcement learning. Specifically, we describe and evaluate a framework that autonomously improves the performance of a primitive-based motion planner. The improvement process consists of three phases: exploration, extraction, and reward updates. This process can be iterated continuously to provide successive improvement. The exploration step generates new trajectories, and the extraction step identifies new primitives from these trajectories. These primitives are then used to update rewards for continued exploration. This framework required novel shaping rewards, development of a primitive extraction algorithm, and modification of the Hybrid A* algorithm. The framework is tested on a navigation task using a nonlinear F-16 model. The framework autonomously added 91 motion primitives to the primitive library and reduced average path cost by 21.6 s, or 35.75% of the original cost. The learned primitives are applied to an obstacle field navigation task, which was not used in training, and reduced path cost by 16.3 s, or 24.1%. Additionally, two heuristics for the modified Hybrid A* algorithm are designed to improve effective branching factor.
Carbon sequestration is a growing field that requires subsurface monitoring for potential leakage of the sequestered fluids through the casing annulus. Sandia National Laboratories (SNL) is developing a smart collar system for downhole fluid monitoring during carbon sequestration. This technology is part of a collaboration between SNL, University of Texas at Austin (UT Austin) (project lead), California Institute of Technology (Caltech), and Research Triangle Institute (RTI) to obtain real-time monitoring of the movement of fluids in the subsurface through direct formation measurements. Caltech and RTI are developing millimeter-scale radio frequency identification (RFID) sensors that can sense carbon dioxide, pH, and methane. These sensors will be impervious to cement, and as such, can be mixed with cement and poured into the casing annulus. The sensors are powered and communicate via standard RFID protocol at 902-928 MHz. SNL is developing a smart collar system that wirelessly gathers RFID sensor data from the sensors embedded in the cement annulus and relays that data to the surface via a wired pipe that utilizes inductive coupling at the collar to transfer data through each segment of pipe. This system cannot transfer a direct current signal to power the smart collar, and therefore, both power and communications will be implemented using alternating current and electromagnetic signals at different frequencies. The complete system will be evaluated at UT Austin's Devine Test Site, which is a highly characterized and hydraulically fractured site. This is the second year of the three-year effort, and a review of SNL's progress on the design and implementation of the smart collar system is provided.
Measurements that occur within the internal layers of a quantum circuit—midcircuit measurements—are a useful quantum-computing primitive, most notably for quantum error correction. Midcircuit measurements have both classical and quantum outputs, so they can be subject to error modes that do not exist for measurements that terminate quantum circuits. Here we show how to characterize midcircuit measurements, modeled by quantum instruments, using a technique that we call quantum instrument linear gate set tomography (QILGST). We then apply this technique to characterize a dispersive measurement on a superconducting transmon qubit within a multiqubit system. By varying the delay time between the measurement pulse and subsequent gates, we explore the impact of residual cavity photon population on measurement error. QILGST can resolve different error modes and quantify the total error from a measurement; in our experiment, for delay times above 1000 ns we measure a total error rate (i.e., half diamond distance) of ϵ⋄ = 8.1 ± 1.4%, a readout fidelity of 97.0 ± 0.3%, and output quantum-state fidelities of 96.7 ± 0.6% and 93.7 ± 0.7% when measuring 0 and 1, respectively.
This chapter deals with experimental dynamic substructures, which are reduced order models that can be coupled with each other or with finite element derived substructures to estimate the system response of the coupled substructures. A unifying theoretical framework in the physical, modal, or frequency domain is reviewed with examples. The major issues that have hindered experimentally based substructuring are addressed. An example is demonstrated with the transmission simulator method, which overcomes the major historical difficulties. Guidelines for transmission simulator design are presented.
Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Seraj, Esmaeil; Wang, Zheyuan; Paleja, Rohan; Patel, Anirudh; Gombolay, Matthew
High-performing teams learn intelligent and efficient communication and coordination strategies to maximize their joint utility. These teams implicitly understand the different roles of heterogeneous team members and adapt their communication protocols accordingly. Multi-Agent Reinforcement Learning (MARL) seeks to develop computational methods for synthesizing such coordination strategies, but formulating models for heterogeneous teams with different state, action, and observation spaces has remained an open problem. Without properly modeling agent heterogeneity, as in prior MARL work that leverages homogeneous graph networks, communication becomes less helpful and can even deteriorate the cooperativity and team performance. We propose Heterogeneous Policy Networks (HetNet) to learn efficient and diverse communication models for coordinating cooperative heterogeneous teams. Building on heterogeneous graph-attention networks, we show that HetNet not only facilitates learning heterogeneous collaborative policies per existing agent-class but also enables end-to-end training for learning highly efficient binarized messaging. Our empirical evaluation shows that HetNet sets a new state of the art in learning coordination and communication strategies for heterogeneous multi-agent teams by achieving an 8.1% to 434.7% performance improvement over the next-best baseline across multiple domains while simultaneously achieving a 200× reduction in the required communication bandwidth.
A high-speed, two-color pyrometer was developed and employed to characterize the temperature of the ejecta from pyrotechnic igniters. The pyrometer used a single objective lens, beamsplitter, and two high-speed cameras to maximize the spatial and temporal resolutions. The pyrometer used the integrated intensity of under-resolved particles to maintain a large region of interest to capture more particles. The spectral response of the pyrometer was determined based on the response of each optical component and the total system was calibrated using a black body source to ensure accurate intensity ratios over the range of interest.
Beaujean, Pierre P.; Kojimoto, Nigel; Gunawan, Budi; Driscoll, Frederick
A self-synchronizing underwater acoustic network, designed for remote monitoring of mooring loads in Wave Energy Converters (WEC), has been developed and tested. This network uses Time Division Multiple Access and operates in a self-contained manner, with the ability for users to remotely transmit commands to the network as needed. Each node is a self-contained unit consisting of a protocol adaptor board, an underwater acoustic modem, and a battery pack. A node can be connected to a load cell, to a topside user, or to the WEC, and every node is swappable. The protocol adaptor board, named Protocol Adaptor for Digital LOad Cell (PADLOC), supports a variety of digital load cell message formats (CAN, MODBUS, custom ASCII) and underwater acoustic modem serial formats. PADLOC enables topside users to connect to separate load cells through a user-specific command.
Proceedings of ISAV 2022: IEEE/ACM International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
This paper reports on Catalyst usability and initial adoption by SPARC analysts. The use case approach highlights the analysts' perspective. Impediments to adoption can stem from deficiencies in software capabilities or from mundane inconveniences and barriers that prevent analysts from fully leveraging Catalyst. That said, for many analyst tasks Catalyst provides enough relative advantage that analysts have begun applying it in their production work, and they recognize its potential to solve problems they currently struggle with. The findings in this report include specific issues and minor bugs in ParaView Python scripting, which are viewed as having straightforward solutions, as well as a broader adoption analysis.
A new small-scale pressure vessel with a 5×5 fuel assembly and axially truncated PWR hardware was created to simulate commercial vacuum drying processes. This test assembly, known as the Dashpot Drying Apparatus (DDA), was built to focus on the drying of a single PWR dashpot and the surrounding fuel. Drying operations were simulated for three tests with the DDA based on the pressure and temperature histories observed in the HBDP. All three tests were conducted with an empty guide tube. One test was performed with deionized water as the fill fluid; the other two used 0.2 M boric acid as the fill fluid to accurately simulate spent fuel pool conditions. These tests proved the capability of the DDA to mimic commercial drying processes on a limited scale and to detect the presence of bulk and residual water. Furthermore, for all tests, pressure remained below the 0.4 kPa (3 Torr) rebound threshold for the final evacuation step in the drying procedure. Results indicate that after bulk fluid is removed from the pressure vessel, residual water can be verifiably measured through confirmatory measurements of pressure and water content using a mass spectrometer. The final pressure rebound behaviors for the three tests were well below the established regulatory limit of 0.4 kPa (3 Torr) within 30 minutes of isolation. The water content measurements across all tests showed that, despite high water content within the DDA vessel at the beginning of the vacuum isolations, the water content drops to below 1,200 ppmv after the isolations. The data and operational experience from these tests will guide the next evolution of experiments on a prototypic-length scale with multiple surrogate rods in a full 17×17 PWR assembly. The insight gained through these investigations is expected to support the technical basis for the continued safe storage of spent nuclear fuel into long-term operations.
This work presents a high-speed laser-absorption-spectroscopy diagnostic capable of measuring temperature, pressure, and nitric oxide (NO) mole fraction in shock-heated air at a measurement rate of 500 kHz. This diagnostic was demonstrated in the High-Temperature Shock Tube (HST) facility at Sandia National Laboratories. The diagnostic utilizes a quantum-cascade laser to measure the absorbance spectra of two rovibrational transitions near 5.06 µm in the fundamental vibration bands (v″ = 0 and 1) of NO in its ground electronic state (X²Π₁/₂). Gas properties were determined using scanned-wavelength direct absorption and a recently established fitting method that utilizes a modified form of the time-domain molecular free-induction-decay signal (m-FID). This diagnostic was applied to acquire measurements in shock-heated air in the HST at temperatures ranging from approximately 2500 to 5500 K and pressures of 3 to 12 atm behind both incident and reflected shocks. The measurements agree well with the temperature predicted by NASA CEA and the pressure measured simultaneously using PCB pressure sensors. The measurements demonstrate that this diagnostic can resolve the formation of NO in shock-heated air and the associated temperature change at the conditions studied.
This chapter focuses on explosives-based threats, the challenges they present, and various means by which these challenges can be overcome. It begins with an introduction to explosive threats, detailing statistics regarding their use and some overarching challenges associated with properly mitigating the risks they present, before delving deeper into different areas of response by government agencies. These response areas are broadly categorized as deter, prevent, detect, delay/protect, and respond/analyze. Deterrence refers to trying to discourage people from becoming malefactors, with a focus on anti-radicalization programs and ways in which people can be dissuaded from joining extremist movements. The section on prevention discusses means by which access to explosive precursor materials and information can be controlled, with a focus on policies and regulations. This includes examples of current regulations, discussion of why specific chemicals are on controlled chemicals lists, and information campaigns to raise awareness of IED threats. The following section gives a brief understanding of the important aspects to consider in detection and describes different explosives detection methods used. Approaches to delaying the use or impact of an explosive threat, as well as those that provide some protection against its effects, are then described. Lastly, current approaches to responding to explosive threats, either before or after detonation, and the importance of analysis are discussed before the chapter concludes with a summary and a near-future outlook.
The growing x-ray detection burden for vehicles at Ports of Entry in the US requires the development of efficient and reliable algorithms to assist human operators in detecting contraband. Developing algorithms for large-scale non-intrusive inspection (NII) that both meet operational performance requirements and are extensible for use in an evolving environment requires large volumes and varieties of training data, yet collecting and labeling data for these environments is prohibitively costly and time consuming. Given these constraints, generating synthetic data to augment algorithm training has been a focus of recent research. Here we discuss the use of synthetic imagery in an object detection framework and describe a simulation-based approach to determining domain-informed threat image projection (TIP) augmentation.
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray Computed Tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but lacks spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratios in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen because their K-lines fall within distinct energy intervals of interest and because they are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high transmittance of X-rays. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 µm diameter of the impinging electron beam in the X-ray tube. This effectively shrinks the X-ray focal spot in the selected energy bands. Once fabricated, our transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
Modern-day processes depend heavily on data-driven techniques that use large datasets, clustered into relevant groups, to achieve higher efficiency, better utilization of the operation, and improved decision making. However, building these datasets and clustering by similar products is challenging in research environments that produce many novel and highly complex low-volume technologies. In this work, the author develops an algorithm that calculates the similarity between multiple low-volume products from a research environment using a real-world data set. The algorithm is applied to data from pulse power operations, which routinely perform novel experiments for inertial confinement fusion, radiation effects, and nuclear stockpile stewardship. The author shows that the algorithm successfully calculates the similarity between experiments of varying complexity such that comparable shots can be used for further analysis. Furthermore, it has been able to identify experiments not traditionally seen as identical.
Refractory complex concentrated alloys are an emerging class of materials that attracts attention due to their stability and performance at high temperatures. In this study, we investigate the variations in the mechanical and thermal properties across a broad compositional space for the refractory MoNbTaTi quaternary using high-throughput ab initio calculations and experimental characterization. For all the properties surveyed, we note good agreement between our modeling predictions and the experimentally measured values. We reveal the particular role of molybdenum (Mo) in achieving high strength when present in high concentration. We trace the origin of this phenomenon to a shift from metallic to covalent bonding as the Mo content is increased. Additionally, a mechanistic, dislocation-based description of the yield strength further explains this high strength as a combination of high bulk and shear moduli, accompanied by the relatively small size of the Mo atom compared to the other atoms in the alloy. Our analysis of the thermodynamic properties shows that, regardless of composition, this class of quaternary alloys exhibits good stability and low sensitivity to temperature. Taken together, these results pave the way for the design of new high-performance refractory alloys beyond the equimolar compositions found in high-entropy alloys.
This paper applies sensitivity and uncertainty analysis to compare two model alternatives for fuel matrix degradation for performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of this choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and differences in epistemic uncertainty between the alternatives.
This paper discusses the development and current status of a recommended practice by the members of IEEE Working Group P2688 on Energy Storage Management Systems (ESMS) in grid applications. The intent of this recommended practice is to provide a reference for ESMS designers and ESS integrators regarding the challenges in ESMS development and deployment, and to provide recommendations and best practices to address these challenges. This recommended practice will assist in selecting among design options by supplying the pros and cons of a range of technical solutions.
Deep operator learning has emerged as a promising tool for reduced-order modelling and PDE model discovery. Leveraging the expressive power of deep neural networks, especially in high dimensions, such methods learn the mapping between functional state variables. While proposed methods have assumed noise only in the dependent variables, experimental and numerical data for operator learning typically exhibit noise in the independent variables as well, since both variables represent signals that are subject to measurement error. In regression on scalar data, failure to account for noisy independent variables can lead to biased parameter estimates. With noisy independent variables, linear models fitted via ordinary least squares (OLS) will show attenuation bias, wherein the slope will be underestimated. In this work, we derive an analogue of attenuation bias for linear operator regression with white noise in both the independent and dependent variables, showing that the norm upper bound of the operator learned via OLS decreases with increasing noise in the independent variable. In the nonlinear setting, we computationally demonstrate underprediction of the action of the Burgers operator in the presence of noise in the independent variable. We propose error-in-variables (EiV) models for two operator regression methods, MOR-Physics and DeepONet, and demonstrate that these new models reduce bias in the presence of noisy independent variables for a variety of operator learning problems. Considering the Burgers operator in 1D and 2D, we demonstrate that EiV operator learning robustly recovers operators in high-noise regimes that defeat OLS operator learning. We also introduce an EiV model for time-evolving PDE discovery and show that OLS and EiV perform similarly in learning the Kuramoto-Sivashinsky evolution operator from corrupted data, suggesting that the effect of bias in OLS operator learning depends on the regularity of the target operator.
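For context, the scalar attenuation-bias result that this operator-learning analysis generalizes is the textbook errors-in-variables identity: for $y = \beta x + \varepsilon$ observed through $w = x + \eta$, with $\eta$ independent white noise of variance $\sigma_\eta^2$, the OLS slope converges to

$\hat{\beta}_{\mathrm{OLS}} \to \beta \, \dfrac{\sigma_x^2}{\sigma_x^2 + \sigma_\eta^2} < \beta$,

so the fitted slope is systematically shrunk toward zero as the noise in the independent variable grows, analogous to the decrease in the learned operator's norm bound described above.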
This paper presents a visualization technique for incorporating eigenvector estimates with geospatial data to create inter-area mode shape maps. For each point of measurement, the method specifies the radius, color, and angular orientation of a circular map marker. These characteristics are determined by the elements of the right eigenvector corresponding to the mode of interest. The markers are then overlaid on a map of the system to create a physically intuitive visualization of the mode shape. This technique serves as a valuable tool for differentiating oscillatory modes that have similar frequencies but different shapes. This work was conducted within the Western Interconnection Modes Review Group (WIMRG) in the Western Electric Coordinating Council (WECC). For testing, we employ the WECC 2021 Heavy Summer base case, which features a high-fidelity, industry standard dynamic model of the North American Western Interconnection. Mode estimates are produced via eigen-decomposition of a reduced-order state matrix identified from simulated ringdown data. The results provide improved physical intuition about the spatial characteristics of the inter-area modes. In addition to offline applications, this visualization technique could also enhance situational awareness for system operators when paired with online mode shape estimates.
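A schematic of the marker construction (our own illustrative sketch, not the WIMRG implementation; all names and scalings below are assumptions) can be written in a few lines: each measurement location gets a circle whose radius is proportional to the magnitude of its right-eigenvector element and whose color and orientation encode the element's angle.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_mode_shape(lons, lats, eigvec, scale=1.0):
    """Circular markers sized by |v_k| and colored/oriented by angle(v_k),
    overlaid at each measurement location (schematic of the technique)."""
    mags = np.abs(eigvec)
    phases = np.angle(eigvec)
    fig, ax = plt.subplots()
    sc = ax.scatter(lons, lats, s=2000 * scale * mags / mags.max(),
                    c=np.degrees(phases), cmap="hsv", alpha=0.7,
                    vmin=-180, vmax=180)
    # short radial line showing the angular orientation of each marker
    r = 0.5 * scale * mags / mags.max()
    ax.quiver(lons, lats, r * np.cos(phases), r * np.sin(phases),
              angles="xy", scale_units="xy", scale=1.0, width=0.003)
    fig.colorbar(sc, ax=ax, label="mode shape angle (deg)")
    ax.set_xlabel("longitude")
    ax.set_ylabel("latitude")
    return ax

# toy usage with made-up locations and eigenvector entries
lons = np.array([-122.3, -112.0, -105.0])
lats = np.array([47.6, 33.4, 39.7])
v = np.array([0.9 * np.exp(1j * 0.1), 0.4 * np.exp(1j * 2.9), 0.7 * np.exp(-1j * 2.8)])
plot_mode_shape(lons, lats, v)
```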
Performance assessment is an important tool to estimate the long-term safety for a nuclear waste repository. Performance assessment simulations are subject to multiple kinds of uncertainty including stochastic uncertainty, state of knowledge uncertainty, and model uncertainty. Task F1 of the DECOVALEX project involves comparison of the models and methods used in post-closure performance assessment of deep geologic repositories in fractured crystalline rock, providing an opportunity to compare the effects of different sources of uncertainty. A generic reference case for a mined repository in fractured crystalline rock was put together by participating teams, where each team was responsible for determining how best to represent and implement the model. This work presents the preliminary crystalline reference case results for the Department of Energy (DOE) team.
The III-nitride semiconductors are attractive for on-chip, solid-state vacuum nanoelectronics, having high thermal and chemical stability, low electron affinity, and high breakdown fields. Here we report top-down fabricated, lateral gallium nitride (GaN)-based nanoscale vacuum electron diodes operable in air, with ultra-low turn-on voltages down to ~0.24 V, and stable high field emission currents, tested up to several microamps for single-emitter devices. We present gap-size and pressure dependent studies which provide insights into the design of future nanogap vacuum electron devices. The vacuum nanodiodes also show high resistance to damage from 2.5 MeV proton exposure. Preliminary results on the fabrication and characteristics of lateral GaN nano vacuum transistors will also be presented. The results show promise for a new class of robust, integrated, III-nitride based vacuum nanoelectronics.
Stochasticity is ubiquitous in the world around us. However, our predominant computing paradigm is deterministic. Random number generation (RNG) can be a computationally inefficient operation in such systems, especially for larger workloads. Our work leverages the underlying physics of emerging devices to develop probabilistic neural circuits that generate random numbers from a given distribution. However, codesign of novel circuits and systems that leverage inherent device stochasticity is a hard problem, largely due to the size and complexity of the design space. It requires concurrent input from multiple areas of the design stack, from algorithms and architectures to circuits and devices. In this paper, we present examples of optimal circuits developed using AI-enhanced codesign techniques with constraints from emerging devices and algorithms. Our AI-enhanced codesign approach accelerated design and enabled interactions between experts from different areas of the microelectronics design stack, including theory, algorithms, circuits, and devices. We demonstrate optimal probabilistic neural circuits using magnetic tunnel junction and tunnel diode devices that generate random numbers from a given distribution.
Integrating recent advancements in resilient algorithms and techniques into existing codes is a singular challenge in fault tolerance, in part due to the underlying complexity of implementing resilience in the first place, but also due to the difficulty of integrating the functionality of a standalone new strategy with the preexisting resilience layers of an application. We propose that the answer is not to build integrated solutions for users, but runtimes designed to integrate into a larger comprehensive resilience system and thereby enable the necessary jump to multi-layered recovery. Our work designs, implements, and verifies one such comprehensive system of runtimes. Utilizing Fenix, a process resilience tool designed with integration into preexisting resilience systems as a priority, we update Kokkos Resilience and the use pattern of VeloC to support application-level integration of resilience runtimes. Our work shows that designing integrable systems rather than integrated systems allows for user-designed optimization and upgrading of resilience techniques while maintaining the simplicity and performance of all-in-one resilience solutions. More application-specific choice in resilience strategies allows for better long-term flexibility, performance, and, importantly, simplicity.
Large scale non-intrusive inspection (NII) of commercial vehicles is being adopted in the U.S. at a pace and scale that will result in a commensurate growth in adjudication burdens at land ports of entry. The use of computer vision and machine learning models to augment human operator capabilities is critical in this sector to ensure the flow of commerce and to maintain efficient and reliable security operations. The development of models for this scale and speed requires novel approaches to object detection and novel adjudication pipelines. Here we propose a notional combination of existing object detection tools using a novel ensembling framework to demonstrate the potential for hierarchical and recursive operations. Further, we explore the combination of object detection with image similarity as an adjacent capability to provide post-hoc oversight to the detection framework. The experiments described herein, while notional and intended for illustrative purposes, demonstrate that the judicious combination of diverse algorithms can result in a resilient workflow for the NII environment.
2022 IEEE Texas Power and Energy Conference, TPEC 2022
Biswal, Milan; Pati, Shubhasmita; Ranade, Satish J.; Lavrova, Olga; Reno, Matthew J.
The application of traveling wave principles for fault detection in distribution systems is challenging because of multiple reflections from the laterals and other lumped elements, particularly when we consider communication-free applications. We propose and explore the use of Shapelets to characterize fault signatures and a data-driven machine learning model to accurately classify the faults based on their distance. Studies of a simple 5-bus system suggest that the use of Shapelets for detecting faults is promising. The application to practical three-phase distribution feeders is the subject of continuing research.
The precise estimation of the performance loss rate (PLR) of photovoltaic (PV) systems is vital for reducing investment risks and increasing the bankability of the technology. Until recently, the PLR of fielded PV systems was mainly estimated through the extraction of a linear trend from a time series of performance indicators. However, operating PV systems exhibit failures and performance losses that cause variability in the performance and may bias the PLR results obtained from linear trend techniques. Change-point (CP) methods were thus introduced to identify nonlinear trend changes and behavior. The aim of this work is to perform a comparative analysis among different CP techniques for estimating the annual PLR of eleven grid-connected PV systems installed in Cyprus. Outdoor field measurements over an 8-year period (June 2006-June 2014) were used for the analysis. Applying different CP algorithms to the performance ratio time series (aggregated into monthly blocks) demonstrated that the extracted trend is not always linear and can exhibit nonlinearities. The application of different CP methods resulted in PLR values that differ by up to 0.85% per year (for the same number of CPs/segments).
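To make the CP idea concrete (a minimal sketch under our own assumptions, not one of the specific algorithms compared in the paper), change points can be detected in the monthly performance-ratio series and a linear trend fit per segment; the third-party ruptures library, the "rbf" cost model, and the penalty value are illustrative choices.

```python
import numpy as np
import ruptures as rpt  # third-party change-point detection library (illustrative)

def segmented_plr(pr_monthly, pen=5.0):
    """Detect change points in a monthly performance-ratio series and fit a
    linear trend to each segment; returns break indices and %/year loss rates."""
    signal = np.asarray(pr_monthly, dtype=float).reshape(-1, 1)
    bkps = rpt.Pelt(model="rbf").fit(signal).predict(pen=pen)
    rates, start = [], 0
    for end in bkps:                              # bkps ends with the series length
        seg = signal[start:end, 0]
        months = np.arange(len(seg))
        slope, intercept = np.polyfit(months, seg, 1)
        # convert the monthly slope to an annual rate relative to the segment start
        rates.append(100.0 * 12.0 * slope / intercept)
        start = end
    return bkps, rates
```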
Ship emissions can form linear cloud structures, or ship tracks, when atmospheric water vapor condenses on aerosols in the ship exhaust. These structures are of interest because they are observable and traceable examples of marine cloud brightening (MCB), a mechanism that has been studied as a potential approach for solar climate intervention. Ship tracks can be observed throughout the diurnal cycle via space-borne assets such as the Advanced Baseline Imagers on the National Oceanic and Atmospheric Administration's Geostationary Operational Environmental Satellites (the GOES-R series). Due to complex atmospheric dynamics, it can be difficult to track these aerosol perturbations over space and time to precisely characterize how long a single emission source can significantly contribute to indirect radiative forcing. We propose an optical flow approach to estimate the trajectories of ship-emitted aerosols after they begin mixing with low boundary layer clouds, using GOES-17 satellite imagery. Most optical flow estimation methods have only been used to estimate large-scale atmospheric motion. We demonstrate the ability of our approach to precisely isolate the movement of ship tracks in low-lying clouds from the movement of large swaths of high clouds that often dominate the scene. This efficient approach shows that ship tracks persist as visible, linear features beyond 9 h and sometimes longer than 24 h.
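As a schematic of the underlying operation (not the paper's specific flow estimator), dense optical flow between consecutive grayscale satellite frames can be computed and used to advect tracked ship-track pixels; the Farneback method and parameter values below are illustrative assumptions.

```python
import cv2
import numpy as np

def advect_points(frame_prev, frame_next, points):
    """Dense optical flow between two consecutive single-channel satellite frames
    (8-bit or float32), then advect tracked ship-track pixels by the local flow."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_prev, frame_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    pts = np.asarray(points, dtype=int)
    dxdy = flow[pts[:, 1], pts[:, 0]]   # flow indexed as (row, col) -> (dx, dy)
    return pts + dxdy                    # updated (x, y) positions
```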
Firmware emulation is useful for finding vulnerabilities, performing debugging, and testing functionality. However, the process of enabling firmware to execute in an emulator (i.e., re-hosting) is difficult. Each piece of the firmware may depend on hardware peripherals outside the microcontroller that are inaccessible during emulation. Current practices involve painstakingly disentangling these dependencies or replacing them with developed models that emulate functions interacting with hardware. Unfortunately, both approaches are highly manual and error-prone. In this paper, we introduce a systematic graph-based approach to analyze firmware binaries and determine which functions need to be replaced. Our approach is customizable to balance the fidelity of the emulation against the amount of modeling effort required to achieve it. We run our algorithm across a number of firmware binaries and show its ability to capture and remove a large majority of hardware dependencies.
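As a rough sketch of the kind of reachability query such a graph-based analysis could start from (the scoring that balances fidelity against effort is the paper's contribution and is not reproduced here; function and graph names below are hypothetical):

```python
import networkx as nx

def hardware_dependent_functions(call_graph, hw_functions):
    """Given a directed call graph (edges caller -> callee) and the set of functions
    known to touch hardware peripherals, return every function whose execution can
    reach hardware, i.e. candidates for replacement or stubbing during re-hosting."""
    reach_hw = set()
    for hw in hw_functions:
        if hw in call_graph:
            reach_hw |= nx.ancestors(call_graph, hw)   # all direct/indirect callers
            reach_hw.add(hw)
    return reach_hw

# toy usage on a hypothetical firmware call graph
g = nx.DiGraph([("main", "init_board"), ("init_board", "uart_write"),
                ("main", "compute"), ("compute", "math_helper")])
print(hardware_dependent_functions(g, {"uart_write"}))
# -> {'uart_write', 'init_board', 'main'}; an analyst would pick a cut in this set
```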
This work describes the development and testing of a carbon dioxide seeding system for the Sandia Hypersonic Wind Tunnel. The seeder injects liquid carbon dioxide into the tunnel, which evaporates in the nitrogen supply line and then condenses during the nozzle expansion into a fog of particles that scatter light via Rayleigh scattering. A planar laser scattering (PLS) experiment is conducted in the boundary layer and wake of a cone at Mach 8 to evaluate the success of the seeder. Second-mode waves and turbulence transition were well-visualized by the PLS in the boundary layer and wake. PLS in the wake also captured the expansion wave over the base and wake recompression shock. No carbon dioxide appears to survive and condense in the boundary layer or wake, meaning alternative seeding methods must be explored to extract measurements within these regions. The seeding system offers planar flow visualization opportunities and can enable quantitative velocimetry measurements in the future, including filtered Rayleigh scattering.
This paper provides a study of the potential impacts of climate change on intermittent renewable energy resources, battery storage, and resource adequacy in Public Service Company of New Mexico's Integrated Resource Plan for 2020-2040. Climate change models and available data were first evaluated to determine uncertainty and potential changes in solar irradiance, temperature, and wind speed in NM in the coming decades. These changes were then implemented in solar and wind energy models to determine impacts on renewable energy resources in NM. Results for the extreme climate-change scenario show that the projected wind power may decrease by ~13% due to projected decreases in wind speed. Projected solar power may decrease by ~4% due to decreases in irradiance and increases in temperature in NM. Uncertainty in these climate-induced changes in wind and solar resources was accommodated in probabilistic models assuming uniform distributions in the annual reductions in solar and wind resources. Uncertainty in battery storage performance was also evaluated based on increased temperature, capacity fade, and degradation in round-trip efficiency. The hourly energy balance was determined throughout the year given uncertainties in the renewable energy resources and energy storage. The loss of load expectation (LOLE) was evaluated for the 2040 No New Combustion portfolio and found to increase from 0 days/year to a median value of ~2 days/year due to potential reductions in renewable energy resources and battery storage performance and capacity. A rank-regression analysis revealed that battery round-trip efficiency was the most significant parameter impacting LOLE, followed by solar resource, wind resource, and battery fade. An increase in battery storage capacity to ~30,000 MWh from a baseline value of ~14,000 MWh was required to reduce the median value of LOLE to ~0.2 days/year with consideration of potential climate impacts and battery degradation.
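The structure of such an LOLE calculation can be sketched as follows (a schematic under our own assumptions, not PNM's or the authors' model; the derate ranges simply mirror the ~13% wind and ~4% solar reductions quoted above, and the storage dispatch is deliberately simplistic).

```python
import numpy as np

def lole_days_per_year(load, solar, wind, storage_mwh, rte=0.85, n_trials=1000, seed=0):
    """Hourly energy balance with Monte Carlo draws of renewable derates and a
    simple storage dispatch; returns the median number of days/year with unserved load."""
    rng = np.random.default_rng(seed)
    hours = np.arange(len(load))
    days = []
    for _ in range(n_trials):
        pv = solar * rng.uniform(0.96, 1.0)      # up to ~4% solar reduction (illustrative)
        wd = wind * rng.uniform(0.87, 1.0)       # up to ~13% wind reduction (illustrative)
        soc = 0.5 * storage_mwh
        shortfall = np.zeros(len(load))
        for h in hours:
            net = pv[h] + wd[h] - load[h]
            if net >= 0:                         # surplus: charge with round-trip losses
                soc = min(storage_mwh, soc + net * rte)
            else:                                # deficit: discharge what is available
                discharge = min(soc, -net)
                soc -= discharge
                shortfall[h] = -net - discharge
        days.append(np.unique((hours // 24)[shortfall > 0]).size)
    return float(np.median(days))
```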
Demonstration of broadband nanosecond output from a burst-mode-pumped noncollinear optical parametric oscillator (NOPO) has been achieved at 40 kHz. The NOPO is pumped by 355-nm output at 50 mJ/pulse for 45 pulses. A bandwidth of 540 cm⁻¹ was achieved from the OPO with a conversion efficiency of 10% for 5 mJ/pulse. Higher bandwidths up to 750 cm⁻¹ were readily achievable at reduced performance and beam quality. The broadband NOPO output was used for a planar BOXCARS phase matching scheme for N2 CARS measurements in a near-adiabatic H2/air flame. Single-shot CARS measurements were taken for equivalence ratios of φ = 0.52-0.86 for temperatures up to 2200 K.
The Multi-Fidelity Toolkit (MFTK) is a simulation tool being developed at Sandia National Laboratories for aerodynamic predictions of compressible flows over a range of physics fidelities and computational speeds. These models include the Reynolds-Averaged Navier–Stokes (RANS) equations, the Euler equations, and modified Newtonian aerodynamics (MNA) equations, and they can be invoked independently or coupled with hierarchical Kriging to interpolate between high-fidelity simulations using lower-fidelity data. However, as with any new simulation capability, verification and validation are necessary to gather credibility evidence. This work describes formal model validation with uncertainty considerations that leverages experimental data from the HIFiRE-1 wind tunnel tests. The geometry is a multi-conic shape that produces complex flow phenomena under hypersonic conditions. A thorough treatment of the validation comparison with prediction error and validation uncertainty is also presented.