Carbon sequestration is a growing field that requires subsurface monitoring for potential leakage of the sequestered fluids through the casing annulus. Sandia National Laboratories (SNL) is developing a smart collar system for downhole fluid monitoring during carbon sequestration. This technology is part of a collaboration between SNL, University of Texas at Austin (UT Austin) (project lead), California Institute of Technology (Caltech), and Research Triangle Institute (RTI) to obtain real-time monitoring of the movement of fluids in the subsurface through direct formation measurements. Caltech and RTI are developing millimeter-scale radio frequency identification (RFID) sensors that can sense carbon dioxide, pH, and methane. These sensors will be impervious to cement, and as such, can be mixed with cement and poured into the casing annulus. The sensors are powered and communicate via standard RFID protocol at 902-928 MHz. SNL is developing a smart collar system that wirelessly gathers RFID sensor data from the sensors embedded in the cement annulus and relays that data to the surface via a wired pipe that utilizes inductive coupling at the collar to transfer data through each segment of pipe. This system cannot transfer a direct current signal to power the smart collar, and therefore, both power and communications will be implemented using alternating current and electromagnetic signals at different frequencies. The complete system will be evaluated at UT Austin's Devine Test Site, which is a highly characterized and hydraulically fractured site. This is the second year of the three-year effort, and a review of SNL's progress on the design and implementation of the smart collar system is provided.
In high temperature (HT) environments often encountered in geothermal wells, data transfer rates for downhole instrumentation are relatively limited by transmission line bandwidth and insertion loss and by the processing speed of HT microcontrollers. In previous research, the Sandia National Laboratories Geothermal Department obtained data rates of 3.8 Mbps over 1524 m (5000 ft) of single-conductor wireline cable with a bit error rate below 1×10⁻⁸ using low-temperature NI™ hardware (formerly National Instruments™). Our protocol combined orthogonal frequency-division multiplexing and quadrature amplitude modulation across the bandwidth of the single-conductor wireline, demonstrating that high data rates are achievable over low-bandwidth wirelines. This paper focuses on commercial HT microcontrollers (µC), rather than low-temperature NI™ modules, to enable high-speed communication in an HT environment. As part of this effort, four devices were evaluated, and an optimal device (SM320F28335-HT) was selected for its high clock rates, floating-point unit, and on-board analog-to-digital converter. A printed circuit board was assembled with the HT µC, an HT resistor digital-to-analog converter, and an HT line driver. The board was tested at the microcontroller's rated maximum temperature (210°C) for a week while transmitting through a 1524 m (5000 ft) wireline. A final test was conducted at elevated temperatures to the point of failure. This paper discusses communication methods, achieved data rates, and hardware selection. This effort contributes to the enhancement of HT instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates.
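For readers unfamiliar with the modulation scheme, a minimal numpy sketch of one OFDM symbol carrying 16-QAM subcarriers is shown below; the subcarrier count, cyclic-prefix length, constellation, and additive-noise channel are illustrative assumptions, not the parameters of the wireline hardware described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SUB = 64          # number of subcarriers (assumed, not the study's value)
CP_LEN = 16         # cyclic prefix length (assumed)

# 16-QAM constellation, normalized to unit average power
levels = np.array([-3, -1, 1, 3])
constellation = np.array([i + 1j * q for i in levels for q in levels]) / np.sqrt(10)

# --- Transmitter: map random symbols onto subcarriers and take an IFFT ---
tx_indices = rng.integers(0, 16, N_SUB)
freq_symbols = constellation[tx_indices]
time_signal = np.fft.ifft(freq_symbols) * np.sqrt(N_SUB)
tx_frame = np.concatenate([time_signal[-CP_LEN:], time_signal])  # prepend cyclic prefix

# --- Channel: additive white Gaussian noise (stand-in for the wireline) ---
snr_db = 25
noise_power = 10 ** (-snr_db / 10)
noise = np.sqrt(noise_power / 2) * (rng.standard_normal(tx_frame.shape)
                                    + 1j * rng.standard_normal(tx_frame.shape))
rx_frame = tx_frame + noise

# --- Receiver: strip cyclic prefix, FFT, and nearest-neighbor demap ---
rx_freq = np.fft.fft(rx_frame[CP_LEN:]) / np.sqrt(N_SUB)
rx_indices = np.argmin(np.abs(rx_freq[:, None] - constellation[None, :]), axis=1)

symbol_errors = np.count_nonzero(rx_indices != tx_indices)
print(f"symbol errors: {symbol_errors} / {N_SUB}")
```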
We propose the use of balanced iterative reducing and clustering using hierarchies (BIRCH) combined with linear regression to predict the reduced Young's modulus and hardness of highly heterogeneous materials from a set of nanoindentation experiments. We first use BIRCH to cluster the dataset according to its mineral compositions, which are derived from the spectral matching of energy-dispersive spectroscopy data through the modular automated processing system (MAPS) platform. We observe that grouping our dataset into five clusters yields the best accuracy as well as a reasonable representation of mineralogy in each cluster. Subsequently, we test four types of regression models, namely linear regression, support vector regression, Gaussian process regression, and extreme gradient boosting regression. The linear regression and Gaussian process regression provide the most accurate predictions, and the proposed framework yields R² = 0.93 for the test set. Although a more comprehensive study is needed, our results show that machine learning methods such as linear regression or Gaussian process regression can accurately estimate mechanical properties when an appropriate number of clusters based on compositional data is used.
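A minimal sketch of the proposed two-step framework (BIRCH clustering on composition, then per-cluster linear regression) using scikit-learn; the synthetic composition and modulus data, the number of mineral phases, and the train/test split are stand-ins for the nanoindentation/MAPS dataset.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-ins: per-indent mineral composition fractions (from EDS/MAPS)
# and a measured reduced modulus; real data would come from the nanoindentation grid.
rng = np.random.default_rng(1)
composition = rng.dirichlet(np.ones(6), size=500)          # 6 mineral phases (assumed)
true_moduli = np.array([70.0, 30.0, 95.0, 15.0, 55.0, 40.0])
modulus = composition @ true_moduli + rng.normal(0, 2.0, 500)

X_train, X_test, y_train, y_test = train_test_split(
    composition, modulus, test_size=0.25, random_state=0)

# Step 1: BIRCH clustering on composition, five clusters as in the abstract.
birch = Birch(threshold=0.1, n_clusters=5).fit(X_train)
train_labels = birch.predict(X_train)
test_labels = birch.predict(X_test)

# Step 2: one linear regression per cluster.
models = {c: LinearRegression().fit(X_train[train_labels == c], y_train[train_labels == c])
          for c in np.unique(train_labels)}

y_pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                   for x, c in zip(X_test, test_labels)])
print("test R^2:", r2_score(y_test, y_pred))
```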
We propose a two-stage scenario-based stochastic optimization problem to determine investments that enhance power system resilience. The proposed optimization problem minimizes the Conditional Value at Risk (CVaR) of load loss to target low-probability, high-impact events. We provide results in the context of generator winterization investments in Texas using winter storm scenarios generated from historical data collected from Winter Storm Uri. Results illustrate how the CVaR metric can be used to minimize the tail of the load-loss distribution and how risk aversion impacts investment decisions.
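A minimal sketch of the scenario-based CVaR objective in its Rockafellar-Uryasev linear form using cvxpy; the winter-storm scenarios, the single aggregate winterization variable, and all cost coefficients are illustrative assumptions rather than the paper's Texas case data.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
S = 200                               # number of winter-storm scenarios (assumed)
prob = np.full(S, 1.0 / S)
alpha = 0.95                          # CVaR confidence level

# Illustrative data: each scenario has some unserved load that winterization reduces.
base_load_loss = rng.gamma(shape=2.0, scale=50.0, size=S)   # MW of load lost
reduction_per_unit = rng.uniform(0.5, 1.5, size=S)          # MW avoided per unit invested
budget = 100.0

x = cp.Variable(nonneg=True)          # winterization investment (first stage)
eta = cp.Variable()                   # CVaR auxiliary variable (VaR estimate)
excess = cp.Variable(S, nonneg=True)  # scenario losses above eta

# Unserved load per scenario cannot go negative.
load_loss = cp.pos(base_load_loss - reduction_per_unit * x)
constraints = [excess >= load_loss - eta, x <= budget]

# Rockafellar-Uryasev CVaR: eta + 1/(1-alpha) * expected excess above eta.
cvar = eta + (1.0 / (1.0 - alpha)) * (prob @ excess)
problem = cp.Problem(cp.Minimize(cvar), constraints)
problem.solve()
print("investment:", x.value, "CVaR of load loss:", problem.value)
```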
Neural networks (NNs) have increasingly been proposed as surrogates for systems with computationally expensive physics, enabling rapid online evaluation or exploration. As these surrogate models are integrated into larger optimization problems used for decision making, there is a need to verify their behavior to ensure adequate performance over the desired parameter space. We extend the ideas of optimization-based neural network verification to provide guarantees of surrogate performance over the feasible optimization space. In doing so, we present formulations to represent neural networks within decision-making problems, and we develop verification approaches that use model constraints to provide increasingly tight error estimates. We demonstrate these capabilities on a simple steady-state reactor design problem.
Given a graph, finding the distance-2 maximal independent set (MIS-2) of the vertices is a problem that is useful in several contexts, such as algebraic multigrid coarsening or multilevel graph partitioning. Such multilevel methods rely on finding the independent vertices so they can be used as seeds for aggregation in a multilevel scheme. We present a parallel MIS-2 algorithm to improve performance on modern accelerator hardware. This algorithm is implemented using the Kokkos programming model to enable performance portability. We demonstrate the portability of the algorithm and its performance on a variety of architectures (x86/ARM CPUs and NVIDIA/AMD GPUs). The resulting algorithm is also deterministic, producing an identical result for a given input across all of these platforms. The new MIS-2 implementation outperforms implementations in state-of-the-art libraries like CUSP and ViennaCL by 3-8x while producing results of similar quality. We further demonstrate the benefits of this approach by developing a parallel graph coarsening scheme for two different use cases. First, we develop an algebraic multigrid (AMG) aggregation scheme using parallel MIS-2 and demonstrate its benefits relative to previous approaches used in the MueLu multigrid package in Trilinos. We also describe an approach for implementing a parallel multicolor 'cluster' Gauss-Seidel preconditioner using this MIS-2 coarsening, and demonstrate better performance with an efficient, parallel, multicolor Gauss-Seidel algorithm.
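The algorithm in the paper is a deterministic, parallel Kokkos implementation; the following is only a serial Python sketch of the underlying distance-2 maximal independent set idea, useful for checking results on small graphs.

```python
def mis2(adjacency):
    """Greedy distance-2 maximal independent set.

    adjacency: dict mapping vertex -> iterable of neighbor vertices.
    Returns a set of vertices such that no two selected vertices are within
    distance 2 of each other, and no further vertex can be added.
    """
    selected = set()
    blocked = set()                 # vertices within distance 2 of a selected vertex
    for v in sorted(adjacency):     # fixed order makes the result deterministic
        if v in blocked or v in selected:
            continue
        selected.add(v)
        for u in adjacency[v]:          # distance-1 neighbors
            blocked.add(u)
            for w in adjacency[u]:      # distance-2 neighbors
                blocked.add(w)
    return selected


# Tiny example: a path graph 0-1-2-3-4 yields {0, 3}, a valid MIS-2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(mis2(path))
```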
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray Computed Tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but lacks spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratios in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen for their k-lines, which are positioned within distinct energy intervals of interest and are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high transmittance of X-rays. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 µm diameter of the impinging electron beam in the X-ray tube. This effectively shrinks the X-ray focal spot in the selected energy bands. Once fabricated, our transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
Hyperspectral Computed Tomography (HCT) data is often visualized using dimension reduction algorithms. However, these methods often fail to adequately differentiate between materials with similar spectral signatures. Previous work showed that a combination of image preprocessing, clustering, and dimension reduction techniques can be used to colorize simulated HCT data and enhance the contrast between similar materials. In this work, we evaluate the efficacy of these existing methods on experimental HCT data and propose new improvements to the robustness of these methods. We introduce an automated channel selection method and compare the Feldkamp, Davis, and Kress filtered back-projection (FBP) algorithm with the maximum-likelihood estimation-maximization (MLEM) algorithm in terms of HCT reconstruction image quality and its effect on different colorization methods. Additionally, we propose adaptations to the colorization process that eliminate the need for a priori knowledge of the number of distinct materials for material classification. Our results show that these methods generalize to materials in real-world experimental HCT data for both colorization and classification tasks; both tasks have applications in industry, medicine, and security, wherever rapid visualization and identification are needed.
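As a rough illustration of the colorization and a-priori-free classification steps, the sketch below projects per-voxel spectra onto three principal components for RGB display and uses density-based clustering (DBSCAN), which does not require the number of materials in advance; the synthetic voxel spectra and the specific clustering choice are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import minmax_scale

# Stand-in reconstructed HCT volume: voxels x energy channels (real data would come
# from the FBP or MLEM reconstruction of each energy bin).
rng = np.random.default_rng(6)
n_voxels, n_channels = 5000, 64
signatures = rng.random((4, n_channels))                  # four unknown materials
membership = rng.integers(0, 4, n_voxels)
spectra = signatures[membership] + rng.normal(0, 0.02, (n_voxels, n_channels))

# Colorization: project each voxel's spectrum onto three principal components
# and rescale to [0, 1] so the components can be displayed as RGB.
rgb = minmax_scale(PCA(n_components=3).fit_transform(spectra))

# Classification without specifying the number of materials: a density-based
# clustering (DBSCAN) discovers the material count from the data.
labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(
    PCA(n_components=5).fit_transform(spectra))
print("materials found:", len(set(labels) - {-1}))
```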
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLIs) (most commonly photovoltaic inverters), it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs) (used for energy storage systems, standalone systems, and as uninterruptible power supplies), these requirements are either not yet documented or require a more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desirable. With the proper control schemes, a GFMI can help maintain grid stability through fast response compared to rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviations, as well as the GFMI operating as a standalone system and subjected to various changes in load.
The Cramér-Rao Lower Bound (CRLB) is used as a classical benchmark to assess estimators. Online algorithms for estimating modal properties from ambient data, i.e., mode meters, can benefit from accurate estimates of forced oscillations. The CRLB provides insight into how well forced oscillation parameters, e.g., frequency and amplitude, can be estimated. Previous work derived the lower bound for a single-channel PMU measurement; this paper extends that work to study the CRLB under two-channel PMU measurements. The goal is to study how correlated/uncorrelated noise affects estimation accuracy. Interestingly, these studies show that correlated noise can decrease the CRLB in some cases. This paper derives the CRLB for the two-channel case and discusses factors that affect the bound.
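A minimal numerical sketch of the two-channel CRLB computation: for Gaussian noise that is correlated across channels but independent across samples, the Fisher information accumulates as J_n^T R^{-1} J_n over samples and is then inverted; the sinusoidal forced-oscillation model, parameter values, and noise covariance below are illustrative assumptions.

```python
import numpy as np

fs, N = 30.0, 300                       # PMU reporting rate (Hz) and samples (assumed)
t = np.arange(N) / fs

# Forced-oscillation parameters: amplitudes, frequency, phases on two channels (assumed)
A1, A2, f, p1, p2 = 1.0, 0.7, 0.8, 0.2, 1.1
theta = np.array([A1, A2, f, p1, p2])

def mean_signal(th, tn):
    a1, a2, fr, ph1, ph2 = th
    return np.array([a1 * np.sin(2 * np.pi * fr * tn + ph1),
                     a2 * np.sin(2 * np.pi * fr * tn + ph2)])

# Per-sample noise covariance across the two channels; rho controls correlation.
sigma, rho = 0.1, 0.6
R = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
R_inv = np.linalg.inv(R)

def jacobian(th, tn, eps=1e-6):
    """Central-difference Jacobian of the 2-channel mean w.r.t. the 5 parameters."""
    J = np.zeros((2, th.size))
    for i in range(th.size):
        d = np.zeros_like(th)
        d[i] = eps
        J[:, i] = (mean_signal(th + d, tn) - mean_signal(th - d, tn)) / (2 * eps)
    return J

fim = sum(jacobian(theta, tn).T @ R_inv @ jacobian(theta, tn) for tn in t)
crlb = np.linalg.inv(fim)
print("CRLB std devs [A1, A2, f, phi1, phi2]:", np.sqrt(np.diag(crlb)))
```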
A forward analytic model is required to rapidly simulate the neutron time-of-flight (nToF) signals that result from magnetized liner inertial fusion (MagLIF) experiments at Sandia’s Z Pulsed Power Facility. Various experimental parameters, such as the burn-weighted fuel-ion temperature and liner areal density, determine the shape of the nToF signal and are important for characterizing any given MagLIF experiment. Extracting these parameters from measured nToF signals requires an appropriate analytic model that includes the primary deuterium-deuterium neutron peak, once-scattered neutrons in the beryllium liner of the MagLIF target, and direct beamline attenuation. Mathematical expressions for this model were derived from the general-geometry time- and energy-dependent neutron transport equation with anisotropic scattering. Assumptions consistent with the time-of-flight technique were used to simplify this linear Boltzmann transport equation into a more tractable form. Models of the uncollided and once-collided neutron scalar fluxes were developed for one of the five nToF detector locations at the Z-Machine. Numerical results from these models were produced for a representative MagLIF problem and found to be in good agreement with similar neutron transport simulations. Twenty experimental MagLIF data sets were analyzed using the forward models, which were determined to only be significantly sensitive to the ion temperature. The results of this work were also found to agree with values obtained separately using a zero scatter analytic model and a high-fidelity Monte Carlo simulation. Inherent difficulties in this and similar techniques are identified, and a new approach forward is suggested.
We present a field-deployable microfluidic immunoassay device in response to the need for sensitive, quantitative, and high-throughput protein detection at point-of-need. The portable microfluidic system facilitates eight magnetic bead-based sandwich immunoassays from raw samples in 45 minutes. An innovative bead actuation strategy was incorporated into the system to automate multiple sample process steps with minimal user intervention. The device is capable of quantitative and sensitive protein analysis with a 10 pg/ml detection limit from interleukin 6-spiked human serum samples. We envision the reported device offering ultrasensitive point-of-care immunoassay tests for timely and accurate clinical diagnosis.
Stochasticity is ubiquitous in the world around us. However, our predominant computing paradigm is deterministic. Random number generation (RNG) can be a computationally inefficient operation in this paradigm, especially for larger workloads. Our work leverages the underlying physics of emerging devices to develop probabilistic neural circuits that generate random numbers from a given distribution. However, codesign for novel circuits and systems that leverage inherent device stochasticity is a hard problem, largely because of the size of the design space and the complexity of navigating it. It requires concurrent input from multiple areas of the design stack, from algorithms and architectures to circuits and devices. In this paper, we present examples of optimal circuits developed using AI-enhanced codesign techniques with constraints from emerging devices and algorithms. Our AI-enhanced codesign approach accelerated design and enabled interactions between experts from different areas of the microelectronics design stack, including theory, algorithms, circuits, and devices. We demonstrate optimal probabilistic neural circuits using magnetic tunnel junction and tunnel diode devices that generate random numbers from a given distribution.
The focus of this study is on spectral equivalence results for higher-order tensor product finite elements in the H(curl), H(div), and L2 function spaces. For certain choices of the higher-order shape functions, the resulting mass and stiffness matrices are spectrally equivalent to those for an assembly of lowest-order edge-, face- or interior-based elements on the associated Gauss–Lobatto–Legendre (GLL) mesh.
Proceedings - IEEE International Symposium on Circuits and Systems
Brigner, Wesley H.; Hassan, Naimul; Hu, Xuan; Bennett, Christopher; Garcia-Sanchez, Felipe; Marinella, Matthew; Incorvia, Jean A.C.; Friedman, Joseph S.
Neuromorphic computing promises revolutionary improvements over conventional systems for applications that process unstructured information. To fully realize this potential, neuromorphic systems should exploit the biomimetic behavior of emerging nanodevices. In particular, exceptional opportunities are provided by the non-volatility and analog capabilities of spintronic devices. While spintronic devices that emulate neurons have been previously proposed, they require complementary metal-oxide semiconductor (CMOS) technology to function. In turn, this significantly increases the power consumption, fabrication complexity, and device area of a single neuron. This work reviews three previously proposed CMOS-free spintronic neurons designed to resolve this issue.
A 0.2-2 GHz digitally programmable RF delay element based on a time-interleaved multi-stage switched-capacitor (TIMS-SC) approach is presented. The proposed approach enables hundreds of ns of broadband RF delay by employing sample time expansion in multiple stages of switched-capacitor storage elements. The delay element was implemented in a 45 nm SOI CMOS process and achieves a 2.55-448.6 ns programmable delay range with < 0.12% delay variation across 1.8 GHz of bandwidth at maximum delay, 2.42 ns programmable delay steps, and 330 ns/mm² area efficiency. The device achieves 24 dB gain, 7.1 dB noise figure, and consumes 80 mW from a 1 V supply with an active area of 1.36 mm².
Metasurface lenses are fabricated using membrane projection lithography following a CMOS-compatible process flow. The lenses are 10-mm in diameter and employ 3-dimensional unit cells designed to function in the mid-infrared spectral range.
Femtosecond laser electronic excitation tagging (FLEET) is a powerful unseeded velocimetry technique typically used to measure one component of velocity along a line, or two or three components from a dot. In this Letter, we demonstrate a dotted-line FLEET technique which combines the dense profile capability of a line with the ability to perform two-component velocimetry with a single camera on a dot. Our set-up uses a single beam path to create multiple simultaneous spots, more than previously achieved in other FLEET spot configurations. We perform dotted-line FLEET measurements downstream of a highly turbulent, supersonic nitrogen free jet. Dotted-line FLEET is created by focusing light transmitted by a periodic mask with rectangular slits of 1.6 × 40 mm² and an edge-to-edge spacing of 0.5 mm, then focusing the imaged light at the measurement region. Up to seven symmetric dots spaced approximately 0.9 mm apart, with mean full-width at half maximum diameters between 150 and 350 µm, are simultaneously imaged. Both streamwise and radial velocities are computed and presented in this Letter.
Density fluctuations in compressible turbulent boundary layers cause aero-optical distortions that affect the performance of optical systems such as sensors and lasers. The development of models for predicting the aero-optical distortions relies on theory and reference data that can be obtained from experiments and time-resolved simulations. This paper reports on wall-modeled large-eddy simulations of turbulent boundary layers over a flat plate at Mach 3.5, 7.87, and 13.64. The conditions for the Mach 3.5 case match those for the DNS presented by Miller et al. [1]. The Mach 7.87 simulation matches the conditions inside the Hypersonic Wind Tunnel at Sandia National Laboratories. For the Mach 13.64 case, the conditions inside the Arnold Engineering Development Complex Hypervelocity Tunnel 9 are matched. Overall, adequate agreement of the velocity and temperature as well as Reynolds stress profiles with reference data from direct numerical simulations is obtained for the different Mach numbers. For all three cases, the normalized root-mean-square optical path difference was computed and compared with data obtained from the reference direct numerical simulations and experiments, as well as with predictions obtained from a semi-analytical relationship developed at the University of Notre Dame. Above Mach 5, the normalized path difference obtained from the simulations is above the model prediction. This provides motivation for future work aimed at evaluating the assumptions behind the Notre Dame model for hypersonic boundary layer flows.
Metal additive manufacturing allows for the fabrication of parts at the point of use as well as the manufacture of parts with complex geometries that would be difficult to manufacture via conventional methods (milling, casting, etc.). Additively manufactured parts are likely to contain internal defects due to the melt pool, powder material, and laser velocity conditions when printing. Two different types of defects were present in the CT scans of printed AlSi10Mg dogbones: spherical porosity and irregular porosity. Identification of these pores via a machine learning approach (i.e., support vector machines, convolutional neural networks, k-nearest neighbors classifiers) could be helpful for part qualification and inspections. The machine learning approach aims to segment the regions of porosity and label the type of porosity present. The results showed that a combined approach of Canny edge detection and a classification-based machine learning model (k-nearest neighbors or support vector machine) outperformed the convolutional neural network in segmenting and labeling different types of porosity.
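A minimal sketch of the combined approach (Canny edge detection to segment candidate pores, then a k-nearest neighbors classifier on simple shape features) using scikit-image and scikit-learn; the shape features, their synthetic training distributions, and the toy CT slice are assumptions, not the study's data.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.feature import canny
from skimage.measure import label, regionprops
from sklearn.neighbors import KNeighborsClassifier

def pore_features(ct_slice, sigma=1.0):
    """Segment candidate pores with Canny edges (filled) and extract simple
    shape features (area, eccentricity, solidity) for each region."""
    filled = binary_fill_holes(canny(ct_slice, sigma=sigma))
    regions = [r for r in regionprops(label(filled)) if r.area >= 5]
    return np.array([[r.area, r.eccentricity, r.solidity] for r in regions])

# Illustrative training data: shape features with labels
# 0 = spherical porosity, 1 = irregular porosity (real labels would come from annotation).
rng = np.random.default_rng(3)
spherical = np.column_stack([rng.normal(40, 10, 100),
                             rng.uniform(0.0, 0.4, 100), rng.uniform(0.9, 1.0, 100)])
irregular = np.column_stack([rng.normal(120, 40, 100),
                             rng.uniform(0.6, 1.0, 100), rng.uniform(0.4, 0.8, 100)])
X = np.vstack([spherical, irregular])
y = np.array([0] * 100 + [1] * 100)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Classify regions found in a toy slice containing one small, round pore.
test_slice = np.ones((128, 128))
rr, cc = np.ogrid[:128, :128]
test_slice[(rr - 40) ** 2 + (cc - 50) ** 2 < 12] = 0.2
feats = pore_features(test_slice)
if feats.size:
    print("predicted pore types:", knn.predict(feats))
```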
Frequent changes in penetration levels of distributed energy resources (DERs) and grid control objectives have caused the maintenance of accurate and reliable grid models for behind-the-meter (BTM) photovoltaic (PV) system impact studies to become an increasingly challenging task. At the same time, high adoption rates of advanced metering infrastructure (AMI) devices have improved load modeling techniques and have enabled the application of machine learning algorithms to a wide variety of model calibration tasks. Therefore, we propose that these algorithms can be applied to improve the quality of the input data and grid models used for PV impact studies. In this paper, these potential improvements were assessed for their ability to improve the accuracy of locational BTM PV hosting capacity analysis (HCA). Specifically, the voltage- and thermal-constrained hosting capacities of every customer location on a distribution feeder (1,379 in total) were calculated every 15 minutes for an entire year before and after each calibration algorithm or load modeling technique was applied. Overall, the HCA results were found to be highly sensitive to the various modeling deficiencies under investigation, illustrating the opportunity for more data-centric/model-free approaches to PV impact studies.
Software sustainability is critical for Computational Science and Engineering (CSE) software. Measuring sustainability is challenging because sustainability consists of many attributes. One factor that impacts software sustainability is the complexity of the source code. This paper introduces an approach for utilizing complexity data, with a focus on hotspots of and changes in complexity, to assist developers in performing code reviews and inform project teams about longer-term changes in sustainability and maintainability from the perspective of cyclomatic complexity. We present an analysis of data associated with four real-world pull requests to demonstrate how the metrics may help guide and inform the code review process and how the data can be used to measure changes in complexity over time.
Centered on modern C++ and the SYCL standard for heterogeneous programming, Data Parallel C++ (DPC++) and Intel's oneAPI software ecosystem aim to lower the barrier to entry for the use of accelerators like FPGAs in diverse applications. In this work, we consider the usage of FPGAs for scientific computing, in particular with a multigrid solver, MueLu. We report on early experiences implementing kernels of the solver in DPC++ for execution on Stratix 10 FPGAs, and we evaluate several algorithmic design and implementation choices. These choices not only impact performance, but also shed light on the capabilities and limitations of DPC++ and oneAPI.
Dynamical systems subject to intermittent contact are often modeled with piecewise-smooth contact forces. However, the discontinuous nature of the contact can cause inaccuracies in numerical results or failure in numerical solvers. Representing the piecewise contact force with a continuous and smooth function can mitigate these problems, but not all continuous representations may be appropriate for this use. In this work, five representations used by previous researchers (polynomial, rational polynomial, hyperbolic tangent, arctangent, and logarithm-arctangent functions) are studied to determine which ones most accurately capture nonlinear behaviors including super- and subharmonic resonances, multiple solutions, and chaos. The test case is a single-DOF forced Duffing oscillator with freeplay nonlinearity, solved using direct time integration. This work intends to expand on past studies by determining the limits of applicability for each representation and what numerical problems may occur.
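A minimal sketch of one such smooth representation: the piecewise-linear freeplay contact force replaced by a hyperbolic tangent approximation inside a forced Duffing oscillator integrated with scipy; all parameter values and the particular smoothing form are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the study's values)
zeta, wn = 0.02, 1.0          # damping ratio, linear natural frequency
beta = 0.5                    # cubic (Duffing) stiffness
k_c, delta = 5.0, 0.2         # contact stiffness and freeplay gap
F, Omega = 0.3, 1.2           # forcing amplitude and frequency
alpha = 200.0                 # tanh smoothing sharpness

def contact_force(x):
    """Smooth (tanh) stand-in for the piecewise-linear freeplay force:
    zero inside the gap |x| <= delta, stiffness k_c outside it."""
    return 0.5 * k_c * ((x - delta) * (1 + np.tanh(alpha * (x - delta)))
                        + (x + delta) * (1 - np.tanh(alpha * (x + delta))))

def rhs(t, y):
    x, v = y
    acc = (F * np.cos(Omega * t) - 2 * zeta * wn * v - wn ** 2 * x
           - beta * x ** 3 - contact_force(x))
    return [v, acc]

sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], max_step=0.01, rtol=1e-8, atol=1e-10)
# Discard the transient and inspect the steady-state response amplitude.
steady = sol.y[0][sol.t > 300.0]
print("steady-state amplitude estimate:", 0.5 * (steady.max() - steady.min()))
```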
Integrating recent advancements in resilient algorithms and techniques into existing codes is a singular challenge in fault tolerance - in part due to the underlying complexity of implementing resilience in the first place, but also due to the difficulty introduced when integrating the functionality of a standalone new strategy with the preexisting resilience layers of an application. We propose that the answer is not to build integrated solutions for users, but runtimes designed to integrate into a larger comprehensive resilience system and thereby enable the necessary jump to multi-layered recovery. Our work designs, implements, and verifies one such comprehensive system of runtimes. Utilizing Fenix, a process resilience tool with integration into preexisting resilience systems as a design priority, we update Kokkos Resilience and the use pattern of VeloC to support application-level integration of resilience runtimes. Our work shows that designing integrable systems rather than integrated systems allows for user-designed optimization and upgrading of resilience techniques while maintaining the simplicity and performance of all-in-one resilience solutions. More application-specific choice in resilience strategies allows for better long-term flexibility, performance, and - importantly - simplicity.
Unpredictable disturbances with dynamic trajectories, such as extreme weather events and cyber attacks, require adaptive, cyber-physical special protection schemes to mitigate cascading impacts in the electric grid. A harmonized automatic relay mitigation of nefarious intentional events (HARMONIE) special protection scheme (SPS) is being developed to address that need. However, to evaluate the HARMONIE-SPS performance in classifying system disturbances and mitigating consequences, a cyber-physical testbed is required to further develop and validate the methodology. In this paper, we present a design for a co-simulation testbed leveraging the SCEPTRE™ platform and the real-time digital simulator (RTDS). The integration of these two platforms is detailed, as well as the unique, specific needs for testing HARMONIE-SPS within the environment. Results are presented from tests involving a WSCC 9-bus system with different load-shedding scenarios of varying cyber-physical impact.
To keep pace with the demand for innovation through scientific computing, modern scientific software development is increasingly reliant upon a rich and diverse ecosystem of software libraries and toolchains. Research software engineers (RSEs) responsible for that infrastructure perform highly integrative work, acting as a bridge between the hardware, the needs of researchers, and the software layers situated between them; relatively little, however, has been written about the role played by RSEs in that work and what support they need to thrive. To that end, we present a two-part report on the development of half-precision floating point support in the Kokkos Ecosystem. Half-precision computation is a promising strategy for increasing performance in numerical computing and is particularly attractive for emerging application areas (e.g., machine learning), but developing practicable, portable, and user-friendly abstractions is a nontrivial task. In the first half of the paper, we conduct an engineering study on the technical implementation of the Kokkos half-precision scalar feature and showcase experimental results; in the second half, we offer an experience report on the challenges and lessons learned during feature development by the first author. We hope our study provides a holistic view on scientific library development and surfaces opportunities for future studies into effective strategies for RSEs engaged in such work.
Proceedings of ISAV 2022: IEEE/ACM International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
This paper reports on Catalyst usability and initial adoption by SPARC analysts. The use case approach highlights the analysts' perspective. Impediments to adoption can be due to deficiencies in software capabilities, or analysts may identify mundane inconveniences and barriers that prevent them from fully leveraging Catalyst. With that said, for many analyst tasks Catalyst provides enough relative advantage that they have begun applying it in their production work, and they recognize the potential for it to solve problems they currently struggle with. The findings in this report include specific issues and minor bugs in ParaView Python scripting, which are viewed as having straightforward solutions, as well as a broader adoption analysis.
The precise estimation of the performance loss rate (PLR) of photovoltaic (PV) systems is vital for reducing investment risks and increasing the bankability of the technology. Until recently, the PLR of fielded PV systems was mainly estimated through the extraction of a linear trend from a time series of performance indicators. However, operating PV systems exhibit failures and performance losses that cause variability in the performance and may bias the PLR results obtained from linear trend techniques. Change-point (CP) methods were thus introduced to identify nonlinear trend changes and behaviour. The aim of this work is to perform a comparative analysis among different CP techniques for estimating the annual PLR of eleven grid-connected PV systems installed in Cyprus. Outdoor field measurements over an 8-year period (June 2006-June 2014) were used for the analysis. The results obtained when applying different CP algorithms to the performance ratio time series (aggregated into monthly blocks) demonstrated that the extracted trend is not always linear and can exhibit nonlinearities. The application of different CP methods resulted in PLR values that differ by up to 0.85% per year (for the same number of CPs/segments).
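A minimal sketch of one change-point workflow on a monthly performance-ratio series using the ruptures library (PELT with an L2 cost) followed by per-segment linear trend fits; the synthetic data, penalty value, and PLR normalization are illustrative assumptions rather than the Cyprus system data or the specific CP methods compared in the paper.

```python
import numpy as np
import ruptures as rpt

# Synthetic monthly performance-ratio series (8 years) with a trend change after year 4;
# real inputs would be the aggregated monthly performance ratio values.
rng = np.random.default_rng(4)
months = np.arange(96)
pr = np.where(months < 48, 0.85 - 0.0004 * months, 0.83 - 0.0010 * (months - 48))
pr = pr + rng.normal(0, 0.005, months.size)

# Change-point detection (PELT with an L2 cost); the penalty controls how many
# change points are allowed and would need tuning on real data.
breakpoints = rpt.Pelt(model="l2").fit(pr).predict(pen=0.01)

# Fit a linear trend per segment and convert the slope to an annual PLR (% / year).
start = 0
for end in breakpoints:
    seg_t, seg_pr = months[start:end], pr[start:end]
    slope = np.polyfit(seg_t, seg_pr, 1)[0]
    plr = 100 * 12 * slope / seg_pr.mean()
    print(f"months {start}-{end - 1}: PLR = {plr:.2f} %/yr")
    start = end
```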
Simple but mission-critical internet-based applications that require extremely high reliability, availability, and verifiability (e.g., auditability) could benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is normally publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. In this work, we investigate using MPC techniques to protect the privacy of a blockchain computation. While our main goal is to hide both the data and the computed function itself, we also consider the standard MPC setting where the function is public. We describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a blockchain MPC architecture and system. The GABLE architecture specifies the roles and capabilities of the players. GABLE includes two approaches for implementing MPC over blockchain: Garbled Circuits (GC), evaluating universal circuits, and Garbled Finite State Automata (GFSA). We formally model and prove the security of GABLE implemented over garbling schemes, a popular abstraction of GC and GFSA from (Bellare et al., CCS 2012). We analyze in detail the performance (including Ethereum gas costs) of both approaches and discuss the trade-offs. We implement a simple prototype of GABLE and report on the implementation issues and experience.
Conference Proceedings of the Society for Experimental Mechanics Series
Singh, Aabhas; Wielgus, Kayla M.; Dimino, Ignazio; Kuether, Robert J.; Allen, Matthew S.
Morphing wings have great potential to dramatically improve the efficiency of future generations of aircraft and to reduce noise and emissions. Among many camber morphing wing concepts, shape changing fingerlike mechanisms consist of components, such as torsion bars, bushings, bearings, and joints, all of which exhibit damping and stiffness nonlinearities that are dependent on excitation amplitude. These nonlinearities make the dynamic response difficult to model accurately with traditional simulation approaches. As a result, at high excitation levels, linear finite element models may be inaccurate, and a nonlinear modeling approach is required to capture the necessary physics. This work seeks to better understand the influence of nonlinearity on the effective damping and natural frequency of the morphing wing through the use of quasi-static modal analysis and model reduction techniques that employ multipoint constraints (i.e., spider elements). With over 500,000 elements and 39 frictional contact surfaces, this represents one of the most complicated models to which these methods have been applied to date. The results to date are summarized and lessons learned are highlighted.
In accident scenarios involving release of tritium during handling and storage, the level of risk to human health is dominated by the extent to which radioactive tritium is oxidized to the water form (T2O or THO). At some facilities, tritium inventories consist of very small quantities stored at sub-atmospheric pressure, which means that tritium release accident scenarios will likely produce concentrations in air that are well below the lower flammability limit. It is known that isotope effects on reaction rates should result in slower oxidation rates for heavier isotopes of hydrogen, but this effect has not previously been quantified for oxidation at concentrations well below the lower flammability limit for hydrogen. This work describes hydrogen isotope oxidation measurements in an atmospheric tube furnace reactor. These measurements consist of five concentration levels between 0.01% and 1% protium or deuterium and two residence times. Oxidation is observed to occur between about 550°C and 800°C, with higher levels of conversion achieved at lower temperatures for protium with respect to deuterium at the same volumetric inlet concentration and residence time. Computational fluid dynamics simulations of the experiments were used to customize reaction orders and Arrhenius parameters in a 1-step oxidation mechanism. The trends in the rates for protium and deuterium are extrapolated based on guidance from literature to produce kinetic rate parameters appropriate for tritium oxidation at low concentrations.
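A minimal sketch of fitting Arrhenius parameters for a 1-step oxidation mechanism to conversion-versus-temperature data under an isothermal, first-order plug-flow approximation of the tube furnace; the data points, residence time, and assumed reaction order are illustrative, and the paper's actual parameters come from CFD-based calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/mol/K

# Illustrative data: fractional conversion of dilute H2 vs temperature for one
# residence time; real inputs would be the tube-furnace measurements.
T = np.array([550., 600., 650., 700., 750., 800.]) + 273.15   # K
conversion = np.array([0.05, 0.18, 0.45, 0.75, 0.92, 0.99])
tau = 1.0                                                     # residence time, s (assumed)

def pfr_conversion(T, log10_A, Ea):
    """First-order, isothermal plug-flow approximation: X = 1 - exp(-k*tau),
    with k = A * exp(-Ea / (R*T)). The reaction order is an assumption of this sketch."""
    k = 10.0 ** log10_A * np.exp(-Ea / (R * T))
    return 1.0 - np.exp(-k * tau)

popt, pcov = curve_fit(pfr_conversion, T, conversion, p0=[8.0, 150e3])
print(f"A = {10 ** popt[0]:.3e} 1/s, Ea = {popt[1] / 1000:.1f} kJ/mol")
```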
The installation of digital sensors, such as advanced metering infrastructure (AMI) meters, has provided the means to implement a wide variety of techniques to increase visibility into the distribution system, including the ability to calibrate utility models using data-driven algorithms. One challenge in maintaining accurate and up-to-date distribution system models is identifying changes and event occurrences that happen during the year, such as customers who have changed phases due to maintenance or other events. This work proposes a method for the detection of phase change events that utilizes techniques from an existing phase identification algorithm. An ensemble step is used to obtain predicted phases for windows of data, allowing the predicted phase of each customer to be tracked over time. The proposed algorithm was tested on four utility datasets as well as a synthetic dataset. The synthetic tests showed that the algorithm was capable of accurately detecting true phase change events while limiting the number of false-positive events flagged. In addition, the algorithm identified possible phase change events in two real datasets.
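A minimal sketch of the windowed, over-time view of phase predictions: a stand-in correlation-based phase assignment is computed per weekly window and a change is flagged when consecutive windows disagree; the synthetic AMI data, the correlation predictor, and the flagging rule are assumptions, not the paper's phase identification algorithm or ensemble step.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic AMI voltage magnitudes: three phase reference profiles sharing a daily shape,
# and one customer that switches from phase A to phase C midway through the year.
n = 35040                                        # 15-minute intervals in one year
base = 240 + 2 * np.sin(2 * np.pi * np.arange(n) / 96)
phase_refs = base + rng.normal(0, 1.0, (3, n))
customer = np.where(np.arange(n) < n // 2, phase_refs[0], phase_refs[2]) \
    + rng.normal(0, 0.3, n)

WINDOW = 96 * 7                                  # one-week windows of 15-minute data

def window_phase(cust, refs):
    """Stand-in for the phase identification step: pick the reference phase whose
    voltage profile is most correlated with the customer over the window."""
    corr = [np.corrcoef(cust, r)[0, 1] for r in refs]
    return int(np.argmax(corr))

labels = [window_phase(customer[s:s + WINDOW], phase_refs[:, s:s + WINDOW])
          for s in range(0, n - WINDOW + 1, WINDOW)]

# Flag a candidate phase change whenever consecutive windows disagree.
changes = [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
print(f"{len(labels)} windows; candidate change at window index:", changes)
```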
We have extended the computational singular perturbation (CSP) method to differential algebraic equation (DAE) systems and demonstrated its application in a heterogeneous-catalysis problem. The extended method obtains the CSP basis vectors for DAEs from a reduced Jacobian matrix that takes the algebraic constraints into account. We use a canonical problem in heterogeneous catalysis, the transient continuous stirred tank reactor (T-CSTR), for illustration. The T-CSTR problem is modelled fundamentally as an ordinary differential equation (ODE) system, but it can be transformed to a DAE system if one approximates typically fast surface processes using algebraic constraints for the surface species. We demonstrate the application of CSP analysis for both ODE and DAE constructions of a T-CSTR problem, illustrating the dynamical response of the system in each case. We also highlight the utility of the analysis in commenting on the quality of any particular DAE approximation built using the quasi-steady state approximation (QSSA), relative to the ODE reference case.
As transistors have been scaled over the past decade, modern systems have become increasingly susceptible to faults. Increased transistor densities and lower capacitances make a particle strike more likely to cause an upset. At the same time, complex computer systems are increasingly integrated into safety-critical systems such as autonomous vehicles. These two trends make the study of system reliability and fault tolerance essential for modern systems. To analyze and improve system reliability early in the design process, new tools are needed for RTL fault analysis. This paper proposes Eris, a novel framework to identify vulnerable components in hardware designs through fault-injection and fault propagation tracking. Eris builds on ESSENT - a fast C/C++ RTL simulation framework - to provide fault injection, fault tracking, and control-flow deviation detection capabilities for RTL designs. To demonstrate Eris' capabilities, we analyze the reliability of the open source Rocket Chip SoC by randomly injecting faults during thousands of runs on four microbenchmarks. As part of this analysis we measure the sensitivity of different hardware structures to faults based on the likelihood of a random fault causing silent data corruption, unrecoverable data errors, program crashes, and program hangs. We detect control flow deviations and determine whether or not they are benign. Additionally, using Eris' novel fault-tracking capabilities we are able to find 78% more vulnerable components in the same number of simulations compared to RTL-based fault injection techniques without these capabilities. We will release Eris as an open-source tool to aid future research into processor reliability and hardening.
Scalable coherent control hardware for quantum information platforms is rapidly growing in priority as their number of available qubits continues to increase. As these systems scale, more calibration steps are needed, leading to challenges with system instability as calibrated parameters drift. Moreover, the sheer amount of data required to run circuits with large depth tends to balloon, especially when implementing state-of-the-art dynamical-decoupling gates which require advanced modulation techniques. We present a control system that addresses these challenges for trapped-ion systems, through a combination of novel features that eliminate the need for manual bookkeeping, reduction in data transfer bandwidth requirements via gate compression schemes, and other automated error handling techniques. Moreover, we describe an embedded pulse compiler that applies staged optimization, including compressed intermediate representations of parsed output products, performs in-situ mutation of compressed gate data to support high-level algorithmic feedback to account for drift, and can be run entirely on chip.
The penetration of renewable energy resources (RER) and energy storage systems (ESS) into the power grid has accelerated in recent times due to aggressive emission and RER penetration targets. An integrated resource planning (IRP) framework can help ensure long-term resource adequacy while satisfying RER integration and emission reduction targets in a cost-effective and reliable manner. In this paper, we present pIRP (probabilistic Integrated Resource Planning), an open-source Python-based software tool designed for optimal portfolio planning for an RER- and ESS-rich future grid and for addressing the capacity expansion problem. The tool, which is planned for public release, combines ESS and RER modeling capabilities with enhanced uncertainty handling, making it one of the more advanced non-commercial IRP tools currently available. Additionally, the tool is equipped with an intuitive graphical user interface and expansive plotting capabilities. Impacts of uncertainties in the system are captured using Monte Carlo simulations, letting users analyze hundreds of scenarios with detailed scenario reports. A linear-programming-based architecture is adopted, which ensures sufficiently fast solution times while considering hundreds of scenarios and characterizing profile risks at varying levels of RER and ESS penetration. Results for a test case using data from parts of the Eastern Interconnection are provided in this paper to demonstrate the capabilities offered by the tool.
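A minimal sketch of the linear-programming flavor of capacity expansion: choose installed capacities and dispatch to meet hourly demand at minimum annualized cost; the two candidate resources, cost coefficients, and single representative day below are illustrative assumptions and omit the tool's ESS modeling, Monte Carlo scenarios, and risk characterization.

```python
import numpy as np
import cvxpy as cp

hours = 24
t = np.arange(hours)
demand = 800 + 300 * np.sin(2 * np.pi * (t - 6) / 24).clip(min=0)      # MW (illustrative)
solar_cf = np.clip(np.sin(np.pi * (t - 6) / 12), 0, None)              # capacity factor

# Decision variables: installed capacities and hourly dispatch (a single representative
# day stands in for the tool's scenario ensemble).
cap_solar = cp.Variable(nonneg=True)
cap_gas = cp.Variable(nonneg=True)
gen_gas = cp.Variable(hours, nonneg=True)

capex = 80.0 * cap_solar + 50.0 * cap_gas        # annualized $/MW (illustrative)
opex = 30.0 * cp.sum(gen_gas)                    # $/MWh fuel cost (illustrative)

constraints = [
    cap_solar * solar_cf + gen_gas >= demand,    # meet demand every hour
    gen_gas <= cap_gas,                          # dispatch within installed capacity
]

prob = cp.Problem(cp.Minimize(capex + opex), constraints)
prob.solve()
print(f"solar: {cap_solar.value:.0f} MW, gas: {cap_gas.value:.0f} MW, cost: {prob.value:.0f}")
```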
Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
This work investigates both the avalanche behavior and the failure mechanism of 3 kV GaN-on-GaN vertical P-N diodes that were fabricated and then tested under unclamped inductive switching (UIS) stress. The goal of this study is to use the particular avalanche characteristics and the failure mechanism to identify issues with the field termination and then provide feedback to improve the device design. DC breakdown is measured at different temperatures to confirm avalanche breakdown. The diodes' avalanche robustness is measured on-wafer using a UIS test set-up integrated with a wafer chuck and a CCD camera. Post-failure analysis of the diodes is performed using SEM and optical microscopy to gain insight into the device failure physics.
Creation of streaming video stimuli that allow for strict experimental control while providing ease of scene manipulation is difficult to achieve but desired by researchers seeking to approach ecological validity in contexts that involve processing streaming visual information. To that end, we propose leveraging video game modding tools as a method of creating research-quality stimuli. As a pilot effort, we used a video game sandbox tool (Garry's Mod) to create three streaming video scenarios designed to mimic video feeds that physical security personnel might observe. All scenarios required participants to identify the presence of a threat appearing during the video feed. The scenarios differed in level of complexity: one required only location monitoring, one required location and action monitoring, and one required location, action, and conjunction monitoring, in which an action was considered a threat only when performed by a certain character model. While there was no behavioral effect of scenario in terms of accuracy or response times, in all scenarios we found evidence of a P300 when comparing responses to threatening stimuli with responses to standard stimuli. Results therefore indicate that sufficient levels of experimental control may be achieved to allow for the precise timing required for ERP analysis. Thus, we demonstrate the feasibility of using existing modding tools to create video scenarios amenable to neuroimaging analysis.
The latest high temperature (HT) microcontrollers and memory technology have been investigated for the purpose of enhancing downhole instrumentation capabilities at temperatures above 210°C. As part of the effort, five microcontrollers (Honeywell HT83C51, RelChip RC10001, Texas Instruments SM470R1B1M-HT, SM320F2812-HT, SM320F28335-HT) and one memory chip (RelChip RC2110836) were evaluated at their rated temperatures for a period of one month to determine life expectancy and performance. Pulse-rate measurements of the integrated circuits and internal memory scans were performed during testing by remotely located auxiliary components. This paper describes challenges encountered in the operation and HT testing of these components. Long-term HT test results show the variation in power consumption and packaging degradation. The work described in this paper improves downhole instrumentation by enabling greater sensor counts and improving data accuracy and transfer rates at temperatures between 210°C and 300°C.