Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
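For intuition, a minimal sketch of the mirroring idea follows: a circuit is followed by its layer-by-layer inverse, so that in the absence of errors the net operation is the identity and the ideal output is known in advance. The gate set and layer representation below are illustrative assumptions; the paper's benchmarks also insert randomized Pauli layers and other elements omitted here.

```python
# Minimal sketch of circuit mirroring: append the layer-by-layer inverse of a
# circuit so the ideal net operation is the identity. Illustrative only.

INVERSES = {"h": "h", "x": "x", "cx": "cx", "s": "sdg", "sdg": "s", "t": "tdg", "tdg": "t"}

def mirror(circuit):
    """circuit: list of layers; each layer is a list of (gate, qubits) tuples."""
    reflected = [[(INVERSES[g], q) for (g, q) in layer] for layer in reversed(circuit)]
    return circuit + reflected

base = [
    [("h", (0,)), ("x", (1,))],
    [("cx", (0, 1))],
    [("t", (0,)), ("s", (1,))],
]
for layer in mirror(base):
    print(layer)
```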
Many, if not all, Waste Management Organisation (WMO) programs will include criticality safety. Because criticality safety in the long term, i.e. considered over post-closure timescales in dedicated disposal facilities, is a challenge unique to geological disposal, there is limited opportunity for sharing experience within an individual organisation or country. Sharing experience and knowledge between WMOs is therefore beneficial for understanding where approaches are similar, where they differ, and the reasons for this. To achieve this benefit, a project on Post-Closure Criticality Safety has been established through the Implementing Geological Disposal - Technology Platform, with the overall aim of facilitating the sharing of this knowledge. The project currently has 11 participating nations, including the United States, and this paper presents the current position in the United States.
Simple but mission-critical internet-based applications that require extremely high reliability, availability, and verifiability (e.g., auditability) could benefit from running on robust public programmable blockchain platforms such as Ethereum. Unfortunately, program code running on such blockchains is normally publicly viewable, rendering these platforms unsuitable for applications requiring strict privacy of application code, data, and results. In this work, we investigate using secure multiparty computation (MPC) techniques to protect the privacy of a blockchain computation. While our main goal is to hide both the data and the computed function itself, we also consider the standard MPC setting where the function is public. We describe GABLE (Garbled Autonomous Bots Leveraging Ethereum), a blockchain MPC architecture and system. The GABLE architecture specifies the roles and capabilities of the players. GABLE includes two approaches for implementing MPC over blockchain: Garbled Circuits (GC) evaluating universal circuits, and Garbled Finite State Automata (GFSA). We formally model and prove the security of GABLE implemented over garbling schemes, a popular abstraction of GC and GFSA (Bellare et al., CCS 2012). We analyze in detail the performance (including Ethereum gas costs) of both approaches and discuss the trade-offs. We implement a simple prototype of GABLE and report on the implementation issues and experience.
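As background on the garbled-circuit building block, the toy below garbles a single AND gate using SHA-256 as a key-derivation function. It is a pedagogical sketch only, not GABLE's construction, its universal-circuit evaluation, or the garbling schemes of Bellare et al., and it is not secure for real use.

```python
# Toy garbled AND gate: illustrative only, not a secure garbling scheme.
import hashlib
import os
import random

def H(*keys):
    """Derive a 32-byte pad from wire labels (toy key-derivation function)."""
    return hashlib.sha256(b"".join(keys)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One random 16-byte label per logical value on each wire (a, b inputs; c output).
wa = {0: os.urandom(16), 1: os.urandom(16)}
wb = {0: os.urandom(16), 1: os.urandom(16)}
wc = {0: os.urandom(16), 1: os.urandom(16)}

CHECK = bytes(16)  # redundancy so the evaluator can recognize the correct row

# Garble: for every input combination, encrypt the matching output label.
table = [xor(H(wa[a], wb[b]), wc[a & b] + CHECK) for a in (0, 1) for b in (0, 1)]
random.shuffle(table)  # hide which row corresponds to which inputs

def evaluate(label_a, label_b, table):
    """Holding one label per input wire, recover exactly one output label."""
    pad = H(label_a, label_b)
    for row in table:
        plain = xor(pad, row)
        if plain[16:] == CHECK:
            return plain[:16]
    raise ValueError("no row decrypted")

# The evaluator learns only the label encoding AND(1, 1).
print(evaluate(wa[1], wb[1], table) == wc[1])  # True
```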
The National Academies of Sciences, Engineering, and Medicine (NASEM) defines reproducibility as 'obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis,' and replicability as 'obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data' [1]. Due to the increasing number of applications of artificial intelligence and machine learning (AI/ML) in fields such as healthcare and digital medicine, there is a growing need for verifiable AI/ML results, and therefore for reproducible research and replicable experiments. This paper establishes examples of irreproducible AI/ML applications in the medical sciences and quantifies the variance of common AI/ML models (Artificial Neural Network, Naive Bayes, and Random Forest classifiers) on tasks using medical data sets.
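A hedged sketch of one way to quantify such run-to-run variance is shown below: each model family is retrained under different random seeds and the spread of test accuracy is reported. The dataset and model settings are placeholders, not the medical data sets or configurations used in the paper.

```python
# Sketch: quantify run-to-run variance of common classifiers across random seeds.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset
models = {
    "RandomForest": lambda seed: RandomForestClassifier(random_state=seed),
    "NaiveBayes": lambda seed: GaussianNB(),  # no internal randomness; varies only with the split
    "NeuralNet": lambda seed: MLPClassifier(max_iter=500, random_state=seed),
}
for name, make in models.items():
    scores = []
    for seed in range(10):
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
        scores.append(make(seed).fit(Xtr, ytr).score(Xte, yte))
    print(f"{name}: mean accuracy = {np.mean(scores):.3f}, std = {np.std(scores):.3f}")
```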
Neural networks (NN) have become almost ubiquitous in image classification, but in their standard form they produce point estimates with no measure of confidence. Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates through the posterior distribution. As NN are applied in more high-consequence applications, UQ is becoming a requirement. Automating systems can save time and money, but only if the operator can trust what the system outputs. BNN provide a solution to this problem by giving not only accurate predictions and estimates, but also an interval that includes reasonable values within a desired probability. Despite their positive attributes, BNN are notoriously difficult and time-consuming to train. Traditional Bayesian methods use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being too slow. The most common method is variational inference (VI) due to its fast computation, but there are multiple concerns about its efficacy. MCMC is the gold standard and, given enough time, will produce the correct result. VI, alternatively, is an approximation that converges asymptotically. Unfortunately (or fortunately), high-consequence problems often do not live in the land of asymptopia, so solutions like MCMC are preferable to approximations. We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI), where materials of interest can be identified by their unique spectral signatures. This is a challenging field due to the numerous permuting effects that practical collection of HSI has on measured spectra. Both models are trained using out-of-the-box tools on a high-fidelity HSI target detection scene. Both MCMC- and VI-trained BNN perform well overall at target detection on a simulated HSI scene. Splitting the test set predictions into two classes, high-confidence and low-confidence predictions, presents a path to automation. For the MCMC-trained BNN, the high-confidence predictions have a 0.95 probability of detection with a false alarm rate of 0.05 when considering pixels with target abundance of 0.2. The VI-trained BNN has a 0.25 probability of detection for the same, but its performance on high-confidence sets matched MCMC for abundances >0.4. However, the VI-trained BNN on this scene required significant expert tuning to get these results, while MCMC worked immediately. On neither scene was MCMC prohibitively time-consuming, as is often assumed, but the networks we used were relatively small. This paper provides an example of how to utilize the benefits of UQ, but also aims to increase awareness that different training methods can give different results for the same model. If sufficient computational resources are available, the best approach, rather than the fastest or most efficient, should be used, especially for high-consequence problems.
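The confidence-based split described above can be illustrated with a minimal sketch: given posterior predictive draws for each test pixel (here synthetic placeholders rather than the paper's MCMC or VI output), the predictive spread decides which pixels are handled automatically and which are deferred.

```python
# Sketch: split BNN predictions into high- and low-confidence sets using the
# spread of posterior predictive samples. Data and thresholds are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_posterior_draws, n_pixels = 200, 1000
# Placeholder posterior draws of the target probability for each test pixel.
draws = rng.beta(2, 5, size=(n_posterior_draws, n_pixels))

mean_prob = draws.mean(axis=0)   # point prediction per pixel
spread = draws.std(axis=0)       # predictive uncertainty per pixel
high_conf = spread < 0.1         # confident pixels go to automation

detections = mean_prob[high_conf] > 0.5
print(f"{high_conf.mean():.0%} of pixels handled automatically; "
      f"{(~high_conf).sum()} deferred to an analyst")
```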
A crucial component of field testing is the utilization of numerical models to better understand the system and the experimental data being collected. Meshing and modeling field tests is a complex and computationally demanding problem. Hexahedral elements cannot always reproduce experimental dimensions, leading to grid-orientation or geometric errors. Voronoi meshes can match complex geometries without sacrificing orthogonality. As a result, we present here a high-resolution 3D numerical study of the BATS heater test at the WIPP that compares a standard non-deformed Cartesian mesh with a Voronoi mesh against field data collected during the salt heater experiment.
A new set of critical experiments exploring the temperature dependence of the reactivity in a critical assembly is described. In the experiments, the temperature of the critical assembly will be varied to determine the temperature that produces the highest reactivity in the assembly. This temperature is the inversion point of the isothermal reactivity coefficient of the assembly. An analysis of relevant configurations is presented. Existing measurements are described, and an analysis of these experiments is presented. The overall experimental approach is described, as are the modifications to the critical assembly needed to perform the experiments.
This paper applies sensitivity and uncertainty analysis to compare two model alternatives for fuel matrix degradation for performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of this choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and differences in epistemic uncertainty between the alternatives.
Proceedings of Correctness 2022: 6th International Workshop on Software Correctness for HPC Applications, Held in conjunction with SC 2022: The International Conference for High Performance Computing, Networking, Storage and Analysis
Iterative methods for solving linear systems serve as a basic building block for computational science. The computational cost of these methods can be significantly influenced by the round-off errors that accumulate as a result of their implementation in finite precision. In the extreme case, round-off errors that occur in practice can completely prevent an implementation from satisfying the accuracy and convergence behavior prescribed by its underlying algorithm. In the exascale era where cost is paramount, a thorough and rigorous analysis of the delay of convergence due to round-off should not be ignored. In this paper, we use a small model problem and the Jacobi iterative method to demonstrate how the Coq proof assistant can be used to formally specify the floating-point behavior of iterative methods, and to rigorously prove the accuracy of these methods.
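For reference, a plain double-precision sketch of the Jacobi iteration analyzed in the paper is given below; the model problem is a placeholder tridiagonal system, and the sketch makes no attempt to reproduce the Coq formalization of the floating-point behavior.

```python
# Sketch: Jacobi iteration in ordinary double precision.
import numpy as np

def jacobi(A, b, x0, iters):
    D = np.diag(A)            # diagonal entries
    R = A - np.diag(D)        # off-diagonal part (L + U)
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D   # x_{k+1} = D^{-1} (b - (L + U) x_k)
    return x

# Placeholder model problem: 1-D Poisson-style tridiagonal system.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi(A, b, np.zeros(n), iters=500)
print(np.linalg.norm(A @ x - b))   # residual norm after 500 iterations
```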
There is a need to perform offline anomaly detection in count data streams to identify both systemic changes and outliers simultaneously. We propose a new algorithmic method, called the Anomaly Detection Pipeline, which leverages common statistical process control procedures in a novel way to accomplish this. The proposed method does not require user-defined control limits or phase I training data; it automatically identifies regions of stability for improved parameter estimation to support change-point detection. The method does not require data to be normally distributed, and it detects outliers relative to the regimes in which they occur. Our proposed method performs comparably to state-of-the-art change-point detection methods, provides additional capabilities, and is extendable to a larger set of possible data streams than existing methods.
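As a point of reference only, the sketch below shows the kind of statistical process control primitive the pipeline builds on: a Shewhart-style c-chart with Poisson-based limits applied to a count stream. It is not the proposed Anomaly Detection Pipeline; in particular, the stable baseline region here is hand-picked rather than identified automatically.

```python
# Sketch: Shewhart-style c-chart on a count stream (illustrative SPC primitive only).
import numpy as np

rng = np.random.default_rng(1)
counts = np.concatenate([rng.poisson(5, 200), rng.poisson(12, 200)])  # level shift at t = 200
counts[50] = 40                                                       # an isolated outlier

baseline = counts[:100]              # stand-in for an automatically chosen stable region
center = baseline.mean()
ucl = center + 3 * np.sqrt(center)   # Poisson-based 3-sigma control limits
lcl = max(center - 3 * np.sqrt(center), 0)

flags = (counts > ucl) | (counts < lcl)
print("flagged indices:", np.flatnonzero(flags)[:10], "...")
```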
Software is ubiquitous in society, but understanding it, especially without access to source code, is both non-trivial and critical to security. A specialized group of cyber defenders conducts reverse engineering (RE) to analyze software. The expertise-driven process of software RE is not well understood, especially from the perspective of workflows and automated tools. We conducted a task analysis to explore the cognitive processes that analysts follow when using static techniques on binary code. Experienced analysts were asked to statically find a vulnerability in a small binary that could allow for unverified access to root privileges. Results show a highly iterative process with commonly used cognitive states across participants of varying expertise, but little standardization in process order and structure. A goal-centered analysis offers a different perspective about dominant RE states. We discuss implications about the nature of RE expertise and opportunities for new automation to assist analysts using static techniques.
This work presents an experimental investigation of the deformation and breakup of water drops behind conical shock waves. A conical shock is generated by firing a bullet at Mach 4.5 past a vertical column of drops with a mean initial diameter of 192 µm. The time-resolved drop position and maximum transverse dimension are characterized using backlit stereo videos taken at 500 kHz. A Reynolds-Averaged Navier-Stokes (RANS) simulation of the bullet is used to estimate the gas density and velocity fields experienced by the drops. Classical correlations for breakup times derived from planar-shock/drop interactions are evaluated. Predicted drop breakup times are found to be in error by a factor of three or more, indicating that existing correlations are inadequate for predicting the response to the three-dimensional relaxation of the velocity and thermodynamic properties downstream of the conical shock. Next, the Taylor Analogy Breakup (TAB) model, which solves a transient equation for drop deformation, is evaluated. TAB predictions for drop diameter calculated using a dimensionless constant of C2 = 2, as compared to the accepted value of C2 = 2/3, are found to agree within the confidence bounds of the ensemble-averaged experimental values for all drops studied. These results suggest that the three-dimensional relaxation effects behind conical shock waves alter the drop response in comparison to a step change across a planar shock, and that future models describing the interaction between a drop and a non-planar shock wave should account for flow field variations.
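For context, the TAB model treats the drop as a forced, damped linear oscillator in a normalized deformation $y$; one standard form, whose naming of the model constants differs from the $C_2$ used above, is

$$\ddot{y} \;=\; \frac{C_F}{C_b}\,\frac{\rho_g u^2}{\rho_l r^2} \;-\; \frac{C_k \sigma}{\rho_l r^3}\,y \;-\; \frac{C_d \mu_l}{\rho_l r^2}\,\dot{y},$$

where $\rho_g$ and $\rho_l$ are the gas and liquid densities, $u$ the relative velocity, $r$ the drop radius, $\sigma$ the surface tension, $\mu_l$ the liquid viscosity, and $C_F$, $C_b$, $C_k$, $C_d$ dimensionless model constants; breakup is typically declared once $y$ exceeds unity.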
This chapter deals with experimental dynamic substructures, which are reduced-order models that can be coupled with each other or with finite-element-derived substructures to estimate the system response of the coupled substructures. A unifying theoretical framework in the physical, modal, or frequency domain is reviewed with examples. The major issues that have hindered experimentally based substructures are addressed. An example using the transmission simulator method, which overcomes the major historical difficulties, is demonstrated. Guidelines for transmission simulator design are presented.
Conference Proceedings of the Society for Experimental Mechanics Series
Saunders, Brian E.; Vasconcellos, Rui M.G.; Kuether, Robert J.; Abdelkefi, Abdessattar
Dynamical systems containing contact/impact between parts can be modeled as piecewise-smooth reduced-order models. The most common example is freeplay, which can manifest as a loose support, worn hinges, or backlash. Freeplay causes very complex, nonlinear responses in a system that range from isolated resonances to grazing bifurcations to chaos. This can be an issue because classical solution methods, such as direct time integration (e.g., Runge-Kutta) or harmonic balance methods, can fail to accurately detect some of the nonlinear behavior or fail to run altogether. To deal with this limitation, researchers often approximate piecewise freeplay terms in the equations of motion using continuous, fully smooth functions. While this strategy can be convenient, it may not always be appropriate. For example, past investigation of freeplay in an aeroelastic control surface showed that, compared to the exact piecewise representation, some approximations are not as effective at capturing freeplay behavior as others. Another potential issue is the effectiveness of continuous representations at capturing grazing contacts and grazing-type bifurcations. These can cause the system to transition to high-amplitude responses with frequent contact/impact and can be particularly damaging. In this work, a bifurcation study is performed on a model of a forced Duffing oscillator with freeplay nonlinearity. Various representations are used to approximate the freeplay, including polynomial, absolute-value, and hyperbolic-tangent representations. Bifurcation analysis results for each type are compared to results using the exact piecewise-smooth representation computed using MATLAB® Event Location. The effectiveness of each representation is compared and ranked in terms of numerical accuracy, ability to capture multiple response types, ability to predict chaos, and computation time.
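As one concrete example of the representations compared here, the sketch below evaluates the exact piecewise freeplay restoring force alongside a smooth hyperbolic-tangent approximation; the gap width, stiffness, and sharpness parameter are placeholders rather than the values used in the study.

```python
# Sketch: exact piecewise freeplay force vs. a smooth tanh approximation.
import numpy as np

delta = 0.1   # half-width of the freeplay gap (placeholder)
k = 1.0       # support stiffness outside the gap (placeholder)

def freeplay_exact(x):
    """Zero force inside the gap, linear stiffness outside it."""
    return np.where(x > delta, k * (x - delta),
           np.where(x < -delta, k * (x + delta), 0.0))

def freeplay_tanh(x, sharpness=50.0):
    """Smooth approximation: tanh switches blend the two linear branches."""
    return 0.5 * k * ((x - delta) * (1 + np.tanh(sharpness * (x - delta)))
                    + (x + delta) * (1 - np.tanh(sharpness * (x + delta))))

x = np.linspace(-0.3, 0.3, 7)
print(np.round(freeplay_exact(x), 4))
print(np.round(freeplay_tanh(x), 4))
```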
Metal additive manufacturing allows for the fabrication of parts at the point of use as well as the manufacture of parts with complex geometries that would be difficult to produce via conventional methods (milling, casting, etc.). Additively manufactured parts are likely to contain internal defects due to the melt pool, powder material, and laser velocity conditions during printing. Two different types of defects were present in the CT scans of printed AlSi10Mg dogbones: spherical porosity and irregular porosity. Identification of these pores via a machine learning approach (i.e., support vector machines, convolutional neural networks, k-nearest neighbors classifiers) could be helpful for part qualification and inspection. The machine learning approach aims to segment the regions of porosity and label the type of porosity present. The results showed that a combined approach of Canny edge detection and a classification-based machine learning model (k-nearest neighbors or support vector machine) outperformed the convolutional neural network in segmenting and labeling the different types of porosity.
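A hedged sketch of the segment-then-classify idea follows: Canny edge detection locates pore boundaries in a CT slice, simple shape features are computed per pore, and a k-nearest-neighbors classifier labels each pore as spherical or irregular. The file name, feature set, thresholds, and training labels are illustrative assumptions, not the study's data or tuned pipeline.

```python
# Sketch: Canny-based pore segmentation + k-NN porosity-type classification.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def pore_features(ct_slice):
    edges = cv2.Canny(ct_slice, 50, 150)   # pore boundaries (thresholds are placeholders)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    feats = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area > 0:
            circularity = 4 * np.pi * area / perimeter ** 2  # near 1 for spherical pores
            feats.append([area, perimeter, circularity])
    return np.array(feats)

# Placeholder training rows of [area, perimeter, circularity];
# labels: 0 = spherical porosity, 1 = irregular porosity.
X_train = np.array([[40, 23, 0.95], [35, 21, 0.98], [120, 80, 0.24], [90, 70, 0.23]])
y_train = np.array([0, 0, 1, 1])
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

ct_slice = cv2.imread("dogbone_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
print(clf.predict(pore_features(ct_slice)))
```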
Transportation of sodium-bonded spent fuel appears to present no unique challenges. Storage systems for this fuel should be designed to keep water, both liquid and vapor, from contacting the spent fuel. This fuel is not suitable for geologic disposal; therefore, how the spent sodium-bonded fuel will be processed and the characteristics of the final disposal waste form(s) need to be considered. TRISO spent fuel appears to present no unique challenges in terms of transportation, storage, or disposal. If the graphite block is disposed of with the TRISO spent fuel, the 14C and 3H generated would need to be considered in the postclosure performance assessment. Salt waste from the molten salt reactor has yet to be transported or stored and might be a challenge to dispose of in a non-salt repository. Like sodium-bonded spent fuel, how the salt will be treated and the characteristics of the final disposal waste form(s) need to be considered. In addition, radiolysis in the frozen salt waste form continues to generate gas, which presents a hazard. Both HALEU and high-enriched uranium SNF are currently being stored and transported by the DOE. Disposal of fuels with enrichments greater than 5% was included in the disposal plan for Yucca Mountain. The increased potential for criticality associated with the higher-enriched SNF is mitigated by additional criticality control measures. Fuels that are similar to some ATFs were part of the disposal plan for Yucca Mountain. Some of the properties of these fuels (swelling, generation of 14C) would have to be considered as part of a postclosure performance assessment.
Given a graph, finding the distance-2 maximal independent set (MIS-2) of the vertices is a problem that is useful in several contexts, such as algebraic multigrid coarsening or multilevel graph partitioning. Such multilevel methods rely on finding the independent vertices so they can be used as seeds for aggregation in a multilevel scheme. We present a parallel MIS-2 algorithm to improve performance on modern accelerator hardware. This algorithm is implemented using the Kokkos programming model to enable performance portability. We demonstrate the portability of the algorithm and its performance on a variety of architectures (x86/ARM CPUs and NVIDIA/AMD GPUs). The resulting algorithm is also deterministic, producing an identical result for a given input across all of these platforms. The new MIS-2 implementation outperforms implementations in state-of-the-art libraries like CUSP and ViennaCL by 3-8x while producing similar-quality results. We further demonstrate the benefits of this approach by developing a parallel graph coarsening scheme for two different use cases. First, we develop an algebraic multigrid (AMG) aggregation scheme using parallel MIS-2 and demonstrate its benefits over previous approaches used in the MueLu multigrid package in Trilinos. We also describe an approach for implementing a parallel multicolor 'cluster' Gauss-Seidel preconditioner using this MIS-2 coarsening, and demonstrate better performance with an efficient, parallel, multicolor Gauss-Seidel algorithm.
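For readers unfamiliar with MIS-2, a simple sequential greedy version is sketched below on a toy graph; it conveys only the distance-2 independence requirement and does not represent the deterministic, performance-portable parallel algorithm implemented with Kokkos.

```python
# Sketch: sequential greedy distance-2 maximal independent set (MIS-2).
def mis2(adjacency):
    """adjacency: dict mapping each vertex to the set of its neighbors."""
    in_set, blocked = [], set()
    for v in adjacency:                 # any deterministic vertex order
        if v in blocked:
            continue
        in_set.append(v)
        # Block everything within distance 2 of the selected vertex.
        for u in adjacency[v]:
            blocked.add(u)
            blocked.update(adjacency[u])
    return in_set

# Path graph 0-1-2-3-4-5: vertices 0 and 3 are mutually at distance 3.
graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(mis2(graph))   # e.g. [0, 3]
```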
Wave energy converters have yet to reach broad market viability. Traditionally, levelized cost of energy has been considered the ultimate stage gate through which wave energy developers must pass in order to find success (i.e., the levelized cost of wave energy must be less than that of solar and wind). However, real-world energy decisions are not based solely on levelized cost of energy. In this study, we consider the energy mix in California in the year 2045, by which the state plans to achieve zero-carbon energy production. By considering temporal electricity production and consumption, we are able to perform a more informed analysis of the decision process to address this challenge. The results show that, due to the high level of ocean wave energy in the winter months, wave energy provides a valuable complement to solar and wind, which have higher production in the summer. Thus, based on this complementary temporal aspect, wave energy appears cost-effective, even when the cost of installation and maintenance is twice that of solar and wind.
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLIs), most commonly photovoltaic inverters, it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs), used in energy storage systems, standalone systems, and uninterruptible power supplies, these requirements either are not yet documented or require a more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desirable. With the proper control schemes, a GFMI can help maintain grid stability through its fast response compared to rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviations, as well as of the GFMI operating as a standalone system and subjected to various changes in load.
Sandia National Laboratories has developed technology enabling novel electrochemical assessment in extreme downhole environments. High-temperature, high-pressure (HTHP) electrodes selectively sensitive to hydrogen (H+), chloride (Cl-), iodide (I-), and overall ionic strength (reference electrode) have been demonstrated in representative geothermal environments (225°C and 103 bar in surrogate geothermal brine). This 2-year program is a collaborative effort between Sandia and Thermochem, Inc., with the goal of developing the prototype sensors into a commercial product that is operable up to 300°C and 345 bar. The Sandia-developed prototype HTHP chemical sensor package provides a capability that has not previously been possible. This technology is desired by the geothermal industry to fill a gap in available downhole real-time measurements. Only limited sensors are available that operate at the extreme temperatures and pressures found in geothermal wells. For the purpose of this paper, high temperature is defined as temperatures exceeding 200°C and high pressure is defined as pressures exceeding 35 bar. Chemical sensors exceeding these parameters and sized appropriately for downhole applications do not exist. The current Thermochem two-phase downhole sampling tool (rated to 350°C) will be reconfigured to accept the sensors. A downhole tool with an integrated real-time pH sensor capable of operation at 300°C and 345 bar does not exist; as such, the developed technology will provide the geothermal industry with data that would otherwise not be obtainable, such as vertical in-situ pH profiling of geothermal wells. The pH measurement was chosen as the first chemical sensor focus since it is one of the fundamental measurements required to understand downhole chemistry, scaling, and corrosion processes.
This paper presents a novel approach for fault location and classification based on combining mathematical morphology (MM) with Random Forests (RF). The MM stage of the method is used to pre-process voltage and current data. Signal vector norms of the output signals of the MM stage are then used as the input features for an RF machine learning classifier and regressor. The data used as input for the proposed approach comprise only a window of 50 µs before and after the fault is detected. The proposed method is tested with noisy data from a small simulated system. The results show 100% accuracy for the classification task and prediction errors averaging ~13 m for the fault location task.
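A hedged sketch of the two-stage idea is given below: grey-scale morphological filtering of each measurement window, vector norms of the filtered signal as features, and a Random Forest classifier for the fault type. The window length, structuring element size, feature set, and labels are placeholders, not the paper's settings or data.

```python
# Sketch: morphological pre-processing + vector-norm features + Random Forest.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion
from sklearn.ensemble import RandomForestClassifier

def mm_features(window, size=5):
    """Morphological gradient (dilation minus erosion) highlights fault transients."""
    gradient = grey_dilation(window, size=size) - grey_erosion(window, size=size)
    return [np.linalg.norm(gradient, ord=1),
            np.linalg.norm(gradient, ord=2),
            np.linalg.norm(gradient, ord=np.inf)]

rng = np.random.default_rng(0)
# Placeholder training set: 100 sample windows around the detection instant,
# each labeled with a hypothetical fault class 0-3.
windows = rng.normal(size=(100, 200))
labels = rng.integers(0, 4, size=100)
X = np.array([mm_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```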
Proceedings of SPIE - The International Society for Optical Engineering
Fredricksen, C.J.; Peale, R.E.; Dhakal, N.; Barrett, C.L.; Boykin II, O.; Maukonen, D.; Davis, L.; Ferarri, B.; Chernyak, L.; Zeidan, O.A.; Hawkins, Samuel D.; Klem, John F.; Krishna, Sanjay; Kazemi, Alireza; Schuler-Sandy, Ted
Effects of gamma and proton irradiation, and of forward-bias minority carrier injection, on minority carrier diffusion and photoresponse were investigated for long-wave (LW) and mid-wave (MW) infrared detectors with engineered majority-carrier barriers. The LWIR detector was a type-II GaSb/InAs strained-layer superlattice pBiBn structure. The MWIR detector was an InAsSb/AlAsSb nBp structure without superlattices. Room-temperature gamma irradiations degraded the minority carrier diffusion length of the LWIR structure, and minority carrier injections caused dramatic improvements, though there was little effect from either treatment on photoresponse. For the MWIR detector, the effects of room-temperature gamma irradiation and injection on minority carrier diffusion and photoresponse were negligible. Subsequently, both types of detectors were subjected to gamma irradiation at 77 K. In-situ photoresponse was unchanged for the LWIR detectors, while that for the MWIR ones decreased 19% after a cumulative dose of ~500 krad(Si). Minority carrier injection had no effect on photoresponse for either. The LWIR detector was then subjected to 4 Mrad(Si) of 30 MeV proton irradiation at 77 K and showed a 35% decrease in photoresponse, but again no effect from forward-bias injection. These results suggest that the photoresponse of the LWIR detectors is not limited by minority carrier diffusion.
An array of Wave Energy Converters (WECs) is required to supply a significant power level to the grid. However, the control and optimization of such an array is still an open research question. This paper analyzes two aspects that have a significant impact on the power production. First, the spacing of the buoys in a WEC array is analyzed to determine the optimal shift between the buoys in the array. Then, the wave force interacting with the buoys is angled to create additional sequencing between the electrical signals. A cost function is proposed to minimize the power variation and energy storage while maximizing the energy delivered to the onshore point of common coupling with the electrical grid.
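One plausible form of such a cost function, written here only to illustrate the trade-off and not taken from the paper, is

$$J \;=\; w_1\,\operatorname{Var}\!\big(P_{\mathrm{PCC}}(t)\big) \;+\; w_2\,E_{\mathrm{storage}} \;-\; w_3\,E_{\mathrm{delivered}},$$

where $P_{\mathrm{PCC}}(t)$ is the power at the point of common coupling, $E_{\mathrm{storage}}$ the required energy-storage capacity, $E_{\mathrm{delivered}}$ the energy delivered to the grid, and $w_1$, $w_2$, $w_3$ hypothetical designer-chosen weights; the buoy spacing and wave-incidence angle would then be selected to minimize $J$.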