Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if, and only if, the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb⁺-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10⁻⁴).
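For reference, the diamond-norm error rate that appears in such threshold statements is conventionally defined as follows (a standard textbook definition, not a formula reproduced from the paper):

```latex
\epsilon_\diamond(\mathcal{E},\mathcal{U})
  = \tfrac{1}{2}\,\bigl\|\mathcal{E}-\mathcal{U}\bigr\|_\diamond ,
\qquad
\|\Phi\|_\diamond
  = \max_{\rho}\,\bigl\|(\Phi\otimes\mathrm{id})(\rho)\bigr\|_{1},
```

where \mathcal{E} is the implemented (noisy) operation, \mathcal{U} the ideal gate, and the maximization runs over density matrices on the system plus an equally sized ancilla.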
We provide the first demonstration that molecular-level methods based on gas kinetic theory and molecular chaos can simulate turbulence and its decay. The direct simulation Monte Carlo (DSMC) method, a molecular-level technique for simulating gas flows that resolves phenomena from molecular to hydrodynamic (continuum) length scales, is applied to simulate the Taylor-Green vortex flow. The DSMC simulations reproduce the Kolmogorov -5/3 law and agree well with the turbulent kinetic energy and energy dissipation rate obtained from direct numerical simulation of the Navier-Stokes equations using a spectral method. This agreement provides strong evidence that molecular-level methods for gases can be used to investigate turbulent flows quantitatively.
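As a toy illustration of the spectral diagnostic mentioned above (not the DSMC or spectral codes themselves), the Kolmogorov -5/3 scaling can be checked by shell-averaging the kinetic-energy spectrum of a periodic velocity field computed with an FFT; the grid assumptions and function names below are illustrative only.

```python
import numpy as np

def energy_spectrum(u, v, w):
    """Shell-averaged kinetic-energy spectrum E(k) of a periodic 3-D velocity field."""
    n = u.shape[0]                       # assume a cubic n**3 grid with unit box size
    uh, vh, wh = (np.fft.fftn(f) / n**3 for f in (u, v, w))
    ek = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2)

    k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    kbins = np.arange(0.5, n // 2 + 1)   # spherical shells of unit width
    spectrum = np.histogram(kmag, bins=kbins, weights=ek)[0]
    kcenters = 0.5 * (kbins[:-1] + kbins[1:])
    return kcenters, spectrum            # inertial-range slope should approach k**(-5/3)
```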
Gel, Aytekin; Hu, Jonathan J.; Ould-Ahmed-Vall, El M.; Kalinkin, Alexander A. (International Journal of Computational Fluid Dynamics)
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is the culmination of several modernization efforts for the legacy code MFIX, an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach and up to a 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.
The familiar story of Moore's law is actually inaccurate. This article corrects the story, leading to different projections for the future. Moore's law is a fluid idea whose definition changes over time. It thus doesn't have the ability to 'end,' as is popularly reported, but merely takes different forms as the semiconductor and computer industries evolve.
This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space given the image coordinates and camera calibration parameters.
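As background for the triangulation step, a minimal sketch of the standard linear (DLT-style) two-camera triangulation is shown below; this is not the DICe source, and the 3x4 projection-matrix interface is an assumption made for illustration.

```python
import numpy as np

def triangulate(P0, P1, x0, x1):
    """Triangulate a 3-D point from two 3x4 camera projection matrices and the
    corresponding (x, y) image coordinates in each camera (linear DLT solve)."""
    A = np.vstack([
        x0[0] * P0[2] - P0[0],
        x0[1] * P0[2] - P0[1],
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
    ])
    _, _, vt = np.linalg.svd(A)          # null-space vector of A is the solution
    X = vt[-1]
    return X[:3] / X[3]                  # homogeneous -> Euclidean coordinates
```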
DRAM technology is the main building block of main memory; however, DRAM scaling is becoming very challenging. The main issues for DRAM scaling are the increasing error rates with each new generation, the geometric and physical constraints of scaling the capacitor part of the DRAM cells, and the high power consumption caused by the continuous need for refreshing cell values. At the same time, emerging Non-Volatile Memory (NVM) technologies, such as Phase-Change Memory (PCM), are promising replacements for DRAM. NVMs, when compared to current technologies, e.g., NAND-based flash, have latencies comparable to DRAM. Additionally, NVMs are non-volatile, which eliminates the need for refresh power and enables persistent memory applications. Finally, NVMs have promising densities and the potential for multi-level cell (MLC) storage.
Remote temperature sensing is essential for applications in enclosed vessels, where feedthroughs or optical access points are not possible. A unique sensing method for measuring the temperature of multiple closely spaced points is proposed using permanent magnets and several three-axis magnetic field sensors. The magnetic field theory for multiple magnets is discussed and a solution technique is presented. Experimental calibration procedures, solution inversion considerations, and methods for optimizing the magnet orientations are described in order to obtain low-noise temperature estimates. The experimental setup and the properties of permanent magnets are shown. Finally, experiments were conducted to determine the temperature of nine magnets in different configurations over a temperature range of 5 °C to 60 °C and for a sensor-to-magnet distance of up to 35 mm. To show the possible applications of this sensing system for measuring temperatures through metal walls, additional experiments were conducted inside an opaque 304 stainless steel cylinder.
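Conceptually, the inversion can be sketched by treating each magnet as a point dipole whose moment magnitude varies with temperature and fitting the magnet temperatures to the measured three-axis fields. The linear moment-temperature model, the function names, and the use of scipy.optimize.least_squares below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability [T m/A]

def dipole_field(m_vec, r_magnet, r_sensor):
    """Flux density of a point dipole with moment m_vec located at r_magnet, at r_sensor."""
    r = r_sensor - r_magnet
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi * rn**3) * (3 * np.dot(m_vec, rhat) * rhat - m_vec)

def predicted_fields(temps, magnet_pos, magnet_dir, sensor_pos, m0, alpha, T0=20.0):
    """Stack predicted three-axis readings at every sensor; the moment magnitude is
    assumed to vary linearly with temperature: |m|(T) = m0 * (1 - alpha * (T - T0))."""
    fields = []
    for rs in sensor_pos:
        b = np.zeros(3)
        for T, rm, d in zip(temps, magnet_pos, magnet_dir):
            b += dipole_field(m0 * (1 - alpha * (T - T0)) * d, rm, rs)
        fields.append(b)
    return np.concatenate(fields)

def invert_temperatures(b_meas, magnet_pos, magnet_dir, sensor_pos, m0, alpha):
    """Least-squares inversion of magnet temperatures from the measured sensor fields."""
    residual = lambda T: predicted_fields(T, magnet_pos, magnet_dir, sensor_pos, m0, alpha) - b_meas
    return least_squares(residual, x0=np.full(len(magnet_pos), 25.0)).x
```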
Improved validation for models of complex systems has been a primary focus over the past year for the Resilience in Complex Systems Research Challenge. This document describes a set of research directions that are the result of distilling those ideas into three categories of research -- epistemic uncertainty, strong tests, and value of information. The content of this document can be used to transmit valuable information to future research activities, update the Resilience in Complex Systems Research Challenge's roadmap, inform the upcoming FY18 Laboratory Directed Research and Development (LDRD) call and research proposals, and facilitate collaborations between Sandia and external organizations. The recommended research directions can provide topics for collaborative research, development of proposals, workshops, and other opportunities.
This report describes findings from the culminating experiment of the LDRD project entitled, "Analyst-to-Analyst Variability in Simulation-Based Prediction". For this experiment, volunteer participants solving a given test problem in engineering and statistics were interviewed at different points in their solution process. These interviews are used to trace differing solutions to differing solution processes, and differing processes to differences in reasoning, assumptions, and judgments. The issue that the experiment was designed to illuminate -- our paucity of understanding of the ways in which humans themselves have an impact on predictions derived from complex computational simulations -- is a challenging and open one. Although solution of the test problem by analyst participants in this experiment has taken much more time than originally anticipated, and is continuing past the end of this LDRD, this project has provided a rare opportunity to explore analyst-to-analyst variability in significant depth, from which we derive evidence-based insights to guide further explorations in this important area.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H²-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.
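To show how such a factorization is typically consumed, the sketch below wraps a generic preconditioner apply as a SciPy LinearOperator and uses it inside conjugate gradients for a Poisson-like system. The parallel LoRaSp factorization is not available here, so an incomplete LU stands in for it purely so the example runs; that substitution is an assumption for illustration only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix as a stand-in for the elliptic PDE systems discussed above
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
b = np.ones(A.shape[0])

# Stand-in for the hierarchical (H^2) approximate factorization's solve phase
ilu = spla.spilu(A.tocsc())
hierarchical_apply = ilu.solve                     # hypothetical placeholder

M = spla.LinearOperator(A.shape, matvec=hierarchical_apply)
x, info = spla.cg(A, b, M=M)                       # preconditioned conjugate gradients
print("CG converged" if info == 0 else f"CG returned info = {info}")
```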
Time integration is a central component of most transient simulations. It coordinates many of the major parts of a simulation, e.g., a residual calculation with a transient solver, the solution with the output, various operator-split physics, and forward and adjoint solutions for inversion. Even with this variety in transient simulations, there is still a common set of algorithms and procedures to progress transient solutions for ordinary-differential equations (ODEs) and differential-algebraic equations (DAEs). Rythmos is a collection of these algorithms that can be used for the solution of transient simulations. It provides common time-integration methods, such as Backward and Forward Euler, Explicit and Implicit Runge-Kutta, and Backward-Difference Formulas. It can also provide sensitivities and adjoint components for transient simulations. Rythmos is a package within Trilinos and requires some other packages (e.g., Teuchos and Thyra) to provide basic time-integration capabilities. It can also be coupled with several other Trilinos packages to provide additional capabilities (e.g., AztecOO and Belos for linear solutions, and NOX for non-linear solutions). The documentation is broken down into three parts: Theory Manual, User's Manual, and Developer's Guide. The Theory Manual contains the basic theory of the time integrators, the nomenclature and mathematical structure utilized within Rythmos, and verification results demonstrating that the designed order of accuracy is achieved. The User's Manual provides information on how to use Rythmos, a description of input parameters through Teuchos Parameter Lists, and a description of convergence test examples. The Developer's Guide is a high-level discussion of the design and structure of Rythmos to provide information to developers for the continued development of capabilities. Details of individual components can be found in the Doxygen webpages.
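Rythmos itself is a C++ package, but the simplest scheme it provides can be illustrated language-neutrally: one Backward Euler step for dx/dt = f(t, x) solved with Newton's method. The function names below are illustrative and do not mirror the Rythmos API.

```python
import numpy as np

def backward_euler_step(f, jac, t, x, dt, newton_tol=1e-10, max_iter=20):
    """One Backward Euler step: solve x_new - x - dt * f(t + dt, x_new) = 0 by Newton."""
    x_new = x.copy()                               # predictor: previous solution
    for _ in range(max_iter):
        r = x_new - x - dt * f(t + dt, x_new)
        if np.linalg.norm(r) < newton_tol:
            break
        J = np.eye(len(x)) - dt * jac(t + dt, x_new)
        x_new = x_new - np.linalg.solve(J, r)
    return x_new

# Linear test problem dx/dt = -x with exact solution exp(-t)
f = lambda t, x: -x
jac = lambda t, x: -np.eye(len(x))
x, t, dt = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    x = backward_euler_step(f, jac, t, x, dt)
    t += dt
print(t, x[0], np.exp(-t))                         # first-order agreement
```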
Si-MOS-based QD qubits are attractive due to their similarity to current semiconductor industry technology. We introduce a highly tunable, MOS-foundry-compatible qubit design that couples an electrostatic quantum dot (QD) with an implanted donor. We show for the first time coherent two-axis control of a two-electron spin logical qubit that evolves under the QD-donor exchange interaction and the hyperfine interaction with the donor nucleus. The two interactions are tuned electrically with surface gate voltages to provide control of both qubit axes. Qubit decoherence is influenced by charge noise, which is of similar strength to that in epitaxial systems such as GaAs and Si/SiGe.
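Schematically, the two control axes described above correspond to the exchange and hyperfine terms of a Hamiltonian of the form below; this is a textbook-level model written down for orientation, not an expression reproduced from the paper.

```latex
H \;\approx\; J(V)\,\mathbf{S}_{\mathrm{QD}}\cdot\mathbf{S}_{\mathrm{D}}
        \;+\; A\,\mathbf{I}\cdot\mathbf{S}_{\mathrm{D}},
```

where J(V) is the gate-voltage-tunable QD-donor exchange coupling, A is the hyperfine coupling to the donor nuclear spin I, and S_QD, S_D are the electron spins on the dot and the donor.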
Dynamic materials experiments on the Z-machine are beginning to reach a regime where traditional analysis techniques break down. Time dependent phenomena such as strength and phase transition kinetics often make the data obtained in these experiments difficult to interpret. We present an inverse analysis methodology to infer the equation of state (EOS) from velocimetry data in these types of experiments, building on recent advances in the propagation of uncertain EOS information through a hydrocode simulation. An example is given for a shock-ramp experiment in which tantalum was shock compressed to 40 GPa followed by a ramp to 80 GPa. The results are found to be consistent with isothermal compression and Hugoniot data in this regime.
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch-and-bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that, for some difficult problem instances, branch and bound can find improved solutions after exploring only a few nodes.
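For orientation, the progressive hedging iteration that BBPH wraps can be sketched as follows for a two-stage problem with first-stage vector x; `solve_scenario` is a hypothetical per-scenario subproblem solver and the convergence test is simplified.

```python
import numpy as np

def progressive_hedging(solve_scenario, probs, x_init, rho=1.0, tol=1e-6, max_iter=100):
    """solve_scenario(s, w, xbar, rho) must return the first-stage minimizer of
    scenario s's objective augmented with w . x + (rho / 2) * ||x - xbar||**2."""
    n_scen = len(probs)
    x = [np.asarray(x_init, dtype=float).copy() for _ in range(n_scen)]
    w = [np.zeros_like(x[0]) for _ in range(n_scen)]
    xbar = sum(p * xs for p, xs in zip(probs, x))

    for _ in range(max_iter):
        x = [solve_scenario(s, w[s], xbar, rho) for s in range(n_scen)]
        xbar = sum(p * xs for p, xs in zip(probs, x))        # probability-weighted average
        w = [ws + rho * (xs - xbar) for ws, xs in zip(w, x)] # multiplier update
        if max(np.linalg.norm(xs - xbar) for xs in x) < tol: # nonanticipativity reached
            break
    return xbar, x, w
```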
When solving partial differential equations (PDEs) with random inputs, it is often computationally inefficient to merely propagate samples of the input probability law (or an approximation thereof) because the input law may not accurately capture the behavior of critical system responses that depend on the PDE solution. To further complicate matters, in many applications it is critical to accurately approximate the “risk” associated with the statistical tails of the system responses, not just the statistical moments. In this paper, we develop an adaptive sampling and local reduced basis method for approximately solving PDEs with random inputs. Our method determines a set of parameter atoms and an associated (implicit) Voronoi partition of the parameter domain on which we build local reduced basis approximations of the PDE solution. In addition, we extend our adaptive sampling approach to accurately compute measures of risk evaluated at quantities of interest that depend on the PDE solution.
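The Voronoi-partition bookkeeping behind the local reduced bases can be sketched as below: each candidate parameter is assigned to its nearest atom, and the atom set is grown greedily wherever a local error indicator is largest. The greedy rule and names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def nearest_atom(theta, atoms):
    """Index of the parameter atom whose (implicit) Voronoi cell contains theta."""
    d = np.linalg.norm(np.asarray(atoms) - np.asarray(theta), axis=1)
    return int(np.argmin(d))

def adaptive_sampling(error_indicator, candidates, atoms, tol):
    """Greedy sketch: promote the candidate with the largest local error indicator
    to a new atom until the worst-case indicator drops below tol."""
    atoms, candidates = list(atoms), list(candidates)
    while candidates:
        errs = [error_indicator(theta, atoms[nearest_atom(theta, atoms)])
                for theta in candidates]
        worst = int(np.argmax(errs))
        if errs[worst] < tol:
            break
        atoms.append(np.asarray(candidates.pop(worst)))
    return atoms
```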
The development of scramjet engines is an important research area for advancing hypersonic and orbital flight. Progress towards optimal engine designs requires accurate flow simulations as well as uncertainty quantification (UQ). However, performing UQ for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. We address these difficulties by applying a combination of UQ algorithms and numerical methods to the large eddy simulation of the HIFiRE scramjet configuration. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, helping reduce the stochastic dimension of the problem and discover sparse representations. Second, as models of different fidelity are available and inevitably used in the overall UQ assessment, a framework for quantifying and propagating the uncertainty due to model error is introduced. These methods are demonstrated on a non-reacting scramjet unit problem with a parameter space of up to 24 dimensions, using 2D and 3D geometries with static and dynamic treatments of the turbulence subgrid model.
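As a self-contained example of the first step (global sensitivity analysis), the sketch below estimates first-order Sobol indices with a standard Saltelli-style pick-freeze estimator on a cheap additive test function; the scramjet model itself is far more expensive and is not represented here.

```python
import numpy as np

def first_order_sobol(model, dim, n_samples=4096, rng=None):
    """Monte Carlo first-order Sobol indices for inputs uniform on [0, 1]^dim,
    using S_i ~ mean(f(B) * (f(AB_i) - f(A))) / Var(f)."""
    rng = np.random.default_rng(rng)
    A = rng.random((n_samples, dim))
    B = rng.random((n_samples, dim))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                        # freeze input i from the B sample
        s[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s

# Cheap test model in which only the first two of five inputs matter
model = lambda X: 4.0 * X[:, 0] + 2.0 * X[:, 1]
print(first_order_sobol(model, dim=5))             # roughly [0.8, 0.2, 0, 0, 0]
```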
We establish an atomistic view of the high- and low-temperature phases of iron/steel, as well as some elements of the phase transition between these phases on cooling. In particular, we examine the four most common orientation relationships between the high-temperature austenite and low-temperature ferrite phases seen in experiment. With a thorough understanding of these relationships, we are prepared to set up various atomistic simulations, using techniques such as Density Functional Theory and Molecular Dynamics, to further study the phase transition, in particular the quantities needed for Phase Field Modeling, such as the free energies of the bulk phases and the phase transition front propagation velocity.
Extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.
In many settings, multi-tasking and interruption are commonplace. Multi-tasking has been a popular subject of recent research, but a multi-tasking paradigm normally allows the subject some control over the timing of the task switch. In this paper we focus on interruptions: situations in which the subject has no control over the timing of task switches. We consider three types of task: verbal (reading comprehension), visual search, and monitoring/situation awareness. Using interruptions from 30 s to 2 min in duration, we found a significant effect in each case, but with different effect sizes. For the situation awareness task, we experimented with interruptions of varying duration and found a non-linear relation between the duration of the interruption and its after-effect on performance, which may correspond to a task-dependent interruption threshold that is lower for more dynamic tasks.