We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and with each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
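To make the model-comparison step concrete, the following toy sketch (our illustration, not code from the paper) fits two nested models to synthetic two-outcome circuit data and applies the standard log-likelihood-ratio test for nested models. The final line reports only a simplified wildcard-style number; the paper's actual wildcard error comes from a convex optimization that assigns per-gate error budgets, which this stand-in does not implement.

```python
# Nested-model comparison on synthetic two-outcome circuit data (toy sketch).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
N = 1000                                   # shots per circuit
p_true = rng.uniform(0.05, 0.95, size=50)  # "true" outcome probabilities
counts = rng.binomial(N, p_true)
freqs = counts / N

def log_likelihood(p):
    """Binomial log-likelihood of the data under predicted probabilities p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(counts * np.log(p) + (N - counts) * np.log(1 - p))

# Two nested models: a 1-parameter model (one probability for every circuit)
# and a richer model with one probability per circuit (which can fit anything).
p_simple = np.full_like(p_true, freqs.mean())   # MLE of the 1-parameter model
p_rich = freqs                                  # MLE of the per-circuit model

# Wilks' theorem: 2*(delta loglik) ~ chi2 with df = number of extra parameters
lam = 2 * (log_likelihood(p_rich) - log_likelihood(p_simple))
df = len(freqs) - 1
print("simple model rejected" if lam > chi2.ppf(0.95, df) else "simple model retained")

# Crude wildcard-style metric: residual per-circuit deviation beyond ~2 sigma
sigma = np.sqrt(p_simple * (1 - p_simple) / N)
print("wildcard-like budget:", max(0.0, np.max(np.abs(freqs - p_simple) - 2 * sigma)))
```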
We adapt the robust phase estimation algorithm to the evaluation of energy differences between two eigenstates using a quantum computer. This approach requires neither controlled unitaries between auxiliary and system registers nor even a single auxiliary qubit. As a proof of concept, we calculate the energies of the ground state and low-lying electronic excitations of a hydrogen molecule in a minimal basis on a cloud quantum computer. The robustness for which the approach is named is then quantified in terms of a high tolerance to coherent errors in state preparation and measurement. Conceptually, we note that all quantum phase estimation algorithms ultimately evaluate eigenvalue differences.
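For intuition, here is a minimal, self-contained sketch of the robust phase estimation update rule that the approach builds on (illustrative only; in the energy-difference setting, theta would encode an eigenvalue gap times the evolution time, and the outcome probabilities would come from hardware rather than a binomial simulator).

```python
# Robust phase estimation (RPE) update rule, toy version. At each generation
# k = 1, 2, 4, ... two measurements estimate cos(k*theta) and sin(k*theta);
# the k-fold ambiguity is resolved using the previous generation's estimate.
import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.7331          # phase to estimate, e.g. an energy gap times t
shots = 500

theta_est = 0.0
for k in (1, 2, 4, 8, 16, 32):
    p_cos = 0.5 * (1 + np.cos(k * theta_true))     # ideal outcome probabilities
    p_sin = 0.5 * (1 + np.sin(k * theta_true))
    c = 2 * rng.binomial(shots, p_cos) / shots - 1  # finite-shot estimates
    s = 2 * rng.binomial(shots, p_sin) / shots - 1
    raw = np.arctan2(s, c)                          # k*theta, modulo 2*pi
    # Candidates theta = (raw + 2*pi*m)/k; keep the one nearest the last estimate
    m = np.round((k * theta_est - raw) / (2 * np.pi))
    theta_est = (raw + 2 * np.pi * m) / k

print(f"estimate {theta_est:.5f}, true {theta_true:.5f}")
```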
We present an extension to the robust phase estimation protocol that can identify incorrect results which would otherwise lie outside the expected statistical range. Robust phase estimation is increasingly a method of choice for applications such as estimating the effective process parameters of noisy hardware, but its robustness depends on the noise satisfying certain threshold assumptions. We provide consistency checks that can indicate when those thresholds, which can be difficult or impossible to test directly, have been violated. We test these consistency checks for several common noise models, and identify two checks that accurately locate the point in a robust phase estimation run beyond which further estimates should not be trusted. Either check may be chosen based on resource availability, or the two can be used together to provide additional verification.
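An illustrative consistency check (an assumed form, not the paper's exact statistic) can be grafted onto the RPE loop sketched above: flag any generation whose measured angle lands too close to the rounding boundary implied by the earlier generations' estimate.

```python
# Toy RPE consistency check: flag generations near the rounding boundary.
import numpy as np

def check_consistency(raw_angles, ks, warn_frac=0.6):
    """Run the RPE update and flag the first generation whose measured angle
    sits too close to the decision boundary set by the running estimate.
    raw_angles[i] is the measured value of ks[i]*theta (mod 2*pi)."""
    theta_est, flags = 0.0, []
    for raw, k in zip(raw_angles, ks):
        pred = k * theta_est
        d = np.angle(np.exp(1j * (raw - pred)))   # wrap discrepancy to (-pi, pi]
        flags.append(abs(d) > warn_frac * np.pi)  # near boundary -> untrustworthy
        m = np.round((pred - raw) / (2 * np.pi))
        theta_est = (raw + 2 * np.pi * m) / k
    first_bad = next((i for i, f in enumerate(flags) if f), None)
    return theta_est, first_bad

ks = [1, 2, 4, 8]
theta = 0.9
raws = [np.angle(np.exp(1j * k * theta)) for k in ks]
raws[3] += 2.5                       # inject an inconsistent final generation
print(check_consistency(raws, ks))   # -> (estimate, first_bad = 3)
```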
Gate set tomography (GST) is a protocol for detailed, predictive characterization of logic operations (gates) on quantum computing processors. Early versions of GST emerged around 2012-13, and since then it has been refined, demonstrated, and used in a large number of experiments. This paper presents the foundations of GST in comprehensive detail. The most important feature of GST, compared to older state and process tomography protocols, is that it is calibration-free: GST does not rely on pre-calibrated state preparations and measurements. Instead, it characterizes all the operations in a gate set simultaneously and self-consistently, relative to each other. Long-sequence GST can estimate gates with very high precision and efficiency, achieving Heisenberg scaling in regimes of practical interest. In this paper, we cover GST’s intellectual history, the techniques and experiments used to achieve its intended purpose, data analysis, gauge freedom and fixing, error bars, and the interpretation of gauge-fixed estimates of gate sets. Our focus is on the fundamental mathematical aspects of GST rather than implementation details, but we touch on some of the foundational algorithmic tricks used in the pyGSTi implementation.
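The object GST estimates is a gate set: state preparations, gates, and measurement effects represented together as vectors and matrices in a superoperator basis, so that every circuit's outcome probability is a matrix product. The toy snippet below (our illustration, not pyGSTi code) computes such probabilities for a single qubit in the Pauli basis. GST fits all of these objects to data simultaneously, and the predicted probabilities are unchanged under the transformation rho -> M rho, E^T -> E^T M^(-1), G -> M G M^(-1), which is the origin of the gauge freedom mentioned above.

```python
# Circuit outcome probabilities from a gate set in the superoperator
# (Pauli-transfer-matrix) picture: p = E^T * G_k ... G_1 * rho.
# Single qubit, normalized Pauli basis {I, X, Y, Z}/sqrt(2).
import numpy as np

def rotation_ptm(axis, angle):
    """Pauli transfer matrix of a rotation about X or Y by `angle`."""
    c, s = np.cos(angle), np.sin(angle)
    G = np.eye(4)
    if axis == "X":   # acts on the (Y, Z) block of the Bloch vector
        G[2:4, 2:4] = [[c, -s], [s, c]]
    else:             # "Y": acts on the (X, Z) block
        G[1, 1], G[1, 3], G[3, 1], G[3, 3] = c, s, -s, c
    return G

rho = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |0><0| in the Pauli basis
E = np.array([1, 0, 0, 1]) / np.sqrt(2)     # measurement effect |0><0|

def circuit_prob(gates):
    v = rho
    for G in gates:        # apply gates in time order
        v = G @ v
    return float(E @ v)

Gx = rotation_ptm("X", np.pi / 2)
print(circuit_prob([Gx, Gx]))   # two X(pi/2) gates = X gate: P(0) ~ 0.0
```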
After decades of R&D, quantum computers comprising more than 2 qubits are appearing. If this progress is to continue, the research community requires a capability for precise characterization (“tomography”) of these enlarged devices, which will enable benchmarking, improvement, and finally certification as mission-ready. As world leaders in characterization (our gate set tomography, or GST, method is the current state of the art), the project team is keenly aware that every existing protocol is either (1) catastrophically inefficient for more than 2 qubits, or (2) not rich enough to predict device behavior. GST scales poorly, while the popular randomized benchmarking technique only measures a single aggregated error probability. This project explored a new insight: that the combinatorial explosion plaguing standard GST could be avoided by using an ansatz of few-qubit interactions to build a complete, efficient model for multi-qubit errors. We developed this approach, prototyped it, and tested it on a cutting-edge quantum processor developed by Rigetti Quantum Computing (RQC), a US-based startup. We implemented our new models within Sandia's pyGSTi open-source code and tested them experimentally on the RQC device by probing crosstalk. We found two major results: first, our schema worked and is viable for further development; second, while the Rigetti device is indeed a “real” 8-qubit quantum processor, its behavior fluctuated significantly over time while we were experimenting with it, and this drift made it difficult to fit our models of crosstalk to the data.
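A back-of-envelope parameter count (our illustration) shows why a few-qubit ansatz tames the scaling: a dense n-qubit superoperator has 16^n parameters, while a model stitched together from, for example, two-qubit interaction blocks grows only polynomially in n.

```python
# Parameter counts behind the "combinatorial explosion" claim (illustrative).
from math import comb

for n in (2, 4, 8):
    full = (4 ** n) ** 2                    # dense n-qubit superoperator
    two_local = comb(n, 2) * (4 ** 2) ** 2  # one 2-qubit block per qubit pair
    print(f"n={n}: full model ~{full:.2e} params, 2-local ansatz {two_local} params")
```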
If quantum information processors are to fulfill their potential, the diverse errors that affect them must be understood and suppressed. But errors typically fluctuate over time, and the most widely used tools for characterizing them assume static error modes and rates. This mismatch can cause unheralded failures, misidentified error modes, and wasted experimental effort. Here, we demonstrate a spectral analysis technique for resolving time dependence in quantum processors. Our method is fast, simple, and statistically sound. It can be applied to time-series data from any quantum processor experiment. We use data from simulations and trapped-ion qubit experiments to show how our method can resolve time dependence when applied to popular characterization protocols, including randomized benchmarking, gate set tomography, and Ramsey spectroscopy. In the experiments, we detect instability and localize its source, implement drift control techniques to compensate for this instability, and then demonstrate that the instability has been suppressed.
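The core of such a spectral analysis can be sketched in a few lines (a simplified illustration, not the authors' implementation): standardize the per-time-step counts under the null hypothesis of a constant outcome probability, take a discrete cosine transform, and flag Fourier powers that exceed a Bonferroni-corrected chi-squared threshold.

```python
# Spectral drift detection on two-outcome time-series data (simplified sketch).
import numpy as np
from scipy.fft import dct
from scipy.stats import chi2

rng = np.random.default_rng(7)
T, shots = 1000, 100
t = np.arange(T)
p_t = 0.8 + 0.05 * np.sin(2 * np.pi * 3 * t / T)   # slowly drifting probability
counts = rng.binomial(shots, p_t)

p_hat = counts.mean() / shots                       # null: constant probability
z = (counts - shots * p_hat) / np.sqrt(shots * p_hat * (1 - p_hat))
power = dct(z, norm="ortho") ** 2                   # ~ chi2(1) per mode under null

threshold = chi2.ppf(1 - 0.05 / (T - 1), df=1)      # Bonferroni over T-1 modes
drifting = np.nonzero(power[1:] > threshold)[0] + 1
print("significant frequency indices:", drifting)   # expect a peak near index 6
```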
PyGSTi is a Python software package for assessing and characterizing the performance of quantum computing processors. It can be used as a standalone application, or as a library, to perform a wide variety of quantum characterization, verification, and validation (QCVV) protocols on as-built quantum processors. We outline pyGSTi's structure, and what it can do, using multiple examples. We cover its main characterization protocols with end-to-end implementations. These include gate set tomography, randomized benchmarking on one or many qubits, and several specialized techniques. We also discuss and demonstrate how power users can customize pyGSTi and leverage its components to create specialized QCVV protocols and solve user-specific problems.
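For flavor, a minimal end-to-end GST run in pyGSTi looks roughly like the following, sketched from pyGSTi's published tutorials; exact function names and arguments can shift between releases, so treat this as an approximation of the API rather than a definitive recipe.

```python
# Sketch of an end-to-end GST run with pyGSTi (names follow the pyGSTi >= 0.9.10
# tutorials; approximate, since the API evolves between releases).
import pygsti
from pygsti.modelpacks import smq1Q_XYI

# Standard GST experiment design for the X(pi/2), Y(pi/2), idle gate set
edesign = smq1Q_XYI.create_gst_experiment_design(max_max_length=16)

# On hardware, the circuits in `edesign` would be run on the device; here we
# simulate data from a weakly depolarized version of the target model.
mdl_datagen = smq1Q_XYI.target_model().depolarize(op_noise=0.01, spam_noise=0.01)
dataset = pygsti.data.simulate_data(mdl_datagen, edesign.all_circuits_needing_data,
                                    num_samples=1000, seed=2023)

# Run GST (CPTP-constrained fit) on the (design, data) pair
data = pygsti.protocols.ProtocolData(edesign, dataset)
results = pygsti.protocols.StandardGST("CPTP").run(data)
```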
Nearly every protocol used to analyze the performance of quantum information processors is based on an assumption that the errors experienced by the device during logical operations are constant in time and insensitive to external contexts. This assumption is pervasive, rarely stated, and almost always wrong. Quantum devices that do behave this way are termed "Markovian," but nearly every system we have ever probed has displayed drift, crosstalk, or memory effects; they are all non-Markovian. Strong non-Markovianity introduces spurious effects in characterization protocols and violates assumptions of the fault-tolerance threshold theorems. This SAND report details a three-year Laboratory Directed Research and Development (LDRD) project entitled "Diagnosing and Destroying non-Markovian Noise in Quantum Information Processors." The program was initiated to build tools for studying non-Markovian dynamics in quantum systems and to develop robust methodologies for eliminating such noise. It achieved a number of notable successes, including the first statistically rigorous protocol for identifying and characterizing drift in quantum systems, a formalism for modeling memory effects in quantum devices, and the successful suppression of drift in a Sandia trapped-ion quantum processor.
QSCOUT is the Quantum Scientific Computing Open User Testbed, a trapped-ion quantum computer testbed realized at Sandia National Laboratories on behalf of the Department of Energy's Office of Science and its Advanced Scientific Computing Research (ASCR) program.