Publications

Measuring the capabilities of quantum computers

Nature Physics

Proctor, Timothy J.; Rudinger, Kenneth M.; Young, Kevin C.; Nielsen, Erik N.; Blume-Kohout, Robin J.

Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure.
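
A minimal numpy sketch of the mirroring idea, assuming a circuit represented as a list of unitary layers: appending the layer-by-layer inverse makes the composite circuit the identity, so the ideal outcome is known in advance and success is efficiently verifiable. This is an illustration, not the authors' implementation; the published protocol also inserts random Pauli layers and uses quasi-inverse layers, which are omitted here.

```python
import numpy as np

def mirror_circuit(layers):
    """Append the layer-by-layer inverse of a circuit, in reverse order.

    `layers` is a list of unitary matrices, one per circuit layer. The
    mirrored circuit composes to the identity, so the ideal output of
    running it on |0> is |0>, which is what makes mirror benchmarks
    efficiently verifiable.
    """
    return layers + [U.conj().T for U in reversed(layers)]

def compose(layers):
    """Multiply the layers into a single unitary (later layers act last)."""
    total = np.eye(layers[0].shape[0], dtype=complex)
    for U in layers:
        total = U @ total
    return total

def random_unitary(rng):
    """Haar-random 2x2 unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases

# Example: a random 3-layer single-qubit circuit and its mirror.
rng = np.random.default_rng(0)
circuit = [random_unitary(rng) for _ in range(3)]
assert np.allclose(compose(mirror_circuit(circuit)), np.eye(2))
```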

Efficient flexible characterization of quantum processors with nested error models

New Journal of Physics

Nielsen, Erik N.; Rudinger, Kenneth M.; Proctor, Timothy J.; Young, Kevin C.; Blume-Kohout, Robin J.

We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and with each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
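
The model-scan loop itself is simple to sketch. The toy below walks a nested sequence of models for binomial circuit data, smallest first, and selects the first model the data do not reject; the models, the data, and the chi-squared acceptance test are illustrative stand-ins, and the paper's wildcard-error metric is not implemented here.

```python
import numpy as np
from scipy.stats import chi2

def two_delta_loglik(counts, totals, probs):
    """2*(max log-likelihood - model log-likelihood) for binomial data."""
    freqs = counts / totals
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = np.where(counts > 0, counts * np.log(freqs / probs), 0.0)
        t2 = np.where(totals - counts > 0,
                      (totals - counts) * np.log((1 - freqs) / (1 - probs)),
                      0.0)
    return 2 * (t1 + t2).sum()

# Toy success counts for one circuit run at four depths.
counts = np.array([95, 88, 70, 52])
totals = np.array([100, 100, 100, 100])

# Nested model sequence, smallest first: (name, predicted probs, #params).
models = [
    ("one shared probability", np.full(4, counts.sum() / totals.sum()), 1),
    ("one probability per depth", counts / totals, 4),  # saturated model
]

for name, probs, n_params in models:
    score = two_delta_loglik(counts, totals, probs)
    dof = len(counts) - n_params
    # Accept the first (smallest) model the data do not reject at the 5% level.
    if dof == 0 or score < chi2.ppf(0.95, dof):
        print(f"selected: {name} (2*dLL = {score:.1f}, dof = {dof})")
        break
```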

Detecting and tracking drift in quantum information processors

Nature Communications

Proctor, Timothy J.; Revelle, Melissa R.; Nielsen, Erik N.; Rudinger, Kenneth M.; Lobser, Daniel L.; Maunz, Peter; Blume-Kohout, Robin J.; Young, Kevin C.

If quantum information processors are to fulfill their potential, the diverse errors that affect them must be understood and suppressed. But errors typically fluctuate over time, and the most widely used tools for characterizing them assume static error modes and rates. This mismatch can cause unheralded failures, misidentified error modes, and wasted experimental effort. Here, we demonstrate a spectral analysis technique for resolving time dependence in quantum processors. Our method is fast, simple, and statistically sound. It can be applied to time-series data from any quantum processor experiment. We use data from simulations and trapped-ion qubit experiments to show how our method can resolve time dependence when applied to popular characterization protocols, including randomized benchmarking, gate set tomography, and Ramsey spectroscopy. In the experiments, we detect instability and localize its source, implement drift control techniques to compensate for this instability, and then demonstrate that the instability has been suppressed.
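
The heart of such a spectral analysis can be sketched in a few lines: standardize the outcome time series under the null hypothesis of a constant outcome probability, take a discrete cosine transform, and flag frequencies whose power exceeds a statistical threshold. The simulation below uses invented drift, and the normalization and Bonferroni-corrected threshold are simplifying assumptions rather than the paper's exact test.

```python
import numpy as np
from scipy.fft import dct
from scipy.stats import chi2

rng = np.random.default_rng(1)
T = 500                    # number of time steps (one shot per step)
t = np.arange(T)

# A slowly oscillating success probability: 3 drift cycles over the record.
p_t = 0.8 + 0.15 * np.cos(2 * np.pi * 3 * t / T)
x = rng.binomial(1, p_t)   # time series of 0/1 circuit outcomes

# Standardize under the null (constant-p) hypothesis, then take a type-II
# DCT; under the null, each mode's power is approximately chi^2_1.
p_hat = x.mean()
z = (x - p_hat) / np.sqrt(p_hat * (1 - p_hat))
modes = dct(z, type=2, norm="ortho")
power = modes[1:] ** 2     # drop the DC component

# Bonferroni-corrected 5% threshold over the T-1 tested frequencies.
threshold = chi2.ppf(1 - 0.05 / len(power), df=1)
detected = np.flatnonzero(power > threshold) + 1
print("significant frequency indices:", detected)
```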

Probing quantum processor performance with pyGSTi

Quantum Science and Technology

Nielsen, Erik N.; Rudinger, Kenneth M.; Proctor, Timothy J.; Russo, Antonio R.; Young, Kevin C.; Blume-Kohout, Robin J.

PyGSTi is a Python software package for assessing and characterizing the performance of quantum computing processors. It can be used as a standalone application, or as a library, to perform a wide variety of quantum characterization, verification, and validation (QCVV) protocols on as-built quantum processors. We outline pyGSTi's structure, and what it can do, using multiple examples. We cover its main characterization protocols with end-to-end implementations. These include gate set tomography, randomized benchmarking on one or many qubits, and several specialized techniques. We also discuss and demonstrate how power users can customize pyGSTi and leverage its components to create specialized QCVV protocols and solve user-specific problems.
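
As a flavor of the end-to-end workflow, here is a single-qubit gate set tomography run modeled on pyGSTi's public quickstart. pyGSTi's API has shifted across releases, so treat the exact function names and argument orders as assumptions and consult the documentation for your installed version.

```python
import pygsti
from pygsti.modelpacks import smq1Q_XYI

# Build a GST experiment design from the standard 1-qubit X/Y/I modelpack
# and write an empty data template listing the circuits to run.
exp_design = smq1Q_XYI.create_gst_experiment_design(max_max_length=32)
pygsti.io.write_empty_protocol_data("example_gst", exp_design, clobber_ok=True)

# Fill the template with simulated counts from a depolarized target model.
# On real hardware you would run the listed circuits and record counts instead.
datagen_model = smq1Q_XYI.target_model().depolarize(op_noise=0.01, spam_noise=0.01)
pygsti.io.fill_in_empty_dataset_with_fake_data(
    "example_gst/data/dataset.txt", datagen_model, num_samples=1000, seed=2020)

# Read the filled-in data back and run the standard GST protocol on it.
data = pygsti.io.read_data_from_dir("example_gst")
results = pygsti.protocols.StandardGST().run(data)
print(results)
```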

Diagnosing and Destroying Non-Markovian Noise

SAND Report (Sandia National Laboratories)

Young, Kevin; Bartlett, Stephen; Blume-Kohout, Robin J.; Gamble, John K.; Lobser, Daniel L.; Maunz, Peter; Nielsen, Erik N.; Proctor, Timothy J.; Revelle, Melissa R.; Rudinger, Kenneth M.

Nearly every protocol used to analyze the performance of quantum information processors is based on an assumption that the errors experienced by the device during logical operations are constant in time and insensitive to external contexts. This assumption is pervasive, rarely stated, and almost always wrong. Quantum devices that do behave this way are termed "Markovian," but nearly every system we have ever probed has displayed drift, crosstalk, or memory effects; they are all non-Markovian. Strong non-Markovianity introduces spurious effects in characterization protocols and violates assumptions of the fault-tolerance threshold theorems. This SAND report details a three-year laboratory-directed research and development (LDRD) project entitled "Diagnosing and Destroying non-Markovian Noise in Quantum Information Processors." The program was initiated to build tools for studying non-Markovian dynamics in quantum systems and to develop robust methodologies for eliminating such noise. It achieved a number of notable successes, including the first statistically rigorous protocol for identifying and characterizing drift in quantum systems, a formalism for modeling memory effects in quantum devices, and the successful suppression of drift in a Sandia trapped-ion quantum processor.

Direct Randomized Benchmarking for Multiqubit Devices

Physical Review Letters

Proctor, Timothy J.; Carignan-Dugas, Arnaud; Rudinger, Kenneth M.; Nielsen, Erik N.; Blume-Kohout, Robin J.; Young, Kevin C.

Benchmarking methods that can be adapted to multiqubit systems are essential for assessing the overall or "holistic" performance of nascent quantum processors. The current industry standard is Clifford randomized benchmarking (RB), which measures a single error rate that quantifies overall performance. But scaling Clifford RB to many qubits is surprisingly hard: as of this writing, it has only been performed on one, two, and three qubits. This reflects a fundamental inefficiency in Clifford RB: the n-qubit Clifford gates at its core have to be compiled into large circuits over the one- and two-qubit gates native to a device. As n grows, the quality of these compiled Clifford gates quickly degrades, making Clifford RB impractical at relatively low n. In this Letter, we propose a direct RB protocol that mostly avoids compiling. Instead, it uses random circuits over the native gates in a device, seeded by an initial layer of Clifford-like randomization. We demonstrate this protocol experimentally on two to five qubits using the publicly available ibmqx5. We believe this to be the largest number of qubits holistically benchmarked to date, achieved on a freely available device without any special tuning. Our protocol retains the simplicity and convenient properties of Clifford RB: it estimates an error rate from an exponential decay. But it can be extended to processors with more qubits (we present simulations on 10+ qubits), and it reports a more directly informative and flexible error rate than the one reported by Clifford RB. We show how to use this flexibility to measure separate error rates for distinct sets of gates, and we use this method to estimate the average error rate of a set of CNOT gates.
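
The decay-fit step that direct RB shares with Clifford RB is easy to sketch: fit the average success probability at each benchmark depth to A + B*f^d, then convert the decay constant f to an error rate. The depths and success probabilities below are invented for illustration, and the conversion uses the common average-infidelity convention, which may differ from the convention a particular paper quotes.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(d, A, B, f):
    """Standard RB decay model: success probability A + B * f**d at depth d."""
    return A + B * f**d

# Invented average success probabilities at a set of benchmark depths.
depths = np.array([0, 2, 4, 8, 16, 32, 64])
success_probs = np.array([0.98, 0.95, 0.92, 0.85, 0.74, 0.58, 0.40])

(A, B, f), _ = curve_fit(decay, depths, success_probs,
                         p0=[0.25, 0.75, 0.99], bounds=(0, 1))

# Convert the decay constant to an error rate per benchmark layer, using
# the common average-infidelity convention r = (1 - f)(2^n - 1)/2^n.
n = 2  # number of qubits benchmarked
r = (1 - f) * (2**n - 1) / 2**n
print(f"decay constant f = {f:.4f}, error rate r = {r:.4f}")
```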
