Publications

Results 26–50 of 147

Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

Nature Communications

Blume-Kohout, Robin J.; Gamble, John K.; Nielsen, Erik N.; Rudinger, Kenneth M.; Mizrahi, Jonathan; Fortier, Kevin M.; Maunz, Peter

Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if, and only if, the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb⁺-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10⁻⁴).
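As a rough illustration of why an RB-style error rate cannot be compared directly to a diamond-norm threshold, the sketch below evaluates both quantities for two textbook single-qubit errors chosen to have identical average infidelity. The formulas and numbers are standard illustrations assumed here, not results from the paper.

```python
# Illustrative comparison (not from the paper): why an RB-style average
# infidelity cannot be compared directly to a diamond-norm threshold.
# Standard single-qubit formulas are assumed:
#   depolarizing channel with strength p:
#       average infidelity r = p/2,  diamond distance = 3p/4
#   coherent Z-rotation by angle theta:
#       average infidelity r = (2/3) sin^2(theta/2),
#       diamond distance    = sin(theta/2)
import numpy as np

def depolarizing(p):
    r = p / 2.0                       # average gate infidelity
    dnorm = 3.0 * p / 4.0             # diamond distance to the identity
    return r, dnorm

def z_rotation(theta):
    r = (2.0 / 3.0) * np.sin(theta / 2.0) ** 2
    dnorm = np.sin(theta / 2.0)
    return r, dnorm

r_target = 1e-3                       # pick two errors with the same RB-style infidelity
p = 2 * r_target
theta = 2 * np.arcsin(np.sqrt(1.5 * r_target))

for name, (r, d) in [("depolarizing", depolarizing(p)),
                     ("coherent Z-rotation", z_rotation(theta))]:
    print(f"{name:20s}  infidelity = {r:.2e}   diamond distance = {d:.2e}")
# Both errors look identical to RB (infidelity ~1e-3), yet their diamond
# distances differ by more than an order of magnitude, so only a complete
# characterization (e.g. GST) can certify a diamond-norm threshold.
```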


Detecting and tracking drift in quantum information processors

Nature Communications

Proctor, Timothy J.; Revelle, Melissa R.; Nielsen, Erik N.; Rudinger, Kenneth M.; Lobser, Daniel L.; Maunz, Peter; Blume-Kohout, Robin J.; Young, Kevin

If quantum information processors are to fulfill their potential, the diverse errors that affect them must be understood and suppressed. But errors typically fluctuate over time, and the most widely used tools for characterizing them assume static error modes and rates. This mismatch can cause unheralded failures, misidentified error modes, and wasted experimental effort. Here, we demonstrate a spectral analysis technique for resolving time dependence in quantum processors. Our method is fast, simple, and statistically sound. It can be applied to time-series data from any quantum processor experiment. We use data from simulations and trapped-ion qubit experiments to show how our method can resolve time dependence when applied to popular characterization protocols, including randomized benchmarking, gate set tomography, and Ramsey spectroscopy. In the experiments, we detect instability and localize its source, implement drift control techniques to compensate for this instability, and then demonstrate that the instability has been suppressed.
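The sketch below illustrates the general idea of spectral drift detection on time-series outcome data: transform the data to the frequency domain and flag any power that static noise is unlikely to produce. It is a minimal illustration using a standard periodogram significance test as an assumption, not the authors' exact statistic or the pyGSTi implementation.

```python
# Minimal sketch of spectral drift detection on time-series outcome data.
# NOT the authors' exact statistic or the pyGSTi implementation; it only
# illustrates the idea: move to the frequency domain and flag power that
# exceeds what a static (white-noise) error model would produce.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated data: one circuit repeated N times; its success probability
# drifts sinusoidally instead of staying constant.
N = 2000
t = np.arange(N)
p_t = 0.9 + 0.05 * np.sin(2 * np.pi * t / 500)      # slowly drifting probability
outcomes = rng.binomial(1, p_t)                     # one bit per repetition

# Standardize so that, under the null (constant p), each point has ~unit variance.
p_hat = outcomes.mean()
x = (outcomes - p_hat) / np.sqrt(p_hat * (1 - p_hat))

# Periodogram: under the static-noise null each power is ~chi^2_2 / 2
# distributed (a standard approximation assumed here), which gives a
# familywise significance threshold.
power = np.abs(np.fft.rfft(x)) ** 2 / N
n_freq = len(power) - 1                             # exclude the DC component
alpha = 0.05
threshold = chi2.ppf(1 - alpha / n_freq, df=2) / 2  # Bonferroni-corrected

significant = np.flatnonzero(power[1:] > threshold) + 1
freqs = np.fft.rfftfreq(N)
print("drift detected at frequencies (per repetition):", freqs[significant])
```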


Direct Randomized Benchmarking for Multiqubit Devices

Physical Review Letters

Proctor, Timothy J.; Carignan-Dugas, Arnaud; Rudinger, Kenneth M.; Nielsen, Erik N.; Blume-Kohout, Robin J.; Young, Kevin

Benchmarking methods that can be adapted to multiqubit systems are essential for assessing the overall or "holistic" performance of nascent quantum processors. The current industry standard is Clifford randomized benchmarking (RB), which measures a single error rate that quantifies overall performance. But scaling Clifford RB to many qubits is surprisingly hard: as of this writing, it has only been performed on one, two, and three qubits. This reflects a fundamental inefficiency in Clifford RB: the n-qubit Clifford gates at its core have to be compiled into large circuits over the one- and two-qubit gates native to a device. As n grows, the quality of these compiled Clifford gates quickly degrades, making Clifford RB impractical at relatively low n. In this Letter, we propose a direct RB protocol that mostly avoids compiling. Instead, it uses random circuits over the native gates of a device, seeded by an initial layer of Clifford-like randomization. We demonstrate this protocol experimentally on two to five qubits of the publicly available ibmqx5, which we believe to be the greatest number of qubits holistically benchmarked to date, achieved on a freely available device without any special tune-up. Our protocol retains the simplicity and convenient properties of Clifford RB: it estimates an error rate from an exponential decay. But it can be extended to processors with more qubits (we present simulations on 10+ qubits), and it reports a more directly informative and flexible error rate than the one reported by Clifford RB. We show how to use this flexibility to measure separate error rates for distinct sets of gates, and we use this method to estimate the average error rate of a set of CNOT gates.
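The sketch below illustrates the decay-fitting step that direct RB shares with Clifford RB: averaged success probabilities are fit to an exponential A·p^m + B, and the decay constant is converted to an error rate. The data and the conversion convention are assumptions made for illustration; the Letter defines its own reported rate.

```python
# Sketch of the decay-fitting step common to Clifford RB and direct RB:
# average success probabilities at each benchmark depth m are fit to
# A * p**m + B, and p is converted to an error rate. The conversion below
# (average-infidelity-style, r = (1 - p)(d - 1)/d with d = 2**n) is one
# common convention, assumed here for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    return A * p**m + B

# Example data: (depth, average success probability) pairs, e.g. estimated
# from many random circuits per depth on an n-qubit device. Values are made up.
n = 2
depths = np.array([0, 2, 4, 8, 16, 32, 64])
success = np.array([0.98, 0.94, 0.91, 0.85, 0.74, 0.57, 0.39])

popt, pcov = curve_fit(decay, depths, success,
                       p0=[0.75, 0.99, 0.25],        # initial guesses for A, p, B
                       bounds=([0, 0, 0], [1, 1, 1]))
A, p, B = popt
d = 2**n
r = (1 - p) * (d - 1) / d
print(f"decay constant p = {p:.4f}, error rate per layer r = {r:.4f}")
```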


Efficient flexible characterization of quantum processors with nested error models

New Journal of Physics

Nielsen, Erik N.; Rudinger, Kenneth M.; Proctor, Timothy J.; Young, Kevin; Blume-Kohout, Robin J.

We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, keeping track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, together with its wildcard error, constitutes a characterization of the processor. We explain how quantum processor models can be compared both with experimental data and with each other, and we demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
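The sketch below illustrates only the control flow of scanning a nested model sequence, using a generic likelihood-ratio criterion in place of the paper's statistics. The model names, parameter counts, and log-likelihoods are hypothetical, and the wildcard-error computation is omitted.

```python
# Generic sketch of scanning a nested sequence of error models against data.
# NOT the paper's exact statistical machinery (and the wildcard-error
# computation is omitted); it only shows the control flow: fit each model,
# then ask whether the extra parameters of the next, more flexible model
# are statistically justified by the gain in log-likelihood.
from dataclasses import dataclass
from typing import Sequence
from scipy.stats import chi2

@dataclass
class FitResult:
    name: str
    n_params: int
    log_likelihood: float

def select_model(fits: Sequence[FitResult], alpha: float = 0.05) -> FitResult:
    """Walk a nested sequence (simplest first) and keep upgrading to the
    next model only while a likelihood-ratio test says the improvement is
    statistically significant."""
    best = fits[0]
    for candidate in fits[1:]:
        # 2 * (log L_big - log L_small) ~ chi^2 with (extra params) dof
        # under the null that the smaller model is adequate (Wilks' theorem).
        statistic = 2.0 * (candidate.log_likelihood - best.log_likelihood)
        extra = candidate.n_params - best.n_params
        if extra > 0 and statistic > chi2.ppf(1 - alpha, df=extra):
            best = candidate          # the richer model earns its parameters
        else:
            break                     # stop at the first unjustified upgrade
    return best

# Hypothetical log-likelihoods from fitting each model to the same data set.
fits = [
    FitResult("depolarizing",          n_params=3,   log_likelihood=-5200.0),
    FitResult("Pauli stochastic",      n_params=12,  log_likelihood=-5110.0),
    FitResult("full process matrices", n_params=240, log_likelihood=-5035.0),
]
print("selected:", select_model(fits).name)
```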


Efficient Gate Set Tomography on a Multi-Qubit Superconducting Processor

Nielsen, Erik N.; Rudinger, Kenneth M.; Blume-Kohout, Robin J.; Bestwick, Andrew B.; Bloom, Benjamin B.; Block, Maxwell B.; Caldwell, Shane M.; Curtis, Michael J.; Hudson, Alex H.; Orgiazzi, Jean-Luc O.; Papageorge, Alexander P.; Polloreno, Anthony P.; Reagor, Matt R.; Rubin, Nicholas R.; Scheer, Michael S.; Selvanayagam, Michael S.; Sete, Eyob S.; Sinclair, Rodney S.; Smith, Robert S.; Vahidpour, Mehrnoosh V.; Villiers, Marius V.; Zeng, William J.; Rigetti, Chad R.

Abstract not provided.

Efficient, Predictive Tomography of Multi-Qubit Quantum Processors

Blume-Kohout, Robin J.; Nielsen, Erik N.; Rudinger, Kenneth M.; Sarovar, Mohan S.; Young, Kevin C.

After decades of R&D, quantum computers comprising more than 2 qubits are appearing. If this progress is to continue, the research community requires a capability for precise characterization (“tomography”) of these enlarged devices, which will enable benchmarking, improvement, and finally certification as mission-ready. As world leaders in characterization, whose gate set tomography (GST) method is the current state of the art, the project team is keenly aware that every existing protocol is either (1) catastrophically inefficient for more than 2 qubits, or (2) not rich enough to predict device behavior. GST scales poorly, while the popular randomized benchmarking technique only measures a single aggregated error probability. This project explored a new insight: that the combinatorial explosion plaguing standard GST could be avoided by using an ansatz of few-qubit interactions to build a complete, efficient model for multi-qubit errors. We developed this approach, prototyped it, and tested it on a cutting-edge quantum processor developed by Rigetti Quantum Computing (RQC), a US-based startup. We implemented our new models within Sandia’s pyGSTi open-source code and tested them experimentally on the RQC device by probing crosstalk. We found two major results: first, our schema worked and is viable for further development; second, while the Rigetti device is indeed a “real” 8-qubit quantum processor, its behavior fluctuated significantly over time while we were experimenting with it, and this drift made it difficult to fit our models of crosstalk to the data.
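A back-of-envelope comparison, with illustrative parameter counts that are not the project's actual model dimensions, shows why a few-qubit-interaction ansatz avoids the combinatorial explosion of a fully general multi-qubit model:

```python
# Back-of-envelope illustration (assumed counts, not the project's actual
# model dimensions) of why a few-qubit-interaction ansatz tames the
# combinatorial explosion. A completely general n-qubit process matrix has
# roughly 16**n real parameters, whereas a model built from single-qubit
# error blocks plus pairwise (crosstalk) couplings grows only polynomially.
from math import comb

def full_process_matrix_params(n):
    return 16**n                       # ~ all real parameters of a general n-qubit map

def pairwise_ansatz_params(n, per_qubit=12, per_pair=9):
    # per_qubit / per_pair are illustrative sizes for local and two-body
    # error blocks, not values taken from the report.
    return n * per_qubit + comb(n, 2) * per_pair

for n in (2, 4, 8):
    print(f"n = {n}:  full model ~ {full_process_matrix_params(n):>12,d} params,"
          f"  pairwise ansatz ~ {pairwise_ansatz_params(n):,d} params")
```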


Gate Set Tomography

Quantum

Nielsen, Erik N.; Gamble, John K.; Rudinger, Kenneth M.; Scholten, Travis; Young, Kevin; Blume-Kohout, Robin J.

Gate set tomography (GST) is a protocol for detailed, predictive characterization of logic operations (gates) on quantum computing processors. Early versions of GST emerged around 2012-13, and since then it has been refined, demonstrated, and used in a large number of experiments. This paper presents the foundations of GST in comprehensive detail. The most important feature of GST, compared to older state and process tomography protocols, is that it is calibration-free. GST does not rely on pre-calibrated state preparations and measurements. Instead, it characterizes all the operations in a gate set simultaneously and self-consistently, relative to each other. Long sequence GST can estimate gates with very high precision and efficiency, achieving Heisenberg scaling in regimes of practical interest. In this paper, we cover GST’s intellectual history, the techniques and experiments used to achieve its intended purpose, data analysis, gauge freedom and fixing, error bars, and the interpretation of gauge-fixed estimates of gate sets. Our focus is fundamental mathematical aspects of GST, rather than implementation details, but we touch on some of the foundational algorithmic tricks used in the pyGSTi implementation.
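The gauge freedom mentioned above can be demonstrated numerically in a few lines: conjugating every gate by an invertible matrix, while transforming the state preparation and measurement to match, leaves every circuit outcome probability unchanged. The sketch below uses generic random 4×4 matrices purely for illustration; it is not the pyGSTi gauge-optimization machinery.

```python
# Small numerical demonstration of the gauge freedom described above:
# in a superoperator (e.g. Pauli-transfer-matrix) representation, every
# circuit outcome probability has the form  << E | G_k ... G_1 | rho >>.
# Conjugating each gate by an invertible M, while mapping
# |rho>> -> M|rho>> and <<E| -> <<E|M^{-1}, leaves all probabilities
# unchanged, so data alone cannot distinguish the two gate sets.
# (Random 4x4 matrices are used purely for illustration; they are not
# constrained to be physical CPTP maps.)
import numpy as np

rng = np.random.default_rng(1)
dim = 4                                   # superoperator dimension for one qubit

rho = rng.normal(size=dim)                # state preparation, as a column vector |rho>>
E = rng.normal(size=dim)                  # measurement effect, as a row vector <<E|
gates = [rng.normal(size=(dim, dim)) for _ in range(3)]

M = rng.normal(size=(dim, dim))           # generic invertible gauge transformation
Minv = np.linalg.inv(M)

# Gauge-transformed gate set.
rho_g = M @ rho
E_g = E @ Minv
gates_g = [M @ G @ Minv for G in gates]

def probability(E_vec, gate_list, rho_vec, circuit):
    """<< E | G_{c_k} ... G_{c_1} | rho >> for a circuit given as gate indices."""
    v = rho_vec
    for idx in circuit:
        v = gate_list[idx] @ v
    return E_vec @ v

circuit = [0, 2, 1, 1, 0]
p_original = probability(E, gates, rho, circuit)
p_gauge = probability(E_g, gates_g, rho_g, circuit)
print(p_original, p_gauge, np.isclose(p_original, p_gauge))
```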
