Publications

Direct Randomized Benchmarking for Multiqubit Devices

Physical Review Letters

Proctor, Timothy J.; Carignan-Dugas, Arnaud; Rudinger, Kenneth M.; Nielsen, Erik N.; Blume-Kohout, Robin J.; Young, Kevin C.

Benchmarking methods that can be adapted to multiqubit systems are essential for assessing the overall or "holistic" performance of nascent quantum processors. The current industry standard is Clifford randomized benchmarking (RB), which measures a single error rate that quantifies overall performance. But scaling Clifford RB to many qubits is surprisingly hard; as of this writing, it has only been performed on one, two, and three qubits. This reflects a fundamental inefficiency in Clifford RB: the n-qubit Clifford gates at its core have to be compiled into large circuits over the one- and two-qubit gates native to a device. As n grows, the quality of these compiled Clifford gates quickly degrades, making Clifford RB impractical at relatively low n. In this Letter, we propose a direct RB protocol that mostly avoids compiling. Instead, it uses random circuits over the native gates of a device, seeded by an initial layer of Clifford-like randomization. We demonstrate this protocol experimentally on two to five qubits using the publicly available ibmqx5. We believe this to be the largest number of qubits holistically benchmarked to date, achieved on a freely available device without any special tuning. Our protocol retains the simplicity and convenient properties of Clifford RB: it estimates an error rate from an exponential decay. But it can be extended to processors with more qubits (we present simulations on 10+ qubits), and it reports a more directly informative and flexible error rate than the one reported by Clifford RB. We show how to use this flexibility to measure separate error rates for distinct sets of gates, and we use this method to estimate the average error rate of a set of CNOT gates.
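
The fitting step this abstract describes is the same for Clifford RB and direct RB: averaged survival probabilities decay as P(m) = A p^m + B with circuit depth m, and the decay constant p is converted into an error rate. Below is a minimal sketch of that step, assuming synthetic data; the (4^n - 1)/4^n prefactor is one convention used in the direct RB literature, and none of this code is from the paper.

```python
# Minimal sketch (not from the paper): estimate an RB error rate by fitting
# the exponential decay P(m) = A * p**m + B of survival probability vs.
# circuit depth m. The data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, p):
    """RB decay model: average survival probability at depth m."""
    return A * p**m + B

n = 2  # number of qubits (illustrative)
depths = np.array([0, 2, 4, 8, 16, 32, 64])

# Synthetic survival probabilities with a true decay constant of 0.97:
rng = np.random.default_rng(0)
survival = 0.7 * 0.97**depths + 0.25 + rng.normal(0, 0.005, depths.size)

(A, B, p), _ = curve_fit(decay, depths, survival, p0=[0.7, 0.25, 0.9])

# Convert the decay constant p into an error rate; the (4**n - 1)/4**n
# prefactor is one convention used for direct RB over n qubits.
r = (4**n - 1) * (1 - p) / 4**n
print(f"decay constant p = {p:.4f}, error rate r = {r:.2e}")
```

The fit constants A and B absorb state-preparation and measurement errors, which is why the extracted rate reflects only the gates.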

Metrics and Benchmarks for Quantum Processors: State of Play

Blume-Kohout, Robin J.; Young, Kevin C.

A compelling narrative has taken hold as quantum computing explodes into the commercial sector: Quantum computing in 2018 is like classical computing in 1965. In 1965, Gordon Moore wrote his famous paper about integrated circuits, saying: "At present, [minimum cost] is reached when 50 components are used per circuit. But... the complexity for minimum component costs has increased at a rate of roughly a factor of two per year... by 1975, the number of components per integrated circuit for minimum cost will be 65,000." This narrative is both appealing (we want to believe that quantum computing will follow the incredibly successful path of classical computing!) and plausible (2018 saw IBM, Intel, and Google announce 50-qubit integrated chips). But it is also deeply misleading. Here is an alternative: Quantum computing in 2018 is like classical computing in 1938. In 1938, John Atanasoff and Clifford Berry built the very first electronic digital computer. It had no program and was not Turing-complete. Vacuum tubes (the standard "bit" for 20 years) were still 5 years in the future. ENIAC and the achievement of "computational supremacy" (over hand calculation) wouldn't arrive for 8 years, despite the accelerative effect of WWII. Integrated circuits and the information age were more than 20 years away. Neither of these analogies is perfect. Quantum computing technology is more like 1938, while the level of funding and excitement suggests 1965 (or later!). But the point of the cautionary analogy to 1938 is simple: Quantum computing in 2018 is a research field. It is far too early to establish metrics or benchmarks for performance. The best role for neutral organizations like IEEE is to encourage and shape research into metrics and benchmarks, so as to be ready when they become necessary. This white paper presents the evidence and reasoning for this claim. We explain what it means to say that quantum computing is a "research field", and why metrics and benchmarks for quantum processors also constitute a research field. We discuss the potential for harmful consequences of prematurely establishing standards or frameworks. We conclude by suggesting specific actions that IEEE or similar organizations can take to accelerate the development of good metrics and benchmarks for quantum computing.

What Randomized Benchmarking Actually Measures

Physical Review Letters

Proctor, Timothy J.; Rudinger, Kenneth M.; Young, Kevin C.; Sarovar, Mohan S.; Blume-Kohout, Robin J.

Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observed survival probabilities versus circuit length yields a single error metric, r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set: it depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and we show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but, as far as we can tell, it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
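
The representation dependence this abstract describes can be seen in a small numerical experiment. The sketch below is our illustration, not code from the Letter: it builds a weakly depolarized single-qubit rotation as a Pauli transfer matrix, applies a similarity ("gauge") transformation that leaves every circuit outcome probability unchanged, and shows that the computed average gate infidelity changes; the transformed representation need not even be completely positive.

```python
# Illustration (not from the paper): average gate infidelity depends on the
# representation ("gauge") chosen for an imperfect gate. We work with 4x4
# Pauli transfer matrices (PTMs) for a single qubit.
import numpy as np
from scipy.linalg import expm

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def ptm(channel):
    """PTM of a map given as a function rho -> channel(rho)."""
    return np.array([[np.real(np.trace(P @ channel(Q))) / 2
                      for Q in paulis] for P in paulis])

def infidelity(G, G_target, d=2):
    """Average gate infidelity, 1 - (d*F_ent + 1)/(d + 1), from PTMs."""
    f_ent = np.trace(G_target.T @ G) / d**2
    return 1 - (d * f_ent + 1) / (d + 1)

# Ideal gate: a pi/2 rotation about X.
U = expm(-1j * (np.pi / 4) * paulis[1])
G_ideal = ptm(lambda rho: U @ rho @ U.conj().T)

# Imperfect gate: the same rotation followed by weak depolarization.
eps = 1e-3
G_noisy = np.diag([1, 1 - eps, 1 - eps, 1 - eps]) @ G_ideal

# Gauge transformation G -> M G M^{-1}. If state preparations and
# measurements are transformed by M^{-1} and M as well, every observable
# probability <<E| G_k ... G_1 |rho>> is exactly unchanged.
M = np.diag([1.0, 1.0, 1.05, 1.0])
G_gauged = M @ G_noisy @ np.linalg.inv(M)

print(infidelity(G_noisy, G_ideal))   # ~5.0e-4
print(infidelity(G_gauged, G_ideal))  # ~1.0e-4: same physics, different number
```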
