Publications

Results 26–50 of 147

Search results

A volumetric framework for quantum computer benchmarks

Quantum

Blume-Kohout, Robin J.; Young, Kevin C.

We propose a very large family of benchmarks for probing the performance of quantum computers. We call them volumetric benchmarks (VBs) because they generalize IBM's benchmark for measuring quantum volume [1]. The quantum volume benchmark defines a family of square circuits whose depth d and width w are the same. A volumetric benchmark defines a family of rectangular quantum circuits, for which d and w are uncoupled to allow the study of time/space performance trade-offs. Each VB defines a mapping from circuit shapes - (w, d) pairs - to test suites C(w, d). A test suite is an ensemble of test circuits that share a common structure. The test suite C for a given circuit shape may be a single circuit C, a specific list of circuits {C1... CN} that must all be run, or a large set of possible circuits equipped with a distribution Pr(C). The circuits in a given VB share a structure, which is limited only by designers' creativity. We list some known benchmarks, and other circuit families, that fit into the VB framework: several families of random circuits, periodic circuits, and algorithm-inspired circuits. The last ingredient defining a benchmark is a success criterion that defines when a processor is judged to have “passed” a given test circuit. We discuss several options. Benchmark data can be analyzed in many ways to extract many properties, but we propose a simple, universal graphical summary of results that illustrates the Pareto frontier of the d vs w trade-off for the processor being benchmarked.
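The abstract describes the bookkeeping of a volumetric benchmark: a mapping from circuit shapes (w, d) to test suites C(w, d), a success criterion, and a pass-rate summary over the width-depth plane. Below is a minimal illustrative sketch of that bookkeeping, assuming hypothetical stand-ins (make_test_suite, run_on_processor, a fixed success threshold) rather than the authors' implementation:

```python
import random

def make_test_suite(width, depth, n_circuits=20, seed=0):
    """Return an ensemble C(w, d) of test-circuit descriptions for one shape.
    Each 'circuit' here is just a placeholder record; a real VB would build
    circuits with whatever structure its designer chose."""
    rng = random.Random(hash((width, depth, seed)))
    return [{"width": width, "depth": depth, "instance": rng.getrandbits(32)}
            for _ in range(n_circuits)]

def passed(circuit, run_on_processor, threshold=2 / 3):
    """One possible success criterion: a circuit 'passes' if the processor's
    success probability (however the VB designer defines it) exceeds a threshold."""
    return run_on_processor(circuit) >= threshold

def volumetric_summary(shapes, run_on_processor):
    """Map each (w, d) shape to the fraction of its test suite that passed.
    Plotting this over the w-d plane traces the depth-vs-width trade-off
    (the Pareto frontier discussed in the abstract)."""
    summary = {}
    for (w, d) in shapes:
        suite = make_test_suite(w, d)
        summary[(w, d)] = sum(passed(c, run_on_processor) for c in suite) / len(suite)
    return summary
```

For instance, evaluating shapes = [(w, d) for w in range(1, 6) for d in (1, 2, 4, 8, 16)] with a simulator-backed run_on_processor would yield the kind of grid summarized graphically in the paper.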

More Details

Direct Randomized Benchmarking for Multiqubit Devices

Physical Review Letters

Proctor, Timothy J.; Carignan-Dugas, Arnaud; Rudinger, Kenneth M.; Nielsen, Erik N.; Blume-Kohout, Robin J.; Young, Kevin C.

Benchmarking methods that can be adapted to multiqubit systems are essential for assessing the overall or "holistic" performance of nascent quantum processors. The current industry standard is Clifford randomized benchmarking (RB), which measures a single error rate that quantifies overall performance. But, scaling Clifford RB to many qubits is surprisingly hard. It has only been performed on one, two, and three qubits as of this writing. This reflects a fundamental inefficiency in Clifford RB: the n-qubit Clifford gates at its core have to be compiled into large circuits over the one- and two-qubit gates native to a device. As n grows, the quality of these Clifford gates quickly degrades, making Clifford RB impractical at relatively low n. In this Letter, we propose a direct RB protocol that mostly avoids compiling. Instead, it uses random circuits over the native gates in a device, which are seeded by an initial layer of Clifford-like randomization. We demonstrate this protocol experimentally on two to five qubits using the publicly available ibmqx5. We believe this to be the greatest number of qubits holistically benchmarked, and this was achieved on a freely available device without any special tuning up. Our protocol retains the simplicity and convenient properties of Clifford RB: it estimates an error rate from an exponential decay. But, it can be extended to processors with more qubits - we present simulations on 10+ qubits - and it reports a more directly informative and flexible error rate than the one reported by Clifford RB. We show how to use this flexibility to measure separate error rates for distinct sets of gates, and we use this method to estimate the average error rate of a set of CNOT gates.
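Like Clifford RB, the direct RB protocol described above estimates an error rate from an exponential decay of average circuit success versus depth. The sketch below fits a generic decay model P(d) = A + B·p^d to synthetic data; the data, the fit starting point, and the conversion of p to an error rate are illustrative assumptions, not the paper's exact direct-RB analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(depth, A, B, p):
    """Generic RB-style decay: average success probability vs. benchmark depth."""
    return A + B * p**depth

# Synthetic stand-ins for measured average success probabilities at each depth.
depths = np.array([0, 2, 4, 8, 16, 32, 64])
success = np.array([0.98, 0.95, 0.91, 0.83, 0.70, 0.52, 0.33])

(A, B, p), _ = curve_fit(decay_model, depths, success, p0=[0.25, 0.7, 0.96])

# One common convention for turning the decay constant into an error rate;
# the paper defines its own (more directly informative) rate, so treat this
# as illustrative only.
n_qubits = 2
error_rate = (2**n_qubits - 1) / 2**n_qubits * (1 - p)
print(f"fitted p = {p:.4f}, per-layer error rate ~ {error_rate:.4f}")
```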

More Details

Compressed optimization of device architectures for semiconductor quantum devices

Physical Review Applied

Ward, Daniel R.; Frees, Adam; Gamble, John K.; Blume-Kohout, Robin J.; Eriksson, M.A.; Friesen, Mark; Coppersmith, S.N.

Recent advances in nanotechnology have enabled researchers to manipulate small collections of quantum-mechanical objects with unprecedented accuracy. In semiconductor quantum-dot qubits, this manipulation requires controlling the dot orbital energies, the tunnel couplings, and the electron occupations. These properties all depend on the voltages placed on the metallic electrodes that define the device, the positions of which are fixed once the device is fabricated. While there has been much success with small numbers of dots, as the number of dots grows, it will be increasingly useful to control these systems with as few electrode voltage changes as possible. Here, we introduce a protocol, which we call the "compressed optimization of device architectures" (CODA), in order both to efficiently identify sparse sets of voltage changes that control quantum systems and to introduce a metric that can be used to compare device designs. As an example of the former, we apply this method to simulated devices with up to 100 quantum dots and show that CODA automatically tunes devices more efficiently than other common nonlinear optimizers. To demonstrate the latter, we determine the optimal lateral scale for a triple quantum dot, yielding a simulated device that can be tuned with small voltage changes on a limited number of electrodes.
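CODA's goal, as described above, is to find sparse sets of electrode-voltage changes that steer a device toward a target operating point. A minimal sketch of that idea follows, using a toy linearized device-response model and an L1 penalty to encourage sparsity; the model, penalty weight, and optimizer are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_electrodes = 12

# Toy linearized model: how a small voltage change on each electrode shifts
# three device properties (e.g. dot energies or tunnel couplings).
response_matrix = rng.normal(size=(3, n_electrodes))
target_shift = np.array([0.5, -0.2, 0.1])   # desired change in those properties

def objective(dV, l1_weight=0.05):
    """Squared mismatch to the target plus an L1 penalty that favors changing
    as few electrode voltages as possible."""
    mismatch = response_matrix @ dV - target_shift
    return float(np.sum(mismatch**2) + l1_weight * np.sum(np.abs(dV)))

result = minimize(objective, x0=np.zeros(n_electrodes), method="Powell")

# Threshold tiny adjustments to zero to read off which electrodes matter.
sparse_dV = np.where(np.abs(result.x) > 1e-3, result.x, 0.0)
print("electrodes actually adjusted:", int(np.count_nonzero(sparse_dV)))
```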

More Details

Metrics and Benchmarks for Quantum Processors: State of Play

Blume-Kohout, Robin J.; Young, Kevin C.

A compelling narrative has taken hold as quantum computing explodes into the commercial sector: Quantum computing in 2018 is like classical computing in 1965. In 1965 Gordon Moore wrote his famous paper about integrated circuits, saying: "At present, [minimum cost] is reached when 50 components are used per circuit. But... the complexity for minimum component costs has increased at a rate of roughly a factor of two per year... by 1975, the number of components per integrated circuit for minimum cost will be 65,000." This narrative is both appealing (we want to believe that quantum computing will follow the incredibly successful path of classical computing!) and plausible (2018 saw IBM, Intel, and Google announce 50-qubit integrated chips). But it is also deeply misleading. Here is an alternative: Quantum computing in 2018 is like classical computing in 1938. In 1938, John Atanasoff and Clifford Berry built the very first electronic digital computer. It had no program, and was not Turing-complete. Vacuum tubes — the standard "bit" for 20 years — were still 5 years in the future. ENIAC and the achievement of "computational supremacy" (over hand calculation) wouldn't arrive for 8 years, despite the accelerative effect of WWII. Integrated circuits and the information age were more than 20 years away. Neither of these analogies is perfect. Quantum computing technology is more like 1938, while the level of funding and excitement suggest 1965 (or later!). But the point of the cautionary analogy to 1938 is simple: Quantum computing in 2018 is a research field. It is far too early to establish metrics or benchmarks for performance. The best role for neutral organizations like IEEE is to encourage and shape research into metrics and benchmarks, so as to be ready when they become necessary. This white paper presents the evidence and reasoning for this claim. We explain what it means to say that quantum computing is a "research field", and why metrics and benchmarks for quantum processors also constitute a research field. We discuss the potential for harmful consequences of prematurely establishing standards or frameworks. We conclude by suggesting specific actions that IEEE or similar organizations can take to accelerate the development of good metrics and benchmarks for quantum computing.

More Details