Probing logical error models with gate set tomography
Abstract not provided.
As noise limits the performance of quantum processors, the ability to characterize this noise and develop methods to overcome it is essential for the future of quantum computing. In this report, we develop a complete set of tools for improving quantum processor performance at the application level, including low-level physical models of quantum gates, a numerically efficient method of producing process matrices that span a wide range of model parameters, and full-channel quantum simulations. We then provide a few examples of how to use these tools to study the effects of noise on quantum circuits.
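One core primitive behind the tools described above is applying a noisy process matrix to a quantum state. As a minimal sketch (the depolarizing rate and Kraus decomposition here are illustrative, not the report's actual gate models), a single-qubit CPTP channel can be simulated by summing over Kraus operators:

```python
import numpy as np

# Kraus operators for a single-qubit depolarizing channel with error rate p.
# (Illustrative parameter value; not taken from the report's physical models.)
p = 0.05
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kraus = [np.sqrt(1 - p) * I] + [np.sqrt(p / 3) * P for P in (X, Y, Z)]

def apply_channel(rho, kraus_ops):
    """Apply a CPTP map given by Kraus operators: rho -> sum_k K rho K^dag."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
rho1 = apply_channel(rho0, kraus)
# Trace is preserved, while the state picks up population in |1><1|.
```

Full-channel circuit simulation composes such maps gate by gate; the report's process matrices span a range of physical model parameters rather than the fixed depolarizing rate used here.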
Work performed under this one-year LDRD was concerned with estimating resource requirements for small quantum test beds that are expected to be available in the near future. This work represents a preliminary demonstration of our ability to leverage quantum hardware for solving small quantum simulation problems in areas of interest to the DOE. The algorithms enabling such studies are hybrid quantum-classical variational algorithms, in particular the widely used variational quantum eigensolver (VQE). Employing this hybrid algorithm, in which the quantum computer complements the classical one, we implemented an end-to-end application-level toolchain that allows the user to specify a molecule of interest and compute its ground-state energy using the VQE approach. We found significant limitations attributable to the classical portion of the hybrid system, including greater-than-quartic scaling of the classical memory requirements with system size. Current VQE approaches would require an exascale machine to solve any molecule with more than 150 nuclei. We implemented several improvements into the VQE toolchain, including a decades-old classical optimizer that had not previously been considered in the VQE ecosystem. Our findings suggest limitations to variational hybrid approaches to simulation that further motivate the need for a gate-based fault-tolerant quantum processor that can handle larger problems using the fully digital quantum phase estimation algorithm.
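The hybrid structure described above (quantum expectation values inside a classical optimization loop) can be sketched with a toy one-parameter problem. Everything here is a hypothetical stand-in for the report's molecular Hamiltonians and optimizer: the ansatz is Ry(theta)|0>, the Hamiltonian is a single Pauli Z, and the classical outer loop is plain gradient descent using the parameter-shift rule:

```python
import numpy as np

def energy(theta):
    """'Quantum' subroutine: <psi(theta)|Z|psi(theta)> for psi = Ry(theta)|0>.

    Analytically E(theta) = cos(theta), with minimum -1 at theta = pi.
    """
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    Z = np.diag([1.0, -1.0])
    return psi @ Z @ psi

# Classical outer loop: gradient descent with the parameter-shift rule,
# grad E(theta) = [E(theta + pi/2) - E(theta - pi/2)] / 2.
theta, lr = 0.3, 0.4
for _ in range(200):
    grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
    theta -= lr * grad
# theta converges toward pi, where the energy reaches its minimum of -1.
```

In a real VQE run the energy evaluation is performed on quantum hardware over many measurement shots, and it is the classical side of this loop (the optimizer and the Hamiltonian bookkeeping) where the report found the dominant memory scaling.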
Proceedings of the 2015 IEEE/ACM International Symposium on Nanoscale Architectures, NANOARCH 2015
We discuss a new approach to computing that retains the possibility of exponential growth while making substantial use of existing technology. The exponential improvement path of Moore's Law has been the driver behind the computing approach of Turing, von Neumann, and FORTRAN-like languages. Performance growth is slowing at the system level, even though further exponential growth should be possible. We propose two technology shifts as a remedy, the first being the formulation of a rule for scaling into the third dimension. This involves circuit-level energy efficiency increases using adiabatic circuits to avoid overheating. However, this scaling rule is incompatible with the von Neumann architecture. The second technology shift is a computer architecture and programming change to an extremely aggressive form of Processor-In-Memory (PIM) architecture, which we call Processor-In-Memory-and-Storage (PIMS). Theoretical analysis shows that the PIMS architecture is compatible with the 3D scaling rule, suggesting both immediate benefit and a long-term improvement path.
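The adiabatic energy-efficiency argument rests on a standard circuit result: abruptly charging a capacitance C to voltage V through a resistance R dissipates (1/2)CV^2 regardless of speed, whereas an adiabatic ramp over time T >> RC dissipates roughly (RC/T)CV^2, so dissipation can be traded for speed. A minimal sketch with illustrative component values (not taken from the paper):

```python
# Energy dissipated charging a capacitance C through resistance R:
#   conventional (abrupt step):      E = 1/2 * C * V**2, independent of speed
#   adiabatic (ramp time T >> RC):   E ~ (R*C / T) * C * V**2
# Component values below are illustrative only.
C, R, V = 1e-15, 1e4, 0.8  # 1 fF, 10 kOhm, 0.8 V

E_conventional = 0.5 * C * V**2

def E_adiabatic(T):
    """Approximate dissipation for a linear voltage ramp of duration T."""
    return (R * C / T) * C * V**2

# Slowing the ramp by 10x cuts dissipation by 10x, which is the lever
# that makes dense 3D stacking thermally viable in this scaling argument.
```

The trade-off is that the saved energy comes at the cost of clock speed, which is why the scaling rule pushes toward massive 3D parallelism rather than faster serial von Neumann execution.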