It is essential to Sandia National Laboratories' continued success in scientific and technological advances and mission delivery to embrace a hybrid workforce culture under which current and future employees can thrive. This report focuses on the findings of the Hybrid Work Team for the Center for Computing Research, which met weekly from March to June 2023 and conducted a survey across the Center at Sandia. Conclusions in this report are drawn from its nine authors, who together comprise the Hybrid Work Team; from 15 responses to a center-wide survey; and from numerous conversations with colleagues. A major finding was widespread dissatisfaction with the quantity, execution, and tooling surrounding formal meetings with remote participants. While there was consensus that remote work enables people to produce high-quality individual and technical work, there was also consensus that there was widespread social disconnect, with particular concern about employees hired after the onset of the COVID-19 pandemic. There were many concerns about tooling and policy to facilitate remote collaboration both within Sandia and with its external collaborators. This report includes recommendations for mitigating these problems. For problems for which obvious recommendations cannot be made, ideas of what a successful solution might look like are presented.
As noise limits the performance of quantum processors, the ability to characterize this noise and develop methods to overcome it is essential for the future of quantum computing. In this report, we develop a complete set of tools for improving quantum processor performance at the application level, including low-level physical models of quantum gates, a numerically efficient method of producing process matrices that span a wide range of model parameters, and full-channel quantum simulations. We then provide a few examples of how to use these tools to study the effects of noise on quantum circuits.
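The core operation underlying full-channel simulations of the kind described above is applying a noise channel, represented by Kraus operators, to a quantum state. The following sketch is purely illustrative and is not the report's actual toolchain: it builds a single-qubit depolarizing channel (error rate `p` is a hypothetical parameter) and applies it after an ideal X gate to measure how far the noisy output drifts from the ideal one.

```python
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    """Kraus operators for a single-qubit depolarizing channel with error rate p."""
    return [np.sqrt(1 - 3 * p / 4) * I,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]

def apply_channel(rho, kraus_ops):
    """Apply a channel, given as a list of Kraus operators, to density matrix rho."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Noisy X gate: ideal unitary followed by depolarizing noise
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
rho_ideal = X @ rho0 @ X.conj().T                          # ideal output |1><1|
rho_noisy = apply_channel(rho_ideal, depolarizing_kraus(0.01))
fidelity = np.real(np.trace(rho_noisy @ rho_ideal))        # overlap with ideal output
```

With `p = 0.01`, the output fidelity is `1 - p/2 = 0.995`; a full toolchain would compose such channels across every gate in a circuit rather than a single one.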
Work performed under this one-year LDRD was concerned with estimating resource requirements for small quantum test beds that are expected to be available in the near future. This work represents a preliminary demonstration of our ability to leverage quantum hardware for solving small quantum simulation problems in areas of interest to the DOE. The algorithms enabling such studies are hybrid quantum-classical variational algorithms, in particular the widely used variational quantum eigensolver (VQE). Employing this hybrid algorithm, in which the quantum computer complements the classical one, we implemented an end-to-end application-level toolchain that allows the user to specify a molecule of interest and compute its ground-state energy using the VQE approach. We found significant limitations attributable to the classical portion of the hybrid system, including greater-than-quartic scaling of the classical memory requirements with the system size. Current VQE approaches would require an exascale machine to solve any molecule with more than 150 nuclei. Our findings include several improvements that we implemented in the VQE toolchain, including a classical optimizer that is decades old but had not previously been applied in the VQE ecosystem. Our findings suggest limitations to variational hybrid approaches to simulation that further motivate the need for a gate-based fault-tolerant quantum processor that can address larger problems using the fully digital quantum phase estimation algorithm.
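The division of labor in VQE can be seen in a toy example: the quantum device's only job is to estimate the energy of a parameterized trial state, while a classical outer loop searches over the parameters. The sketch below is a minimal stand-in, not the report's toolchain: the Hamiltonian coefficients, the one-parameter Ry ansatz, and the grid-search "optimizer" are all hypothetical simplifications chosen so the whole loop fits in plain NumPy.

```python
import numpy as np

# Toy single-qubit "molecular" Hamiltonian H = Z + 0.5 X (hypothetical coefficients)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """<psi|H|psi> -- the quantity a quantum device would estimate by sampling."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: a simple grid search stands in for a real optimizer
thetas = np.linspace(0, 2 * np.pi, 2001)
vqe_energy = min(energy(t) for t in thetas)

# Exact ground-state energy by classical diagonalization, for comparison
exact = np.linalg.eigvalsh(H)[0]
```

For this Hamiltonian the ansatz can reach the true ground state, so `vqe_energy` matches `exact` to grid resolution; the classical memory bottleneck discussed above arises because realistic molecular Hamiltonians have exponentially larger representations than this 2x2 toy.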
We discuss a new approach to computing that retains the possibility of exponential growth while making substantial use of existing technology. The exponential improvement path of Moore's Law has been the driver behind the computing approach of Turing, von Neumann, and FORTRAN-like languages. Performance growth is slowing at the system level, even though further exponential growth should be possible. We propose two technology shifts as a remedy. The first is the formulation of a rule for scaling into the third dimension, which relies on circuit-level energy-efficiency increases from adiabatic circuits to avoid overheating. This scaling rule, however, is incompatible with the von Neumann architecture. The second technology shift is a change in computer architecture and programming to an extremely aggressive form of Processor-In-Memory (PIM) architecture, which we call Processor-In-Memory-and-Storage (PIMS). Theoretical analysis shows that the PIMS architecture is compatible with the 3D scaling rule, suggesting both immediate benefit and a long-term improvement path.