Why reversible computing is the only way forward for general digital computing
IEEE Transactions on Applied Superconductivity
In a previous study, we described a new abstract circuit model for reversible computation called Asynchronous Ballistic Reversible Computing (ABRC), in which localized information-bearing pulses propagate ballistically along signal paths between stateful abstract devices and elastically scatter off those devices serially, while updating the device state in a logically reversible and deterministic fashion. The ABRC model has been shown to be capable of universal computation. In the research reported here, we begin exploring how the ABRC model might be realized in practice using single flux quantum (SFQ) solitons (fluxons) in superconducting Josephson junction (JJ) circuits. One natural family of realizations could utilize fluxon polarity to represent binary data in individual pulses propagating near-ballistically along discrete or continuous long Josephson junctions (LJJs) or microstrip passive transmission lines (PTLs), and utilize the flux charge (-1, 0, +1) of a JJ-containing superconducting loop with Φ0 < IcL < 2Φ0 to encode a ternary state variable internal to a device. A natural question then arises as to which of the definable abstract ABRC device functionalities using this data representation might be implementable using a JJ circuit that dissipates only a small fraction of the input fluxon energy. We discuss conservation rules and symmetries that act as constraints on these circuits, and begin classifying the possible ABRC devices in this family having up to 3 bidirectional I/O terminals and up to 3 internal states.
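To make the reversibility constraint concrete, here is a minimal sketch (our own formalization in Python, not code from the paper; the names and encodings are illustrative assumptions) treating a device in this fluxon family as a transition table over (terminal, polarity, flux state) configurations that must be a bijection:

```python
# A minimal sketch (our formalization, not the paper's code) of the
# reversibility constraint on ABRC devices in this fluxon family: a
# device maps (input terminal, fluxon polarity, internal flux state) to
# (output terminal, output polarity, new flux state), and logical
# reversibility plus determinism require this map to be a bijection.

from itertools import product

TERMINALS = range(3)        # up to 3 bidirectional I/O terminals
POLARITIES = (-1, +1)       # binary data encoded in fluxon polarity
FLUX_STATES = (-1, 0, +1)   # ternary state stored as loop flux charge

def is_reversible(transition):
    """True iff the transition table is a permutation of the full
    (terminal, polarity, state) configuration space."""
    domain = list(product(TERMINALS, POLARITIES, FLUX_STATES))
    return sorted(transition[cfg] for cfg in domain) == sorted(domain)

# Trivial example: a device that reflects every fluxon back out of the
# terminal it arrived on, with polarity and state unchanged.
mirror = {cfg: cfg for cfg in product(TERMINALS, POLARITIES, FLUX_STATES)}
assert is_reversible(mirror)
```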
Computer
The U.S. National Quantum Initiative places quantum computer scaling in the same category as Moore's law. While the technical basis of semiconductor scale-up is well known, the equivalent principle for quantum computers is still being developed. Let's explore these new ideas.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
We review the physical foundations of Landauer’s Principle, which relates the loss of information from a computational process to an increase in thermodynamic entropy. Despite the long history of the Principle, its fundamental rationale and proper interpretation remain frequently misunderstood. Contrary to some misinterpretations of the Principle, the mere transfer of entropy between computational and non-computational subsystems can occur in a thermodynamically reversible way without increasing total entropy. However, Landauer’s Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment. Any uncontrolled thermal system will, by definition, continually re-randomize the physical information in its thermal state, from our perspective as observers who cannot predict the exact dynamical evolution of the microstates of such environments. Thus, any correlations involving information that is ejected into and subsequently thermalized by the environment will be lost from our perspective, resulting directly in an irreversible increase in thermodynamic entropy. Avoiding the ejection and thermalization of correlated computational information motivates the reversible computing paradigm, although the requirements for computations to be thermodynamically reversible are less restrictive than frequently described, particularly in the case of stochastic computational operations. There remain interesting, largely unexplored possibilities for designing computational processes that utilize stochastic, many-to-one computational operations while nevertheless avoiding any net increase in entropy.
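As a concrete anchor for the Principle, the standard Landauer bound states that erasing one bit at temperature T dissipates at least k_B T ln 2. A small worked computation (a textbook figure, not a result specific to this paper):

```python
# The Landauer bound: erasing one bit into a thermal environment at
# temperature T transfers at least k_B*ln(2) of entropy to it, costing
# at least E = k_B * T * ln(2) of dissipated energy.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(T_kelvin: float) -> float:
    """Minimum energy dissipated per bit erased, in joules."""
    return k_B * T_kelvin * math.log(2)

print(f"300 K (room temp): {landauer_limit(300):.3e} J/bit")  # ~2.87e-21 J
print(f"4 K (cryogenic):   {landauer_limit(4):.3e} J/bit")    # ~3.83e-23 J
```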
Simulating HPC systems is a difficult task, and the emergence of “Beyond CMOS” architectures and execution models will increase that difficulty. This document presents a “tutorial” on some of the simulation challenges faced by conventional and non-conventional architectures (Section 1), along with goals and requirements for simulating Beyond CMOS systems (Section 2). These provide background for proposed short- and long-term roadmaps for simulation efforts at Sandia (Sections 3 and 4). Additionally, a brief explanation of a proof-of-concept integration of a Beyond CMOS architectural simulator is presented (Section 2.3).
2017 IEEE International Conference on Wireless for Space and Extreme Environments, WiSEE 2017
Conventional wisdom in the spacecraft domain is that on-orbit computation is expensive, and thus, information is traditionally funneled to the ground as directly as possible. The explosion of information due to larger sensors, the advancements of Moore's law, and other considerations lead us to revisit this practice. In this article, we consider the trade-off between computation, storage, and transmission, viewed as an energy minimization problem.
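A hedged sketch of the kind of energy-minimization comparison the article describes (the model and the per-bit costs below are our illustrative placeholders, not the paper's numbers): on-board processing pays off when the energy spent computing is less than the downlink energy it saves.

```python
# A hedged sketch (our illustration, not the paper's model) of the
# compute-vs-transmit trade-off: process data on-orbit only when the
# energy spent compressing it is less than the downlink energy saved.
# All per-bit costs here are illustrative placeholders.

def should_compute_onboard(raw_bits, compression_ratio,
                           e_compute_per_bit, e_transmit_per_bit):
    """Return True if on-board compression lowers total energy."""
    e_direct = raw_bits * e_transmit_per_bit
    compressed_bits = raw_bits / compression_ratio
    e_with_compute = (raw_bits * e_compute_per_bit
                      + compressed_bits * e_transmit_per_bit)
    return e_with_compute < e_direct

# Illustrative numbers: 1 Gbit of sensor data, 10x compression,
# 1 pJ/bit to process vs 100 pJ/bit to downlink.
print(should_compute_onboard(1e9, 10, 1e-12, 100e-12))  # True
```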
2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings
Most existing concepts for hardware implementation of reversible computing invoke an adiabatic computing paradigm, in which individual degrees of freedom (e.g., node voltages) are synchronously transformed under the influence of externally supplied driving signals. But distributing these "power/clock" signals to all gates within a design while efficiently recovering their energy is difficult. Can we reduce clocking overhead using a ballistic approach, wherein data signals self-propagating between devices drive most state transitions? Traditional concepts of ballistic computing, such as the classic Billiard-Ball Model, typically rely on a precise synchronization of interacting signals, which can fail due to exponential amplification of timing differences when signals interact. In this paper, we develop a general model of Asynchronous Ballistic Reversible Computing (ABRC) that aims to address these problems by eliminating the requirement for precise synchronization between signals. Asynchronous reversible devices in this model are isomorphic to a restricted set of Mealy finite-state machines. We explore ABRC devices having up to 3 bidirectional I/O terminals and up to 2 internal states, identifying a simple pair of such devices that comprises a computationally universal set of primitives. We also briefly discuss how ABRC might be implemented using single flux quanta in superconducting circuits.
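To illustrate the device-space exploration described here, a short sketch (our own formalization, with a hypothetical example device; the paper's actual classification further prunes by symmetry and physical equivalence, which we do not reproduce) counting bijective transition maps for 3 terminals and 2 states:

```python
# A hedged sketch (our formalization, not the paper's code) of ABRC
# devices as restricted Mealy machines: with T bidirectional terminals
# and S internal states, a deterministic, logically reversible device
# is a bijection on the (terminal, state) configuration space.

from itertools import product
from math import factorial

T, S = 3, 2
configs = list(product(range(T), range(S)))

# Every permutation of the T*S configurations is a candidate device;
# the paper's classification further prunes these by symmetry and
# physical realizability, which this sketch does not attempt.
print(f"{factorial(len(configs))} raw bijective transition maps")  # 720

# Hypothetical example: a "toggling reflector" that bounces each pulse
# back out of the terminal it arrived on while flipping its state bit.
toggler = {(t, s): (t, 1 - s) for (t, s) in configs}
assert sorted(toggler.values()) == sorted(configs)  # bijective, so reversible
```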
IEEE Spectrum
For more than 50 years, computers have made steady and dramatic improvements, all thanks to Moore’s Law—the exponential increase over time in the number of transistors that can be fabricated on an integrated circuit of a given size. Moore’s Law owed its success to the fact that as transistors were made smaller, they became simultaneously cheaper, faster, and more energy efficient. The payoff from this win-win-win scenario enabled reinvestment in semiconductor fabrication technology that could make even smaller, more densely-packed transistors. And so this virtuous cycle continued, decade after decade. Now though, experts in industry, academia, and government laboratories anticipate that semiconductor miniaturization won’t continue much longer—maybe 10 years or so, at best. Making transistors smaller no longer yields the improvements it used to. The physical characteristics of small transistors forced clock speeds to cease getting faster more than a decade ago, which drove the industry to start building chips with multiple cores. But even multi-core architectures must contend with increasing amounts of “dark silicon,” areas of the chip that must be powered off to avoid overheating.