Researchers are now considering alternatives to the von Neumann computer architecture as a way to improve performance. The current approach of simulating benchmark applications favors continued use of the von Neumann architecture, but architects can help overcome this bias.
DeBenedictis, Erik; Badaroglu, Mustafa; Chen, An; Conte, Thomas M.; Gargini, Paolo
Rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.
Industry's inability to reduce logic gates' energy consumption is slowing growth in an important part of the worldwide economy. Some scientists argue that alternative approaches could greatly reduce energy consumption. These approaches entail myriad technical and political issues.
We address practical limits of energy-efficiency scaling for logic and memory. Scaling of logic will end with unreliable operation, making computers probabilistic as a side effect. The errors can be corrected or tolerated, but the overhead will increase with further scaling. We examine the tradeoff between scaling and error correction that yields the minimum energy per operation, finding new error correction methods whose energy consumption limits are about 2× below those of current approaches. The maximum energy efficiency for memory depends on several other factors. Adiabatic and reversible methods applied to logic have promise, but overheads have precluded practical use. However, the regular structure of memory arrays tends to reduce overhead and makes adiabatic memory a viable option. This paper reports an adiabatic memory that has been tested and demonstrates about an 85× energy-efficiency improvement over standard designs. Combining these approaches could set energy-efficiency expectations for processor-in-memory computing systems.
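A minimal toy sketch of the scaling/error-correction tradeoff described in this abstract, under assumptions that are not taken from the paper: raw bit upsets are modeled as thermal noise with probability exp(-E/kT), errors are masked by simple majority voting over redundant copies, and the 1e-27 reliability target is arbitrary. The paper's actual error correction methods and numbers differ.

```python
# Toy model (not the paper's): how much redundancy a majority vote needs
# to hold a fixed logical error rate as the raw per-bit signal energy shrinks.
import math

TARGET = 1e-27   # assumed acceptable logical error rate per operation

def majority_error(p, n):
    """Failure probability of an n-way majority vote over independent bits."""
    k = n // 2 + 1
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"{'E/kT':>6} {'raw error':>12} {'copies':>7} {'total E/kT':>11}")
for m in (60, 40, 20, 10, 5):
    p = math.exp(-m)   # toy thermal-upset probability at signal energy m*kT
    n = next(n for n in range(1, 201, 2) if majority_error(p, n) <= TARGET)
    print(f"{m:>6} {p:>12.1e} {n:>7} {m * n:>11}")
```

In this toy model the redundancy needed to hold the reliability target grows as the raw signal energy shrinks, so the total energy per reliable operation eventually stops improving, which is the qualitative tradeoff the abstract describes.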
Continuing to improve computational energy efficiency will soon require developing and deploying new operational paradigms for computation that circumvent the fundamental thermodynamic limits that apply to conventionally implemented Boolean logic circuits. In particular, Landauer's principle tells us that irreversible information erasure requires a minimum energy dissipation of kT ln 2 per bit erased, where k is Boltzmann's constant and T is the temperature of the available heat sink. However, correctly applying this principle requires carefully characterizing what actually constitutes "information erasure" within a given physical computing mechanism. In this paper, we show that abstract combinational logic networks can validly be considered to contain no information beyond that specified in their input, and that, because of this, appropriately designed physical implementations of even multi-layer networks can in fact be updated in a single step while incurring no greater theoretical minimum energy dissipation than is required to update their inputs. Furthermore, this energy can approach zero if the network state is updated adiabatically via a reversible transition process. Our novel operational paradigm for updating logic networks suggests an entirely new class of hardware devices and circuits that can be used to reversibly implement Boolean logic with energy dissipation far below the Landauer limit.
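As a quick numerical check of the kT ln 2 figure quoted above, assuming a room-temperature heat sink of T = 300 K (the temperature is an assumption, not taken from the paper):

```python
import math

k = 1.380649e-23        # Boltzmann's constant, J/K
T = 300.0               # assumed heat-sink temperature, K
landauer = k * T * math.log(2)
print(f"kT ln 2 at {T:.0f} K = {landauer:.2e} J "
      f"(~{landauer * 1e21:.2f} zJ) per bit erased")
```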
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
DeBenedictis, Erik; Frank, Michael P.; Ganesh, Natesh; Anderson, Neal G.
At roughly kT energy dissipation per operation, the thermodynamic energy-efficiency "limits" of Moore's law were unimaginably far off in the 1960s. However, current computers operate at only 100-10,000 times this limit, which supports the argument that historical rates of efficiency scaling must soon slow. This paper reviews the justification for the ∼kT per operation limit in the context of processors for von Neumann-class computer architectures of the 1960s. We then reapply the fundamental arguments to contemporary applications and identify a new direction for future computing in which the ultimate efficiency limits would be much further out. New nanodevices with high-level functions that aggregate the functionality of several logic gates and some local memory may be the right building blocks for much more energy-efficient execution of emerging applications, such as neural networks.
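For context, a back-of-the-envelope conversion of the "100-10,000 times this limit" range into absolute energies, assuming the ~kT reference point is evaluated at 300 K (an assumed, though conventional, temperature):

```python
kT = 1.380649e-23 * 300.0            # ~4.14e-21 J at an assumed 300 K
for factor in (100, 10_000):
    e_op = factor * kT
    print(f"{factor:>6} x kT -> {e_op:.2e} J ({e_op * 1e18:.2f} aJ) per operation")
```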
Artificial neural networks could become the technological driver that replaces Moore's law, boosting computers' utility through a process akin to automatic programming, although physics and computer architecture would also factor in.