Using Neuromorphic Computing Methods for General Computer Performance Growth
Computer
The U.S. National Quantum Initiative places quantum computer scaling in the same category as Moore's law. While the technical basis of semiconductor scale-up is well known, the equivalent principle for quantum computers is still being developed. Let's explore these new ideas.
Computer
Will quantum computation become an important milestone in human progress? Passionate advocates and equally passionate skeptics abound. IEEE already provides useful, neutral forums for state-of-the-art science and engineering knowledge as well as practical benchmarks for quantum computation evaluation. But could the organization do more?
2018 IEEE International Conference on Rebooting Computing, ICRC 2018
Logic-memory integration helps mitigate the von Neumann bottleneck, enabling a new class of architectures that accelerates graph analytics and operations on sparse data streams. These architectures use merge networks as a key unit of computation. Such networks are highly parallel, and when a bitonic algorithm is used, their performance increases with tighter coupling between logic and memory. This paper presents energy-efficient on-chip network architectures for merging key-value pairs using both word-parallel and bit-serial paradigms. The proposed architectures can merge two rows' worth of high-bandwidth memory (HBM) data in a manner that is completely overlapped with reading from and writing back to such a row. Furthermore, their energy consumption is about an order of magnitude lower than that of a naive crossbar-based design.
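A minimal software sketch of the bitonic merge step such networks implement in hardware (illustrative Python; the key-value pairs and run lengths are assumed for demonstration and are not the paper's design):

    # Bitonic merge of two sorted runs of (key, value) pairs.
    # Concatenating an ascending run with a reversed ascending run yields
    # a bitonic sequence, which log2(n) stages of compare-exchange
    # operations then sort completely. Every compare-exchange within a
    # stage is independent, which is what makes the network highly
    # parallel in hardware.

    def bitonic_merge(a, b):
        """Merge ascending runs a and b; total length must be a power of two."""
        seq = a + b[::-1]              # ascending + descending = bitonic
        n = len(seq)
        stride = n // 2
        while stride >= 1:
            for block in range(0, n, 2 * stride):
                for i in range(block, block + stride):
                    if seq[i][0] > seq[i + stride][0]:
                        seq[i], seq[i + stride] = seq[i + stride], seq[i]
            stride //= 2
        return seq

    left = [(1, 'a'), (4, 'b'), (6, 'c'), (7, 'd')]
    right = [(2, 'e'), (3, 'f'), (5, 'g'), (8, 'h')]
    print(bitonic_merge(left, right))  # keys emerge in ascending order

Each stage maps naturally onto a rank of compare-exchange units, so an n-element merge completes in log2(n) hardware steps regardless of n.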
Computer
In the early 2000s, industry switched to multicore microprocessors to address semiconductors' speed and power limits. However, the change was unsuccessful, leading to dire claims that 'Moore's law is ending.' This column suggests that while the approach was sound, it needed a deeper architectural transformation. Industry has since discovered a suitable architecture, but work remains on software to support it.
Moore's law drives an information revolution and worldwide economic growth, and serves as a tool for national security. This report explains how dire proclamations that "Moore's law is ending" stem from a natural redefinition of the phrase; computing remains positioned to both drive economic growth and support national security. The computer industry used to be led by the semiconductor companies that made ever-faster microprocessors with larger memories. However, control is shifting to new ways of designing computers, notably based on 3D chips and new analog and digital architectures. While artificial intelligence and quantum computing research have become mainstream pursuits, these latter two areas seem destined to split off from Moore's law rather than become part of it. We include a discussion of recent developments and opportunities in optical communications and computing.
Computer
Security vulnerabilities such as Meltdown and Spectre demonstrate how chip complexity grew faster than our ability to manage unintended consequences. Attention to security from the outset should be part of the remedy, yet complexity must be controlled at a more fundamental level.
Computer
Could combining quantum computing and machine learning with Moore's law produce a true 'rebooted computer'? This article posits that a three-technology hybrid-computing approach might yield sufficiently improved answers to a broad class of problems such that energy efficiency will no longer be the dominant concern.
2017 IEEE International Conference on Wireless for Space and Extreme Environments, WiSEE 2017
Conventional wisdom in the spacecraft domain is that on-orbit computation is expensive, and thus, information is traditionally funneled to the ground as directly as possible. The explosion of information due to larger sensors, the advance of Moore's law, and other considerations lead us to revisit this practice. In this article, we consider the trade-off between computation, storage, and transmission, viewed as an energy minimization problem.
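A toy version of that minimization (hedged Python sketch; every per-bit energy cost and the data-reduction ratio are assumed round numbers, not values from the paper):

    # Toy energy model for the compute-vs-transmit trade-off.
    # All constants are assumed, illustrative values in J/bit.

    E_TX_PER_BIT      = 1e-7   # assumed downlink radio energy per bit
    E_COMPUTE_PER_BIT = 1e-9   # assumed on-board processing energy per input bit
    REDUCTION_RATIO   = 20     # assumed data reduction from on-orbit processing

    def energy_send_raw(bits):
        return bits * E_TX_PER_BIT

    def energy_process_then_send(bits):
        processed = bits * E_COMPUTE_PER_BIT              # pay to compute on orbit
        downlink  = (bits / REDUCTION_RATIO) * E_TX_PER_BIT
        return processed + downlink

    sensor_bits = 1e12   # assumed 1 Tb observation
    print(f"raw downlink:      {energy_send_raw(sensor_bits):.0f} J")
    print(f"process then send: {energy_process_then_send(sensor_bits):.0f} J")

Under this model, on-orbit processing wins whenever its per-bit cost is below the transmission energy it saves, i.e. E_COMPUTE_PER_BIT < E_TX_PER_BIT * (1 - 1/REDUCTION_RATIO).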
Computer
The familiar story of Moore's law is actually inaccurate. This article corrects the story, leading to different projections for the future. Moore's law is a fluid idea whose definition changes over time. It thus doesn't have the ability to 'end,' as is popularly reported, but merely takes different forms as the semiconductor and computer industries evolve.
Computer
Industry's inability to reduce logic gates' energy consumption is slowing growth in an important part of the worldwide economy. Some scientists argue that alternative approaches could greatly reduce energy consumption. These approaches entail myriad technical and political issues.
Computer
Researchers are now considering alternatives to the von Neumann computer architecture as a way to improve performance. The current approach of simulating benchmark applications favors continued use of the von Neumann architecture, but architects can help overcome this bias.
Computer
Rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.
IEEE Micro
In this column, the guest editors introduce the six articles in the special issue on Architectures for the Post-Moore Era.
Computer
Computational complexity analysis allows us to quantify energy-efficiency scaling potential - an important task for assessing research options.
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
We address practical limits of energy-efficiency scaling for logic and memory. Scaling of logic will end with unreliable operation, making computers probabilistic as a side effect. The errors can be corrected or tolerated, but the overhead will increase with further scaling. We analyze the tradeoff between scaling and error correction that yields minimum energy per operation, finding new error correction methods with energy consumption limits about 2× below current approaches. The maximum energy efficiency for memory depends on several other factors. Adiabatic and reversible methods applied to logic have promise, but overheads have precluded practical use. However, the regular structure of memory arrays tends to reduce overhead and makes adiabatic memory a viable option. This paper reports an adiabatic memory that has demonstrated in testing about an 85× energy-efficiency improvement over standard designs. Combining these approaches could set energy-efficiency expectations for processor-in-memory computing systems.
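A toy numeric illustration of the scaling-vs-correction tradeoff (Python sketch; the exponential thermal-error model is a textbook simplification and the correction-overhead constant is an assumed value, not the paper's analysis):

    # As signal energy per operation shrinks, thermal upsets become more
    # likely and error correction costs more; somewhere in between lies
    # a minimum total energy per useful operation.
    import math

    KT = 4.14e-21          # thermal energy kT at T = 300 K, in joules
    C_CORRECT = 1000.0     # assumed cost of recovering from one error, in operations

    def energy_per_useful_op(e_signal):
        p_error = math.exp(-e_signal / KT)   # simple thermal-upset model
        return e_signal * (1.0 + C_CORRECT * p_error)

    candidates = [k * KT for k in range(5, 61)]   # sweep 5 kT .. 60 kT
    best = min(candidates, key=energy_per_useful_op)
    print(f"minimum near {best / KT:.0f} kT/op = {energy_per_useful_op(best):.2e} J")

With these assumed numbers the minimum lands near 9 kT per operation; the paper's more careful accounting of this overhead is what yields correction methods about 2× better than current approaches.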
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
At roughly kT energy dissipation per operation, the thermodynamic energy efficiency "limits" of Moore's Law were unimaginably far off in the 1960s. However, current computers operate at only 100-10,000 times this limit, forming an argument that historical rates of efficiency scaling must soon slow. This paper reviews the justification for the ∼kT per operation limit in the context of processors for von Neumann-class computer architectures of the 1960s. We then reapply the fundamental arguments to contemporary applications and identify a new direction for future computing in which the ultimate efficiency limits would be much further out. New nanodevices with high-level functions that aggregate the functionality of several logic gates and some local memory may be the right building blocks for much more energy efficient execution of emerging applications - such as neural networks.
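For scale, the reference point itself is easy to compute (Python; the contemporary switching energy below is an assumed round number chosen to fall inside the 100-10,000× band the paper cites):

    # Where the ~kT reference point sits at room temperature.
    import math

    K_B = 1.380649e-23     # Boltzmann constant, J/K (exact SI value)
    T = 300.0              # assumed room-temperature heat sink, K

    kT = K_B * T                   # ~4.14e-21 J
    landauer = kT * math.log(2)    # kT ln 2 ~ 2.87e-21 J, to erase one bit

    E_DEVICE = 1e-17   # assumed switching energy of a contemporary logic gate, J
    print(f"kT          = {kT:.2e} J")
    print(f"kT ln 2     = {landauer:.2e} J")
    print(f"device / kT = {E_DEVICE / kT:,.0f}x")   # ~2,400x, inside the cited band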
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
Continuing to improve computational energy efficiency will soon require developing and deploying new operational paradigms for computation that circumvent the fundamental thermodynamic limits that apply to conventionally implemented Boolean logic circuits. In particular, Landauer's principle tells us that irreversible information erasure requires a minimum energy dissipation of kT ln 2 per bit erased, where k is Boltzmann's constant and T is the temperature of the available heat sink. However, correctly applying this principle requires carefully characterizing what actually constitutes "information erasure" within a given physical computing mechanism. In this paper, we show that abstract combinational logic networks can validly be considered to contain no information beyond that specified in their input, and that, because of this, appropriately designed physical implementations of even multi-layer networks can in fact be updated in a single step while incurring no greater theoretical minimum energy dissipation than is required to update their inputs. Furthermore, this energy can approach zero if the network state is updated adiabatically via a reversible transition process. Our novel operational paradigm for updating logic networks suggests an entirely new class of hardware devices and circuits that can be used to reversibly implement Boolean logic with energy dissipation far below the Landauer limit.
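The bound in question, written out (standard statement of Landauer's principle; the numerical value assumes a 300 K heat sink):

    E_{\mathrm{erase}} \;\ge\; kT \ln 2 \;\approx\; 2.87 \times 10^{-21}\ \mathrm{J\ per\ bit\ erased} \qquad (T = 300\ \mathrm{K})

The paper's argument is that a combinational network updated adiabatically erases no information beyond its inputs, so this per-bit charge applies only to updating the inputs, not to the internal layers.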
Computer
The National Strategic Computing Initiative will inevitably produce new computer hardware, but what about software?
Computer
The slowing of Moore's law offers IEEE and its members the unique opportunity to influence research toward continued growth in computing performance.
Computer
Artificial neural networks could become the technological driver that replaces Moore's law, boosting computers' utility through a process akin to automatic programming, although physics and computer architecture would also factor in.