Will quantum computation become an important milestone in human progress? Passionate advocates and equally passionate skeptics abound. IEEE already provides useful, neutral forums for state-of-the-art science and engineering knowledge as well as practical benchmarks for evaluating quantum computation. But could the organization do more?
Logic-memory integration helps mitigate the von Neumann bottleneck, and this has enabled a new class of architectures that accelerate graph analytics and operations on sparse data streams. These architectures use merge networks as a key unit of computation. Such networks are highly parallel, and when a bitonic algorithm is used, their performance increases with tighter coupling between logic and memory. This paper presents energy-efficient on-chip network architectures for merging key-value pairs using both word-parallel and bit-serial paradigms. The proposed architectures can merge two rows' worth of high-bandwidth memory (HBM) data in a manner that is completely overlapped with reading from and writing back to such a row. Furthermore, their energy consumption is about an order of magnitude lower than that of a naive crossbar-based design.
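To make the compare-exchange structure behind such merge networks concrete, the sketch below performs a software bitonic merge of two sorted key-value lists. It is a minimal illustration of the general bitonic technique, not the paper's hardware design; the function name and the assumption of equal power-of-two input lengths are illustrative choices.

```python
def bitonic_merge(a, b):
    """Merge two ascending key-value lists of equal power-of-two length.

    Mirrors a bitonic merge network: reversing one input makes the
    concatenation bitonic, then log2(n) stages of fixed-stride
    compare-exchanges produce a fully sorted sequence.
    """
    assert len(a) == len(b) and (len(a) & (len(a) - 1)) == 0
    seq = a + b[::-1]          # ascending followed by descending => bitonic
    n = len(seq)
    stride = n // 2
    while stride >= 1:
        for i in range(n):
            j = i ^ stride     # partner "wire" in this stage
            if j > i and seq[i][0] > seq[j][0]:   # compare on key
                seq[i], seq[j] = seq[j], seq[i]   # exchange key-value pairs
        stride //= 2
    return seq

# Example: merge two sorted runs of (key, value) pairs.
print(bitonic_merge([(1, 'a'), (4, 'b')], [(2, 'c'), (3, 'd')]))
# [(1, 'a'), (2, 'c'), (3, 'd'), (4, 'b')]
```

Because every stage applies the same fixed pattern of independent compare-exchanges, all of them can proceed in parallel in hardware, which is what makes the network's throughput scale with logic-memory bandwidth.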
In the early 2000s, industry switched to multicore microprocessors to address semiconductors' speed and power limits. However, the change was unsuccessful, leading to dire claims that 'Moore's law is ending.' This column suggests that while the approach was sound, it needed a deeper architectural transformation. Industry has since discovered a suitable architecture, but work remains on software to support it.
Security vulnerabilities such as Meltdown and Spectre demonstrate how chip complexity grew faster than our ability to manage unintended consequences. Attention to security from the outset should be part of the remedy, yet complexity must be controlled at a more fundamental level.
Could combining quantum computing and machine learning with Moore's law produce a true 'rebooted computer'? This article posits that a three-technology hybrid-computing approach might yield sufficiently improved answers to a broad class of problems such that energy efficiency will no longer be the dominant concern.
Conventional wisdom in the spacecraft domain is that on-orbit computation is expensive, and thus, information is traditionally funneled to the ground as directly as possible. The explosion of information due to larger sensors, the continuing advance of Moore's law, and other considerations lead us to revisit this practice. In this article, we consider the trade-off between computation, storage, and transmission, viewed as an energy minimization problem.
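The following sketch illustrates the kind of trade-off being minimized: it compares the energy of downlinking raw sensor data against processing it on-orbit (e.g., compressing or extracting features) and transmitting the smaller result. The function name and all per-bit cost figures are illustrative assumptions, not values or a model from the article.

```python
def downlink_energy_joules(raw_bits, reduction_ratio,
                           e_tx_per_bit, e_compute_per_bit,
                           e_store_per_bit=0.0):
    """Illustrative energy model for the compute-vs-transmit trade-off.

    Option A: transmit the raw data directly to the ground.
    Option B: process on-orbit to shrink the data by reduction_ratio,
              optionally buffer the result, then transmit fewer bits.
    Returns the energy of each option so the cheaper one can be chosen.
    """
    e_raw = raw_bits * e_tx_per_bit
    processed_bits = raw_bits / reduction_ratio
    e_processed = (raw_bits * e_compute_per_bit
                   + processed_bits * (e_store_per_bit + e_tx_per_bit))
    return e_raw, e_processed

# Example: 1 Gbit of sensor data, 20x on-orbit reduction, transmission
# 100x costlier per bit than computation (hypothetical numbers).
raw, proc = downlink_energy_joules(1e9, 20,
                                   e_tx_per_bit=1e-6,
                                   e_compute_per_bit=1e-8)
print("transmit raw: %.0f J, process then transmit: %.0f J" % (raw, proc))
# transmit raw: 1000 J, process then transmit: 60 J
```

In this toy setting, on-orbit processing wins whenever the per-bit cost of computation plus the transmission cost of the reduced data falls below the transmission cost of the raw stream, which is the balance the article's energy-minimization framing examines.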
The familiar story of Moore's law is actually inaccurate. This article corrects the story, leading to different projections for the future. Moore's law is a fluid idea whose definition changes over time. It thus doesn't have the ability to 'end,' as is popularly reported, but merely takes different forms as the semiconductor and computer industries evolve.