Publications

Results 26–50 of 218

Neuromorphic scaling advantages for energy-efficient random walk computations

Nature Electronics

Smith, J.D.; Hill, Aaron; Reeder, Leah E.; Franke, Brian C.; Lehoucq, Rich; Parekh, Ojas D.; Severa, William M.; Aimone, James B.

Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.

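As a point of reference for the approach described above, the sketch below simulates the same basic idea on a conventional CPU with NumPy: independent walkers follow a discrete-time Markov chain on a 1-D lattice, and their ensemble density approximates diffusion. This is an illustrative stand-in, not the TrueNorth/Loihi implementation from the paper; all parameters and names are assumptions made for the example.

```python
# Illustrative only: a plain NumPy discrete-time Markov chain (DTMC) random walk
# whose ensemble density approximates 1-D diffusion. NOT the paper's neuromorphic
# implementation; every name and parameter here is an assumption.
import numpy as np

rng = np.random.default_rng(0)

n_walkers = 100_000   # independent walkers; a neuromorphic version parallelizes these
n_steps = 200         # DTMC time steps
p_right = 0.5         # unbiased walk -> pure diffusion (biasing this adds drift)

positions = np.zeros(n_walkers, dtype=np.int64)   # all walkers start at the origin
for _ in range(n_steps):
    steps = np.where(rng.random(n_walkers) < p_right, 1, -1)
    positions += steps

# For an unbiased walk the ensemble density approaches a Gaussian whose variance
# grows linearly in time, i.e. the Green's function of the 1-D diffusion equation.
print("sample mean:", positions.mean())
print("sample variance:", positions.var(), "(diffusion theory predicts ~", n_steps, ")")
```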

Probabilistic Neural Circuits leveraging AI-Enhanced Codesign for Random Number Generation

Proceedings - 2022 IEEE International Conference on Rebooting Computing, ICRC 2022

Cardwell, Suma G.; Schuman, Catherine D.; Smith, J.D.; Patel, Karan; Kwon, Jaesuk; Liu, Samuel; Allemang, Christopher R.; Misra, Shashank; Incorvia, Jean A.; Aimone, James B.

Stochasticity is ubiquitous in the world around us, yet our predominant computing paradigm is deterministic. In such systems, random number generation (RNG) can be computationally inefficient, especially for larger workloads. Our work leverages the underlying physics of emerging devices to develop probabilistic neural circuits that generate random numbers from a given distribution. Codesign of novel circuits and systems that exploit inherent device stochasticity is a hard problem, largely because of the size of the design space and the need for concurrent input from multiple levels of the design stack, from algorithms and architectures to circuits and devices. In this paper, we present examples of optimal circuits developed with AI-enhanced codesign techniques using constraints from emerging devices and algorithms. Our AI-enhanced codesign approach accelerated design and enabled interaction between experts from different areas of the microelectronics design stack, including theory, algorithms, circuits, and devices. We demonstrate optimal probabilistic neural circuits, built from magnetic tunnel junction and tunnel diode devices, that generate random numbers from a given distribution.

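To make the goal concrete, the sketch below shows one generic way that tunable Bernoulli "bits," such as those a stochastic device like a magnetic tunnel junction might supply, can be composed to sample from an arbitrary discrete distribution. This is an assumption for illustration, not the circuits in the paper; bernoulli_bit and sample_from_distribution are hypothetical names.

```python
# Illustrative software stand-in (assumption, not the paper's circuits): treat each
# stochastic device as a tunable Bernoulli bit source, then compose bits in a binary
# decision tree so that the leaf probabilities match a target distribution.
import numpy as np

rng = np.random.default_rng(1)

def bernoulli_bit(p):
    """Stand-in for a tunable stochastic device: returns True with probability p."""
    return rng.random() < p

def sample_from_distribution(probs):
    """Sample an index from `probs` using only biased bits (binary search on the CDF)."""
    lo, hi = 0, len(probs)                 # current candidate range [lo, hi)
    mass = 1.0                             # probability mass remaining in the range
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = sum(probs[lo:mid])          # mass of the left half
        if bernoulli_bit(left / mass):     # one biased "device" decides the branch
            hi, mass = mid, left
        else:
            lo, mass = mid, mass - left
    return lo

target = [0.1, 0.2, 0.3, 0.4]
samples = [sample_from_distribution(target) for _ in range(20_000)]
print(np.bincount(samples) / len(samples))   # empirical frequencies approach `target`
```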

Neuromorphic Graph Algorithms

Parekh, Ojas D.; Wang, Yipu; Ho, Yang; Phillips, Cynthia A.; Pinar, Ali P.; Aimone, James B.; Severa, William M.

Graph algorithms enable myriad large-scale applications, including cybersecurity, social network analysis, resource allocation, and routing. The scalability of current graph algorithm implementations on conventional computing architectures is hampered by the demise of Moore's law. We present a theoretical framework for designing and assessing the performance of graph algorithms executing in networks of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than that of conventional high-performance computing systems. We employ our framework to design and analyze new spiking algorithms for shortest path and dynamic programming problems. Our neuromorphic algorithms are message-passing algorithms that rely critically on data movement for computation. For a fair and rigorous comparison with conventional algorithms and architectures, which is challenging but paramount, we develop new models of data movement in conventional computing architectures. This allows us to prove polynomial-factor advantages, even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a rigorous asymptotic computational advantage for neuromorphic computing.

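For intuition about how shortest paths map onto spike timing, the sketch below is a conventional software rendering of the general idea rather than the authors' framework or its analysis: edge weights become synaptic delays, a single spike wavefront propagates from the source, and each vertex's first spike time equals its shortest-path distance. The event queue here simply stands in for stepping the network through time, which makes the simulation equivalent to Dijkstra's algorithm.

```python
# Conventional simulation of the spike-timing idea (illustration, not the paper's
# algorithms): edge weight = synaptic delay; each neuron fires once, at the time
# of its earliest arriving spike, and that firing time is its distance from the source.
import heapq

def spiking_sssp(adj, source):
    """adj: {u: [(v, w), ...]} with positive integer weights; returns first-spike times."""
    first_spike = {}                      # vertex -> time of its first (and only) spike
    events = [(0, source)]                # (arrival time, target neuron)
    while events:
        t, u = heapq.heappop(events)
        if u in first_spike:              # neuron already fired; later arrivals are ignored
            continue
        first_spike[u] = t                # firing time == shortest-path distance
        for v, w in adj.get(u, []):
            if v not in first_spike:
                heapq.heappush(events, (t + w, v))   # spike arrives after delay w
    return first_spike

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(spiking_sssp(graph, "s"))   # {'s': 0, 'a': 2, 'b': 3, 'c': 4}
```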

Mapping Stochastic Devices to Probabilistic Algorithms

Aimone, James B.; Safonov, Alexander

Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and in operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network or what the quality of random number generation should be. This research aimed to start addressing these two fundamental questions by examining how basic "off-the-shelf" neural networks can be sampled through a few different mechanisms (including synapse "dropout" and neuron "dropout") and how these sampling approaches can be evaluated, both in terms of algorithm effectiveness and the required quality of random numbers.

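As a rough illustration of the sampling mechanisms mentioned above, the hedged sketch below applies random "synapse dropout" masks at inference time to a toy network and uses the spread of the sampled outputs as an uncertainty estimate. The weights, sizes, and dropout rate are invented for the example and do not come from the report.

```python
# Generic Monte Carlo "synapse dropout" sampling sketch (assumed example, not the
# report's experiments): repeatedly sample a small fixed MLP with random weight
# masks and read predictive uncertainty from the spread of outputs.
import numpy as np

rng = np.random.default_rng(2)

# Toy "trained" 2-layer MLP weights (assumed for illustration only).
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def forward(x, drop_p=0.2):
    """One stochastic forward pass with synapse ('weight') dropout."""
    mask1 = rng.random(W1.shape) > drop_p          # drop individual synapses
    h = np.maximum(0.0, (W1 * mask1) @ x + b1)     # ReLU hidden layer
    mask2 = rng.random(W2.shape) > drop_p
    return (W2 * mask2) @ h + b2

x = np.array([0.5, -1.0, 0.3, 2.0])
samples = np.array([forward(x) for _ in range(1000)])
print("mean prediction:", samples.mean(), "predictive std:", samples.std())
```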

Provable advantages for graph algorithms in spiking neural networks

Annual ACM Symposium on Parallelism in Algorithms and Architectures

Aimone, James B.; Ho, Yang; Parekh, Ojas D.; Phillips, Cynthia A.; Pinar, Ali P.; Severa, William M.; Wang, Yipu

We present a theoretical framework for designing and assessing the performance of algorithms executing in networks consisting of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than that of conventional high-performance computing systems. We employ our framework to design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are message-passing algorithms that rely critically on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is challenging but paramount. We prove a polynomial-factor advantage even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.

Low-Power Deep Learning Inference using the SpiNNaker Neuromorphic Platform

Vineyard, Craig M.; Dellana, Ryan; Aimone, James B.; Severa, William M.

In this presentation we will discuss recent results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia Labs-developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate on the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may have eventual application in remote sensing, where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts the mapping of software-implemented networks to on-hardware instantiations.

Spiking Neural Streaming Binary Arithmetic

Proceedings - 2021 International Conference on Rebooting Computing, ICRC 2021

Aimone, James B.; Hill, Aaron; Severa, William M.; Vineyard, Craig M.

Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused on how to make these operations more efficient as well as on exploring what they can compute. To best leverage the advantages of novel computing paradigms, it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on and off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures, where these basic operations are not fundamental low-level operations and instead require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.

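To illustrate what a streaming binary encoding buys, the sketch below is a plain software model, an assumption rather than the paper's circuit designs: operand bits arrive least-significant-bit first, one per time step, and a single unit of carry state plays the role that a neuron's potential would play in a spiking implementation.

```python
# Conceptual model (assumed, not the paper's circuits) of streaming binary addition:
# operands arrive LSB-first as spike trains (1 = spike), and carry state persists
# across time steps the way a membrane potential would.
def streaming_add(bits_a, bits_b):
    """Add two LSB-first bit streams of equal length; returns the LSB-first sum stream."""
    carry = 0
    out = []
    for a, b in zip(bits_a, bits_b):
        total = a + b + carry          # full-adder step for this time step
        out.append(total & 1)          # emit a spike if the sum bit is 1
        carry = total >> 1             # carry persists to the next time step
    out.append(carry)                  # final carry-out spike (if any)
    return out

def to_stream(n, width):
    return [(n >> i) & 1 for i in range(width)]       # LSB-first encoding

a, b = 13, 11
s = streaming_add(to_stream(a, 5), to_stream(b, 5))
print(s, "->", sum(bit << i for i, bit in enumerate(s)))   # prints the stream and 24
```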

A Roadmap for Reaching the Potential of Brain-Derived Computing

Advanced Intelligent Systems

Aimone, James B.

Neuromorphic computing is a critical future technology for the computing industry, but it has yet to achieve its promise and has struggled to establish a cohesive research community. A large part of the challenge is that fully realizing the potential of brain inspiration requires advances in device hardware, computing architectures, and algorithms alike. This simultaneous development across technology scales is unprecedented in the computing field. This article presents a strategy, framed by market and policy pressures, for moving past the current technological and cultural hurdles to realize neuromorphic computing's full impact. Achieving the full potential of brain-derived algorithms, as well as of post-complementary metal-oxide-semiconductor (CMOS) scaling neuromorphic hardware, requires appropriately balancing the near-term opportunities of deep learning applications with the long-term potential of less understood opportunities in neural computing.

Mosaics, The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator

Aimone, James B.; Bennett, Christopher; Cardwell, Suma G.; Dellana, Ryan; Xiao, Tianyao P.

Neuromorphic architectures have seen a resurgence of interest in the past decade, owing to 100x-1000x efficiency gains over conventional von Neumann architectures. Digital neuromorphic chips like Intel's Loihi have shown efficiency gains compared to GPUs and CPUs and can be scaled to build larger systems. Analog neuromorphic architectures promise even further savings in energy, area, and latency over their digital counterparts. Neuromorphic analog and digital technologies both provide low-power, configurable acceleration of challenging artificial intelligence (AI) algorithms. We present a hybrid analog-digital neuromorphic architecture that amplifies the advantages of both high-density analog memory and spike-based digital communication while mitigating the limitations of each approach.

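The sketch below is a high-level, assumed illustration of the hybrid principle rather than the accelerator itself: an analog-style crossbar performs the dense matrix-vector accumulation (with added noise standing in for device non-idealities), while only thresholded binary spikes cross the digital communication boundary between layers. All function names and parameters are hypothetical.

```python
# Assumed sketch of a hybrid analog-digital layer: analog accumulation, digital
# spike-based communication. Not the actual accelerator design described above.
import numpy as np

rng = np.random.default_rng(3)

def analog_crossbar_mvm(weights, spikes_in, noise_std=0.02):
    """Analog-style matrix-vector multiply; Gaussian noise models device non-idealities."""
    currents = weights @ spikes_in
    return currents + rng.normal(0.0, noise_std, size=currents.shape)

def digital_spike_layer(weights, spikes_in, threshold=0.5):
    """One hybrid layer: analog accumulation followed by digital thresholding to spikes."""
    return (analog_crossbar_mvm(weights, spikes_in) > threshold).astype(np.int8)

W1 = rng.normal(0.0, 0.5, size=(16, 32))
W2 = rng.normal(0.0, 0.5, size=(4, 16))
spikes = (rng.random(32) < 0.3).astype(np.int8)    # sparse binary input spikes

hidden_spikes = digital_spike_layer(W1, spikes)    # only 0/1 values cross the core boundary
output_spikes = digital_spike_layer(W2, hidden_spikes)
print(hidden_spikes.sum(), output_spikes)
```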

Neuromorphic scaling advantages for energy-efficient random walk computations

Smith, J.D.; Hill, Aaron; Reeder, Leah; Franke, Brian C.; Lehoucq, Rich; Parekh, Ojas D.; Severa, William M.; Aimone, James B.

Computing stands to be radically improved by neuromorphic computing (NMC) approaches inspired by the brain's incredible efficiency and capabilities. Most NMC research, which aims to replicate the brain's computational structure and architecture in man-made hardware, has focused on artificial intelligence; however, less explored is whether this brain-inspired hardware can provide value beyond cognitive tasks. We demonstrate that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implementing random walks via discrete-time Markov chains. Such random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Additionally, we show how the mathematical basis for a probabilistic solution involving a class of stochastic differential equations can leverage those simulations to provide solutions to a range of broadly applicable computational tasks. Despite being at an early stage of development, we find that NMC platforms, at sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.

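For background on the kind of probabilistic representation referred to above, one standard identity (stated here with assumed notation, not quoted from the paper) expresses the solution of a steady-state Dirichlet problem as an expectation over paths of an associated stochastic process stopped at the boundary:

```latex
% Background identity (notation assumed): for the Dirichlet problem
% \Delta u = 0 in D, u = g on \partial D, with X_t a Brownian motion
% started at x and \tau its first exit time from D,
u(x) \;=\; \mathbb{E}_x\!\left[\, g\!\left(X_{\tau}\right) \right],
\qquad \tau = \inf\{\, t \ge 0 : X_t \notin D \,\}.
```

Averaging boundary values over many independent walker paths gives a Monte Carlo estimate of u(x), which is exactly the kind of massively parallel, independent-walker workload the abstract describes.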

Solving a steady-state PDE using spiking networks and neuromorphic hardware

ACM International Conference Proceeding Series

Smith, J.D.; Severa, William M.; Hill, Aaron; Reeder, Leah; Franke, Brian C.; Lehoucq, Rich; Parekh, Ojas D.; Aimone, James B.

The widely parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused primarily on machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage this parallel and event-driven structure to solve a steady-state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.

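A minimal conventional-software sketch of the random walk method for such a steady-state problem is given below. It is illustrative only: the grid size, boundary conditions, and function names are assumptions, and this is not the TrueNorth/Loihi implementation. The value at an interior grid point is estimated as the average boundary value seen by walkers released from that point.

```python
# Plain-Python illustration (assumed setup) of the random-walk method for a
# steady-state heat (Laplace) equation: the temperature at an interior grid point
# equals the expected boundary temperature at the walker's exit point.
import random

random.seed(0)

N = 20                                    # interior points are (1..N-1) x (1..N-1)
def boundary_temp(i, j):
    return 1.0 if i == 0 else 0.0         # hot left wall, cold elsewhere (assumed)

def estimate_temperature(i0, j0, n_walkers=2000):
    total = 0.0
    for _ in range(n_walkers):
        i, j = i0, j0
        while 0 < i < N and 0 < j < N:            # walk until the boundary is hit
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary_temp(i, j)              # record the boundary value at exit
    return total / n_walkers                      # Monte Carlo estimate of u(i0, j0)

print(estimate_temperature(5, 10))    # nearer the hot wall -> larger estimate
print(estimate_temperature(15, 10))   # farther from the hot wall -> smaller estimate
```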