Publications

Scaling neural simulations in STACS

Neuromorphic Computing and Engineering

Wang, Felix W.; Kulkarni, Shruti; Theilman, Bradley; Rothganger, Fredrick R.; Schuman, Catherine; Lim, Seung H.; Aimone, James B.

As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high-performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult, as hardware support tends to vary between platforms and software support for larger-scale models also tends to be limited. In this paper, we focus our attention on the Simulation Tool for Asynchronous Cortical Streams (STACS), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
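The parallel data format mentioned in the abstract is described only as an extension of one with a history of use in graph partitioners; the distributed CSR layout of ParMETIS, with its vtxdist/xadj/adjncy arrays, is the canonical example of that family. Below is a minimal Python sketch of how such a layout splits a connectivity graph into contiguous per-partition blocks. The helper function and example are hypothetical illustrations of the general format, not code taken from STACS.

    import numpy as np

    def distribute_csr(edges, n_vertices, n_parts):
        """Split a directed graph (e.g., synapses) into per-partition CSR blocks."""
        # vtxdist[p] .. vtxdist[p+1]-1 are the global vertex ids owned by part p
        # (names follow the ParMETIS convention).
        vtxdist = np.linspace(0, n_vertices, n_parts + 1, dtype=np.int64)

        # Bucket outgoing edges by source vertex.
        adj = [[] for _ in range(n_vertices)]
        for src, dst in edges:
            adj[src].append(dst)

        # Each partition stores only its own rows of the global CSR structure.
        parts = []
        for p in range(n_parts):
            xadj, adjncy = [0], []          # local row offsets, neighbor ids
            for v in range(vtxdist[p], vtxdist[p + 1]):
                adjncy.extend(adj[v])
                xadj.append(len(adjncy))
            parts.append((np.asarray(xadj), np.asarray(adjncy)))
        return vtxdist, parts

    # Example: a 6-neuron ring network split across 2 partitions.
    ring = [(i, (i + 1) % 6) for i in range(6)]
    vtxdist, parts = distribute_csr(ring, n_vertices=6, n_parts=2)
    print(vtxdist)   # [0 3 6] -> vertices 0-2 on rank 0, 3-5 on rank 1
    print(parts[0])  # rank 0's local (xadj, adjncy)

Because each rank holds only its slice of the arrays, the same layout doubles as a serialization format for checkpoint-restart and as an interchange representation between backends, which is the role the abstract describes.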

Goemans-Williamson MAXCUT approximation algorithm on Loihi

ACM International Conference Proceeding Series

Theilman, Bradley; Aimone, James B.

Approximation algorithms for computationally complex problems are of significant importance in computing, as they provide computational guarantees of obtaining practically useful results for otherwise intractable problems. Demonstrating formal approximation algorithms on spiking neuromorphic hardware is a critical step in establishing that neuromorphic computing can offer cost-effective solutions to significant optimization problems while retaining important guarantees on the quality of solutions. Here, we demonstrate that the Loihi platform is capable of effectively implementing the Goemans-Williamson (GW) approximation algorithm for MAXCUT, an NP-hard problem with applications ranging from VLSI design to network analysis. We show that a Loihi implementation of the approximation step of the GW algorithm obtains maximum cuts of graphs equivalent to those of conventional algorithms, and we describe how different aspects of architectural precision impact the algorithm's performance.
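For context, the GW algorithm proceeds in two steps: solve a semidefinite programming (SDP) relaxation of MAXCUT, then round the resulting vectors with a random hyperplane; it is this rounding step that the paper maps onto Loihi. Below is a minimal conventional-software sketch of the full algorithm in Python (numpy plus cvxpy, assumed available), offered as a reference point rather than as the paper's Loihi implementation.

    import numpy as np
    import cvxpy as cp   # assumed available; any SDP-capable solver would do

    def gw_maxcut(W, seed=0):
        n = W.shape[0]
        # Step 1: SDP relaxation -- maximize sum_{i<j} w_ij (1 - X_ij) / 2
        # over PSD matrices X with unit diagonal.
        X = cp.Variable((n, n), PSD=True)
        objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
        cp.Problem(objective, [cp.diag(X) == 1]).solve()

        # Factor X = B B^T; row i of B is the embedding vector v_i of vertex i.
        w, U = np.linalg.eigh(X.value)
        B = U * np.sqrt(np.clip(w, 0.0, None))

        # Step 2 (the approximation/rounding step): draw a random hyperplane
        # with normal r and put vertex i on the side given by sign(v_i . r).
        r = np.random.default_rng(seed).standard_normal(n)
        s = np.sign(B @ r)
        s[s == 0] = 1.0                     # break rare ties

        cut = np.sum(W * np.not_equal.outer(s, s)) / 2   # each edge counted twice
        return s, cut

    # 5-cycle: optimum cut is 4; GW guarantees >= 0.878 * optimum in expectation.
    W = np.zeros((5, 5))
    for i in range(5):
        W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
    print(gw_maxcut(W))

The rounding step reduces to computing signs of inner products against a random Gaussian vector, which is why the precision of the hardware arithmetic, as the abstract notes, directly affects solution quality.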

Stochastic Neuromorphic Circuits for Solving MAXCUT

Proceedings - 2023 IEEE International Parallel and Distributed Processing Symposium, IPDPS 2023

Theilman, Bradley; Wang, Yipu W.; Parekh, Ojas D.; Severa, William M.; Smith, John D.; Aimone, James B.

Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximation algorithms for MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that exploit randomness as natural brains do, we hypothesize that the intrinsic randomness of microelectronic devices could be turned into a valuable component of a neuromorphic architecture, enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path to scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computing architectures.
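The paper's circuits are hardware designs, but the role randomness plays can be illustrated with a Glauber-style software analogue: each "neuron" holds one side of the cut and flips probabilistically according to how much the flip would improve the cut value. The sketch below is our illustration of that general technique, not the circuit from the paper; in the neuromorphic setting, the random draws would come from intrinsically noisy devices rather than a software RNG.

    import numpy as np

    def stochastic_maxcut(W, steps=5000, beta=2.0, seed=0):
        """Glauber-style stochastic MAXCUT solver (assumes W has zero diagonal)."""
        rng = np.random.default_rng(seed)
        n = W.shape[0]
        s = rng.choice([-1.0, 1.0], size=n)   # random initial partition
        for _ in range(steps):
            i = rng.integers(n)
            # Flipping s[i] changes the cut value by exactly s[i] * (W[i] . s).
            gain = s[i] * (W[i] @ s)
            # Accept the flip with sigmoid probability: the noise lets the
            # solver escape local optima; larger beta means greedier updates.
            if rng.random() < 1.0 / (1.0 + np.exp(-beta * gain)):
                s[i] = -s[i]
        cut = np.sum(W * np.not_equal.outer(s, s)) / 2
        return s, cut

    # Same 5-cycle as above (optimum cut = 4).
    W = np.zeros((5, 5))
    for i in range(5):
        W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
    print(stochastic_maxcut(W))

In software, the sigmoid acceptance test consumes one random number per update, which is exactly the sampling cost the abstract argues can dominate; a circuit whose devices are natively noisy obtains those draws for free.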
