Neuromorphic Computing: Towards Brain-like Energy Efficiency
Abstract not provided.
2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024 - Proceedings
As deep learning networks increase in size and performance, the associated computational costs grow toward prohibitive levels. We seek to demonstrate the potential of dendritic computations by combining them with the low-power, event-driven computation of Spiking Neural Networks (SNNs) for deep learning applications. Dendrites offer powerful nonlinear "on-the-wire" computational capabilities, increasing the expressivity of the point neuron while preserving many of the advantages of SNNs. To this end, we have developed a library that adds dendritic computation to SNNs within the PyTorch framework, enabling complex deep learning networks that retain the low-power advantages of SNNs. Our library leverages a dendrite CMOS hardware model to inform the software model, enabling nonlinear computation integrated with snnTorch at scale. By leveraging dendrites in a deep learning framework, we examine the capabilities of dendrites via coincidence detection and comparison on a machine learning task with an SNN. Finally, we discuss potential deep learning applications in the context of current state-of-the-art deep learning methods and energy-efficient neuromorphic hardware.
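A minimal sketch of the coincidence-detection capability mentioned above, using a plain leaky integrate-and-fire neuron in pure Python (not the paper's snnTorch-based library; the parameter values here are illustrative assumptions):

```python
def lif_coincidence(spikes_a, spikes_b, beta=0.8, threshold=1.9, w=1.0):
    """Leaky integrate-and-fire neuron acting as a coincidence detector.

    Each input spike adds w to the membrane potential, which decays by
    factor beta per timestep. With threshold just under 2*w, a lone input
    decays away before the next can arrive, so output spikes mark
    near-coincident input pairs.
    """
    mem, out = 0.0, []
    for a, b in zip(spikes_a, spikes_b):
        mem = beta * mem + w * (a + b)
        if mem >= threshold:
            out.append(1)
            mem = 0.0  # reset after spiking
        else:
            out.append(0)
    return out
```

With these parameters, only a simultaneous pair of input spikes drives the membrane over threshold; inputs arriving one or two timesteps apart decay below it.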
Proceedings - IEEE International Symposium on Circuits and Systems
We demonstrate device codesign using reinforcement learning for probabilistic computing applications. We use a spin-orbit-torque magnetic tunnel junction (SOT-MTJ) model as the device exemplar. We leverage reinforcement learning (RL) to vary key device and material properties of the SOT-MTJ device for stochastic operation. Our RL method generated distinct candidate devices capable of producing stochastic samples from a given exponential distribution.
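The search problem can be illustrated with a toy version: tune a single device time constant so that sampled dwell times match a target exponential distribution. This sketch uses random local search as a stand-in for the paper's RL agent, and the idealized exponential-switching device model is an assumption, not the SOT-MTJ model from the work:

```python
import random

def device_sample(tau, rng):
    # Idealized stochastic device: dwell time before the free layer
    # switches is exponentially distributed with time constant tau.
    return rng.expovariate(1.0 / tau)

def search_tau(target_rate, iters=200, seed=0):
    """Random local search over one device parameter so that sampled
    dwell times match a target exponential distribution with rate
    `target_rate` (matched here by comparing means)."""
    rng = random.Random(seed)
    tau, best_err = 1.0, float("inf")
    for _ in range(iters):
        cand = max(1e-3, tau + rng.gauss(0, 0.1))
        mean = sum(device_sample(cand, rng) for _ in range(500)) / 500
        err = abs(mean - 1.0 / target_rate)
        if err < best_err:
            tau, best_err = cand, err
    return tau
```

An RL agent replaces the blind proposal step with a learned policy over device and material parameters, but the evaluate-against-target-distribution loop is the same.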
Proceedings - 2024 International Conference on Neuromorphic Systems, ICONS 2024
Dendrites enable neurons to perform nonlinear operations. Existing silicon dendrite circuits sufficiently model passive and active characteristics, but do not exploit shunting inhibition as an active mechanism. We present a dendrite circuit implemented on a reconfigurable analog platform that uses active inhibitory conductance signals to modulate the circuit's membrane potential. We explore the potential use of this circuit for direction selectivity by emulating recent observations demonstrating a role for shunting inhibition in a direction-selective Drosophila (fruit fly) neuron.
As Moore’s Law and Dennard Scaling come to an end, it is becoming increasingly important to develop non-von Neumann computing architectures that can perform low-power computing in the domains of scientific computing, artificial intelligence, embedded systems, and edge computing. Next-generation computing technologies, such as neuromorphic computing and quantum computing, have the potential to revolutionize computing. However, in order to make progress in these fields, it is necessary to fundamentally change the current computing paradigm by codesigning systems across all system levels, from materials to software. Because skilled labor is limited in the field of next-generation computing, we are developing artificial intelligence-enhanced tools to automate the codesign and co-discovery of next-generation computers. Here, we develop a method called Modular and Multi-level MAchine Learning (MAMMAL), which performs analog codesign and co-discovery across multiple system levels, spanning devices to circuits. We prototype MAMMAL by using it to design simple passive analog low-pass filters. We also explore methods to incorporate uncertainty quantification into MAMMAL and to accelerate MAMMAL by using emerging technologies, such as crossbar arrays. Ultimately, we believe that MAMMAL will enable rapid progress in developing next-generation computers by automating the codesign and co-discovery of electronic systems.
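The prototype task, designing a passive analog low-pass filter, reduces to choosing R and C so that the cutoff frequency f_c = 1/(2πRC) hits a target. A brute-force sketch over discrete standard component values (this exhaustive search is an illustrative stand-in, not the MAMMAL method itself, and the value tables are assumptions):

```python
import math
from itertools import product

# A few standard component values per decade (R in ohms, C in farads)
R_VALUES = [r * 10**e for e in (2, 3, 4) for r in (1.0, 1.5, 2.2, 3.3, 4.7, 6.8)]
C_VALUES = [c * 10**e for e in (-9, -8, -7) for c in (1.0, 2.2, 4.7)]

def design_rc_lowpass(f_target):
    """Exhaustive search over discrete (R, C) pairs for a first-order
    passive RC low-pass filter with cutoff f_c = 1 / (2*pi*R*C)."""
    r, c = min(
        product(R_VALUES, C_VALUES),
        key=lambda rc: abs(1.0 / (2 * math.pi * rc[0] * rc[1]) - f_target),
    )
    return r, c, 1.0 / (2 * math.pi * r * c)
```

A learned codesign method matters once the search spans multiple system levels (device physics, circuit topology, and software constraints) where exhaustive enumeration is infeasible.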
Abstract not provided.
ACM International Conference Proceeding Series
Recent work in neuromorphic computing has proposed a range of new architectures for Spiking Neural Network (SNN)-based systems. However, neuromorphic design lacks a framework to facilitate exploration of different SNN-based architectures and aid with early design decisions. While there are various SNN simulators, none can be used to rapidly estimate latency and energy of different spiking architectures. We show that while current spiking designs differ in implementation, they have common features which can be represented as a generic architecture template. We describe an initial version of a framework that simulates a range of neuromorphic architectures at an abstract time-step granularity. We demonstrate our simulator by modeling Intel's Loihi platform, estimating time-varying energy and latency with less than 10% mean error for various sizes of a two-layer SNN.
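The abstract time-step granularity described above can be sketched as a simple cost model: per step, energy scales with the spikes fired and the neurons updated, and latency with the step count. The constants below are placeholders for illustration, not measured Loihi figures, and the function is an assumption rather than the framework's actual interface:

```python
def estimate_snn_cost(spike_counts, n_neurons,
                      e_spike=1.0e-9, e_update=0.1e-9, t_step=1.0e-6):
    """Abstract time-step cost model for a spiking architecture.

    spike_counts: spikes fired in each timestep.
    Per step: energy = spikes * e_spike + n_neurons * e_update,
    since every neuron pays an update cost whether or not it fires.
    Latency is simply steps * t_step at this granularity.
    """
    energy = sum(s * e_spike + n_neurons * e_update for s in spike_counts)
    latency = len(spike_counts) * t_step
    return energy, latency
```

Architecture-specific details (core hops, memory accesses, barrier synchronization) refine each term, but calibrating a handful of such per-step constants against a real platform is what makes the reported sub-10% mean error plausible at this level of abstraction.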
Abstract not provided.
ACM International Conference Proceeding Series
In this paper, we highlight how computational properties of biological dendrites can be leveraged for neuromorphic applications. Specifically, we demonstrate analog silicon dendrites that support multiplication mediated by conductance-based input in an interception model inspired by the biological dragonfly. We also demonstrate spatiotemporal pattern recognition and direction selectivity using dendrites on the Loihi neuromorphic platform. These dendritic circuits can be assembled hierarchically as building blocks for classifying complex spatiotemporal patterns.
ACM International Conference Proceeding Series
Evolutionary algorithms have been shown to be an effective method for training (or configuring) spiking neural networks. There are, however, challenges to developing accessible, scalable, and portable solutions. We present an extension to the Fugu framework that wraps the NEAT framework, bringing evolutionary algorithms to Fugu. This approach provides a flexible and customizable platform for optimizing network architectures, independent of fitness functions and input data structures. We leverage Fugu's computational graph approach to evaluate all members of a population in parallel. Additionally, as Fugu is platform-agnostic, this population can be evaluated in simulation or on neuromorphic hardware. We demonstrate our extension using several classification and agent-based tasks. One task illustrates how Fugu integration allows for spiking pre-processing to lower the search space dimensionality. We also provide some benchmark results using the Intel Loihi platform.
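The core loop being wrapped can be sketched in a few lines: evaluate a population against a user-supplied fitness function, keep the fittest half, and refill with mutated copies. This is a minimal generic evolutionary loop, not the NEAT algorithm itself (no topology mutation or speciation), and it runs serially where the Fugu extension would evaluate the population in parallel:

```python
import random

def evolve(fitness, genome_len=4, pop_size=20, gens=40, seed=1):
    """Minimal evolutionary search over real-valued genomes.

    Each generation: rank by fitness, keep the top half unchanged
    (elitism), and refill the population with Gaussian-mutated copies
    of random survivors.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            [w + rng.gauss(0, 0.2) for w in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)
```

Because each `fitness` call is independent, the evaluations parallelize naturally, and the fitness function can wrap either a software simulation or a run on neuromorphic hardware, which is the platform-agnostic property the abstract highlights.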
ACM International Conference Proceeding Series
Shunting inhibition is a potential mechanism by which biological systems multiply two time-varying signals, most recently proposed in single neurons of the fly visual system. Our work demonstrates this effect in a biological neuron model and the equivalent circuit in neuromorphic hardware modeling dendrites. We present a multi-compartment neuromorphic dendritic model that produces a multiplication-like effect using the shunting inhibition mechanism by varying leakage along the dendritic cable. Dendritic computation in neuromorphic architectures has the potential to increase complexity in single neurons and reduce the energy footprint for neural networks by enabling computation in the interconnect.
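The divisive effect of shunting inhibition can be seen in the steady-state voltage of a single passive compartment with conductance-based inputs; this equation is a textbook single-compartment reduction, assumed here for illustration rather than taken from the paper's multi-compartment model:

```python
def dendrite_steady_state(g_exc, g_inh, g_leak=1.0, e_exc=1.0, e_inh=0.0):
    """Steady-state potential of a passive compartment (rest at 0):

        V = (g_exc*E_exc + g_inh*E_inh) / (g_exc + g_inh + g_leak)

    With the inhibitory reversal potential E_inh at rest, inhibition
    injects no current of its own; it only enlarges the denominator,
    dividing (shunting) the excitatory response.
    """
    return (g_exc * e_exc + g_inh * e_inh) / (g_exc + g_inh + g_leak)
```

Doubling the total conductance via g_inh halves the excitatory response, which is the multiplication-like (here, divisive) interaction between two time-varying conductance signals that the dendritic circuit exploits.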
Abstract not provided.