Publications

Results 26–50 of 107

Spiking Neural Streaming Binary Arithmetic

Proceedings - 2021 International Conference on Rebooting Computing, ICRC 2021

Aimone, James B.; Hill, Aaron J.; Severa, William M.; Vineyard, Craig M.

Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused upon how to make these operations more efficient as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms, it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on and off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures, where these basic operations are not fundamental low-level operations and instead require explicit implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.
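
As a rough illustration of the streaming encoding idea described above (a sketch, not the paper's circuits): binary operands can be streamed one bit per timestep, least-significant bit first, and added exactly by two threshold "neurons", one detecting a carry and one computing the parity of its inputs. The bit ordering, same-tick evaluation, and weights below are illustrative assumptions.

```python
# Illustrative sketch (not the paper's circuits): a streaming binary adder
# built from threshold "neurons". Operands arrive one bit per timestep,
# least-significant bit first. Evaluating both units in the same tick is a
# simplification; a hardware spiking implementation would pipeline them.

def threshold_fire(drive, threshold):
    """Emit a spike (1) when the summed synaptic drive reaches threshold."""
    return 1 if drive >= threshold else 0

def streaming_add(bits_a, bits_b):
    """Add two bit streams (LSB first) with a carry neuron and a sum neuron."""
    carry = 0                     # recurrent state held by the carry neuron
    out = []
    for a, b in zip(bits_a, bits_b):
        carry_spike = threshold_fire(a + b + carry, 2)                  # majority of (a, b, carry)
        sum_spike = threshold_fire(a + b + carry - 2 * carry_spike, 1)  # parity via inhibition
        out.append(sum_spike)
        carry = carry_spike
    out.append(carry)             # final carry-out bit
    return out

# 6 (110) + 3 (011), streamed LSB first -> 9 (1001)
print(streaming_add([0, 1, 1], [1, 1, 0]))   # [1, 0, 0, 1]
```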

Exploring characteristics of neural network architecture computation for enabling SAR ATR

Proceedings of SPIE - The International Society for Optical Engineering

Melzer, Ryan D.; Severa, William M.; Plagge, Mark P.; Vineyard, Craig M.

Neural network approaches have periodically been explored in the pursuit of high-performing SAR ATR solutions. With deep neural networks (DNNs) now offering many state-of-the-art solutions to computer vision tasks, neural networks are once again being revisited for ATR processing. Here, we characterize and explore a suite of neural network architectural topologies. In doing so, we assess how different architectural approaches impact performance and consider the associated computational costs. This includes characterizing network depth, width, scale, and connectivity patterns, as well as convolution-layer optimizations. We have explored a suite of architectural topologies applied to both the canonical MSTAR dataset and the more operationally realistic Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset. The latter pairs high-fidelity computational models of targets with actual measured SAR data; effectively, this dataset offers the ability to train a DNN on simulated data and test the network's performance on measured data. Not only does our in-depth architecture topology analysis offer insight into how different architectural approaches impact performance, but we have also trained DNNs attaining state-of-the-art performance on both datasets. Furthermore, beyond accuracy alone, we assess how efficiently an accelerator architecture executes these neural networks. Specifically, using an analytical assessment tool, we forecast energy and latency for an edge-TPU-like architecture. Taken together, this tradespace exploration offers insight into the interplay of accuracy, energy, and latency for executing these networks.
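
A minimal sketch of the kind of tradespace bookkeeping such an analysis involves, assuming a simple parametric convolutional topology (the layer settings, 64x64 input size, and pooling schedule are illustrative, not the paper's MSTAR/SAMPLE configurations): parameter and multiply-accumulate counts serve as crude proxies for memory footprint and energy/latency cost.

```python
# Hypothetical tradespace bookkeeping for a parametric CNN (depth, width,
# kernel size): parameters ~ memory footprint, MACs ~ compute/energy.
# All settings here are illustrative assumptions.

def conv_cost(in_ch, out_ch, k, h, w):
    """Parameters and MACs of one stride-1, 'same'-padded conv layer."""
    params = in_ch * out_ch * k * k + out_ch        # weights + biases
    macs = in_ch * out_ch * k * k * h * w           # one kernel pass per output pixel
    return params, macs

def topology_cost(depth, width, k=3, h=64, w=64, in_ch=1):
    """Total cost of `depth` conv blocks of `width` channels with 2x2 pooling."""
    total_params = total_macs = 0
    ch = in_ch
    for _ in range(depth):
        p, m = conv_cost(ch, width, k, h, w)
        total_params += p
        total_macs += m
        ch, h, w = width, h // 2, w // 2            # pooling halves each spatial dimension
    return total_params, total_macs

for depth, width in [(3, 16), (5, 32), (7, 64)]:
    params, macs = topology_cost(depth, width)
    print(f"depth={depth:2d} width={width:3d}: {params:>9,} params, {macs:>13,} MACs")
```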

Neuromorphic scaling advantages for energy-efficient random walk computations

Smith, John D.; Hill, Aaron J.; Reeder, Leah; Franke, Brian C.; Lehoucq, Richard B.; Parekh, Ojas D.; Severa, William M.; Aimone, James B.

Computing stands to be radically improved by neuromorphic computing (NMC) approaches inspired by the brain's incredible efficiency and capabilities. Most NMC research, which aims to replicate the brain's computational structure and architecture in man-made hardware, has focused on artificial intelligence; however, less explored is whether this brain-inspired hardware can provide value beyond cognitive tasks. We demonstrate that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. Such random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Additionally, we show how the mathematical basis for a probabilistic solution involving a class of stochastic differential equations can leverage those simulations to provide solutions for a range of broadly applicable computational tasks. We find that NMC platforms, despite being at an early stage of development, can at sufficient scale drastically reduce the energy demands of high-performance computing platforms.
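
A minimal sketch of the discrete-time Markov chain random walks referenced above, written as a plain Monte Carlo simulation rather than a spiking implementation; the 3-state transition matrix and walker counts are illustrative assumptions. On a neuromorphic platform, each state would map to stochastically spiking circuitry and the independent walkers would run in parallel.

```python
# Plain-Python stand-in for neuromorphic random walks on a discrete-time
# Markov chain. The chain below is an illustrative 3-state example.
import random

P = [
    [0.50, 0.50, 0.00],   # transition probabilities out of state 0
    [0.25, 0.50, 0.25],   # ... out of state 1
    [0.00, 0.50, 0.50],   # ... out of state 2
]

def step(state):
    """Sample the next state from row `state` of the transition matrix."""
    r, cumulative = random.random(), 0.0
    for next_state, p in enumerate(P[state]):
        cumulative += p
        if r < cumulative:
            return next_state
    return len(P) - 1

def occupancy(n_walkers=10_000, n_steps=50, start=0):
    """Estimate the state distribution after n_steps from independent walkers."""
    counts = [0] * len(P)
    for _ in range(n_walkers):
        s = start
        for _ in range(n_steps):
            s = step(s)
        counts[s] += 1
    return [c / n_walkers for c in counts]

print(occupancy())   # approaches this chain's stationary distribution, [0.25, 0.5, 0.25]
```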

Effective Pruning of Binary Activation Neural Networks

ACM International Conference Proceeding Series

Severa, William M.; Dellana, Ryan A.; Vineyard, Craig M.

Deep learning networks have become a vital tool for image- and data-processing tasks in deployed and edge applications. Resource constraints, particularly low power budgets, have motivated methods and devices for efficient on-edge inference. Two promising methods are reduced-precision communication networks (e.g., binary-activation spiking neural networks) and weight pruning. In this paper, we provide a preliminary exploration of combining these two methods, specifically in-training weight pruning of Whetstone networks, to achieve deep networks with both sparse weights and binary activations.
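
An illustrative sketch of the two ingredients being combined, not the Whetstone implementation itself: weights below a magnitude threshold are masked to zero, and the layer applies a hard 0/1 activation, so the resulting network is both sparse and spike-like. Layer shapes, sparsity level, and thresholds are made-up values.

```python
# Illustrative combination of magnitude pruning with a binary activation.
# Not the paper's training procedure; shapes and thresholds are assumptions.
import random

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-|w| fraction of weights; return weights and mask."""
    flat = sorted(abs(w) for row in weights for w in row)
    cutoff = flat[int(sparsity * (len(flat) - 1))]
    mask = [[1 if abs(w) > cutoff else 0 for w in row] for row in weights]
    pruned = [[w * m for w, m in zip(row, mrow)] for row, mrow in zip(weights, mask)]
    return pruned, mask

def binary_layer(x, weights, threshold=0.0):
    """Dense layer with a hard 0/1 activation (the 'binary activation')."""
    return [1 if sum(w * xi for w, xi in zip(row, x)) > threshold else 0
            for row in weights]

random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]  # 3 units, 4 inputs
W_sparse, mask = magnitude_prune(W, sparsity=0.5)
print(binary_layer([1.0, 0.0, 1.0, 0.5], W_sparse))   # sparse weights, binary outputs
```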

Solving a steady-state PDE using spiking networks and neuromorphic hardware

ACM International Conference Proceeding Series

Smith, John D.; Severa, William M.; Hill, Aaron J.; Reeder, Leah E.; Franke, Brian C.; Lehoucq, Richard B.; Parekh, Ojas D.; Aimone, James B.

The massively parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused primarily on machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage this parallel and event-driven structure to solve a steady-state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.
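
A minimal sketch of the random-walk idea in plain Python rather than on TrueNorth or Loihi: for the steady-state heat (Laplace) equation on a 1-D rod with fixed end temperatures, the temperature at an interior node equals the expected boundary value where an unbiased random walk started at that node first exits. The grid size and boundary temperatures are illustrative choices, not the paper's benchmark setup.

```python
# Monte Carlo estimate of the steady-state heat equation on a 1-D rod via
# first-exit random walks. Grid size and boundary temperatures are assumptions.
import random

N = 20                           # interior grid points 1..N on a rod of N+2 nodes
T_LEFT, T_RIGHT = 0.0, 100.0     # fixed boundary temperatures

def walk_to_boundary(start):
    """Unbiased +/-1 walk; return the boundary temperature it hits first."""
    x = start
    while 0 < x < N + 1:
        x += random.choice((-1, 1))
    return T_LEFT if x == 0 else T_RIGHT

def temperature(point, n_walkers=5_000):
    """Monte Carlo estimate of the steady-state temperature at `point`."""
    return sum(walk_to_boundary(point) for _ in range(n_walkers)) / n_walkers

# The exact solution is linear along the rod; the estimates should track it.
for p in (5, 10, 15):
    print(f"x={p:2d}: estimated {temperature(p):6.2f}, exact {100.0 * p / (N + 1):6.2f}")
```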

The Future of Computing: Integrating Scientific Computation on Neuromorphic Systems

Reeder, Leah E.; Aimone, James B.; Severa, William M.

Neuromorphic computing is known for its integration of algorithms and hardware elements that are inspired by the brain. Conventionally, this nontraditional method of computing is used for neural or learning-inspired applications, which has left the field of neuromorphic computing relatively narrow in scope. In this paper we discuss two research areas actively trying to widen the impact of neuromorphic systems. The first is Fugu, a high-level programming interface designed to bridge the gap between general computer scientists and those who specialize in neuromorphic computing. The second aims to map classical scientific computing problems onto these frameworks through the example of random walks, elucidating a class of scientific applications that are conducive to neuromorphic algorithms.

Workshop on Advanced Computing for Connected and Automated Vehicles

Mailhiot, Christian M.; Severa, William M.; Moen, Christopher D.; Jones, Troy

To safely and reliably operate without a human driver, connected and automated vehicles (CAVs) require more advanced computing hardware and software solutions than are implemented today in vehicles that provide driver-assistance features. A workshop was held to discuss advanced microelectronics and computing approaches that can help meet future energy and computational requirements for CAVs. Workshop questions were posed as follows: will highly automated vehicles be viable with conventional computing approaches or will they require a step-change in computing; what are the energy requirements to support on-board sensing and computing; and what advanced computing approaches could reduce the energy requirements while meeting their computational requirements? At present, there is no clear convergence in the computing architecture for highly automated vehicles. However, workshop participants generally agreed that there is a need to improve the computing performance per watt by at least 10x to advance the degree of automation. Participants suggested that DOE and the national laboratories could play a near-term role by developing benchmarks for determining and comparing CAV computing performance, developing public data sets to support algorithm and software development, and contributing precompetitive advancements in energy efficient computing.
