Evolutionary algorithms have been shown to be an effective method for training (or configuring) spiking neural networks. There are, however, challenges to developing accessible, scalable, and portable solutions. We present an extension to the Fugu framework that wraps NEAT, bringing evolutionary algorithms to Fugu. This approach provides a flexible, customizable platform for optimizing network architectures, independent of fitness functions and input data structures. We leverage Fugu's computational-graph approach to evaluate all members of a population in parallel. Additionally, because Fugu is platform-agnostic, this population can be evaluated in simulation or on neuromorphic hardware. We demonstrate our extension on several classification and agent-based tasks. One task illustrates how Fugu integration allows spiking pre-processing to reduce the dimensionality of the search space. We also provide benchmark results on the Intel Loihi platform.
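The population-level parallelism described above can be pictured as merging every candidate network into one large disjoint graph and simulating it in a single pass. The following sketch, our own plain-NumPy illustration with a toy spiking model and a placeholder fitness (not Fugu's or NEAT's actual API), shows the block-diagonal construction:

```python
import numpy as np

rng = np.random.default_rng(0)
pop = [rng.normal(scale=0.5, size=(20, 20)) for _ in range(32)]  # 32 candidate nets

def evaluate_population(pop, steps=100, threshold=1.0, drive=0.3):
    """Stack every candidate into one block-diagonal weight matrix and run a
    single simulation, mirroring the idea of merging a whole population into
    one computational graph (toy spiking model; leak omitted for brevity)."""
    sizes = [w.shape[0] for w in pop]
    total = sum(sizes)
    W = np.zeros((total, total))
    off = 0
    for w in pop:
        n = w.shape[0]
        W[off:off + n, off:off + n] = w   # candidates never interact
        off += n
    v = np.zeros(total)                   # membrane potentials
    s = np.zeros(total)                   # spikes from the previous step
    counts = np.zeros(total)
    for _ in range(steps):
        v = v + drive + s @ W             # constant drive + recurrent input
        s = (v >= threshold).astype(float)
        v = np.where(s > 0, 0.0, v)       # reset neurons that fired
        counts += s
    # Slice per-candidate spike counts back out; a placeholder "fitness".
    fitnesses, off = [], 0
    for n in sizes:
        fitnesses.append(counts[off:off + n].mean())
        off += n
    return fitnesses

print(evaluate_population(pop)[:5])
```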
Neuromorphic computing (NMC) is an exciting paradigm that seeks to incorporate principles from biological brains to enable advanced computing capabilities. This encompasses not only algorithms, such as neural networks, but also how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics of traditional approaches against those of NMC. The novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing range of application domains. Accordingly, we propose following the example of high performance computing, which uses context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps, built in a neural circuit tool called Fugu, as a means of gaining insight into NMC.
Deep neural networks have recently demonstrated state-of-the-art accuracy on public Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) benchmark datasets. While attaining competitive accuracy on benchmark datasets is necessary, it is also important to characterize other facets of new SAR ATR algorithms. We extend this recent work by demonstrating not only improved state-of-the-art accuracy, but also that contemporary deep neural networks can achieve several algorithmic traits beyond competitive accuracy that are necessitated by operational deployment scenarios. First, we employ several saliency map algorithms to provide explainability and insight into black-box classifier decisions. Second, we collect and implement numerous data augmentation routines and training improvements, both from the computer vision literature and specific to SAR ATR data, to further improve model domain adaptation from synthetic to measured data, achieving 99.26% accuracy on SAMPLE validation with a simple network architecture. Finally, we survey model reproducibility and performance variability under domain adaptation from synthetic to measured data, demonstrating potential consequences of training on only synthetic data.
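As one concrete flavor of saliency mapping, occlusion sensitivity slides a blanking patch across the input and scores each location by the resulting drop in target-class confidence. The sketch below is a generic illustration; `model` is a placeholder callable mapping an image to class scores, and the paper's specific saliency algorithms and networks are not reproduced here:

```python
import numpy as np

def occlusion_saliency(model, image, target, patch=8, stride=4, fill=0.0):
    """Occlusion-sensitivity map: blank out one patch at a time and record how
    much the target-class score drops. Large drops mark salient regions."""
    base = model(image)[target]
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    sal = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            sal[i, j] = base - model(occluded)[target]
    return sal
```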
Neuromorphic computers are hardware systems that mimic the phenomenology of the brain's computational processes. This is in contrast to neural network accelerators, such as the Google TPU or the Intel Neural Compute Stick, which accelerate the fundamental computations and data flows of the neural network models used in machine learning. Neuromorphic computers emulate the integrate-and-fire neuron dynamics of the brain to achieve a spiking communication architecture for computation. While neural networks are brain-inspired, they drastically oversimplify the brain's computational model; neuromorphic architectures are closer to the brain's true computational model (albeit still simplified). Neuromorphic computing models promise a 1000x power improvement over conventional CPU architectures. Sandia National Labs is a major contributor to the research community on neuromorphic systems, performing design analysis, evaluation, and algorithm development for neuromorphic computers. Space-based remote sensing has been a focused target of funding for exploratory research into neuromorphic systems, given their potential advantage in that program area; SNL has led some of these efforts. Recently, neuromorphic application evaluation has reached the NA-22 program area. This same exploratory research and algorithm development should extend to the unattended ground sensor space for SNL's mission partners and program areas. Neuromorphic computing paradigms offer a distinct advantage for the SWaP-constrained embedded systems of our diverse sponsor-driven program areas.
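For readers unfamiliar with the integrate-and-fire dynamics mentioned above, a minimal leaky integrate-and-fire (LIF) neuron can be simulated in a few lines; the parameters below are illustrative defaults, not those of any particular neuromorphic platform:

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, r=1.0):
    """Leaky integrate-and-fire neuron: the membrane voltage decays toward
    rest, integrates input current, and emits a spike (then resets) whenever
    it crosses threshold."""
    v, spikes, trace = v_rest, [], []
    for i_t in current:
        v += dt / tau * (-(v - v_rest) + r * i_t)  # leaky integration
        if v >= v_thresh:
            spikes.append(True)
            v = v_rest                              # hard reset after the spike
        else:
            spikes.append(False)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# Constant suprathreshold drive produces a regular spike train.
spikes, _ = simulate_lif(np.full(500, 1.5))
print(f"{spikes.sum()} spikes in 500 steps")
```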
Typical approaches to classifying scenes from light convert the light field to electrons and perform the computation in the digital electronic domain. This conversion and the downstream computational analysis require significant power and time. Diffractive neural networks have recently emerged as unique systems that classify optical fields at lower energy and higher speed. Previous work has shown that a single layer of diffractive metamaterial can achieve high performance on classification tasks. In analogy with electronic neural networks, it is anticipated that multilayer diffractive systems would provide better performance, but the fundamental reasons for the potential improvement have not been established. In this work, we present extensive computational simulations of two-layer diffractive neural networks and show that they can achieve high performance with fewer diffractive features than single-layer systems.
Automated vehicles (AVs) hold great promise for improving safety, as well as reducing congestion and emissions. To make automated vehicles commercially viable, a reliable, high-performance vehicle-based computing platform that meets ever-increasing computational demands will be key. Given the state of existing digital computing technology, designers will face significant challenges in meeting the needs of highly automated vehicles without exceeding thermal constraints or consuming a large portion of the energy available on the vehicle, thereby reducing range between charges or refills. The accompanying increase in energy for AV use will place greater demand on energy production and distribution infrastructure, which further motivates improving computational energy efficiency.
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single layer metasurfaces of size 100λ × 100λ with an aperture density of λ⁻² achieve ∼96% testing accuracy on the MNIST data set, for an optimized distance ∼100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
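To make the ingredients concrete, the following is a minimal single-layer sketch of our own (untrained random mask, illustrative sizes, not the optimized designs reported above): a phase mask imprinted on the incident field, free-space propagation via the angular spectrum method, and the intrinsic photodetection nonlinearity I = |E|² read out over detector regions:

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Free-space propagation over distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[:, None]**2 + fx[None, :]**2
    arg = 1.0 - (wavelength**2) * fx2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength, dx = 1.0, 1.0                 # work in units of λ; one aperture per λ²
n = 100                                   # a 100λ x 100λ metasurface
phase = np.random.default_rng(0).uniform(0, 2 * np.pi, (n, n))  # untrained mask

field_in = np.ones((n, n), dtype=complex)                  # plane-wave illumination
field_out = propagate(field_in * np.exp(1j * phase), wavelength, dx, z=100.0)
# A second layer would simply apply another mask and propagate again.
intensity = np.abs(field_out)**2          # photodetection: the intrinsic nonlinearity
# Class scores: total intensity falling on each of 10 detector strips.
scores = [intensity[:, k*10:(k+1)*10].sum() for k in range(10)]
print(int(np.argmax(scores)))
```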
In this presentation we will discuss recent results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia Labs-developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate of the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may have eventual application in remote sensing, where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts mapping software-implemented networks to on-hardware instantiations.
Neural network approaches have periodically been explored in the pursuit of high-performing SAR ATR solutions. With deep neural networks (DNNs) now offering many state-of-the-art solutions to computer vision tasks, neural networks are once again being revisited for ATR processing. Here, we characterize and explore a suite of neural network architectural topologies. In doing so, we assess how different architectural approaches impact performance and consider the associated computational costs. This includes characterizing network depth, width, scale, and connectivity patterns, as well as convolution layer optimizations. We have explored a suite of architectural topologies applied to both the canonical MSTAR dataset and the more operationally realistic Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset. The latter pairs high-fidelity computational models of targets with actual measured SAR data; effectively, this dataset offers the ability to train a DNN on simulated data and test the network's performance on measured data. Not only does our in-depth architecture topology analysis offer insight into how different architectural approaches impact performance, but we have also trained DNNs attaining state-of-the-art performance on both datasets. Furthermore, beyond accuracy, we assess how efficiently an accelerator architecture executes these neural networks: using an analytical assessment tool, we forecast energy and latency for an edge-TPU-like architecture. Taken together, this tradespace exploration offers insight into the interplay of accuracy, energy, and latency for executing these networks.
Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused on making these operations more efficient, as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms, it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on and off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures, where these basic operations are not fundamental low-level operations; instead, they require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method, as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.
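To illustrate why such circuits require explicit construction, elementary Boolean gates can be realized as threshold (spike-or-no-spike) units, and a bit-serial adder then follows from a streaming LSB-first encoding. This is a minimal sketch of the general idea, not the specific encoding or circuits analyzed in the paper:

```python
import numpy as np

def threshold_gate(inputs, weights, threshold):
    """A McCulloch-Pitts-style spiking unit: fires iff weighted input meets threshold."""
    return int(np.dot(inputs, weights) >= threshold)

# Elementary Boolean gates as single threshold units (spikes are 0/1 events).
AND = lambda a, b: threshold_gate([a, b], [1, 1], 2)
OR  = lambda a, b: threshold_gate([a, b], [1, 1], 1)
NOT = lambda a:    threshold_gate([a], [-1], 0)
# XOR is not linearly separable, so it needs a two-layer circuit.
XOR = lambda a, b: threshold_gate([OR(a, b), AND(a, b)], [1, -1], 1)

def streaming_add(a_bits, b_bits):
    """Bit-serial (LSB-first) binary addition with an explicit carry state."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(XOR(XOR(a, b), carry))                    # full-adder sum bit
        carry = threshold_gate([a, b, carry], [1, 1, 1], 2)  # majority gate
    out.append(carry)
    return out

print(streaming_add([1, 1, 0], [1, 0, 1]))  # 3 + 5 = 8 -> [0, 0, 0, 1]
```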
Deep learning networks have become a vital tool for image and data processing in deployed and edge applications. Resource constraints, particularly low power budgets, have motivated methods and devices for efficient on-edge inference. Two promising methods are reduced-precision communication networks (e.g., binary-activation spiking neural networks) and weight pruning. In this paper, we provide a preliminary exploration of combining these two methods, specifically in-training weight pruning of Whetstone networks, to achieve deep networks with both sparse weights and binary activations.
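A minimal sketch of the combination, under two stated substitutions: Whetstone itself binarizes activations by gradually sharpening them during training, which we replace here with a simpler straight-through estimator, and the pruning shown is plain magnitude pruning with a persistent mask:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryAct(torch.autograd.Function):
    """Hard 0/1 activation with a clipped straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # pass gradient only near threshold

class PrunedBinaryLayer(nn.Module):
    """Linear layer with a persistent pruning mask and a binary activation."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.register_buffer("mask", torch.ones_like(self.fc.weight))
    def prune(self, fraction):
        # Zero out the smallest-magnitude surviving weights (call during training).
        w = (self.fc.weight * self.mask).abs().flatten()
        k = int(fraction * w.numel())
        if k > 0:
            thresh = w.kthvalue(k).values
            # Recompute from masked weights so already-pruned entries stay pruned.
            self.mask.copy_(((self.fc.weight * self.mask).abs() > thresh).float())
    def forward(self, x):
        return BinaryAct.apply(F.linear(x, self.fc.weight * self.mask, self.fc.bias))
```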
Historically, neuroscience principles have heavily influenced artificial intelligence (AI); consider, for example, the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, such as the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale, high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.
Remote sensing (RS) data collection capabilities are rapidly evolving hyper-spectrally (sensing more spectral bands), hyper-temporally (faster sampling rates), and hyper-spatially (increasing numbers of smaller pixels). Accordingly, sensor technologies have outpaced transmission capabilities, introducing a need to process more data at the sensor. While many sophisticated data processing capabilities are emerging, the power and other hardware requirements of these approaches on conventional electronic systems place them out of reach for resource-constrained operational environments. To address these limitations, in this research effort we investigated and characterized neural-inspired architectures to determine their suitability for implementing RS algorithms. In doing so, we were able to highlight a 100x performance-per-watt improvement using neuromorphic computing, as well as develop an algorithmic-architecture co-design and exploration capability.
Neuromorphic architectures are represented by a broad class of hardware, with artificial neural network (ANN) architectures at one extreme and event-driven spiking architectures at the other. Algorithms and applications efficiently processed by one neuromorphic architecture may be unsuitable for another, yet it is challenging to compare neuromorphic architectures with one another and with traditional computer architectures. In this position paper, we take inspiration from architectural characterizations in scientific computing and motivate the need for neuromorphic architecture comparison techniques, outline relevant performance metrics and analysis tools, and describe cognitive workloads that meaningfully exercise neuromorphic architectures. Additionally, we propose a simulation-based framework for benchmarking a wide range of neuromorphic workloads. While this work is applicable to neuromorphic development in general, we focus on event-driven architectures, as they offer both unique performance characteristics and evaluation challenges.
Neuromorphic hardware architectures represent a growing family of potential post-Moore's-Law-era platforms. Largely due to event-driven processing inspired by the human brain, these platforms can offer significant energy benefits compared to traditional von Neumann processors. Unfortunately, considerable difficulty remains in successfully programming, configuring, and deploying neuromorphic systems. We present the Fugu framework as an answer to this need. Rather than requiring a developer to attain intricate knowledge of how to program and exploit spiking neural dynamics in order to realize the potential benefits of neuromorphic computing, Fugu provides a higher-level abstraction: a hardware-independent mechanism for linking scalable spiking neural algorithms from a variety of sources. Individual kernels linked together provide sophisticated processing through compositionality. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Ultimately, we hope the community adopts this and other open standardization attempts, allowing for free exchange and easy implementation of the ever-growing list of spiking neural algorithms.
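The compositionality idea can be sketched abstractly: each kernel ("brick") is a self-contained neuron/synapse fragment with named ports, and composition flattens linked bricks into a single hardware-neutral graph. All names below (Brick, compose, the port convention) are hypothetical illustrations of the concept, not Fugu's actual classes or signatures:

```python
from dataclasses import dataclass

@dataclass
class Brick:
    """One self-contained spiking kernel with local neuron indices."""
    name: str
    n_neurons: int
    weights: list       # internal (pre, post, weight) triples
    input_ports: list   # local indices that accept upstream spikes
    output_ports: list  # local indices exposed to downstream bricks

def compose(bricks, links):
    """Flatten independently written kernels into one backend-neutral graph:
    a global neuron count plus a single synapse list any hardware could map."""
    offsets, total = {}, 0
    for b in bricks:
        offsets[b.name] = total
        total += b.n_neurons
    synapses = [(offsets[b.name] + pre, offsets[b.name] + post, w)
                for b in bricks for pre, post, w in b.weights]
    for src, out_i, dst, in_i, w in links:   # cross-brick wiring
        synapses.append((offsets[src] + out_i, offsets[dst] + in_i, w))
    return total, synapses

a = Brick("encoder", 4, [(0, 2, 1.0), (1, 3, 1.0)], input_ports=[0, 1], output_ports=[2, 3])
b = Brick("adder", 3, [(0, 2, 1.0), (1, 2, 1.0)], input_ports=[0, 1], output_ports=[2])
n, graph = compose([a, b], links=[("encoder", 2, "adder", 0, 1.0),
                                  ("encoder", 3, "adder", 1, 1.0)])
print(n, graph)
```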
Technological advances have enabled exponential growth in both sensor data collection and computational processing. However, the transmission bandwidth between a space-based sensor and a ground-station processing center has not seen the same growth and remains a limiting factor. One resolution to this bandwidth limitation is to move the processing to the sensor, but doing so faces size, weight, and power operational constraints. Physical constraints on processor manufacturing are spurring a resurgence in neuromorphic approaches amenable to the space-based operational environment. Here we describe historical trends in computer architecture and their implications for neuromorphic computing, and give an overview of how remote sensing applications may be impacted by this emerging direction for computing.