Publications

21 Results

Combining Spike Time Dependent Plasticity (STDP) and Backpropagation (BP) for Robust and Data Efficient Spiking Neural Networks (SNN)

Wang, Felix W.; Teeter, Corinne M.

National security applications require artificial neural networks (ANNs) that consume less power, are fast and dynamic online learners, are fault tolerant, and can learn from unlabeled and imbalanced data. We explore whether two fundamentally different, traditional learning algorithms from artificial intelligence and the biological brain can be merged. We tackle this problem from two directions. First, we start from a theoretical point of view and show that the spike time dependent plasticity (STDP) learning curve observed in biological networks can be derived using the mathematical framework of backpropagation through time. Second, we show that transmission delays, as observed in biological networks, improve the ability of spiking networks to perform classification when trained using a backpropagation of error (BP) method. These results provide evidence that STDP could be compatible with a BP learning rule. Combining these learning algorithms will likely lead to networks more capable of meeting our national security missions.
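As a point of reference for the first result, the classic pair-based STDP learning window mentioned in the abstract can be written in a few lines. The sketch below is a generic textbook formulation with illustrative constants; it is not the paper's backpropagation-through-time derivation.

```python
import numpy as np

# Minimal sketch of the classic pair-based STDP learning window (not the
# paper's derivation): the weight change depends on the time difference
# between postsynaptic and presynaptic spikes, delta_t = t_post - t_pre.
# The amplitudes and time constants below are illustrative values only.

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(delta_t):
    """Weight change for a single pre/post spike pair."""
    if delta_t > 0:   # pre fires before post -> potentiation
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    else:             # post fires before pre -> depression
        return -A_MINUS * np.exp(delta_t / TAU_MINUS)

# Sampling dw over a range of delta_t values traces out the familiar
# asymmetric exponential STDP curve.
for dt in (-20.0, -5.0, 5.0, 20.0):
    print(f"delta_t = {dt:+6.1f} ms -> dw = {stdp_dw(dt):+.5f}")
```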


Crossing the Cleft: Communication Challenges Between Neuroscience and Artificial Intelligence

Frontiers in Computational Neuroscience

Chance, Frances S.; Aimone, James B.; Musuvathy, Srideep M.; Smith, Michael R.; Vineyard, Craig M.; Wang, Felix W.

Historically, neuroscience principles have heavily influenced artificial intelligence (AI), for example the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, for example the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale, high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.


The insect brain as a model system for low power electronics and edge processing applications

Proceedings - 2019 IEEE Space Computing Conference, SCC 2019

Yanguas-Gil, Angel; Mane, Anil; Elam, Jeffrey W.; Wang, Felix W.; Severa, William M.; Daram, Anurag R.; Kudithipudi, Dhireesha

The insect brain is a great model system for low power electronics: insects carry out multisensory integration and are able to change the way they process information, learn, and adapt to changes in their environment with a very limited power budget. This context-dependent processing allows them to implement multiple functionalities within the same network, as well as to minimize power consumption by having context-dependent gains in their first layers of input processing. The combination of low power consumption, adaptability and online learning, and robustness makes them particularly appealing for a number of space applications, from rovers and probes to satellites, all having to deal with the progressive degradation of their capabilities in remote environments. In this work, we explore architectures inspired by the insect brain that are capable of context-dependent processing and learning. Starting from algorithms, we have explored three different implementations: a spiking implementation in a neuromorphic chip, a custom implementation in an FPGA, and finally hybrid analog/digital implementations based on cross-bar arrays. For the latter, we found that the development of novel resistive materials is crucial in order to enhance the energy efficiency of analog devices while maintaining an adequate footprint. Metal-oxide nanocomposite materials, fabricated using ALD with processes compatible with semiconductor processing, are promising candidates to fill that role.
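As a toy illustration of the context-dependent gain idea (not any of the paper's three implementations), the sketch below rescales a fixed network's inputs with a per-context gain vector, so the same weights yield different functional responses in different contexts; all dimensions, contexts, and values are invented.

```python
import numpy as np

# Toy sketch of context-dependent gain modulation: one fixed network
# reuses the same weights in different behavioural contexts by rescaling
# its inputs with a per-context gain. Everything here is invented for
# illustration and is not taken from the paper.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))                     # fixed first-layer weights
W2 = rng.normal(size=(8, 4))                      # fixed readout weights

GAINS = {
    "foraging": np.array([1.5] * 8 + [0.2] * 8),  # emphasize first modality
    "escape":   np.array([0.2] * 8 + [1.5] * 8),  # emphasize second modality
}

def forward(x, context):
    h = np.tanh((GAINS[context] * x) @ W1)        # same weights, gated inputs
    return h @ W2

x = rng.normal(size=16)                           # one multisensory stimulus
print("foraging:", forward(x, "foraging").round(2))
print("escape:  ", forward(x, "escape").round(2))
```

The same stimulus produces different outputs in the two contexts, even though no weight changes, which is the power-saving reuse the abstract attributes to early-layer gain control.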


Sparse coding for N-gram feature extraction and training for file fragment classification

IEEE Transactions on Information Forensics and Security

Wang, Felix W.; Quach, Tu-Thach Q.; Wheeler, Jason W.; Aimone, James B.; James, Conrad D.

File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, contiguous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines, over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
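A minimal sketch of the pipeline the abstract describes, using scikit-learn's generic dictionary-learning tools rather than the authors' implementation; the synthetic fragments, n-gram size, and all hyperparameters are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

# Hedged sketch of the general approach: learn a sparse dictionary over
# byte n-grams, pool each fragment's sparse codes into a feature vector,
# and train a standard classifier on those features.

N = 4            # n-gram size in bytes (placeholder)
ATOMS = 32       # dictionary size (placeholder)

def ngrams(fragment, n=N):
    """All overlapping n-byte windows of a fragment, scaled to [0, 1]."""
    f = np.frombuffer(fragment, dtype=np.uint8).astype(float) / 255.0
    return np.lib.stride_tricks.sliding_window_view(f, n)

# Two synthetic "file types" standing in for real file fragments.
rng = np.random.default_rng(0)
frags_a = [rng.integers(0, 64, 512, dtype=np.uint8).tobytes() for _ in range(20)]
frags_b = [rng.integers(192, 256, 512, dtype=np.uint8).tobytes() for _ in range(20)]

dico = MiniBatchDictionaryLearning(
    n_components=ATOMS, transform_algorithm="omp",
    transform_n_nonzero_coefs=3, random_state=0)
dico.fit(np.vstack([ngrams(f) for f in frags_a + frags_b]))

def features(fragment):
    """Mean-pool absolute sparse codes -> estimated atom frequencies."""
    return np.abs(dico.transform(ngrams(fragment))).mean(axis=0)

X = np.array([features(f) for f in frags_a + frags_b])
y = np.array([0] * 20 + [1] * 20)
print("train accuracy:", LinearSVC().fit(X, y).score(X, y))
```

Because each fragment is summarized by its pooled activations over the learned atoms rather than by explicit n-gram counts, the feature dimension stays fixed at the dictionary size even as n grows, which is how the approach sidesteps the combinatorial explosion noted in the abstract.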


A spike-timing neuromorphic architecture

2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings

Hill, Aaron J.; Donaldson, Jonathon W.; Rothganger, Fredrick R.; Vineyard, Craig M.; Follett, David R.; Follett, Pamela L.; Smith, Michael R.; Verzi, Stephen J.; Severa, William M.; Wang, Felix W.; Aimone, James B.; Naegle, John H.; James, Conrad D.

Unlike general-purpose computer architectures, which are composed of complex processor cores executing sequential computation, the brain is innately parallel and contains highly complex connections between computational units (neurons). Key to the architecture of the brain is a functionality enabled by the combined effect of spiking communication and sparse connectivity, with unique variable efficacies and temporal latencies. Utilizing these neuroscience principles, we have developed the Spiking Temporal Processing Unit (STPU) architecture, which is well-suited for areas such as pattern recognition and natural language processing. In this paper, we formally describe the STPU, implement the STPU on a field programmable gate array, and show measured performance data.
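To make "unique variable efficacies and temporal latencies" concrete, the toy layer below gives every synapse its own weight and integer delay, delivered through a circular buffer. This is an illustrative sketch of the general idea, not the STPU's actual FPGA implementation; all sizes and parameters are invented.

```python
import numpy as np

# Toy spiking layer in which each synapse carries both a weight
# (efficacy) and its own integer delay, so an input spike arrives at its
# target neuron several time steps later. All values are invented.

N_IN, N_OUT, MAX_D, STEPS = 4, 3, 5, 20
rng = np.random.default_rng(1)
W = rng.uniform(0.2, 1.0, (N_IN, N_OUT))   # per-synapse efficacies
D = rng.integers(0, MAX_D, (N_IN, N_OUT))  # per-synapse delays (steps)

buf = np.zeros((MAX_D, N_OUT))  # circular buffer of future arrivals
v = np.zeros(N_OUT)             # membrane potentials
THRESH, DECAY = 1.5, 0.9

for t in range(STEPS):
    spikes_in = rng.random(N_IN) < 0.3      # random input spike train
    # Route each input spike through its delayed synapses.
    for i in np.flatnonzero(spikes_in):
        for j in range(N_OUT):
            buf[(t + D[i, j]) % MAX_D, j] += W[i, j]
    v = DECAY * v + buf[t % MAX_D]          # integrate arrivals due now
    buf[t % MAX_D] = 0.0                    # consume this buffer slot
    out = v >= THRESH
    v[out] = 0.0                            # reset neurons that spiked
    if out.any():
        print(f"t={t:2d}: output spikes at {np.flatnonzero(out)}")
```

The circular buffer is one simple software stand-in for the temporal routing that, per the abstract, the STPU realizes directly in hardware.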
