SAR ATR Analysis and Implications for Learning
Proceedings of SPIE - The International Society for Optical Engineering
The lack of large, relevant, and labeled datasets for synthetic aperture radar (SAR) automatic target recognition (ATR) poses a challenge for deep neural network approaches. In the case of SAR ATR, transfer learning offers promise: models are pre-trained on synthetic SAR, alternatively collected SAR, or non-SAR source data and then fine-tuned on a smaller target SAR dataset. The concept is that the neural network can learn fundamental features from the more abundant source domain, resulting in accurate and robust models when fine-tuned on a smaller target domain. One open question with this transfer learning strategy is how to choose source datasets that will improve accuracy on the target SAR dataset after fine-tuning. Here, we apply a set of model and dataset transferability analysis techniques to investigate the efficacy of transfer learning for SAR ATR. In particular, we examine Optimal Transport Dataset Distance (OTDD), Log Maximum Evidence (LogME), Log Expected Empirical Prediction (LEEP), Gaussian Bhattacharyya Coefficient (GBC), and H-Score. These methods consider properties such as task relatedness, statistics of learned embeddings, and distribution distances between the source and target domains. We apply these transferability metrics to ResNet18 models trained on a set of non-SAR and SAR datasets. Overall, we present an investigation into quantitatively analyzing transferability for SAR ATR.
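To make one of these metrics concrete, below is a minimal NumPy sketch of the LEEP score computed from a source model's soft predictions on the target data. The function name, array shapes, and the small numerical safeguard are our own illustrative choices rather than code from the paper.

```python
import numpy as np

def leep_score(source_probs, target_labels):
    """Log Expected Empirical Prediction (LEEP) transferability score.

    source_probs : (n, Z) array of source-model softmax outputs on target data
    target_labels: (n,) array of integer target labels in [0, Y)
    Returns a scalar; higher values suggest easier transfer.
    """
    n, Z = source_probs.shape
    Y = int(target_labels.max()) + 1

    # Empirical joint distribution over (target label y, source label z)
    joint = np.zeros((Y, Z))
    for y in range(Y):
        joint[y] = source_probs[target_labels == y].sum(axis=0) / n

    # Conditional P(y | z) from the joint and the marginal over z
    cond = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)

    # Expected empirical prediction for each example's true label
    eep = (source_probs * cond[target_labels]).sum(axis=1)
    return np.log(eep).mean()
```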
Proceedings of SPIE - The International Society for Optical Engineering
Deep neural networks for automatic target recognition (ATR) have been shown to be highly successful for a large variety of Synthetic Aperture Radar (SAR) benchmark datasets. However, the black-box nature of neural network approaches raises concerns about how models come to their decisions, especially in high-stakes scenarios. Accordingly, a variety of techniques are being pursued that seek to offer understanding of machine learning algorithms. In this paper, we first provide an overview of explainability and interpretability techniques, introducing their concepts and the insights they produce. Next, we summarize several methods for computing specific approaches to explainability and interpretability as well as analyzing their outputs. Finally, we demonstrate the application of several attribution-map methods and apply both attribution analysis metrics and localization interpretability analysis to six neural network models trained on the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset to illustrate the insights these methods offer for analyzing SAR ATR performance.
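As a concrete illustration of one simple attribution-map technique, the following is a minimal PyTorch sketch of a vanilla gradient saliency map; it is an assumed, generic example rather than the specific attribution methods evaluated in the paper.

```python
import torch

def saliency_map(model, image, target_class):
    """Vanilla gradient attribution: |d score / d input| per pixel.

    model        : a classification network (e.g., one trained on SAR chips)
    image        : tensor of shape (1, C, H, W)
    target_class : integer class index whose score is attributed
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Collapse channels to a single-channel attribution map of shape (H, W)
    return image.grad.detach().abs().max(dim=1)[0].squeeze(0)
```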
ACM International Conference Proceeding Series
Evolutionary algorithms have been shown to be an effective method for training (or configuring) spiking neural networks. There are, however, challenges to developing accessible, scalable, and portable solutions. We present an extension to the Fugu framework that wraps the NEAT framework, bringing evolutionary algorithms to Fugu. This approach provides a flexible and customizable platform for optimizing network architectures, independent of fitness functions and input data structures. We leverage Fugu's computational graph approach to evaluate all members of a population in parallel. Additionally, as Fugu is platform-agnostic, this population can be evaluated in simulation or on neuromorphic hardware. We demonstrate our extension using several classification and agent-based tasks. One task illustrates how Fugu integration allows for spiking pre-processing to lower the search space dimensionality. We also provide some benchmark results using the Intel Loihi platform.
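The sketch below illustrates the population-based search pattern that such an extension builds on: a minimal generational evolutionary loop over real-valued genomes, written in plain NumPy. It is a simplified stand-in (it evolves parameters only, not topologies as NEAT does) and does not use the Fugu or NEAT APIs; the function and parameter names are assumptions for illustration.

```python
import numpy as np

def evolve(fitness, dim, pop_size=50, generations=100,
           elite_frac=0.2, mutation_std=0.1, rng=None):
    """Minimal generational evolutionary loop over real-valued genomes.

    fitness : callable mapping a genome (1-D array) to a scalar (higher is better);
              in a Fugu-style setting this would build and run a spiking network.
    """
    rng = rng or np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, dim))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        # All genomes could be evaluated in parallel on simulator or hardware.
        scores = np.array([fitness(g) for g in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]
        # Next generation: mutated copies of randomly chosen elites
        parents = elite[rng.integers(n_elite, size=pop_size)]
        pop = parents + rng.normal(scale=mutation_std, size=parents.shape)
        pop[:n_elite] = elite  # keep elites unchanged
    scores = np.array([fitness(g) for g in pop])
    return pop[np.argmax(scores)]
```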
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single layer metasurfaces of size 100λ × 100λ with aperture density λ⁻² achieve ~96% testing accuracy on the MNIST dataset, for an optimized distance ~100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
ACM International Conference Proceeding Series
Neuromorphic computing (NMC) is an exciting paradigm seeking to incorporate principles from biological brains to enable advanced computing capabilities. This encompasses not only algorithms, such as neural networks, but also how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics from traditional approaches against those of NMC. The novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing range of application domains. Accordingly, we propose following the example of high performance computing, which has employed context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps in a neural circuit tool called Fugu as a means of NMC insight.
Neuromorphic computers are hardware systems that mimic the brain’s computational process phenomenology. This is in contrast to neural network accelerators, such as the Google TPU or the Intel Neural Compute Stick, which seek to accelerate the fundamental computation and data flows of neural network models used in the field of machine learning. Neuromorphic computers emulate the integrate and fire neuron dynamics of the brain to achieve a spiking communication architecture for computation. While neural networks are brain-inspired, they drastically oversimplify the brain’s computation model. Neuromorphic architectures are closer to the true computation model of the brain (albeit, still simplified). Neuromorphic computing models herald a 1000x power improvement over conventional CPU architectures. Sandia National Labs is a major contributor to the research community on neuromorphic systems by performing design analysis, evaluation, and algorithm development for neuromorphic computers. Space-based remote sensing development has been a focused target of funding for exploratory research into neuromorphic systems for their potential advantage in that program area; SNL has led some of these efforts. Recently, neuromorphic application evaluation has reached the NA-22 program area. This same exploratory research and algorithm development should penetrate the unattended ground sensor space for SNL’s mission partners and program areas. Neuromorphic computing paradigms offer a distinct advantage for the SWaP-constrained embedded systems of our diverse sponsor-driven program areas.
Typical approaches to classifying scenes from light convert the light field to electrons and perform the computation in the digital electronic domain. This conversion and downstream computational analysis require significant power and time. Diffractive neural networks have recently emerged as unique systems that classify optical fields at lower energy and higher speed. Previous work has shown that a single layer of diffractive metamaterial can achieve high performance on classification tasks. In analogy with electronic neural networks, it is anticipated that multilayer diffractive systems would provide better performance, but the fundamental reasons for the potential improvement have not been established. In this work, we present extensive computational simulations of two-layer diffractive neural networks and show that they can achieve high performance with fewer diffractive features than single-layer systems.
Automated vehicles (AV) hold great promise for improving safety, as well as reducing congestion and emissions. In order to make automated vehicles commercially viable, a reliable and high-performance vehicle-based computing platform that meets ever-increasing computational demands will be key. Given the state of existing digital computing technology, designers will face significant challenges in meeting the needs of highly automated vehicles without exceeding thermal constraints or consuming a large portion of the energy available on vehicles, thus reducing range between charges or refills. The accompanying increases in energy for AV use will place increased demand on energy production and distribution infrastructure, which also motivates increasing computational energy efficiency.
ACS Photonics
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single layer metasurfaces of size 100λ × 100λ with an aperture density λ⁻² achieve ∼96% testing accuracy on the MNIST data set, for an optimized distance ∼100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
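For readers unfamiliar with such simulations, the following NumPy sketch shows an assumed forward model: a phase-only metasurface mask, free-space propagation via the angular spectrum method, and intensity readout in per-class detector regions (photodetection supplying the nonlinearity). Grid sizes, function names, and the omitted training of the mask are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square 2-D complex field a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def classify(field, phase_mask, wavelength, dx, z, detector_masks):
    """Single-layer diffractive classifier: mask -> propagation -> intensity readout."""
    out = angular_spectrum(field * np.exp(1j * phase_mask), wavelength, dx, z)
    intensity = np.abs(out) ** 2                 # photodetection nonlinearity
    scores = [intensity[m].sum() for m in detector_masks]  # one region per class
    return int(np.argmax(scores))
```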
In this presentation we will discuss recent results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia Labs developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate on the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may have eventual application in remote sensing applications where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts mapping software-implemented networks to on-hardware instantiations.
Proceedings - 2021 International Conference on Rebooting Computing, ICRC 2021
Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused upon how to make these operations more efficient as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on-and-off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures where these basic operations are not fundamental low-level operations. Instead, these functions require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.
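As a minimal illustration of why such circuits need explicit construction, the sketch below computes a bitwise AND of two streaming binary inputs with a single integrate-and-fire-style neuron; it is an assumed toy example, not one of the circuits from the paper.

```python
import numpy as np

def spiking_and(spikes_a, spikes_b, threshold=1.5):
    """Bitwise AND of two binary spike streams with one integrate-and-fire neuron.

    Each input contributes weight 1.0; the neuron fires only when both inputs
    spike in the same timestep, and the potential is reset every step so the
    streaming binary encoding stays bit-aligned.
    """
    out = np.zeros_like(spikes_a)
    for t, (a, b) in enumerate(zip(spikes_a, spikes_b)):
        v = 1.0 * a + 1.0 * b           # synaptic integration for this bit
        out[t] = 1 if v >= threshold else 0
        # potential discarded after each bit (no temporal leakage between bits)
    return out

# Example: 0b0110 AND 0b1010 = 0b0010, streamed least-significant bit first
print(spiking_and(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 1])))  # -> [0 1 0 0]
```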
Proceedings of SPIE - The International Society for Optical Engineering
Neural network approaches have periodically been explored in the pursuit of high-performing SAR ATR solutions. With deep neural networks (DNNs) now offering many state-of-the-art solutions to computer vision tasks, neural networks are once again being revisited for ATR processing. Here, we characterize and explore a suite of neural network architectural topologies. In doing so, we assess how different architectural approaches impact performance and consider the associated computational costs. This includes characterizing network depth, width, scale, connectivity patterns, as well as convolution layer optimizations. We have explored a suite of architectural topologies applied to both the canonical MSTAR dataset and the more operationally realistic Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset. The latter pairs high-fidelity computational models of targets with actual measured SAR data. Effectively, this dataset offers the ability to train a DNN on simulated data and test the network performance on measured data. Not only does our in-depth architecture topology analysis offer insight into how different architectural approaches impact performance, but we have also trained DNNs attaining state-of-the-art performance on both datasets. Furthermore, beyond accuracy alone, we assess how efficiently an accelerator architecture executes these neural networks. Specifically, using an analytical assessment tool, we forecast energy and latency for an edge TPU-like architecture. Taken together, this tradespace exploration offers insight into the interplay of accuracy, energy, and latency for executing these networks.
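The PyTorch sketch below shows the kind of small convolutional baseline, with width as an explicit knob, that such an architecture sweep varies; it is an illustrative assumption, not one of the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class SmallSARNet(nn.Module):
    """Minimal convolutional baseline for single-channel SAR chips (e.g., 128x128)."""

    def __init__(self, num_classes=10, width=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(width, 2 * width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(2 * width, 4 * width, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(4 * width, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Width (and depth, by adding stages) are the architectural knobs swept in such a study.
logits = SmallSARNet(num_classes=10, width=32)(torch.randn(4, 1, 128, 128))
```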
ACM International Conference Proceeding Series
Deep learning networks have become a vital tool for image and data processing tasks for deployed and edge applications. Resource constraints, particularly low power budgets, have motivated methods and devices for efficient on-edge inference. Two promising methods are reduced precision communication networks (e.g. binary activation spiking neural networks) and weight pruning. In this paper, we provide a preliminary exploration for combining these two methods, specifically in-training weight pruning of whetstone networks, to achieve deep networks with both sparse weights and binary activations.
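A minimal sketch of the in-training magnitude-pruning step is shown below, assuming a standard PyTorch model; it is a generic illustration, not the Whetstone-specific implementation.

```python
import torch

def magnitude_prune_(model, sparsity=0.5):
    """In-place global magnitude pruning: zero the smallest-|w| fraction of weights.

    Intended to be called periodically during training so the network learns
    around the zeroed connections; the returned masks can be reapplied after
    each optimizer step to keep the weights pruned.
    """
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_vals, sparsity)
    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).float()
            w.mul_(mask)
            masks.append(mask)
    return masks
```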
Frontiers in Computational Neuroscience
Historically, neuroscience principles have heavily influenced artificial intelligence (AI); consider, for example, the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, for example the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.
Remote sensing (RS) data collection capabilities are rapidly evolving hyper-spectrally (sensing more spectral bands), hyper-temporally (faster sampling rates) and hyper-spatially (increasing numbers of smaller pixels). Accordingly, sensor technologies have outpaced transmission capabilities, introducing a need to process more data at the sensor. While many sophisticated data processing capabilities are emerging, power and other hardware requirements for these approaches on conventional electronic systems place them out of reach for resource-constrained operational environments. To address these limitations, in this research effort we have investigated and characterized neural-inspired architectures to determine their suitability for implementing RS algorithms. In doing so, we have been able to highlight a 100x performance-per-watt improvement using neuromorphic computing as well as develop an algorithm-architecture co-design and exploration capability.
ACM International Conference Proceeding Series
Neuromorphic hardware architectures represent a growing family of potential post-Moore's Law era platforms. Largely due to event-driven processing inspired by the human brain, these computing platforms can offer significant energy benefits compared to traditional von Neumann processors. Unfortunately, there still remains considerable difficulty in successfully programming, configuring and deploying neuromorphic systems. We present the Fugu framework as an answer to this need. Rather than necessitating that a developer attain intricate knowledge of how to program and exploit spiking neural dynamics to utilize the potential benefits of neuromorphic computing, Fugu is designed to provide a higher-level abstraction as a hardware-independent mechanism for linking scalable spiking neural algorithms from a variety of sources. Individual kernels linked together provide sophisticated processing through compositionality. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Ultimately, we hope the community adopts this and other open standardization attempts, allowing for free exchange and easy implementation of the ever-growing list of spiking neural algorithms.
ACM International Conference Proceeding Series
Neuromorphic architectures are represented by a broad class of hardware, with artificial neural network (ANN) architectures at one extreme and event-driven spiking architectures at another. Algorithms and applications efficiently processed by one neuromorphic architecture may be unsuitable for another, but it is challenging to compare various neuromorphic architectures among themselves and with traditional computer architectures. In this position paper, we take inspiration from architectural characterizations in scientific computing and motivate the need for neuromorphic architecture comparison techniques, outline relevant performance metrics and analysis tools, and describe cognitive workloads to meaningfully exercise neuromorphic architectures. Additionally, we propose a simulation-based framework for benchmarking a wide range of neuromorphic workloads. While this work is applicable to neuromorphic development in general, we focus on event-driven architectures, as they offer both unique performance characteristics and evaluation challenges.
Proceedings - 2019 IEEE Space Computing Conference, SCC 2019
Technological advances have enabled exponential growth in both sensor data collection, as well as computational processing. However, as a limiting factor, the transmission bandwidth in between a space-based sensor and a ground station processing center has not seen the same growth. A resolution to this bandwidth limitation is to move the processing to the sensor, but doing so faces size, weight, and power operational constraints. Different physical constraints on processor manufacturing are spurring a resurgence in neuromorphic approaches amenable to the space-based operational environment. Here we describe historical trends in computer architecture and the implications for neuromorphic computing, as well as give an overview of how remote sensing applications may be impacted by this emerging direction for computing.
Cyber-Physical Systems Security
Deep neural networks are often computationally expensive, during both the training stage and inference stage. Training is always expensive, because back-propagation requires high-precision floating-point multiplication and addition. However, various mathematical optimizations may be employed to reduce the computational cost of inference. Optimized inference is important for reducing power consumption and latency and for increasing throughput. This chapter introduces the central approaches for optimizing deep neural network inference: pruning "unnecessary" weights, quantizing weights and inputs, sharing weights between layer units, compressing weights before transferring from main memory, distilling large high-performance models into smaller models, and decomposing convolutional filters to reduce multiply and accumulate operations. In this chapter, using a unified notation, we provide a mathematical and algorithmic description of the aforementioned deep neural network inference optimization methods.
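As one example of the methods covered, the following is a minimal PyTorch sketch of a knowledge-distillation loss; the temperature and weighting values are typical defaults assumed for illustration, not prescriptions from the chapter.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Distill a large teacher into a smaller student.

    Combines cross-entropy on hard labels with a KL term that matches the
    student's temperature-softened outputs to the teacher's.
    """
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes are comparable across temperatures
    return alpha * soft + (1.0 - alpha) * hard
```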
Proceedings of the International Joint Conference on Neural Networks
Deep neural networks (DNN) now outperform competing methods in many academic and industrial domains. These high-capacity universal function approximators have recently been leveraged by deep reinforcement learning (RL) algorithms to obtain impressive results for many control and decision making problems. During the past three years, research toward pruning, quantization, and compression of DNNs has reduced the mathematical, and therefore time and energy, requirements of DNN-based inference. For example, DNN optimization techniques have been developed which reduce storage requirements of VGG-16 from 552MB to 11.3MB, while maintaining the full-model accuracy for image classification. Building from DNN optimization results, the computer architecture community is taking increasing interest in exploring DNN hardware accelerator designs. Based on recent deep RL performance, we expect hardware designers to begin considering architectures appropriate for accelerating these algorithms too. However, it is currently unknown how, when, or if the 'noise' introduced by DNN optimization techniques will degrade deep RL performance. This work measures these impacts, using standard OpenAI Gym benchmarks. Our results show that mathematically optimized RL policies can perform equally to full-precision RL, while requiring substantially less computation. We also observe that different optimizations are better suited than others for different problem domains. By beginning to understand the impacts of mathematical optimizations on RL policy performance, this work serves as a starting point toward the development of low power or high performance deep RL accelerators.
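The sketch below illustrates one assumed form of the 'noise' in question: simple post-training uniform quantization applied in place to a policy network's weights, so the quantized policy can then be rolled out and compared with the full-precision one. It is a generic illustration, not the exact optimization scheme evaluated in the paper.

```python
import torch

def quantize_weights_(policy, num_bits=8):
    """Post-training uniform quantization of a policy network's weights.

    Each weight tensor is mapped onto 2**num_bits levels and immediately
    dequantized, so the policy can be evaluated unchanged while carrying the
    rounding error that quantized inference would introduce.
    """
    levels = 2 ** num_bits - 1
    with torch.no_grad():
        for p in policy.parameters():
            lo, hi = p.min(), p.max()
            scale = (hi - lo) / levels if hi > lo else 1.0
            p.copy_(torch.round((p - lo) / scale) * scale + lo)
```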
Neural Computation
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum or minimum or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
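The following toy simulation, an assumed illustration rather than the paper's algorithm, shows the time-to-first-spike idea behind spike-based sorting and min-finding: each value fires at a time proportional to its magnitude, so sweeping the clock forward reads the values out in ascending order.

```python
import numpy as np

def spike_sort(values, dt=1.0):
    """Sort non-negative values by time-to-first-spike encoding.

    Each value v is encoded as a neuron that fires at time round(v / dt);
    stepping the simulated clock forward and recording which neurons fire
    yields the values in ascending order, and the first spike is the minimum.
    """
    values = np.asarray(values, dtype=float)
    fire_times = np.round(values / dt).astype(int)
    order = []
    for t in range(fire_times.max() + 1):      # one pass of the simulated clock
        for idx in np.flatnonzero(fire_times == t):
            order.append(idx)
    return values[order]                        # ascending; order[0] indexes the minimum

print(spike_sort([3.0, 1.0, 7.0, 4.0]))         # -> [1. 3. 4. 7.]
```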
Springer Proceedings in Complexity
Anomaly detection is an important problem in various fields of complex systems research including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with “Big Data”. In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.
2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings
Unlike general purpose computer architectures that are comprised of complex processor cores and sequential computation, the brain is innately parallel and contains highly complex connections between computational units (neurons). Key to the architecture of the brain is a functionality enabled by the combined effect of spiking communication and sparse connectivity with unique variable efficacies and temporal latencies. Utilizing these neuroscience principles, we have developed the Spiking Temporal Processing Unit (STPU) architecture which is well-suited for areas such as pattern recognition and natural language processing. In this paper, we formally describe the STPU, implement the STPU on a field programmable gate array, and show measured performance data.
As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural-inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis-inspired resource allocation, and were able to show that a neural-inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.
Proceedings of the International Joint Conference on Neural Networks
Considerable effort is currently being spent designing neuromorphic hardware for addressing challenging problems in a variety of pattern-matching applications. These neuromorphic systems offer low-power architectures with intrinsically parallel and simple spiking neuron processing elements. Unfortunately, these new hardware architectures have been largely developed without a clear justification for using spiking neurons to compute quantities for problems of interest. Specifically, the use of spiking for encoding information in time has not been explored theoretically with complexity analysis to examine the operating conditions under which neuromorphic computing provides a computational advantage (time, space, power, etc.). In this paper, we present and formally analyze the use of temporal coding in a neural-inspired algorithm for optimization-based computation in neural spiking architectures.
Proceedings of the International Joint Conference on Neural Networks
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
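A minimal PyTorch sketch of the core operation, growing a hidden layer while preserving previously trained weights, is given below; the function name and the choice to initialize the new units' outgoing weights to zero are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

def add_neurons(layer, next_layer, n_new):
    """'Neurogenesis' for a pair of fully connected layers.

    Appends n_new randomly initialized units to `layer` and widens `next_layer`
    to accept them, copying all previously trained weights so existing
    representations are preserved.
    """
    grown = nn.Linear(layer.in_features, layer.out_features + n_new)
    widened = nn.Linear(next_layer.in_features + n_new, next_layer.out_features)
    with torch.no_grad():
        grown.weight[: layer.out_features] = layer.weight
        grown.bias[: layer.out_features] = layer.bias
        widened.weight[:, : next_layer.in_features] = next_layer.weight
        widened.weight[:, next_layer.in_features:] = 0.0  # new units start silent downstream
        widened.bias.copy_(next_layer.bias)
    return grown, widened
```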
Biologically Inspired Cognitive Architectures
Biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here, we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. In addition, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing – data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
Amidst the rising impact of machine learning and the popularity of deep neural networks, learning theory is not a solved problem. With the emergence of neuromorphic computing as a means of addressing the von Neumann bottleneck, it is not simply a matter of employing existing algorithms on new hardware technology; rather, richer theory is needed to guide advances. In particular, there is a need for a richer understanding of the role of adaptivity in neural learning to provide a foundation upon which architectures and devices may be built. Modern machine learning algorithms lack adaptive learning, in that they are dominated by a costly training phase after which they no longer learn. The brain, on the other hand, is continuously learning and provides a basis from which new mathematical theories may be developed to greatly enrich the computational capabilities of learning systems. Game theory provides one alternative mathematical perspective for analyzing strategic interactions and, as such, is well suited to learning theory.
Proceedings of the International Joint Conference on Neural Networks
Through various means of structural and synaptic plasticity enabling online learning, neural networks are constantly reconfiguring their computational functionality. Neural information content is embodied within the configurations, representations, and computations of neural networks. To explore it, we have developed metrics and computational paradigms to quantify neural information content. We have observed that conventional compression methods may help overcome some of the limiting factors of standard information-theoretic techniques employed in neuroscience and allow us to approximate the information in neural data. To do so, we have used compressibility as a measure of complexity in order to estimate entropy and thus quantitatively assess the information content of neural ensembles. Using Lempel-Ziv compression, we are able to assess the rate of generation of new patterns across a neural ensemble's firing activity over time to approximate the information content encoded by a neural circuit. As a specific case study, we have been investigating the effect of neural mixed coding schemes due to hippocampal adult neurogenesis.
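The sketch below shows one assumed way to implement such a compressibility-based estimate, using zlib's DEFLATE (an LZ77 variant) as a stand-in for a Lempel-Ziv complexity measure on a binary spike raster; the serialization choices are illustrative, not the exact procedure from the paper.

```python
import zlib
import numpy as np

def compression_entropy_rate(raster):
    """Crude entropy-rate proxy for a binary spike raster via compressibility.

    raster : (neurons, timesteps) array of 0/1 spikes. The raster is serialized
    column by column (one ensemble pattern per timestep) and compressed; the
    compressed size in bits per timestep upper-bounds the rate at which new
    patterns are generated.
    """
    raster = np.asarray(raster, dtype=np.uint8)
    stream = raster.T.tobytes()                  # time-ordered ensemble patterns
    compressed_bits = 8 * len(zlib.compress(stream, 9))
    return compressed_bits / raster.shape[1]     # bits per timestep

rng = np.random.default_rng(0)
print(compression_entropy_rate(rng.integers(0, 2, size=(50, 2000))))  # near-random raster
print(compression_entropy_rate(np.zeros((50, 2000))))                  # highly compressible
```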