Accelerating Codesign in Emerging Computing
Neuromorphic Computing and Engineering
Abstract As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on Simulation Tool for Asynchronous Cortical Streams (STACS), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
2024 IEEE Neuro Inspired Computational Elements Conference, NICE 2024 - Proceedings
Neuromorphic computing platforms hold the promise to dramatically reduce power requirements for calculations that are computationally intensive. One such application space is scientific machine learning (SciML). Techniques in this space use neural networks to approximate solutions of scientific problems. For instance, the popular physics-informed neural network (PINN) approximates the solution to a partial differential equation by using a trained feed-forward neural network, and injecting the knowledge of the physics through the loss function. Recent efforts have demonstrated how to convert a trained PINN to a spiking network architecture. In this work, we discuss our approach to quantization and implementation required to migrate these spiking PINNs to Intel's Loihi 2 neuromorphic hardware. We explore the effect of quantization on the model accuracy, as well as the energy and throughput characteristics of the implementation. It is our intent that this serve as a starting point for additional SciML implementations on neuromorphic hardware.
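As a rough illustration of the kind of quantization step such a migration involves, the sketch below (plain NumPy; the network, bit widths, and scaling scheme are illustrative assumptions, not the paper's Loihi toolchain) quantizes the weights of a small feed-forward network to fixed point and reports how far its outputs drift from the full-precision model.

```python
# Illustrative sketch (not the paper's Loihi toolchain): quantize the weights
# of a small feed-forward network to signed fixed point and measure how far
# the quantized model's outputs drift from the full-precision ones.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=8):
    """Uniform symmetric quantization to `bits`-bit signed levels (returned as floats)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def mlp(x, weights):
    """Simple tanh network standing in for a trained PINN surrogate."""
    h = x
    for W in weights[:-1]:
        h = np.tanh(h @ W)
    return h @ weights[-1]

# Hypothetical "trained" weights for a 1-input, 1-output network.
weights = [rng.normal(size=(1, 32)), rng.normal(size=(32, 32)), rng.normal(size=(32, 1))]
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

for bits in (16, 8, 4):
    qweights = [quantize(W, bits) for W in weights]
    err = np.max(np.abs(mlp(x, weights) - mlp(x, qweights)))
    print(f"{bits}-bit weights: max output deviation = {err:.3e}")
```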
Nature Communications
Perspectives for understanding the brain vary across disciplines and this has challenged our ability to describe the brain’s functions. In this comment, we discuss how emerging theoretical computing frameworks that bridge top-down algorithm and bottom-up physics approaches may be ideally suited for guiding the development of neural computing technologies such as neuromorphic hardware and artificial intelligence. Furthermore, we discuss how this balanced perspective may be necessary to incorporate the neurobiological details that are critical for describing the neural computational disruptions within mental health and neurological disorders.
IEEE Electron Devices Magazine
Achieving brain-like efficiency in computing requires a co-design spanning the development of neural algorithms, brain-inspired circuit design, and careful consideration of how to use emerging devices. Leveraging device-level noise as a source of controlled stochasticity represents an exciting prospect for achieving brain-like capabilities in probabilistic neural algorithms, but the reality of integrating stochastic devices with deterministic devices in an already-challenging neuromorphic circuit design process is formidable. Here, we explore how the brain combines different signaling modalities into its neural circuits as well as consider the implications of more tightly integrated stochastic, analog, and digital circuits. Further, by acknowledging that a fully CMOS implementation is the appropriate baseline, we conclude that if mixing modalities is going to be successful for neuromorphic computing, it will be critical that device choices consider strengths and limitations at the overall circuit level.
ACM International Conference Proceeding Series
Approximation algorithms for computationally complex problems are of significant importance in computing as they provide computational guarantees of obtaining practically useful results for otherwise computationally intractable problems. The demonstration of implementing formal approximation algorithms on spiking neuromorphic hardware is a critical step in establishing that neuromorphic computing can offer cost-effective solutions to significant optimization problems while retaining important computational guarantees on the quality of solutions. Here, we demonstrate that the Loihi platform is capable of effectively implementing the Goemans-Williamson (GW) approximation algorithm for MAXCUT, an NP-hard problem that has applications ranging from VLSI design to network analysis. We show that a Loihi implementation of the approximation step of the GW algorithm obtains equivalent maximum cuts of graphs as conventional algorithms, and we describe how different aspects of architecture precision impact the algorithm performance.
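For readers unfamiliar with the GW approximation step, the following minimal NumPy sketch shows the random-hyperplane rounding it refers to, assuming the unit vectors from the SDP relaxation are already available; this is the generic textbook rounding, not the Loihi circuit described in the paper.

```python
# Minimal sketch of the Goemans-Williamson rounding step: given unit vectors
# v_i from the SDP relaxation, a random hyperplane splits vertices into two
# sets, and the best cut over several draws is kept. Rounding only; the SDP
# solve and the Loihi circuit from the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)

def cut_value(adjacency, assignment):
    """Weight of edges crossing the partition (assignment entries are +/-1)."""
    crossing = (assignment[:, None] != assignment[None, :])
    return np.sum(adjacency * crossing) / 2.0

def gw_round(sdp_vectors, adjacency, trials=100):
    best_cut, best_assignment = -np.inf, None
    for _ in range(trials):
        r = rng.normal(size=sdp_vectors.shape[1])   # random hyperplane normal
        assignment = np.sign(sdp_vectors @ r)
        assignment[assignment == 0] = 1
        value = cut_value(adjacency, assignment)
        if value > best_cut:
            best_cut, best_assignment = value, assignment
    return best_cut, best_assignment

# Toy example: 4-cycle with unit weights and a hand-picked "relaxation" solution.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
V = np.array([[1, 0], [-1, 0], [1, 0], [-1, 0]], float)  # stand-in SDP vectors
print(gw_round(V, A))  # the 4-cycle's maximum cut is 4
```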
Frontiers in Neuroinformatics
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published this very same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics, such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: Supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10^13 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10^18 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It actually is quite remarkable that, apart from the change in semantics for the parallelization, this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
Proceedings - 2023 IEEE International Parallel and Distributed Processing Symposium, IPDPS 2023
Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximate algorithms to MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approximation approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that make use of randomness similarly to natural brains, we hypothesize that the intrinsic randomness in microelectronics devices could be turned into a valuable component of a neuromorphic architecture enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path for scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computational architectures.
Neuromorphic Computing and Engineering
Though neuromorphic computers have typically targeted applications in machine learning and neuroscience (‘cognitive’ applications), they have many computational characteristics that are attractive for a wide variety of computational problems. In this work, we review the current state-of-the-art for non-cognitive applications on neuromorphic computers, including simple computational kernels for composition, graph algorithms, constrained optimization, and signal processing. We discuss the advantages of using neuromorphic computers for these different applications, as well as the challenges that still remain. The ultimate goal of this work is to bring awareness to this class of problems for neuromorphic systems to the broader community, particularly to encourage further work in this area and to make sure that these applications are considered in the design of future neuromorphic systems.
ACM International Conference Proceeding Series
It has been demonstrated that grid cells in the brain encode physical locations using hexagonally spaced, periodic phase-space representations. We explore how such a representation may be computationally advantageous for related engineering applications. Theories of how the brain decodes from a phase-space representation have been developed based on neuroscience data. However, theories of how sensory information is encoded into this phase space are less certain. Here we show a method for how a navigation-relevant input space such as elevation trajectories may be mapped into a phase-space coordinate system that can be decoded using previously developed theories. We also consider how such an algorithm may then be mapped onto neuromorphic systems. Just as animals can tell where they are in a local region based on where they have been, our encoding algorithm enables localization to a position in space by integrating measurements from a trajectory over a map. In this paper, we walk through our approach with simulations using a digital elevation model.
ACM International Conference Proceeding Series
Neuromorphic computing (NMC) is an exciting paradigm seeking to incorporate principles from biological brains to enable advanced computing capabilities. Not only does this encompass algorithms, such as neural networks, but also the consideration of how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics from traditional approaches versus those of NMC. The novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing number of application domains. Accordingly, we propose following the example of high performance computing, which has employed context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps implemented in a neural circuit tool called Fugu as a means of gaining NMC insight.
ACM International Conference Proceeding Series
It has been demonstrated that grid cells encode physical locations using hexagonally spaced, periodic phase-space representations. Theories of how the brain decodes this phase-space representation have been developed based on neuroscience data. However, theories of how sensory information is encoded into this phase space are less certain. Here we show a method for how a navigation-relevant input space such as elevation trajectories may be mapped into a phase-space coordinate system that can be decoded using previously developed theories. Just as animals can tell where they are in a local region based on where they have been, our encoding algorithm enables localization to a position in space by integrating measurements from a trajectory over a map. In this extended abstract, we walk through our approach with simulations using a digital elevation model.
Nature Electronics
Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
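The numerical idea underlying these results can be illustrated with a short, conventional NumPy sketch (not the TrueNorth or Loihi implementation): many independent walkers stepping according to a simple discrete-time Markov chain reproduce the linear-in-time spread of a diffusion process.

```python
# Conventional NumPy sketch of the underlying idea: independent walkers
# stepping by a discrete-time Markov chain approximate 1-D diffusion.
import numpy as np

rng = np.random.default_rng(2)

n_walkers, n_steps = 10_000, 400
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))   # unbiased unit steps
positions = steps.sum(axis=1)

# For an unbiased unit-step walk the positional variance after T steps is T,
# matching the linear-in-time spread of a diffusion process.
print("empirical variance:", positions.var(), " expected:", n_steps)
```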
Proceedings - 2022 IEEE International Conference on Rebooting Computing, ICRC 2022
Stochasticity is ubiquitous in the world around us. However, our predominant computing paradigm is deterministic. Random number generation (RNG) can be a computationally inefficient operation in such systems, especially for larger workloads. Our work leverages the underlying physics of emerging devices to develop probabilistic neural circuits that generate random numbers from a given distribution. However, codesign for novel circuits and systems that leverage inherent device stochasticity is a hard problem, mostly due to the large design space and the complexity of exploring it. It requires concurrent input from multiple areas of the design stack, from algorithms and architectures to circuits and devices. In this paper, we present examples of optimal circuits developed by leveraging AI-enhanced codesign techniques using constraints from emerging devices and algorithms. Our AI-enhanced codesign approach accelerated design and enabled interactions between experts from different areas of the microelectronics design stack, including theory, algorithms, circuits, and devices. We demonstrate optimal probabilistic neural circuits using magnetic tunnel junction and tunnel diode devices that generate random numbers from a given distribution.
Graph algorithms enable myriad large-scale applications including cybersecurity, social network analysis, resource allocation, and routing. The scalability of current graph algorithm implementations on conventional computing architectures is hampered by the demise of Moore's law. We present a theoretical framework for designing and assessing the performance of graph algorithms executing in networks of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze new spiking algorithms for shortest path and dynamic programming problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation. For fair and rigorous comparison with conventional algorithms and architectures, which is challenging but paramount, we develop new models of data movement in conventional computing architectures. This allows us to prove polynomial-factor advantages, even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a rigorous asymptotic computational advantage for neuromorphic computing.
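A toy illustration of how spike timing can carry shortest-path information is sketched below. It is an ordinary event-driven simulation in Python in which a node's first firing time equals its unweighted shortest-path distance from the source; it conveys the wavefront intuition only, not the paper's formal algorithms or data-movement bounds.

```python
# Toy event-driven sketch: if each "neuron" fires one time step after it first
# receives a spike, its firing time equals its unweighted shortest-path
# distance from the source. Illustration only, not the paper's algorithms.
from collections import deque

def spike_wavefront_distances(adjacency, source):
    """adjacency: dict node -> list of neighbor nodes (unweighted graph)."""
    fire_time = {source: 0}            # the source spikes at t = 0
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in fire_time:          # neuron has not fired yet
                fire_time[neighbor] = fire_time[node] + 1
                frontier.append(neighbor)
    return fire_time

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(spike_wavefront_distances(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```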
Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network and what the quality of random number generation should be. This research aimed to start addressing these two fundamental questions by examining how basic “off the shelf” neural networks can be sampled through a few different mechanisms (including synapse “dropout” and neuron “dropout”), and by examining how these sampling approaches can be evaluated, both in terms of algorithm effectiveness and in terms of the required quality of random numbers.
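One of the sampling mechanisms mentioned above, neuron "dropout" applied at inference time, can be sketched in a few lines of NumPy; the weights below are random stand-ins for a trained network, and the mechanism shown is the generic Monte Carlo dropout idea rather than the study's exact protocol.

```python
# Sketch of Monte Carlo sampling of a network via neuron "dropout" at
# inference time: repeated stochastic forward passes yield a distribution over
# outputs. The weights are random stand-ins for a trained model.
import numpy as np

rng = np.random.default_rng(3)

W1 = rng.normal(size=(4, 64))
W2 = rng.normal(size=(64, 1))

def sampled_forward(x, keep_prob=0.9):
    h = np.maximum(0.0, x @ W1)              # ReLU hidden layer
    mask = rng.random(h.shape) < keep_prob   # neuron dropout mask
    h = h * mask / keep_prob                 # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 4))
samples = np.concatenate([sampled_forward(x) for _ in range(500)])
print("predictive mean:", samples.mean(), "predictive std:", samples.std())
```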
Annual ACM Symposium on Parallelism in Algorithms and Architectures
We present a theoretical framework for designing and assessing the performance of algorithms executing in networks consisting of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is challenging but paramount. We prove a polynomial-factor advantage even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.
In this presentation we will discuss recent results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia Labs-developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate on the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may have eventual application in remote sensing applications where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts mapping software-implemented networks to on-hardware instantiations.
Proceedings - 2021 International Conference on Rebooting Computing, ICRC 2021
Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused upon how to make these operations more efficient as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on-and-off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures where these basic operations are not fundamental low-level operations. Instead, these functions require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.
Advanced Intelligent Systems
Neuromorphic computing is a critical future technology for the computing industry, but it has yet to achieve its promise and has struggled to establish a cohesive research community. A large part of the challenge is that full realization of the potential of brain inspiration requires advances in device hardware, computing architectures, and algorithms alike. This simultaneous development across technology scales is unprecedented in the computing field. This article presents a strategy, framed by market and policy pressures, for moving past these current technological and cultural hurdles and realizing the full impact of neuromorphic computing across technology. Achieving the full potential of brain-derived algorithms as well as post-complementary metal-oxide-semiconductor (CMOS) scaling neuromorphic hardware requires appropriately balancing the near-term opportunities of deep learning applications with the long-term potential of less understood opportunities in neural computing.
Computing stands to be radically improved by neuromorphic computing (NMC) approaches inspired by the brain's incredible efficiency and capabilities. Most NMC research, which aims to replicate the brain's computational structure and architecture in man-made hardware, has focused on artificial intelligence; however, less explored is whether this brain-inspired hardware can provide value beyond cognitive tasks. We demonstrate that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. Such random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Additionally, we show how the mathematical basis for a probabilistic solution involving a class of stochastic differential equations can leverage those simulations to provide solutions for a range of broadly applicable computational tasks. Despite being in an early development stage, we find that NMC platforms, at a sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.
Neuromorphic architectures have seen a resurgence of interest in the past decade owing to 100x-1000x efficiency gains over conventional von Neumann architectures. Digital neuromorphic chips like Intel's Loihi have shown efficiency gains compared to GPUs and CPUs and can be scaled to build larger systems. Analog neuromorphic architectures promise even further improvements in energy efficiency, area, and latency over their digital counterparts. Neuromorphic analog and digital technologies provide both low-power and configurable acceleration of challenging artificial intelligence (AI) algorithms. We present a hybrid analog-digital neuromorphic architecture that can amplify the advantages of both high-density analog memory and spike-based digital communication while mitigating the limitations of each approach.
ACM International Conference Proceeding Series
The widely parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused on primarily machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage the parallel and event-driven structure to solve a steady state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.
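The random walk method referred to here is the classic Monte Carlo treatment of the Laplace equation: the steady-state temperature at an interior grid point equals the expected boundary temperature reached by an unbiased walk started at that point. The plain-Python sketch below illustrates that idea on a small square domain (the geometry and boundary values are illustrative, and this is not the spiking implementation from the paper).

```python
# Plain-Python sketch of the random-walk method for a steady-state heat
# (Laplace) problem: the temperature at an interior grid point equals the
# expected boundary temperature hit by an unbiased random walk started there.
import random

random.seed(4)
N = 10                              # grid points (0..N) in each direction

def boundary_temp(x, y):
    return 100.0 if x == 0 else 0.0   # one hot wall, the rest held at zero

def estimate_temperature(i, j, walks=5000):
    total = 0.0
    for _ in range(walks):
        x, y = i, j
        while 0 < x < N and 0 < y < N:          # walk until the boundary is hit
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary_temp(x, y)
    return total / walks

print(estimate_temperature(5, 5))   # roughly 25 at the center of this geometry
```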
Frontiers in Computational Neuroscience
Historically, neuroscience principles have heavily influenced artificial intelligence (AI), for example the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, for example the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.
Neuromorphic computing is known for its integration of algorithms and hardware elements that are inspired by the brain. Conventionally, this nontraditional method of computing is used for many neural or learning inspired applications. Unfortunately, this has resulted in the field of neuromorphic computing being relatively narrow in scope. In this paper we discuss two research areas actively trying to widen the impact of neuromorphic systems. The first is Fugu, a high-level programming interface designed to bridge the gap between general computer scientists and those who specialize in neuromorphic areas. The second aims to map classical scientific computing problems onto these frameworks through the example of random walks. This elucidates a class of scientific applications that are conducive to neuromorphic algorithms.
This research aims to develop brain-inspired solutions for reliable and adaptive autonomous navigation in systems that have limited internal and external sensors and may not have access to reliable GPS information. The algorithms investigated and developed by this project were studied in the context of Sandia's A4H (autonomy for hypersonics) mission campaign. These algorithms were additionally explored with respect to their suitability for implementation on emerging neuromorphic computing hardware technology. This project is premised on the hypothesis that brain-inspired SLAM (simultaneous localization and mapping) algorithms may provide an energy-efficient, context-flexible approach to robust sensor-based, real-time navigation.
ACM International Conference Proceeding Series
Neuromorphic hardware architectures represent a growing family of potential post-Moore's Law Era platforms. Largely due to event-driven processing inspired by the human brain, these computer platforms can offer significant energy benefits compared to traditional von Neumann processors. Unfortunately, there remains considerable difficulty in successfully programming, configuring, and deploying neuromorphic systems. We present the Fugu framework as an answer to this need. Rather than necessitating that a developer attain intricate knowledge of how to program and exploit spiking neural dynamics to utilize the potential benefits of neuromorphic computing, Fugu is designed to provide a higher-level abstraction as a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from a variety of sources. Individual kernels linked together provide sophisticated processing through compositionality. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Ultimately, we hope the community adopts this and other open standardization attempts, allowing for free exchange and easy implementation of the ever-growing list of spiking neural algorithms.
Communications of the ACM
Abstract not provided.
Nature Electronics
A hybrid analogue–digital computing system based on memristive devices is capable of solving classic control problems with potentially a lower energy consumption and higher speed than fully digital systems.
IEEE Access
Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. The experimental results show that at least an order of magnitude improvement in acquisition time can be achieved when the data are sparse. In addition, we show that the energy cost associated with our approach is competitive to that of the sequential method.
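The acquisition idea can be sketched with a generic compressed-sensing stand-in: take a small number of random weighted-sum measurements of the stored vector (the kind of summation a crossbar can produce in one read) and recover the sparse contents with an off-the-shelf sparse solver. The measurement matrix, sparsity level, and solver below are assumptions for illustration, not the paper's exact protocol.

```python
# Generic compressed-sensing sketch of the sparse-sampling idea: m random
# summation measurements y = A x followed by sparse recovery, instead of
# reading all n cells sequentially. Not the paper's exact measurement scheme.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)

n, k, m = 256, 8, 64                 # array size, stored nonzeros, measurements
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement weights
y = A @ x                                  # m "summation kernel" reads

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
print("max recovery error:", np.max(np.abs(omp.coef_ - x)))
```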
Proceedings - 2017 International Conference on Computational Science and Computational Intelligence, CSCI 2017
A forensics investigation after a breach often uncovers network and host indicators of compromise (IOCs) that can be deployed to sensors to allow early detection of the adversary in the future. Over time, the adversary will change tactics, techniques, and procedures (TTPs), which will also change the data generated. If the IOCs are not kept up-to-date with the adversary's new TTPs, the adversary will no longer be detected once all of the IOCs become invalid. Tracking the Known (TTK) is the problem of keeping IOCs, in this case regular expressions (regexes), up-to-date with a dynamic adversary. Our framework solves the TTK problem in an automated, cyclic fashion to bracket a previously discovered adversary. This tracking is accomplished through a data-driven approach of self-adapting a given model based on its own detection capabilities. In our initial experiments, we found that the true positive rate (TPR) of the adaptive solution degrades much less significantly over time than the naïve solution, suggesting that self-updating the model allows the continued detection of positives (i.e., adversaries). The cost for this performance is in the false positive rate (FPR), which increases over time for the adaptive solution, but remains constant for the naïve solution. However, the difference in overall detection performance, as measured by the area under the curve (AUC), between the two methods is negligible. This result suggests that self-updating the model over time should be done in practice to continue to detect known, evolving adversaries.
IEEE Transactions on Information Forensics and Security
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
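A rough sketch of this feature-extraction pipeline is given below using scikit-learn's dictionary learning on byte n-grams; the synthetic "fragments", n-gram size, pooling, and classifier are illustrative assumptions rather than the paper's exact configuration.

```python
# Rough sketch of the pipeline idea: learn a sparse dictionary over byte
# n-grams, encode each fragment as a pooled sparse code, and train a standard
# classifier on those features. Synthetic fragments stand in for real files.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
NGRAM = 8

def ngrams(fragment):
    """Sliding windows of NGRAM bytes, scaled to [0, 1]."""
    return np.array([fragment[i:i + NGRAM]
                     for i in range(len(fragment) - NGRAM + 1)]) / 255.0

# Two synthetic "file types": low-valued text-like bytes vs. full-range bytes.
def make_fragment(label, size=512):
    hi = 128 if label == 0 else 256
    return rng.integers(0, hi, size=size)

fragments = [make_fragment(lbl) for lbl in (0, 1) for _ in range(40)]
labels = [0] * 40 + [1] * 40

dictionary = MiniBatchDictionaryLearning(n_components=32,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=4,
                                         random_state=0)
dictionary.fit(np.vstack([ngrams(f) for f in fragments[::4]]))  # fit on a subsample

# Each fragment's feature vector: mean absolute sparse code over its n-grams.
features = np.array([np.abs(dictionary.transform(ngrams(f))).mean(axis=0)
                     for f in fragments])

clf = LinearSVC(dual=False).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```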
Neural Computation
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum or minimum or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
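The time-as-value coding behind the sorting example can be conveyed with a tiny simulation: each input value becomes the time step at which its neuron fires, so reading off the firing order returns the values in ascending order. The sketch below is a plain-Python illustration of that coding idea, not the paper's circuit constructions.

```python
# Minimal sketch of the time-as-value intuition behind a spiking sort: each
# input is encoded as the time step at which its neuron fires, so recording
# the firing order returns the values in ascending order.
def spiking_sort(values):
    """Sort non-negative integers by simulating one spike per value."""
    spike_times = {i: v for i, v in enumerate(values)}   # neuron i fires at t = value
    fired, t = [], 0
    while len(fired) < len(values):
        for neuron, when in spike_times.items():
            if when == t:
                fired.append(values[neuron])              # record in firing order
        t += 1
    return fired

print(spiking_sort([7, 2, 5, 2, 9, 0]))   # [0, 2, 2, 5, 7, 9]
```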
Neuromorphic computing holds much promise for the future of computing due to its energy-efficient and scalable implementation. Here we extend a neural algorithm that is able to solve the diffusion equation PDE by implementing random walks on neuromorphic hardware. Additionally, we introduce four random walk applications that use this spiking neural algorithm. The four applications currently implemented are: generating a random walk to replicate an image, finding a path between two nodes, finding triangles in a graph, and partitioning a graph into two sections. We then made these four applications available in software through a graphical user interface (GUI).
The rise of low-power neuromorphic hardware has the potential to change high-performance computing; however much of the focus on brain-inspired hardware has been on machine learning applications. A low-power solution for solving partial differential equations could radically change how we approach large-scale computing in the future. The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by grid cells in the brain. The second method tracks the densities of random walkers at each spatial location directly. We present the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions. Finally, we present implementations of these algorithms on neuromorphic hardware.
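The second, density-tracking method amounts to repeatedly applying the transition matrix of the walk to an occupancy vector. A minimal NumPy sketch of that view, on a one-dimensional lattice whose boundary treatment is an illustrative choice rather than the paper's setup, is shown below.

```python
# Sketch of the density-tracking view: instead of following individual
# walkers, propagate the full occupancy distribution by repeatedly applying
# the transition matrix of the walk. Plain NumPy, not the neuromorphic mapping.
import numpy as np

n = 51                                   # 1-D lattice sites
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5           # step left; mass that would leave the lattice stays put
    P[i, min(i + 1, n - 1)] += 0.5       # step right, same boundary treatment

density = np.zeros(n)
density[n // 2] = 1.0                    # all walkers start at the center

for _ in range(200):
    density = density @ P                # one Markov-chain time step

spread = np.sqrt(((np.arange(n) - n // 2) ** 2 * density).sum())
print("total mass:", density.sum(), " spread (std):", spread)
```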
Proceedings of the IEEE Conference on Decision and Control
This paper formulates general computation as a feedback-control problem, which allows the agent to autonomously overcome some limitations of standard procedural language programming: resilience to errors and early program termination. Our formulation considers computation to be trajectory generation in the program's variable space. Computing then becomes a sequential decision making problem, solved with reinforcement learning (RL), and analyzed with Lyapunov stability theory to assess the agent's resilience and progression to the goal. We do this through a case study on a quintessential computer science problem, array sorting. Evaluations show that our RL sorting agent makes steady progress to an asymptotically stable goal, is resilient to faulty components, and performs fewer array manipulations than traditional Quicksort and Bubble sort.
Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
Malware detection and remediation is an on-going task for computer security and IT professionals. Here, we examine the use of neural algorithms to detect malware using the system calls generated by executables, alleviating attempts at obfuscation, as the behavior is monitored. We examine several deep learning techniques and liquid state machines, baselined against a random forest. The experiments examine the effects of concept drift to understand how well the algorithms generalize to novel malware samples by testing them on data that was collected after the training data. The results suggest that each of the examined machine learning algorithms is a viable solution to detect malware, achieving between 90% and 95% class-averaged accuracy (CAA). In real-world scenarios, the performance evaluation on an operational network may not match the performance achieved in training. Namely, the CAA may be about the same, but the values for precision and recall over the malware can change significantly. We structure experiments to highlight these caveats and offer insights into expected performance in operational environments. In addition, we use the induced models to better understand what differentiates malware samples from goodware, which can further be used as a forensics tool to provide directions for investigation and remediation.
Springer Proceedings in Complexity
Anomaly detection is an important problem in various fields of complex systems research including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with “Big Data”. In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.
2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings
Unlike general purpose computer architectures that are comprised of complex processor cores and sequential computation, the brain is innately parallel and contains highly complex connections between computational units (neurons). Key to the architecture of the brain is a functionality enabled by the combined effect of spiking communication and sparse connectivity with unique variable efficacies and temporal latencies. Utilizing these neuroscience principles, we have developed the Spiking Temporal Processing Unit (STPU) architecture which is well-suited for areas such as pattern recognition and natural language processing. In this paper, we formally describe the STPU, implement the STPU on a field programmable gate array, and show measured performance data.
ACM International Conference Proceeding Series
In 2016, Lewis Rhodes Labs (LRL) shipped the first commercially viable Neuromorphic Processing Unit (NPU), branded as a Neuromorphic Data Microscope (NDM). This product leverages architectural mechanisms derived from the sensory cortex of the human brain to efficiently implement pattern matching. LRL and Sandia National Labs have optimized this product for streaming analytics and demonstrated a 1,000x power-per-operation reduction in an FPGA format. When reduced to an ASIC, the efficiency will improve to 1,000,000x. Additionally, the neuromorphic nature of the device gives it powerful computational attributes that are counterintuitive to those schooled in traditional von Neumann architectures. The Neuromorphic Data Microscope is the first of a broad class of brain-inspired, time domain processors that will profoundly alter the functionality and economics of data processing.
Proceedings of the International Joint Conference on Neural Networks
Considerable effort is currently being spent designing neuromorphic hardware for addressing challenging problems in a variety of pattern-matching applications. These neuromorphic systems offer low power architectures with intrinsically parallel and simple spiking neuron processing elements. Unfortunately, these new hardware architectures have been largely developed without a clear justification for using spiking neurons to compute quantities for problems of interest. Specifically, the use of spiking for encoding information in time has not been explored theoretically with complexity analysis to examine the operating conditions under which neuromorphic computing provides a computational advantage (time, space, power, etc.). In this paper, we present and formally analyze the use of temporal coding in a neural-inspired algorithm for optimization-based computation in neural spiking architectures.
Proceedings of the International Joint Conference on Neural Networks
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
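The structural operation at the heart of this approach, growing a hidden layer while leaving existing weights untouched, can be sketched directly; the layer sizes and small-initialization choice below are illustrative assumptions, and the paper's actual training protocol for the new neurons is not reproduced.

```python
# Structural sketch of the neurogenesis idea: grow a hidden layer by appending
# new neurons whose incoming/outgoing weights start small, while existing
# weights are left untouched so previously learned representations persist.
import numpy as np

rng = np.random.default_rng(7)

def add_neurons(W_in, W_out, n_new, init_scale=0.01):
    """W_in: (inputs, hidden), W_out: (hidden, outputs). Returns widened copies."""
    new_in = rng.normal(scale=init_scale, size=(W_in.shape[0], n_new))
    new_out = rng.normal(scale=init_scale, size=(n_new, W_out.shape[1]))
    return np.hstack([W_in, new_in]), np.vstack([W_out, new_out])

W1 = rng.normal(size=(784, 100))     # e.g. an MNIST-sized hidden layer
W2 = rng.normal(size=(100, 10))
W1_grown, W2_grown = add_neurons(W1, W2, n_new=20)

print(W1_grown.shape, W2_grown.shape)            # (784, 120) (120, 10)
print(np.array_equal(W1_grown[:, :100], W1))     # original weights preserved: True
```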
R&D Magazine
Abstract not provided.
bioRxiv
Abstract not provided.
Biologically Inspired Cognitive Architectures
Biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here, we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. In addition, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
Neural Computation
The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation: similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus's (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Finally, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
For decades, neural networks have shown promise for next-generation computing, and recent breakthroughs in machine learning techniques, such as deep neural networks, have provided state-of-the-art solutions for inference problems. However, these networks require thousands of training processes and are poorly suited for the precise computations required in scientific or similar arenas. The emergence of dedicated spiking neuromorphic hardware creates a powerful computational paradigm which can be leveraged towards these exact scientific or otherwise objective computing tasks. We forego any learning process and instead construct the network graph by hand. In turn, the networks produce guaranteed success often with easily computable complexity. We demonstrate a number of algorithms exemplifying concepts central to spiking networks including spike timing and synaptic delay. We also discuss the application of cross-correlation particle image velocimetry and provide two spiking algorithms; one uses time-division multiplexing, and the other runs in constant time.
Neuron
Opportunities offered by new neuro-technologies are threatened by lack of coherent plans to analyze, manage, and understand the data. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings of the International Joint Conference on Neural Networks
Through various means of structural and synaptic plasticity enabling online learning, neural networks are constantly reconfiguring their computational functionality. Neural information content is embodied within the configurations, representations, and computations of neural networks. To explore this content, we have developed metrics and computational paradigms for quantifying it. We have observed that conventional compression methods may help overcome some of the limiting factors of standard information-theoretic techniques employed in neuroscience, and allow us to approximate the information in neural data. To do so, we use compressibility as a measure of complexity, estimating entropy to quantitatively assess the information content of neural ensembles. Using Lempel-Ziv compression, we assess the rate at which new patterns are generated in a neural ensemble's firing activity over time, approximating the information content encoded by a neural circuit. As a specific case study, we have been investigating the effect of mixed neural coding schemes arising from hippocampal adult neurogenesis.
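As a hedged sketch of this kind of analysis (the parsing variant and binarization choices here are assumptions, not the paper's exact procedure), a Lempel-Ziv-style phrase count over a binarized ensemble spike raster gives a simple proxy for the rate at which new population patterns appear:

```python
import numpy as np

# Illustrative sketch: Lempel-Ziv 76-style complexity of a binarized ensemble
# spike raster as a proxy for the rate at which new firing patterns appear,
# and hence for entropy / information content. Details are assumptions.
def lz_complexity(s):
    """Count the number of new phrases in binary string s (LZ76-style parsing)."""
    phrases, i = 0, 0
    while i < len(s):
        length = 1
        # Grow the phrase until it no longer appears earlier in the string.
        while i + length <= len(s) and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

rng = np.random.default_rng(0)
raster = (rng.random((16, 200)) < 0.1).astype(int)    # 16 neurons, 200 time bins
bits = ''.join(raster.flatten(order='F').astype(str)) # concatenate population patterns over time
print("LZ phrase count:", lz_complexity(bits))
```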
Abstract not provided.
Abstract not provided.
Proceedings of the National Academy of Sciences of the United States of America
Rewarding experiences are often well remembered, and such memory formation is known to be dependent on dopamine modulation of the neural substrates engaged in learning and memory; however, it is unknown how and where in the brain dopamine signals bias episodic memory toward preceding rather than subsequent events. Here we found that photostimulation of channelrhodopsin-2-expressing dopaminergic fibers in the dentate gyrus induced a long-term depression of cortical inputs, diminished theta oscillations, and impaired subsequent contextual learning. Computational modeling based on this dopamine modulation indicated an asymmetric association of events occurring before and after reward in memory tasks. In subsequent behavioral experiments, preexposure to a natural reward suppressed hippocampus-dependent memory formation, with an effective time window consistent with the duration of dopamine-induced changes of dentate activity. Overall, our results suggest a mechanism by which dopamine enables the hippocampus to encode memory with reduced interference from subsequent experience.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Nature Communications
Persistent neurogenesis in the dentate gyrus produces immature neurons with high intrinsic excitability and low levels of inhibition that are predicted to be more broadly responsive to afferent activity than mature neurons. Mounting evidence suggests that these immature neurons are necessary for generating distinct neural representations of similar contexts, but it is unclear how broadly responsive neurons help distinguish between similar patterns of afferent activity. Here we show that stimulation of the entorhinal cortex in mouse brain slices paradoxically generates spiking of mature neurons in the absence of immature neuron spiking. Immature neurons with high intrinsic excitability fail to spike due to insufficient excitatory drive that results from low innervation rather than silent synapses or low release probability. Our results suggest that low synaptic connectivity prevents immature neurons from responding broadly to cortical activity, potentially enabling excitable immature neurons to contribute to sparse and orthogonal dentate representations.
Cold Spring Harbor Perspectives in Biology
The restriction of adult neurogenesis to only a handful of regions of the brain is suggestive of some shared requirement for this dramatic form of structural plasticity. However, a common driver across neurogenic regions has not yet been identified. Computational studies have been invaluable in providing insight into the functional role of new neurons; however, researchers have typically focused on specific scales ranging from abstract neural networks to specific neural systems, most commonly the dentate gyrus area of the hippocampus. These studies have yielded a number of diverse potential functions for new neurons, ranging from an impact on pattern separation to the incorporation of time into episodic memories to enabling the forgetting of old information. This review will summarize these past computational efforts and discuss whether these proposed theoretical functions can be unified into a common rationale for why neurogenesis is required in these unique neural circuits.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Frontiers in Neuroscience
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
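In plain numpy, the two crossbar kernels the abstract refers to look like the sketch below: a parallel read corresponds to a vector-matrix multiply, and a parallel write to a rank-1 outer-product update of the conductance matrix. The array size, conductance range, and update rule are illustrative assumptions, not a hardware model.

```python
import numpy as np

# Sketch of the two crossbar kernels: a parallel read is a vector-matrix
# multiply, a parallel write is a rank-1 conductance update. Sizes and the
# update rule shown here are illustrative.
rng = np.random.default_rng(0)
N = 64
G = rng.uniform(0.0, 1.0, size=(N, N))   # crossbar conductance matrix

def parallel_read(v):
    """Apply input voltages to rows; column currents give G^T v in one step."""
    return G.T @ v

def parallel_write(x, y, lr=1e-2):
    """Rank-1 update: every device (i, j) is nudged by lr * x_i * y_j at once."""
    global G
    G = np.clip(G + lr * np.outer(x, y), 0.0, 1.0)   # keep conductances in range

v = rng.uniform(size=N)
i_out = parallel_read(v)
parallel_write(v, i_out)       # e.g., a Hebbian-style outer-product update
```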
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.
Abstract not provided.
2015 4th Berkeley Symposium on Energy Efficient Electronic Systems E3s 2015 Proceedings
As transistors start to approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data [1, 2]. One of the most promising applications for RRAM crossbars is brain-inspired or neuromorphic computing [3, 4].
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings of the International Joint Conference on Neural Networks
Some next generation computing devices may consist of resistive memory arranged as a crossbar. Currently, the dominant approach is to use crossbars as the weight matrix of a neural network, and to use learning algorithms that require small incremental weight updates, such as gradient descent (for example Backpropagation). Using real-world measurements, we demonstrate that resistive memory devices are unlikely to support such learning methods. As an alternative, we offer a random search algorithm tailored to the measured characteristics of our devices.
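A hedged sketch of the kind of random search the abstract describes (the toy task, perturbation scale, and acceptance rule here are assumptions): rather than taking small gradient steps, propose a random perturbation of the weights and keep it only if it reduces the loss, avoiding reliance on tiny incremental device updates.

```python
import numpy as np

# Hedged sketch: random-search training as an alternative to gradient descent.
# Coarse, accept-if-better updates are easier to realize on devices that
# cannot reliably apply small incremental conductance changes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)       # toy binary task

def loss(W):
    p = 1.0 / (1.0 + np.exp(-(X @ W)))
    return np.mean((p - y) ** 2)

W = rng.normal(size=8)
best = loss(W)
for _ in range(2000):
    cand = W + rng.normal(scale=0.2, size=8)          # coarse random perturbation
    if (c := loss(cand)) < best:                      # accept only improvements
        W, best = cand, c
print(f"final loss: {best:.4f}")
```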
Proceedings of the International Joint Conference on Neural Networks
The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task that it was not explicitly trained for. An added challenge arises when the problem domain is dynamic or non-stationary with the data distributions or categorizations changing over time. This phenomenon is known as concept drift. Game-theoretic algorithms are often iterative by nature, consisting of repeated game play rather than a single interaction. Effectively, rather than requiring extensive retraining to update a learning model, a game-theoretic approach can adjust strategies as a novel approach to concept drift. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in an adaptive manner with repeated play to address concept drift, and show results of applying this algorithm to synthetic as well as real data.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact to the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Procedia Computer Science
Recent cyber security events have demonstrated the need for algorithms that adapt to the rapidly evolving threat landscape of complex network systems. In particular, human analysts often fail to identify data exfiltration when it is encrypted or disguised as innocuous data. Signature-based approaches for identifying data types are easily fooled and analysts can only investigate a small fraction of network events. However, neural networks can learn to identify subtle patterns in a suitably chosen input space. To this end, we have developed a signal processing approach for classifying data files which readily adapts to new data formats. We evaluate the performance for three input spaces consisting of the power spectral density, byte probability distribution and sliding-window entropy of the byte sequence in a file. By combining all three, we trained a deep neural network to discriminate amongst nine common data types found on the Internet with 97.4% accuracy.
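The three input spaces can be computed directly from a file's raw byte sequence; the sketch below shows one plausible way to do so, with the window size and FFT length as assumptions (the paper's exact preprocessing and the downstream deep network are not reproduced).

```python
import numpy as np

# Sketch of the three input spaces described in the abstract, computed from a
# file's raw byte sequence; window size and FFT length are assumptions.
def features(data: bytes, window=256, n_fft=1024):
    x = np.frombuffer(data, dtype=np.uint8).astype(float)

    # 1) Power spectral density of the (mean-removed) byte sequence.
    psd = np.abs(np.fft.rfft(x - x.mean(), n=n_fft)) ** 2

    # 2) Byte probability distribution (normalized histogram over 0..255).
    byte_dist = np.bincount(x.astype(int), minlength=256) / len(x)

    # 3) Sliding-window entropy of the byte stream.
    entropies = []
    for start in range(0, len(x) - window + 1, window):
        p = np.bincount(x[start:start + window].astype(int), minlength=256) / window
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log2(p)))

    return psd, byte_dist, np.array(entropies)

# These vectors would then feed a classifier (the paper uses a deep neural
# network; that part is omitted here).
demo = np.random.default_rng(0).integers(0, 256, 4096, dtype=np.uint8).tobytes()
psd, dist, ent = features(demo)
print(psd.shape, dist.shape, ent.shape)
```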
Procedia Computer Science
Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection have outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and imposes less stringent data-passing constraints on the problem decomposition. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited to the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm (see the sketch below).
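As a hedged illustration of the map/reduce decomposition (the actual SVM Game update is not reproduced), the sketch below trains an independent stand-in linear model on each data partition in the map step and recombines the partial models by simple averaging in the reduce step; both the stand-in model and the averaging rule are assumptions chosen for brevity.

```python
from functools import reduce
import numpy as np

# Hedged sketch of the map/reduce decomposition. Each "map" trains a toy
# least-squares linear model on its data partition; "reduce" averages the
# partial models, standing in for the algorithm's actual recombination rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]))

def map_fn(part):
    """Train independently on one partition (least-squares stand-in)."""
    Xp, yp = part
    w, *_ = np.linalg.lstsq(Xp, yp, rcond=None)
    return w

def reduce_fn(acc, w):
    """Recombine partial models; a running sum that is averaged below."""
    return acc + w

parts = [(X[i::4], y[i::4]) for i in range(4)]                 # 4 independent partitions
w = reduce(reduce_fn, map(map_fn, parts)) / len(parts)         # simple model averaging
print("agreement:", np.mean(np.sign(X @ w) == y))
```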
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physiological Reviews
Adult neurogenesis in the hippocampus is a notable process due not only to its uniqueness and potential impact on cognition but also to its localized vertical integration of different scales of neuroscience, ranging from molecular and cellular biology to behavior. Our review summarizes the recent research regarding the process of adult neurogenesis from these different perspectives, with particular emphasis on the differentiation and development of new neurons, the regulation of the process by extrinsic and intrinsic factors, and their ultimate function in the hippocampus circuit. Arising from a local neural stem cell population, new neurons progress through several stages of maturation, ultimately integrating into the adult dentate gyrus network. Furthermore, the increased appreciation of the full neurogenesis process, from genes and cells to behavior and cognition, makes neurogenesis both a unique case study for how scales in neuroscience can link together and suggests neurogenesis as a potential target for therapeutic intervention for a number of disorders.
This report discusses aspects of neuromorphic computing and how it is used to model microsystems.
Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic-scale simulations of the neurogenesis process can both offer a unique perspective on the biological relevance of this process and confer computational insights suggestive of novel machine learning techniques. First, supercomputer-based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons has a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of the broader algorithmic impact of neurogenesis, including both extensions of existing machine learning approaches and novel approaches for intelligent sensing.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
ECS Transactions
Resistive random access memory (ReRAM), or memristors, may be capable of significantly improving the efficiency of neuromorphic computing when used as a central component of an analog hardware accelerator. However, the significant electrical variation within a device and between devices degrades the maximum efficiency and accuracy that can be achieved by a ReRAM-based neuromorphic accelerator. In this report, the electrical variability is characterized, with a particular focus on variability due to fundamental, intrinsic factors. Analytical and ab initio models are presented which offer some insight into the factors responsible for this variability.
Abstract not provided.