Publications

Results 76–81 of 81

Wafer-Scale TaOx Device Variability and Implications for Neuromorphic Computing Applications

IEEE International Reliability Physics Symposium Proceedings

Bennett, Christopher H.; Garland, Diana; Jacobs-Gedrim, Robin B.; Agarwal, Sapan A.; Marinella, Matthew J.

Scaling arrays of non-volatile memory devices from academic demonstrations to reliable, manufacturable systems requires a better understanding of variability at array and wafer-scale levels. CrossSim models the accuracy of neural networks implemented on an analog resistive memory accelerator using the cycle-to-cycle variability of a single device. In this work, we extend this modeling tool to account for device-to-device variation in a realistic way, and evaluate the impact of this reliability issue in the context of neuromorphic online learning tasks.
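
A minimal sketch of the distinction between cycle-to-cycle and device-to-device variability in a simple conductance-update model (the array size, conductance range, and noise magnitudes are hypothetical, and the functions are not part of the CrossSim API):

# Illustrative sketch (not the CrossSim API): cycle-to-cycle noise is redrawn
# on every programming pulse, while device-to-device variation is a fixed,
# per-device deviation drawn once (e.g., from wafer-scale statistics).
import numpy as np

rng = np.random.default_rng(0)

N_ROWS, N_COLS = 64, 64          # crossbar dimensions (hypothetical)
G_MIN, G_MAX = 1e-6, 1e-4        # conductance range in siemens (hypothetical)
SIGMA_C2C = 0.02                 # cycle-to-cycle noise, fraction of the range

# Device-to-device variation: each device gets its own fixed update gain.
device_gain = rng.normal(loc=1.0, scale=0.10, size=(N_ROWS, N_COLS))

G = rng.uniform(G_MIN, G_MAX, size=(N_ROWS, N_COLS))   # initial conductances

def program(G, dG_target):
    """Apply one programming step with both C2C noise and fixed D2D gain."""
    c2c_noise = rng.normal(0.0, SIGMA_C2C, size=G.shape) * (G_MAX - G_MIN)
    dG = device_gain * dG_target + c2c_noise
    return np.clip(G + dG, G_MIN, G_MAX)

G = program(G, dG_target=1e-6 * np.ones((N_ROWS, N_COLS)))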

Designing and modeling analog neural network training accelerators

2019 International Symposium on VLSI Technology, Systems and Application, VLSI-TSA 2019

Agarwal, Sapan A.; Jacobs-Gedrim, Robin B.; Bennett, Christopher H.; Hsia, Alexander W.; Van Heukelom, Michael V.; Hughart, David R.; Fuller, Elliot J.; Li, Yiyang; Talin, A.A.; Marinella, Matthew J.

Analog crossbars have the potential to reduce the energy and latency required to train a neural network by three orders of magnitude when compared to an optimized digital ASIC. The crossbar simulator, CrossSim, can be used to model device nonidealities and determine what device properties are needed to create an accurate neural network accelerator. Experimentally measured device statistics are used to simulate neural network training accuracy and compare different classes of devices, including TaOx ReRAM, LixCoO2 devices, and conventional floating-gate SONOS memories. A technique called 'Periodic Carry' can overcome device nonidealities by using a positional number system while maintaining the benefit of parallel analog matrix operations.
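
A rough numerical illustration of the positional-number idea behind 'Periodic Carry' (the base and carry schedule below are assumptions for illustration, not values from the paper): a weight is split across two devices of different significance, and updates accumulated on the low-significance device are periodically carried into the high-significance one.

# Simplified illustration of a positional "periodic carry" weight: all
# fine-grained updates hit the low-significance device, and overflow is
# periodically carried into the high-significance device so each device
# stays within its usable analog range. Constants are hypothetical.
import numpy as np

BASE = 8                     # assumed significance ratio between the devices

class PeriodicCarryWeight:
    def __init__(self):
        self.high = 0.0      # high-significance device state (in "levels")
        self.low = 0.0       # low-significance device state

    def update(self, delta_levels):
        # All fine-grained updates go to the low-significance device.
        self.low += delta_levels

    def carry(self):
        # Move overflow into the high-significance device.
        overflow = np.trunc(self.low / BASE)
        self.high += overflow
        self.low -= overflow * BASE

    @property
    def value(self):
        return self.high * BASE + self.low

w = PeriodicCarryWeight()
for step in range(1, 101):
    w.update(0.3)            # many small updates accumulate on the low device
    if step % 10 == 0:
        w.carry()            # periodic carry keeps each device in range
print(round(w.value, 2))     # ~30.0, i.e., the accumulated update is preserved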

Contrasting Advantages of Learning With Random Weights and Backpropagation in Non-Volatile Memory Neural Networks

IEEE Access

Bennett, Christopher H.; Parmar, Vivek; Calvet, Laurie E.; Klein, Jacques O.; Suri, Manan; Marinella, Matthew J.; Querlioz, Damien

Recently, a Cambrian explosion of novel non-volatile memory (NVM) devices known as memristive devices has inspired efforts to build hardware neural networks that learn like the brain. Early experimental prototypes built simple perceptrons from nanosynapses, and recently, fully-connected multi-layer perceptron (MLP) learning systems have been realized. However, while backpropagating learning systems pair well with high-precision computer memories and achieve state-of-the-art performance, this typically comes with a massive energy budget. For future Internet of Things/peripheral use cases, system energy footprint will be a major constraint, and emerging NVM devices may fill the gap by sacrificing high bit precision for lower energy. In this paper, we contrast the well-known MLP approach with the extreme learning machine (ELM) or NoProp approach, which uses a large layer of random weights to improve the separability of high-dimensional tasks and is usually considered inferior in a software context. However, we find that when device non-linearity is taken into account, NoProp manages to equal the hardware MLP system in terms of accuracy. When a sign-based adaptation of the delta rule is also used for energy savings, we find that NoProp can learn effectively with four to six 'bits' of device analog capacity, while the MLP requires eight-bit capacity with the same rule. This may allow the requirements for memristive devices to be relaxed in the context of online learning. By comparing the energy footprint of these systems for several candidate nanosynapses and realistic peripherals, we confirm that memristive NoProp systems save energy compared with MLP systems. Lastly, we show that ELM/NoProp systems can achieve better generalization abilities than nanosynaptic MLP systems when paired with pre-processing layers (which do not require backpropagated error). Collectively, these advantages make such systems worthy of consideration in future accelerators or embedded hardware.
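
A minimal sketch of the ELM/NoProp scheme contrasted here, assuming illustrative layer sizes, toy input data, and a simple uniform weight quantization (none taken from the paper): a fixed random hidden layer feeds an output layer trained with a sign-based delta rule, so each weight update corresponds to a single one-level programming pulse.

# ELM/NoProp-style sketch: the hidden weights are random and never trained;
# only the output layer learns, using a sign-based delta rule on weights
# quantized to a limited number of analog levels. Sizes and the quantization
# scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 64, 256, 10
BITS = 5                             # assumed device analog capacity
STEP = 2.0 / (2 ** BITS)             # one conductance "level" in [-1, 1]

W_rand = rng.normal(0, 1 / np.sqrt(N_IN), (N_IN, N_HID))  # fixed random layer
W_out = np.zeros((N_HID, N_OUT))                           # trained, quantized

def forward(x):
    h = np.tanh(x @ W_rand)          # random projection improves separability
    return h, h @ W_out

def train_step(x, target):
    global W_out
    h, y = forward(x)
    err = target - y
    # Sign-based delta rule: each weight moves by exactly one level in the
    # direction of the error gradient's sign (one programming pulse).
    W_out += STEP * np.sign(np.outer(h, err))
    W_out = np.clip(W_out, -1.0, 1.0)

# Toy usage with random data standing in for a real task.
x = rng.normal(size=N_IN)
t = np.eye(N_OUT)[3]
for _ in range(10):
    train_step(x, t)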

Semi-supervised learning and inference in domain-wall magnetic tunnel junction (DW-MTJ) neural networks

Proceedings of SPIE - The International Society for Optical Engineering

Bennett, Christopher H.; Hassan, Naimul; Hu, Xuan; Incorvia, Jean A.C.; Friedman, Joseph S.; Marinella, Matthew J.

Advances in machine intelligence have sparked interest in hardware accelerators to implement these algorithms, yet embedded electronics have stringent power, area, and speed requirements that may limit non-volatile memory (NVM) integration. In this context, the development of fast nanomagnetic neural networks using minimal training data is attractive. Here, we extend an inference-only proposal that uses the intrinsic physics of domain-wall MTJ (DW-MTJ) neurons for online learning to implement fully unsupervised pattern recognition, using winner-take-all networks that contain either random or plastic synapses (weights). Meanwhile, a read-out layer trains in a supervised fashion. We find that our proposed design can approach state-of-the-art success on the task relative to competing memristive neural network proposals, while eliminating much of the area and energy overhead that would typically be required to build the neuronal layers with CMOS devices.
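
The training scheme can be sketched as follows, with network sizes, learning rates, and input data chosen purely for illustration (the sketch does not model DW-MTJ device physics): an unsupervised winner-take-all layer with plastic synapses feeds a small read-out layer trained with a supervised delta rule.

# Winner-take-all sketch: the WTA layer learns with a local, unsupervised
# rule (only the winning neuron's synapses adapt), while the read-out layer
# is trained with a supervised delta rule and no backpropagation through
# the WTA layer. All constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_WTA, N_CLASSES = 64, 16, 4
LR_WTA, LR_OUT = 0.05, 0.01

W_wta = rng.uniform(0, 1, (N_IN, N_WTA))   # plastic synapses
W_out = np.zeros((N_WTA, N_CLASSES))       # supervised read-out weights

def wta_forward(x):
    act = x @ W_wta
    winner = np.argmax(act)                # lateral inhibition: one winner
    onehot = np.zeros(N_WTA)
    onehot[winner] = 1.0
    return winner, onehot

def train(x, label):
    winner, h = wta_forward(x)
    # Unsupervised, local update: the winner's synapses move toward the input.
    W_wta[:, winner] += LR_WTA * (x - W_wta[:, winner])
    # Supervised delta rule on the read-out layer only.
    y = h @ W_out
    target = np.eye(N_CLASSES)[label]
    W_out += LR_OUT * np.outer(h, target - y)

# Toy usage with random patterns standing in for image data.
for _ in range(100):
    label = rng.integers(N_CLASSES)
    x = np.clip(rng.normal(loc=label / N_CLASSES, scale=0.1, size=N_IN), 0, 1)
    train(x, label)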

Results 76–81 of 81