Publications

Results 26–50 of 81

Ionizing Radiation Effects in SONOS-Based Neuromorphic Inference Accelerators

IEEE Transactions on Nuclear Science

Xiao, Tianyao X.; Bennett, Christopher H.; Agarwal, Sapan A.; Hughart, David R.; Barnaby, Hugh J.; Puchner, Helmut; Prabhakar, Venkatraman; Talin, A.A.; Marinella, Matthew J.

We evaluate the sensitivity of neuromorphic inference accelerators based on silicon-oxide-nitride-oxide-silicon (SONOS) charge trap memory arrays to total ionizing dose (TID) effects. Data retention statistics were collected for 16 Mbit of 40-nm SONOS digital memory exposed to ionizing radiation from a Co-60 source, showing good retention of the bits up to the maximum dose of 500 krad(Si). Using this data, we formulate a rate-equation-based model for the TID response of trapped charge carriers in the ONO stack and predict the effect of TID on intermediate device states between 'program' and 'erase.' This model is then used to simulate arrays of low-power, analog SONOS devices that store 8-bit neural network weights and support in situ matrix-vector multiplication. We evaluate the accuracy of the irradiated SONOS-based inference accelerator on two image recognition tasks - CIFAR-10 and the challenging ImageNet data set - using state-of-the-art convolutional neural networks, such as ResNet-50. We find that across the data sets and neural networks evaluated, the accelerator tolerates a maximum TID between 10 and 100 krad(Si), with deeper networks being more susceptible to accuracy losses due to TID.

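The dose response described above can be pictured with a minimal rate-equation sketch (a toy model, not the paper's calibrated one): assume the trapped charge that encodes an analog state empties at a rate proportional to accumulated dose, so every intermediate level between 'erase' and 'program' decays exponentially with TID. The rate constant and the 8-bit mapping below are illustrative assumptions.

```python
# Toy first-order rate equation (not the paper's calibrated model): trapped charge n
# decays with total ionizing dose D as dn/dD = -k*n, so n(D) = n0 * exp(-k*D).
# Applied to 8-bit analog weight levels stored between 'erase' (n = 0) and 'program' (n = 1).
import numpy as np

K_PER_KRAD = 0.01   # assumed detrapping rate constant, per krad(Si)


def trapped_charge_vs_dose(n0, dose_krad, k=K_PER_KRAD):
    """Closed-form solution of dn/dD = -k*n."""
    return n0 * np.exp(-k * dose_krad)


def weight_after_dose(w_int8, dose_krad):
    """Map a signed 8-bit weight to a normalized charge state, decay it, re-quantize."""
    n0 = (w_int8 + 128) / 255.0      # assumed linear mapping: -128 -> erase, 127 -> program
    n = trapped_charge_vs_dose(n0, dose_krad)
    return int(round(n * 255.0)) - 128


if __name__ == "__main__":
    for dose in (0, 10, 100, 500):   # krad(Si), spanning the doses studied
        print(dose, [weight_after_dose(w, dose) for w in (-128, -64, 0, 64, 127)])
```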

Heavy-Ion-Induced Displacement Damage Effects in Magnetic Tunnel Junctions with Perpendicular Anisotropy

IEEE Transactions on Nuclear Science

Xiao, Tianyao X.; Bennett, Christopher H.; Mancoff, Frederick B.; Manuel, Jack E.; Hughart, David R.; Jacobs-Gedrim, Robin B.; Bielejec, Edward S.; Vizkelethy, Gyorgy V.; Sun, Jijun; Aggarwal, Sanjeev; Arghavani, Reza A.; Marinella, Matthew J.

We evaluate the resilience of CoFeB/MgO/CoFeB magnetic tunnel junctions (MTJs) with perpendicular magnetic anisotropy (PMA) to displacement damage induced by heavy-ion irradiation. MTJs were exposed to 3-MeV Ta²⁺ ions at different levels of ion beam fluence spanning five orders of magnitude. The devices remained insensitive to beam fluences up to 10¹¹ ions/cm², beyond which a gradual degradation in the device magnetoresistance, coercive magnetic field, and spin-transfer-torque (STT) switching voltage was observed, ending with a complete loss of magnetoresistance at very high levels of displacement damage (>0.035 displacements per atom). The loss of magnetoresistance is attributed to structural damage at the MgO interfaces, which allows electrons to scatter among the propagating modes within the tunnel barrier and reduces the net spin polarization. Ion-induced damage to the interface also reduces the PMA. This study clarifies the displacement damage thresholds that lead to significant irreversible changes in the characteristics of STT magnetic random access memory (STT-MRAM) and elucidates the physical mechanisms underlying the deterioration in device properties.

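For readers unfamiliar with the >0.035 displacements-per-atom figure, dpa is typically estimated from fluence as dpa = fluence x (vacancies per ion per unit depth) / (atomic density). The sketch below works through that conversion with placeholder values; the vacancy yield and atomic density are assumptions, not numbers taken from the paper.

```python
# Rough sketch of converting heavy-ion fluence to displacements per atom (dpa):
# dpa = fluence [ions/cm^2] * vacancies_per_ion_per_cm [1/cm] / atomic_density [atoms/cm^3].
# The vacancy yield below is a placeholder (the kind of number a SRIM run provides),
# not a value taken from the paper.
VAC_PER_ION_PER_ANGSTROM = 0.5   # assumed vacancies per ion per angstrom of depth
VAC_PER_ION_PER_CM = VAC_PER_ION_PER_ANGSTROM * 1.0e8
ATOMIC_DENSITY = 8.5e22          # atoms/cm^3, order of magnitude for a metallic stack


def dpa_from_fluence(fluence_ions_per_cm2):
    """Displacements per atom accumulated at a given ion fluence."""
    return fluence_ions_per_cm2 * VAC_PER_ION_PER_CM / ATOMIC_DENSITY


if __name__ == "__main__":
    for fluence in (1e11, 1e12, 1e13):
        print(f"{fluence:.0e} ions/cm^2 -> {dpa_from_fluence(fluence):.5f} dpa")
```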

In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory

Frontiers in Neuroscience (Online)

Talin, A.A.; Li, Yiyang; Fuller, Elliot J.; Bennett, Christopher H.; Xiao, Tianyao X.; Salleo, Alberto; Melianas, Armantas; Isele, Erik; Marinella, Matthew J.; Tao, Hanbo

In-memory computing based on non-volatile resistive memory can significantly improve the energy efficiency of artificial neural networks. However, accurate in situ training has been challenging due to the nonlinear and stochastic switching of the resistive memory elements. One promising analog memory is the electrochemical random-access memory (ECRAM), also known as the redox transistor. Its low write currents and linear switching properties across hundreds of analog states enable accurate and massively parallel updates of a full crossbar array, which yield rapid and energy-efficient training. While simulations predict that ECRAM-based neural networks achieve high training accuracy at significantly higher energy efficiency than digital implementations, these predictions had not been demonstrated experimentally. In this work, we train a 3 × 3 array of ECRAM devices that learns to discriminate several elementary logic gates (AND, OR, NAND). We record the evolution of the network's synaptic weights during parallel in situ (online) training using outer-product updates. Owing to the linear and reproducible switching characteristics of the devices, our crossbar simulations not only reproduce the number of epochs to convergence but also quantitatively capture the evolution of the weights in individual devices. This first implementation of in situ parallel training, together with its strong agreement with simulation, is a significant advance toward developing ECRAM into larger crossbar arrays for artificial neural network accelerators, which could enable orders-of-magnitude improvements in the energy efficiency of deep neural networks.

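The parallel update scheme referenced above can be sketched in a few lines: each training example triggers a single rank-one (outer-product) update of the entire weight array, which is exactly the operation a crossbar applies in one parallel step. The 3 × 3 logic-gate task below mirrors the abstract, but the learning rate, activation, and initialization are assumptions rather than the authors' experimental settings.

```python
# Minimal sketch of rank-one (outer-product) updates on a 3x3 array:
# rows = [x1, x2, bias], columns = [AND, OR, NAND]. A plain delta-rule simulation
# of the update scheme, not the authors' device-level experiment; the learning
# rate, sigmoid activation, and random initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(3, 3))    # 3x3 array of synaptic weights
ETA = 0.2                                # assumed learning rate

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)  # inputs + bias row
T = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 1], [1, 1, 0]], dtype=float)  # AND, OR, NAND targets

for epoch in range(500):
    for x, t in zip(X, T):
        y = 1.0 / (1.0 + np.exp(-(x @ W)))   # forward pass: one read of the array
        delta = t - y                        # output error
        W += ETA * np.outer(x, delta)        # one rank-one update: the whole array at once

print(np.round(1.0 / (1.0 + np.exp(-(X @ W)))))  # recovered truth tables for AND, OR, NAND
```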

Controllable Reset Behavior in Domain Wall-Magnetic Tunnel Junction Artificial Neurons for Task-Adaptable Computation

IEEE Magnetics Letters

Liu, Samuel; Bennett, Christopher H.; Friedman, Joseph; Marinella, Matthew J.; Paydarfar, David; Incorvia, Jean A.

Neuromorphic computing with spintronic devices has been of interest due to the limitations of CMOS-driven von Neumann computing. Domain wall-magnetic tunnel junction (DW-MTJ) devices have been shown to intrinsically capture biological neuron behavior. Edgy-relaxed behavior, where a frequently firing neuron experiences a lower action potential threshold, may provide additional artificial neuronal functionality when executing repeated tasks. In this letter, we demonstrate that this behavior can be implemented in DW-MTJ artificial neurons via three alternative mechanisms: shape anisotropy, magnetic field, and current-driven soft reset. Using micromagnetics and analytical device modeling to classify the Optdigits handwritten digit dataset, we show that edgy-relaxed behavior improves both classification accuracy and classification rate for ordered datasets while sacrificing little to no accuracy for a randomized dataset. This letter establishes methods by which artificial spintronic neurons can be flexibly adapted to different datasets.

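A behavioral sketch of 'edgy-relaxed' firing, abstracted away from the DW-MTJ physics: a leaky integrate-and-fire neuron whose threshold drops after each spike and relaxes back when the neuron is idle. All parameters below are illustrative assumptions, not values from the micromagnetic model.

```python
# Behavioral sketch of "edgy-relaxed" firing (not a micromagnetic DW-MTJ model):
# a leaky integrate-and-fire neuron whose threshold drops after each output spike
# and slowly relaxes back toward its resting value when the neuron is idle.
# All parameters are illustrative assumptions.
def run_edgy_relaxed_lif(inputs, v_th0=1.0, th_drop=0.2, th_relax=0.02,
                         leak=0.1, th_min=0.4):
    v, v_th, spikes = 0.0, v_th0, []
    for i_in in inputs:
        v = v * (1.0 - leak) + i_in                  # leaky integration of input current
        if v >= v_th:
            spikes.append(1)
            v = 0.0                                  # reset membrane state
            v_th = max(th_min, v_th - th_drop)       # "edgy": frequent firing lowers threshold
        else:
            spikes.append(0)
            v_th = min(v_th0, v_th + th_relax)       # "relaxed": threshold recovers when idle
    return spikes


if __name__ == "__main__":
    print(run_edgy_relaxed_lif([0.4] * 20))          # firing rate increases as threshold drops
```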

Filament-Free Bulk Resistive Memory Enables Deterministic Analogue Switching

Advanced Materials

Talin, A.A.; Fuller, Elliot J.; Li, Yiyang; Marinella, Matthew J.; Sugar, Joshua D.; Bennett, Christopher H.; Bartsch, Michael B.; Horton, Robert D.; Yoo, Sangmin; Ashby, David; Lu, Wei D.

Digital computing is nearing its physical limits as computing needs and energy consumption rapidly increase. Analogue-memory-based neuromorphic computing can be orders of magnitude more energy efficient at data-intensive tasks like deep neural networks, but has been limited by the inaccurate and unpredictable switching of analogue resistive memory. Filamentary resistive random access memory (RRAM) suffers from stochastic switching due to the random kinetic motion of discrete defects in the nanometer-sized filament. In this work, this stochasticity is overcome by incorporating a solid electrolyte interlayer, in this case yttria-stabilized zirconia (YSZ), to eliminate filament formation. Filament-free, bulk-RRAM cells instead store analogue states using the bulk point defect concentration, yielding predictable switching because the statistical ensemble behavior of oxygen vacancy defects is deterministic even when individual defects are stochastic. Both experiments and modeling show that bulk-RRAM devices using TiO2-x switching layers and YSZ electrolytes yield deterministic and linear analogue switching for efficient inference and training. Bulk RRAM solves many outstanding issues with memristor unpredictability that have inhibited commercialization and can therefore enable unprecedented new applications for energy-efficient neuromorphic computing. Beyond RRAM, this work shows how harnessing bulk point defects in ionic materials can be used to engineer deterministic nanoelectronic materials and devices.

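The claim that an ensemble of stochastic defects switches deterministically is, at bottom, counting statistics: the relative spread of N independent defects scales as 1/sqrt(N). The Monte Carlo sketch below illustrates that scaling with made-up defect counts; it is not a model of the TiO2-x/YSZ device.

```python
# Counting-statistics sketch of why bulk point-defect ensembles switch predictably
# while nanoscale filaments do not: the relative spread of N independently responding
# defects scales as 1/sqrt(N). Defect counts below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
P_MOVE = 0.5                                 # probability a given defect responds to a write pulse

for n_defects in (100, 10_000, 1_000_000):   # filament-like vs bulk-like populations
    moved = rng.binomial(n_defects, P_MOVE, size=10_000)   # 10k simulated write pulses
    rel_spread = moved.std() / moved.mean()
    print(f"N = {n_defects:>9,d}  relative cycle-to-cycle spread = {rel_spread:.4%}")
```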

Analog architectures for neural network acceleration based on non-volatile memory

Applied Physics Reviews

Xiao, Tianyao X.; Bennett, Christopher H.; Feinberg, Benjamin F.; Agarwal, Sapan A.; Marinella, Matthew J.

Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. Exploiting the intrinsic computational advantages of memory arrays, however, has proven to be challenging principally due to the overhead imposed by the peripheral circuitry and due to the non-ideal properties of memory devices that play the role of the synapse. We review the existing implementations of these accelerators for deep supervised learning, organizing our discussion around the different levels of the accelerator design hierarchy, with an emphasis on circuits and architecture. We explore and consolidate the various approaches that have been proposed to address the critical challenges faced by analog accelerators, for both neural network inference and training, and highlight the key design trade-offs underlying these techniques.

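The core operation these accelerators exploit, a matrix-vector multiply computed inside the memory array via Ohm's and Kirchhoff's laws, can be sketched as follows. The differential conductance mapping, conductance range, and noise level are assumptions chosen for illustration, not values drawn from the review.

```python
# Sketch of the in-array matrix-vector multiply these accelerators exploit:
# weights are mapped to conductances (here a simple differential G+ / G- pair),
# inputs become row voltages, and column currents sum the products (Kirchhoff's law).
# Programming noise stands in for the non-ideal synaptic devices discussed above.
import numpy as np

rng = np.random.default_rng(0)
G_MAX = 1e-6          # assumed maximum device conductance, in siemens
NOISE = 0.05          # assumed relative conductance programming error


def program_crossbar(weights):
    """Map a signed weight matrix onto noisy differential conductance pairs."""
    scale = G_MAX / np.abs(weights).max()
    g_pos = np.clip(weights, 0, None) * scale
    g_neg = np.clip(-weights, 0, None) * scale
    noisy = lambda g: g * (1.0 + NOISE * rng.normal(size=g.shape))
    return noisy(g_pos), noisy(g_neg), scale


def analog_matvec(g_pos, g_neg, scale, x):
    """Column currents I = x^T (G+ - G-); divide by scale to recover weight units."""
    return (x @ (g_pos - g_neg)) / scale


if __name__ == "__main__":
    W = rng.normal(size=(4, 3))
    x = rng.normal(size=4)
    gp, gn, s = program_crossbar(W)
    print("ideal :", x @ W)
    print("analog:", analog_matvec(gp, gn, s, x))
```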

Mosaics, The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator

Aimone, James B.; Bennett, Christopher H.; Cardwell, Suma G.; Dellana, Ryan A.; Xiao, Tianyao X.

Neuromorphic architectures have seen a resurgence of interest in the past decade owing to 100x-1000x efficiency gains over conventional von Neumann architectures. Digital neuromorphic chips like Intel's Loihi have shown efficiency gains compared to GPUs and CPUs and can be scaled to build larger systems. Analog neuromorphic architectures promise even greater savings in energy, area, and latency than their digital counterparts. Neuromorphic analog and digital technologies both provide low-power, configurable acceleration of challenging artificial intelligence (AI) algorithms. We present a hybrid analog-digital neuromorphic architecture that amplifies the advantages of both high-density analog memory and spike-based digital communication while mitigating the limitations of each approach.

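One way to picture the hybrid idea is a toy pipeline in which analog tiles perform dense matrix-vector products while results travel between tiles as digital spike counts (rate coding). The encoding scheme and parameters below are assumptions for illustration; the abstract does not specify this level of detail.

```python
# Toy sketch of the hybrid idea: an analog tile computes a dense matrix-vector
# product, and its result crosses the chip as digital spikes (rate coding) that
# the next tile decodes. Purely illustrative; the architecture above is not
# specified at this level of detail in the abstract.
import numpy as np

rng = np.random.default_rng(0)
T_WINDOW = 256                                     # spike-count window per transfer


def analog_tile(W, x):
    """Stand-in for an analog in-memory matrix-vector multiply."""
    return W @ x


def to_spike_counts(y):
    """Encode a non-negative activation vector as digital spike counts."""
    y = np.clip(y, 0.0, None)
    rates = y / (y.max() + 1e-12)                  # normalize to [0, 1] firing probability
    return rng.binomial(T_WINDOW, rates), y.max()


def from_spike_counts(counts, y_max):
    """Decode spike counts back to approximate activations."""
    return counts / T_WINDOW * y_max


if __name__ == "__main__":
    W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(3, 5))
    x = rng.random(8)
    h = np.maximum(analog_tile(W1, x), 0.0)        # ReLU on the first tile's output
    counts, h_max = to_spike_counts(h)             # digital spike packet between tiles
    print(analog_tile(W2, from_spike_counts(counts, h_max)))
    print(analog_tile(W2, h))                      # reference without spike quantization
```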

Three Artificial Spintronic Leaky Integrate-and-Fire Neurons

SPIN

Brigner, Wesley H.; Hu, Xuan; Hassan, Naimul; Jiang-Wei, Lucian; Bennett, Christopher H.; Akinola, Otitoaleke; Pasquale, Massimo; Marinella, Matthew J.; Incorvia, Jean A.C.; Friedman, Joseph S.

Due to their nonvolatility and intrinsic current integration capabilities, spintronic devices that rely on domain wall (DW) motion through a free ferromagnetic track have garnered significant interest in the field of neuromorphic computing. Although a number of such devices have already been proposed, they require the use of external circuitry to implement several important neuronal behaviors. As such, they are likely to result in a decrease in energy efficiency, an increase in fabrication complexity, or both. To resolve this issue, we have proposed three individual neurons that are capable of performing these functionalities without the use of any external circuitry. To implement leaking, the first neuron uses a dipolar coupling field, the second uses an anisotropy gradient, and the third uses shape variations of the DW track.

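The three leak mechanisms described above (dipolar coupling field, anisotropy gradient, track shape) can all be abstracted, at the behavioral level, into a restoring drift on the domain wall position. The sketch below is that abstraction only, with assumed parameters, and is not a model of any of the three proposed devices.

```python
# Behavioral sketch of a domain-wall LIF neuron: input current pushes the DW along
# the track (integration), and an intrinsic restoring force pulls it back (leaking).
# The three physical leak mechanisms in the paper (dipolar field, anisotropy
# gradient, track shape) are all abstracted here into one assumed leak-rate parameter.
def dw_lif_neuron(input_currents, leak_rate=0.05, gain=0.1, x_fire=1.0):
    x, spikes = 0.0, []                       # x = normalized DW position along the track
    for i_in in input_currents:
        x += gain * i_in                      # integrate: current-driven DW motion
        x -= leak_rate * x                    # leak: restoring drift toward the reset end
        if x >= x_fire:                       # DW reaches the MTJ read region -> fire
            spikes.append(1)
            x = 0.0                           # reset the DW to the start of the track
        else:
            spikes.append(0)
    return spikes


if __name__ == "__main__":
    # Strong leak suppresses firing for the same input train; weak leak does not.
    drive = [1.0] * 30
    print(sum(dw_lif_neuron(drive, leak_rate=0.02)), "spikes with weak leak")
    print(sum(dw_lif_neuron(drive, leak_rate=0.30)), "spikes with strong leak")
```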