Lateral inhibition is an important functionality in neuromorphic computing, modeled after the biological behavior in which a firing neuron deactivates its neighbors in the same layer and prevents them from firing. In most neuromorphic hardware platforms, lateral inhibition is implemented by external circuitry, which decreases the energy efficiency and increases the area overhead of such systems. Recently, the domain wall-magnetic tunnel junction (DW-MTJ) artificial neuron has been shown in modeling to be intrinsically inhibitory: without peripheral circuitry, lateral inhibition in DW-MTJ neurons results from the magnetostatic interaction between neighboring neuron cells. However, this lateral inhibition mechanism has not been studied thoroughly, and so far it has produced only weak inhibition restricted to very closely spaced devices. This work addresses these problems by modeling current- and field-driven DW motion in a pair of adjacent DW-MTJ neurons. We maximize the magnitude of lateral inhibition by tuning the magnetic interaction between the neurons. The results are explained by the current-driven DW velocity characteristics in response to an external magnetic field and are quantified by an analytical model. The dependence of lateral inhibition strength on device parameters is also studied. Finally, lateral inhibition behavior in an array of 1000 DW-MTJ neurons is demonstrated. Our results provide a guideline for optimizing lateral inhibition in DW-MTJ neurons. With strong lateral inhibition achieved, a path is opened toward competitive learning algorithms such as winner-take-all on such neuromorphic devices.
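As a purely behavioral illustration of the inhibition-to-winner-take-all idea (not the paper's micromagnetic model), the following sketch abstracts two DW-MTJ neurons as 1-D domain-wall positions that integrate an input drive; once one wall reaches the detection threshold, an assumed coupling factor `k_inhibit` slows its neighbor, standing in for the magnetostatic stray-field interaction. All parameter values are illustrative.

```python
import numpy as np

def simulate_pair(drive, v_per_drive=1.0, k_inhibit=0.8,
                  threshold=100.0, dt=1.0, steps=500):
    """Return the firing step of each neuron (None = did not fire in the window)."""
    pos = np.zeros(2)            # domain-wall positions, arbitrary units
    fire_step = [None, None]
    for t in range(steps):
        for i in range(2):
            if fire_step[i] is not None:
                continue
            # The neighbor's stray field slows this wall once the neighbor has fired;
            # k_inhibit -> 1 corresponds to full winner-take-all suppression.
            inhibition = k_inhibit if fire_step[1 - i] is not None else 0.0
            pos[i] += v_per_drive * drive[i] * (1.0 - inhibition) * dt
            if pos[i] >= threshold:
                fire_step[i] = t
    return fire_step

print(simulate_pair(drive=[1.2, 1.0]))   # the more strongly driven neuron fires first
```

Sweeping `k_inhibit` in this toy model mimics tuning the inter-neuron magnetic interaction: stronger coupling delays or fully suppresses the losing neuron.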
Non-volatile memory arrays can deploy pre-trained neural network models for edge inference. However, these systems are affected by device-level noise and retention issues. Here, we examine the damage caused by these effects, introduce a mitigation strategy, and demonstrate its use in a fabricated array of SONOS (Silicon-Oxide-Nitride-Oxide-Silicon) devices. On MNIST, fashion-MNIST, and CIFAR-10 tasks, our approach increases resilience to synaptic noise and drift. We also show that strong performance can be realized with ADCs of 5-8 bits of precision.
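The following sketch illustrates, under assumed noise magnitudes and a generic evaluation routine (not the SONOS measurement flow), how resilience of this kind can be probed in simulation: the weights are perturbed by multiplicative programming noise and additive drift, and the analog dot-product outputs are quantized to an n-bit ADC range.

```python
import numpy as np

def perturb_weights(w, noise_sigma=0.05, drift=0.02, rng=None):
    """Apply assumed multiplicative noise and signed drift to a weight matrix."""
    rng = np.random.default_rng() if rng is None else rng
    return w * (1.0 + noise_sigma * rng.standard_normal(w.shape)) - drift * np.sign(w)

def adc_quantize(x, bits=6):
    """Quantize analog outputs to a uniform n-bit range."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1) + 1e-12
    return lo + np.round((x - lo) / step) * step

def evaluate(w, x, labels, bits=6, noise_sigma=0.05):
    y = adc_quantize(x @ perturb_weights(w, noise_sigma), bits=bits)
    return np.mean(np.argmax(y, axis=1) == labels)

# Random data standing in for MNIST-like features; labels are the ideal predictions.
rng = np.random.default_rng(0)
w = rng.standard_normal((784, 10))
x = rng.standard_normal((256, 784))
labels = np.argmax(x @ w, axis=1)
for bits in (4, 5, 6, 8):
    print(bits, "bits:", evaluate(w, x, labels, bits=bits))
```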
Machine learning typically implements backpropagation over abundant training samples. We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system consists of unsupervised (clustering) as well as supervised sub-systems, and generalizes quickly (with few samples). We demonstrate interactions between the physical properties of this device and the optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach, as the learning budget stays below 20 µJ even for large tasks typically used in machine learning.
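A schematic sketch of the two-stage idea (unsupervised clustering feeding a supervised readout trained on few labeled samples) is given below, using generic algorithms rather than the device-level DW-MTJ plasticity rules; dimensions, sample counts, and the least-squares readout are placeholders.

```python
import numpy as np

def fit_clusters(x, k=16, iters=20, rng=None):
    """Plain k-means standing in for the unsupervised (clustering) sub-system."""
    rng = np.random.default_rng(0) if rng is None else rng
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean(axis=0)
    return centers

def cluster_features(x, centers):
    d = ((x[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.exp(-d / d.mean())                 # soft cluster activations

# Supervised readout fit from only a few labeled samples.
rng = np.random.default_rng(1)
x_unlabeled = rng.standard_normal((1000, 32))
x_few, y_few = rng.standard_normal((50, 32)), rng.integers(0, 10, 50)

centers = fit_clusters(x_unlabeled)
phi = cluster_features(x_few, centers)
targets = np.eye(10)[y_few]
readout, *_ = np.linalg.lstsq(phi, targets, rcond=None)
print("readout weights:", readout.shape)
```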
The domain wall-magnetic tunnel junction (DW-MTJ) is a spintronic device that enables efficient logic circuit design because of its low energy consumption, small size, and non-volatility. Furthermore, the DW-MTJ is one of the few spintronic devices for which a direct cascading mechanism has been experimentally demonstrated without any extra buffers; this enables the potential design and fabrication of a large-scale DW-MTJ logic system. However, DW-MTJ logic relies on conversion between electrical signals and magnetic states, which is sensitive to process imperfections. Therefore, it is important to analyze the robustness of such DW-MTJ devices to anticipate system reliability before fabrication. Here we propose a new DW-MTJ model that integrates the impacts of process variation to enable the analysis and optimization of DW-MTJ logic. This will allow circuit and device designs that enhance the robustness of DW-MTJ logic and advance the development of energy-efficient spintronic computing systems.
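A toy Monte Carlo sketch of folding process variation into a behavioral device model follows; the perturbed quantities (track width, MTJ resistance), variation magnitudes, and the switching criterion for the next logic stage are assumptions for illustration, not values from the proposed model.

```python
import numpy as np

def sample_devices(n, width_nm=50.0, sigma_width=0.05,
                   r_p=5e3, sigma_r=0.08, rng=None):
    """Sample per-device track width and parallel-state resistance with assumed spreads."""
    rng = np.random.default_rng(0) if rng is None else rng
    width = width_nm * (1.0 + sigma_width * rng.standard_normal(n))
    rp = r_p * (1.0 + sigma_r * rng.standard_normal(n))
    return width, rp

def logic_error_rate(n=100_000, v_read=0.2, i_threshold=35e-6):
    width, rp = sample_devices(n)
    # Output current in the parallel (logic "1") state must exceed the assumed drive
    # threshold of the next stage; narrower tracks are taken to raise the resistance.
    i_out = v_read / (rp * (50.0 / width))
    return np.mean(i_out < i_threshold)

print("estimated stage-to-stage failure rate:", logic_error_rate())
```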
Neuromorphic computing captures the quintessential neural behaviors of the brain and is a promising candidate for beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. The neuronal lateral inhibition feature, closely associated with the biological receptive field, is crucial to neuronal competition in the nervous system as well as in its neuromorphic hardware counterparts. The domain wall-magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device that exhibits intrinsic lateral inhibition. This work discusses the lateral inhibition mechanism of the DW-MTJ neuron and shows by micromagnetic simulation that lateral inhibition is efficiently enhanced by the Dzyaloshinskii-Moriya interaction (DMI).
Spintronic devices based on domain wall (DW) motion through ferromagnetic nanowire tracks have received great interest as components of neuromorphic information processing systems. Previous proposals for spintronic artificial neurons required external stimuli to perform the leaking functionality, one of the three fundamental functions of a leaky integrate-and-fire (LIF) neuron. The use of this external magnetic field or electrical current stimulus results in either a decrease in energy efficiency or an increase in fabrication complexity. In this article, we modify the shape of previously demonstrated three-terminal magnetic tunnel junction neurons to perform the leaking operation without any external stimuli. The trapezoidal structure causes a shape-based DW drift, thus intrinsically providing the leaking functionality with no hardware cost. This LIF neuron, therefore, promises to advance the development of spintronic neural network crossbar arrays.
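A behavioral sketch of this leaky integrate-and-fire operation is shown below: the domain-wall position integrates the input current, a constant restoring drift stands in for the taper-induced leak of the trapezoidal track, and crossing the detection region counts as a fire-and-reset event. The mobility, leak rate, and threshold are illustrative values, not micromagnetic parameters.

```python
import numpy as np

def dw_lif(i_in, mobility=1.0, leak_rate=0.2, threshold=100.0, dt=1.0):
    """Return spike times for an input current train i_in (one value per time step)."""
    pos, spikes = 0.0, []
    for t, i in enumerate(i_in):
        pos += (mobility * i - leak_rate) * dt   # integration minus shape-based leak
        pos = max(pos, 0.0)                      # wall cannot drift past the wide end
        if pos >= threshold:
            spikes.append(t)
            pos = 0.0                            # wall reset after the MTJ fires
    return spikes

# A sustained input integrates up to firing; sparse input leaks away instead.
dense = np.ones(200) * 1.5
sparse = (np.arange(200) % 10 == 0) * 1.5
print("dense-input spikes:", dw_lif(dense))
print("sparse-input spikes:", dw_lif(sparse))
```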
The radiation response of TaOx-based RRAM devices fabricated in academic (Set A) and industrial (Set B) settings was compared. Ionization damage from a ⁶⁰Co gamma source did not cause any changes in device resistance for either device type, up to 45 Mrad(Si). Displacement damage from a heavy ion beam caused the Set B devices in the high-resistance state to decrease in resistance at 1 × 10²¹ oxygen displacements per cm³; meanwhile, the Set A devices did not exhibit any decrease in resistance due to displacement damage. Both types of devices demonstrated an increase in resistance around 3 × 10²² oxygen displacements per cm³, possibly due to damage at the oxide/metal interfaces. These extremely high levels of damage represent near-total atomic disruption, and if this level of damage were ever reached, other circuit elements would likely fail before the RRAM devices in this study. Generally, both sets of devices were much more resistant to radiation effects than other devices reported in the literature. Displacement damage effects were only observed in the Set A devices once the displacement-induced oxygen vacancies surpassed the intrinsic vacancy concentration in the devices, suggesting that a high oxygen vacancy concentration played a role in the devices' high tolerance to displacement damage.
There have been recent efforts toward the development of biologically-inspired neuromorphic devices and architectures. Here, we show a synapse circuit designed to perform spike-timing-dependent plasticity that works with the leaky integrate-and-fire neuron in a neuromorphic computing architecture. The circuit consists of a three-terminal magnetic tunnel junction with a mobile domain wall placed between two low-pass filters and has been modeled in SPICE. The results show that the current flowing through the synapse is highly correlated with the timing delay between the pre-synaptic and post-synaptic neurons. Using micromagnetic simulations, we show that introducing notches along the length of the domain wall track pins the domain wall at each successive notch so that it properly responds to the timing between the input and output current pulses of the circuit, producing a multi-state resistance that represents synaptic weights. We show in SPICE that a notch-free, ideal magnetic device also exhibits spike-timing-dependent plasticity in response to the circuit current. This work is key progress toward more bio-realistic artificial synapses with multiple weights, which can be trained online with the promise of CMOS compatibility and energy efficiency.
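As a simple numerical illustration of such a multi-state synapse, the sketch below applies a standard pair-based STDP rule and then snaps the weight to one of a fixed number of discrete levels, loosely mirroring the notch-pinned domain-wall positions; the time constants, amplitudes, and notch count are assumed values.

```python
import numpy as np

def stdp_dw(delta_t, w, a_plus=0.05, a_minus=0.05,
            tau_plus=20.0, tau_minus=20.0, n_notches=16):
    """delta_t = t_post - t_pre (same units as tau); weight w lies in [0, 1]."""
    if delta_t > 0:                               # pre before post -> potentiate
        w = w + a_plus * np.exp(-delta_t / tau_plus)
    else:                                         # post before pre -> depress
        w = w - a_minus * np.exp(delta_t / tau_minus)
    w = np.clip(w, 0.0, 1.0)
    levels = n_notches - 1
    return np.round(w * levels) / levels          # pin to the nearest notch position

w = 0.5
for dt in (5.0, 5.0, -8.0):
    w = stdp_dw(dt, w)
    print(f"delta_t = {dt:+.0f}: w -> {w:.3f}")
```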
Efficiency bottlenecks inherent to conventional computing in executing neural algorithms have spurred the development of novel devices capable of “in-memory” computing. Commonly known as “memristors,” a variety of device concepts including conducting bridge, vacancy filament, phase change, and other types have been proposed as promising elements in artificial neural networks for executing inference and learning algorithms. In this article, we review the recent advances in memristor technology for neuromorphic computing and discuss strategies for addressing the most significant performance challenges, including nonlinearity, high read/write currents, and limited endurance. As an alternative to two-terminal memristors, we introduce the three-terminal electrochemical memory based on the redox transistor (RT), which uses a gate to tune the redox state of the channel. Decoupling the “read” and “write” operations using a third terminal and storage of information as a charge-compensated redox reaction in the bulk of the transistor enables high-density information storage. These properties enable low-energy operation without compromising analog performance or nonvolatility. Finally, we discuss the RT operating mechanisms using organic and inorganic materials, approaches for array integration, and prospects for achieving the device density and switching speeds necessary to make electrochemical memory competitive with established digital technology.