Designing for Interpretability and Adaptability by Using Weighted Averages
Applied Physics A: Materials Science and Processing
A unified physics-based model of electron transport in metal-insulator-metal (MIM) systems is presented. In this model, transport through metal-oxide interfaces occurs by electron tunneling between the metal electrodes and oxide defect states. Transport in the oxide bulk is dominated by hopping, modeled as a series of tunneling events that alter the electron occupancy of defect states. Electron transport in the oxide conduction band is treated with the drift-diffusion formalism, and defect chemistry reactions link the various transport mechanisms. It is shown that the current-limiting effect of the interface band offsets is a function of the defect vacancy concentration. These results provide insight into the underlying physical mechanisms of leakage currents in oxide-based capacitors and steady-state electron transport in resistive random access memory (ReRAM) MIM devices. Finally, an explanation of ReRAM bipolar switching behavior based on these results is proposed.
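For reference, the conduction-band contribution referred to above is conventionally written in the standard drift-diffusion form (a generic statement of the formalism, not the specific parameterization used in the paper):

```latex
J_n = q \, \mu_n \, n \, E + q \, D_n \, \frac{\partial n}{\partial x}
```

where n is the electron concentration, \mu_n the electron mobility, E the electric field, and D_n the diffusion coefficient.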
Journal of Computational Electronics
With the end of Dennard scaling and the ever-increasing need for more efficient, faster computation, resistive switching devices (ReRAM), often referred to as memristors, are a promising candidate for next-generation computer hardware. These devices show particular promise for use in an analog neuromorphic computing accelerator, as they can be tuned to multiple states and be updated like the weights in neuromorphic algorithms. Modeling a ReRAM-based neuromorphic computing accelerator requires a compact model capable of correctly simulating the small weight update behavior associated with neuromorphic training. These small updates have a nonlinear dependence on the initial state, which has a significant impact on neural network training. Consequently, we propose the piecewise empirical model (PEM), an empirically derived general-purpose compact model that can accurately capture the nonlinearity of an arbitrary two-terminal device to match pulse measurements important for neuromorphic computing applications. By defining the state of the device to be proportional to its current, the model parameters can be extracted from a series of voltage pulses that mimic the behavior of a device in an analog neuromorphic computing accelerator. This allows for a general, accurate, and intuitive compact circuit model that is applicable to different resistance-switching device technologies. In this work, we explain the details of the model, implement the model in the circuit simulator Xyce, and give an example of its usage to model a specific Ta/TaOx device.
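As an illustration of the modeling approach described above (not the actual Xyce implementation), the sketch below builds a piecewise update model from hypothetical pulse-measurement data: the measured state-versus-pulse response is converted into a lookup of the state change as a function of the current state, which is then interpolated to predict the response to an arbitrary pulse train. All data values and names here are illustrative placeholders.

```python
import numpy as np

# Hypothetical pulse-measurement data: device state (proportional to read
# current) recorded after each of a series of identical potentiating pulses.
measured_state = np.array([0.10, 0.22, 0.33, 0.42, 0.50, 0.56, 0.61, 0.65])

# Convert the pulse response into a piecewise description of the update:
# the state change produced by one pulse as a function of the state the
# device started from.  This captures the nonlinear, state-dependent
# update behavior that matters for neuromorphic training.
start_state = measured_state[:-1]
delta_state = np.diff(measured_state)

def apply_pulse(state):
    """Predict the state after one potentiating pulse by interpolating
    the empirically derived piecewise update curve."""
    return state + np.interp(state, start_state, delta_state)

# Simulate a short pulse train starting from an arbitrary initial state.
state = 0.15
for _ in range(5):
    state = apply_pulse(state)
print(f"predicted state after 5 pulses: {state:.3f}")
```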
2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings
Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. To assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy during backpropagation training was modeled. Bipolar filamentary devices from three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2 and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. The key analog device properties that degrade accuracy are update linearity and write noise. Write noise may improve as device manufacturing matures, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. This suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.
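The two accuracy-limiting properties identified above, update nonlinearity and write noise, can be illustrated with a simple toy update model; the functional form and every constant below are illustrative assumptions, not the measured device behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MIN, G_MAX = 0.0, 1.0   # assumed conductance range (arbitrary units)
NONLINEARITY = 3.0        # assumed saturation strength of the update
WRITE_NOISE = 0.02        # assumed write-noise std, as a fraction of the range

def potentiate(g):
    """One 'increase' pulse: the update shrinks as the device approaches
    its maximum conductance (nonlinearity), with Gaussian write noise."""
    dg = (G_MAX - g) / NONLINEARITY
    dg += rng.normal(0.0, WRITE_NOISE * (G_MAX - G_MIN))
    return np.clip(g + dg, G_MIN, G_MAX)

# A nominally identical sequence of pulses produces progressively smaller,
# noisy updates -- the behavior that degrades backpropagation training.
g = 0.1
for pulse in range(6):
    g = potentiate(g)
    print(f"pulse {pulse + 1}: G = {g:.3f}")
```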
The goal of this LDRD is to develop a quantum nanophotonics capability that will allow practical control over electron (hole) and photon confinement in more than one dimension. We plan to use quantum dots (QDs) to control electrons and photonic crystals to control photons. InGaN QDs will be fabricated using quantum size control processes, and methods will be developed to add epitaxial layers for hole injection and surface passivation. We will also explore photonic crystal nanofabrication techniques using both additive and subtractive fabrication processes, which can tailor photonic crystal properties. These two efforts will be combined by incorporating the QDs into photonic crystal surface-emitting lasers (PCSELs). Modeling will be performed using finite-difference time-domain and gain analysis to optimize QD-PCSEL designs that balance laser performance with the ability to nanofabricate the structures. Finally, we will develop design rules for QD-PCSEL architectures to understand their performance possibilities and limits.
This presentation describes how liquid states can be observed using a crossbar device. Models, tests, error analyses, and other results are presented.
Digest of Technical Papers - Symposium on VLSI Technology
Analog resistive memories promise to reduce the energy of neural networks by orders of magnitude. However, the write variability and write nonlinearity of current devices prevent neural networks from training to high accuracy. We present a novel periodic carry method that uses a positional number system to overcome this while maintaining the benefit of parallel analog matrix operations. We demonstrate how noisy, nonlinear TaOx devices that could only train to 80% accuracy on MNIST, can now reach 97% accuracy, only 1% away from an ideal numeric accuracy of 98%. On a file type dataset, the TaOx devices achieve ideal numeric accuracy. In addition, low noise, linear Li1-xCoO2 devices train to ideal numeric accuracies using periodic carry on both datasets.
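A minimal sketch of the positional-number idea behind periodic carry follows; it illustrates the concept rather than the circuit-level implementation reported above, and the radix and carry period are arbitrary assumptions. A weight is split across two devices of different significance, small training updates go only to the low-significance device, and a periodic carry step transfers whole "digits" of its accumulated value up to the high-significance device.

```python
BASE = 4          # assumed radix: one unit of the high device = BASE units of the low

class PeriodicCarryWeight:
    """Toy two-digit positional weight: W = BASE * high + low."""

    def __init__(self):
        self.high = 0.0   # high-significance device state
        self.low = 0.0    # low-significance device state

    def value(self):
        return BASE * self.high + self.low

    def update(self, delta):
        # Training updates touch only the low-significance device; the
        # high-significance device is written only during carries.
        self.low += delta

    def carry(self):
        # Periodic carry: move whole digits accumulated on the low device
        # up to the high device, keeping the low device within its range.
        # The represented weight W is unchanged by this operation.
        digits = round(self.low / BASE)
        self.high += digits
        self.low -= digits * BASE

w = PeriodicCarryWeight()
for step in range(1, 21):
    w.update(+1.0)            # a stream of small potentiating updates
    if step % 5 == 0:         # carry performed periodically
        w.carry()
    print(f"step {step:2d}: low={w.low:+.1f}  high={w.high:+.1f}  W={w.value():+.1f}")
```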
2017 IEEE 9th International Memory Workshop, IMW 2017
Parasitic resistances cause devices in a resistive memory array to experience different read/write voltages depending on the device location, resulting in uneven writes and larger leakage currents. We present a new method to compensate for this by adding extra series resistance to the drivers to equalize the parasitic resistance seen by all the devices. This allows for uniform writes, enabling multi-level cells with greater numbers of distinguishable levels, and reduced write power, enabling larger arrays.
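The equalization idea can be illustrated with a simple lumped wire model; the driving geometry, array size, and wire resistance below are assumptions for illustration, not the configuration measured in the paper. If rows are driven from the left and columns from the bottom, adding a position-dependent series resistance at each driver makes the total parasitic resistance identical for every device.

```python
import numpy as np

N = 64      # assumed array size
r = 1.0     # assumed wire resistance per crossbar segment (ohms)

i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Simple lumped model: a device at (row i, column j) sees j row-wire
# segments and (N-1-i) column-wire segments in series with it.
parasitic = r * (j + (N - 1 - i))

# Compensation: give row driver i an extra r*i of series resistance and
# column driver j an extra r*(N-1-j), so every device sees the same total.
compensated = parasitic + r * i + r * (N - 1 - j)

print("uncompensated spread:", parasitic.max() - parasitic.min())     # large
print("compensated spread:  ", compensated.max() - compensated.min()) # zero
```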
Nature Materials
The brain is capable of massively parallel information processing while consuming only ~1-100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10^3 μm^2 devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.
2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings
We address practical limits of energy efficiency scaling for logic and memory. Scaling of logic will end with unreliable operation, making computers probabilistic as a side effect. The errors can be corrected or tolerated, but the overhead will increase with further scaling. We address the tradeoff between scaling and error correction that yields the minimum energy per operation, finding new error correction methods with energy consumption limits about 2× below current approaches. The maximum energy efficiency for memory depends on several other factors. Adiabatic and reversible methods applied to logic have promise, but overheads have precluded practical use. However, the regular array structure of memory arrays tends to reduce overhead and makes adiabatic memory a viable option. This paper reports an adiabatic memory that has been tested at about an 85× improvement in energy efficiency over standard designs. Combining these approaches could set energy efficiency expectations for processor-in-memory computing systems.
Proceedings of the International Joint Conference on Neural Networks
Resistive memories enable dramatic energy reductions for neural algorithms. We propose a general-purpose neural architecture that can accelerate many different algorithms and determine the device properties needed to run backpropagation on it. To maintain high accuracy, the read noise standard deviation should be less than 5% of the weight range. The write noise standard deviation should be less than 0.4% of the weight range, and can be up to 300% of a characteristic update (for the datasets tested). Asymmetric nonlinearities in the change in conductance versus pulse number cause weight decay and significantly reduce the accuracy, while moderate symmetric nonlinearities have no effect. To allow for parallel reads and writes, the write current should also be less than 100 nA.
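The weight-decay effect of asymmetric nonlinearity mentioned above can be seen in a toy model; the exponential-saturation update shape and all constants below are illustrative assumptions. When "increase" and "decrease" pulses saturate differently, an equal number of up and down pulses does not return the weight to where it started, so balanced updates drift the weight toward a fixed point.

```python
G_MIN, G_MAX = 0.0, 1.0
A_UP, A_DOWN = 0.10, 0.25     # assumed (asymmetric) update strengths

def pulse_up(g):
    # Update shrinks as the device approaches G_MAX (nonlinear potentiation).
    return g + A_UP * (G_MAX - g)

def pulse_down(g):
    # Update shrinks as the device approaches G_MIN (nonlinear depression),
    # with a different strength than potentiation -> asymmetric.
    return g - A_DOWN * (g - G_MIN)

g = 0.8
for cycle in range(10):
    g = pulse_down(pulse_up(g))   # one balanced up/down pair per cycle
    print(f"after {cycle + 1:2d} balanced pairs: G = {g:.3f}")
# Despite equal numbers of up and down pulses, G drifts toward the point
# where the two updates cancel -- the "weight decay" described above.
```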
Device Research Conference - Conference Digest, DRC
Wide-bandgap semiconductors such as AlN typically cannot be efficiently p-doped: acceptor levels lie far from the valence band edge, preventing holes from being activated. As a result, pn junctions cannot be formed and the semiconductor is far less useful, a particular problem for deep-ultraviolet (UV) optoelectronics.
Frontiers in Neuroscience
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
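The two crossbar kernels described above have direct numerical analogues; the conductance matrix and vectors in the sketch below are arbitrary stand-ins for the analog array. A parallel read performs a vector-matrix multiplication, and a parallel write performs a rank-1 outer-product update.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

G = rng.uniform(0.1, 1.0, size=(N, N))   # stand-in for the conductance matrix

# Parallel read: drive the rows with an input vector and sense the column
# currents -- mathematically a vector-matrix multiplication, y = x @ G.
x = rng.uniform(0.0, 1.0, size=N)
y = x @ G

# Parallel write: pulse rows with vector a and columns with vector b so every
# device is updated at once -- mathematically a rank-1 update, G += eta * a b^T.
a = rng.uniform(0.0, 1.0, size=N)
b = rng.uniform(0.0, 1.0, size=N)
eta = 0.01
G += eta * np.outer(a, b)

print("read result:", np.round(y, 3))
print("updated conductances:\n", np.round(G, 3))
```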
2015 4th Berkeley Symposium on Energy Efficient Electronic Systems, E3S 2015 - Proceedings
As transistors approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data [1, 2]. One of the most promising applications for RRAM crossbars is brain-inspired, or neuromorphic, computing [3, 4].