Publications

Results 51–75 of 101
Rebooting Computing and Low-Power Image Recognition Challenge

2015 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2015

Lu, Yung H.; Kadin, Alan M.; Berg, Alexander C.; Conte, Thomas M.; DeBenedictis, Erik; Garg, Rachit; Gingade, Ganesh; Hoang, Bichlien; Huang, Yongzhen; Li, Boxun; Liu, Jingyu; Liu, Wei; Mao, Huizi; Peng, Junran; Tang, Tianqi; Track, Elie K.; Wang, Jingqiu; Wang, Tao; Wang, Yu; Yao, Jun

Rebooting Computing (RC) is an effort in the IEEE to rethink future computers. RC was started in 2012 by its co-chairs, Elie Track (IEEE Council on Superconductivity) and Tom Conte (Computer Society). RC takes a holistic approach, considering revolutionary as well as evolutionary solutions needed to advance computer technologies. Three summits were held in 2013 and 2014, discussing technologies ranging from emerging devices to user interfaces, from security to energy efficiency, and from neuromorphic to reversible computing. The first part of this paper introduces RC to the design automation community and solicits revolutionary ideas from the community on the directions of future computer research. Energy efficiency is identified as one of the most important challenges in future computer technologies. The importance of energy efficiency spans from miniature embedded sensors to wearable computers, and from individual desktops to data centers. To gauge the state of the art, the RC Committee organized the first Low Power Image Recognition Challenge (LPIRC). Each image contains one or more objects drawn from 200 categories. A contestant has to provide a working system that can recognize the objects and report their bounding boxes. The second part of this paper explains LPIRC and the solutions from the top two winners.


Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

Frontiers in Neuroscience

Agarwal, Sapan A.; Quach, Tu-Thach Q.; Parekh, Ojas D.; Hsia, Alexander H.; DeBenedictis, Erik; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
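The two kernels named in the abstract can be sketched in plain software. This is a minimal illustrative model, not the paper's implementation: a nested list `G` stands in for the N × N crossbar's conductances, and the function and variable names are assumptions for illustration.

```python
# Software sketch of the two crossbar kernels: a conductance matrix G
# models an N x N resistive crossbar (illustrative names, not the paper's).
N = 3
G = [[0.1 * (r + c + 1) for c in range(N)] for r in range(N)]  # conductances

def parallel_read(G, v):
    """Kernel 1: parallel read = vector-matrix multiply.
    Row voltages v produce column currents i[c] = sum_r v[r] * G[r][c]."""
    return [sum(v[r] * G[r][c] for r in range(len(G))) for c in range(len(G[0]))]

def rank1_update(G, x, y):
    """Kernel 2: parallel write = rank-1 update.
    Row/column pulses x, y update every cell at once: G[r][c] += x[r] * y[c]."""
    for r in range(len(G)):
        for c in range(len(G[0])):
            G[r][c] += x[r] * y[c]

v = [1.0, 0.5, 0.25]
currents = parallel_read(G, v)      # one "analog" step reads all N columns
rank1_update(G, [0.1] * N, [0.2] * N)  # one step writes all N*N cells
```

On a physical crossbar both kernels complete in effectively one analog step, which is the source of the O(N) energy advantage over reading each digital memory cell individually.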


Millivolt switches will support better energy-reliability tradeoffs

2015 4th Berkeley Symposium on Energy Efficient Electronic Systems, E3S 2015 - Proceedings

DeBenedictis, Erik; Zima, Hans

Millivolt switches will not only improve energy efficiency, but will also enable a new capability to manage the energy-reliability tradeoff. By effectively utilizing this system-level capability, it may be possible to obtain one or two additional generations of scaling beyond current projections. Millivolt switches will enable further energy scaling, a process that is expected to continue until the technology encounters thermal noise errors [Theis 10]. If thermal noise errors can be accommodated at higher levels through a new form of error correction, it may be possible to scale about 3× lower in system energy than is currently projected. A general solution to errors would also address long-standing problems with cosmic-ray strikes, weak and aging parts, some cybersecurity vulnerabilities, etc.


Training neural hardware with noisy components

Proceedings of the International Joint Conference on Neural Networks

Rothganger, Fredrick R.; Evans, Brian R.; Aimone, James B.; DeBenedictis, Erik

Some next generation computing devices may consist of resistive memory arranged as a crossbar. Currently, the dominant approach is to use crossbars as the weight matrix of a neural network, and to use learning algorithms that require small incremental weight updates, such as gradient descent (for example Backpropagation). Using real-world measurements, we demonstrate that resistive memory devices are unlikely to support such learning methods. As an alternative, we offer a random search algorithm tailored to the measured characteristics of our devices.
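The contrast the abstract draws can be made concrete with a toy example. The sketch below is illustrative only: it trains a single weight by random search, accepting a candidate weight only when it lowers the loss, so no gradients and no small incremental updates are required. The paper's actual algorithm is tailored to measured device characteristics; the loss function and step size here are assumptions.

```python
import random

# Toy random-search training: propose a random perturbation of the weight,
# keep it only if the loss improves. Each accepted change is a discrete
# "set" of the weight, not a small incremental update as in gradient descent.
random.seed(0)

def loss(w, data):
    # Squared error of a 1-D linear fit y ~= w * x (illustrative objective).
    return sum((w * x - y) ** 2 for x, y in data)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # best fit is w = 2
w, best = 0.0, loss(0.0, data)
for _ in range(2000):
    cand = w + random.uniform(-0.5, 0.5)       # random proposal
    c = loss(cand, data)
    if c < best:                               # greedy acceptance
        w, best = cand, c
```

Because only the accepted final weight value matters, this style of search tolerates devices whose state changes are coarse or asymmetric, which is the mismatch with gradient descent that the measurements above expose.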


Optimal adiabatic scaling and the processor-in-memory-and-storage architecture (OAS+PIMS)

Proceedings of the 2015 IEEE/ACM International Symposium on Nanoscale Architectures, NANOARCH 2015

DeBenedictis, Erik; Cook, Jeanine C.; Hoemmen, Mark F.; Metodi, Tzvetan S.

We discuss a new approach to computing that retains the possibility of exponential growth while making substantial use of the existing technology. The exponential improvement path of Moore's Law has been the driver behind the computing approach of Turing, von Neumann, and FORTRAN-like languages. Performance growth is slowing at the system level, even though further exponential growth should be possible. We propose two technology shifts as a remedy, the first being the formulation of a scaling rule for scaling into the third dimension. This involves use of circuit-level energy efficiency increases using adiabatic circuits to avoid overheating. However, this scaling rule is incompatible with the von Neumann architecture. The second technology shift is a computer architecture and programming change to an extremely aggressive form of Processor-In-Memory (PIM) architecture, which we call Processor-In-Memory-and-Storage (PIMS). Theoretical analysis shows that the PIMS architecture is compatible with the 3D scaling rule, suggesting both immediate benefit and a long-term improvement path.


Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

DeBenedictis, Erik

We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. Keeping power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" comprising logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000× improvement in cost per operation for the (arguably) general-purpose function of emulating neurons in Deep Learning.


Development, characterization, and modeling of a TaOx ReRAM for a neuromorphic accelerator

Marinella, Matthew J.; Mickel, Patrick R.; Lohn, Andrew L.; Hughart, David R.; Bondi, Robert J.; Mamaluy, Denis M.; Hjalmarson, Harold P.; Stevens, James E.; Decker, Seth D.; Apodaca, Roger A.; Evans, Brian R.; Aimone, James B.; Rothganger, Fredrick R.; James, Conrad D.; DeBenedictis, Erik

This report discusses aspects of neuromorphic computing and how it is used to model microsystems.
