Brain-inspired computing

By Neal Singer

Photography By Dino Vournas

Friday, May 12, 2017

Work hastens arrival of synaptic microelectronic devices

A battery-like device that takes the place of a transistor seems like a backward technological leap.

But Sandia researchers, in collaboration with Stanford University, have demonstrated what amounts to a low-voltage artificial synapse that moves ions across a thin dielectric to store information, similar to the way a battery stores a charge. The method could be the basis for future three-dimensional computer architectures, including flexible computer circuits that integrate with the human brain.

The device seems to function better than its more familiar cousin, the transistor, under certain conditions in both digital and analog computers. Lighter and cheaper, it consumes less energy and generates less heat than ordinary transistors. And although slower, it appears trainable, so it might be able to store information without the separate memory required by transistor-based devices. In addition, the device is fabricated from organic polymers that make it more capable of interfacing with living neural tissue.

Ideal for machine/brain interface

A team led by Alec Talin (8342) developed the ENODe — an electrochemical neuromorphic organic device — in collaboration with Alberto Salleo of Stanford University. (The Old English word "enode" means "to clear of knots.") Flexible and made from polymers compatible with biological neurons, it is ideal for an intimate machine-brain interface. An explanation of the work was recently published in Nature Materials.

“The inspiration came from my work in solid state batteries,” says Alec. He leads the solid state battery thrust in Nanostructures for Electrical Energy Storage-II, a DOE-funded Energy Frontier Research Center in which Sandia participates.

The researchers have actually created two similar synaptic device designs: the organic ENODe for bio-interfacing, and an inorganic device dubbed LISTA (lithium-ion synaptic transistor for analog computing) that interacts with conventional CMOS circuits for brain-inspired computing applications. (“Lista” means “smart” in Spanish.)

The LISTA device was built with support from the Grand Challenge Laboratory Directed Research and Development (LDRD) project “Hardware Acceleration of Adaptive Neural Algorithms” (HAANA) led by its principal investigator Conrad James (1728) and by the Thin Film Battery LDRD project, led by Farid El Gabaly (8342).

“The LISTA device is well-suited for implementing the synaptic connections in neural network algorithms,” says Conrad. “We need low power consumption and we also need to have the synapses capable of being placed into finely spaced states in order to optimize the algorithm performance. Conventional digital transistor technology has difficulty with these requirements while the LISTA device is very promising on both fronts. We are also examining the ENODe device for brain-inspired computing, given its faster switching speeds.”
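The need for finely spaced states can be illustrated with a small quantization sketch. This is illustrative only: the weight distribution, array size, and state counts below are assumptions for the example, not HAANA or LISTA parameters.

```python
# Sketch: mapping continuous neural-network weights onto a device that
# offers only a fixed number of evenly spaced conductance states.
import numpy as np

def quantize(weights, n_states):
    """Snap each weight to the nearest of n_states evenly spaced levels."""
    lo, hi = weights.min(), weights.max()
    levels = np.linspace(lo, hi, n_states)
    idx = np.abs(weights[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=1000)  # stand-in for trained synaptic weights
for n in (8, 64, 500):
    err = np.abs(quantize(w, n) - w).mean()
    print(n, err)  # mean error shrinks as the device offers more states
```

The more finely spaced states a synaptic device supports, the smaller the error introduced when an algorithm's weights are committed to hardware, which is the optimization concern Conrad describes.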

Prototype neural networks in hardware

A technical description of the LISTA device work was published in the journal Advanced Materials. “One of the thrusts of the HAANA grand challenge is to develop new devices for efficient and accurate neural network performance,” Alec says. “ENODe, because of its stability and relative ease of fabrication, is valuable for building prototype neural networks in hardware. We are also very interested in exploring direct connections between our polymer synapse and live biological neurons.”

The LISTA and ENODe artificial synapses each act like a battery with an additional input; i.e., with three terminals instead of two. Charge can be moved between the “gate” and “channel” electrodes. When the state of charge of the channel changes, so does its electronic conductivity, and with it the current between the source and drain. When no voltage is applied to the gate electrode, the channel conductivity remains constant.
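The three-terminal behavior described above can be captured in a toy model. This is a hedged sketch: the class, conductance range, and state count are invented for clarity, not measured device values.

```python
# Toy model of a battery-like, three-terminal artificial synapse:
# gate pulses move charge and nudge the channel conductance; with no
# gate voltage applied, the state (and hence the read current) holds.

class ArtificialSynapse:
    def __init__(self, g_min=0.0, g_max=1.0, n_states=500):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / (n_states - 1)  # one state's worth
        self.g = g_min  # channel conductance (analog of state of charge)

    def gate_pulse(self, direction):
        """A gate pulse moves ions, stepping the channel one state up or down."""
        self.g = min(self.g_max, max(self.g_min, self.g + direction * self.step))

    def read(self, v_source_drain):
        """Reading is non-destructive: source-drain current ~ g * V."""
        return self.g * v_source_drain

syn = ArtificialSynapse()
for _ in range(10):            # ten potentiating gate pulses
    syn.gate_pulse(+1)
i_before = syn.read(0.1)
i_after = syn.read(0.1)        # no gate voltage between reads
assert i_before == i_after     # state retained, like a charged battery
```

The key design point mirrored here is that writing (a gate pulse) and reading (source-drain current) use different terminals, so a read never disturbs the stored state.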

The devices get around the high switching energy required by two-terminal synapse models. In a two-terminal device, an energy barrier that is too low means mere thermal fluctuations can overcome it, with the undesirable result that the device switches states at random. “If you lower the voltage, it would simply switch back to its earlier state,” says Alec.

In ENODe, the barrier that maintains the device state is separate from the one that governs switching. Thus, the device needs a tiny bit of voltage to switch, and yet can retain that induced state for a long time. “Like a battery, it stays charged once you charge it,” says Alec.

A system composed of such devices is expected to use far less power than any silicon-based transistor system. The device achieves more than 500 states of conductivity and needs only 0.5 millivolts to switch between adjacent states. Though it requires on the order of a thousand times less energy than a silicon transistor, it still uses about 10,000 times the energy of a biological synapse. The Sandia synapse, however, can be scaled down in size, with a corresponding reduction in its energy requirements.
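The figures in this paragraph support some back-of-the-envelope arithmetic. The derived numbers below are inferences from the article's quoted values, not measured device specifications.

```python
# Numbers quoted in the article; everything derived is inference.
n_states = 500          # distinct conductivity states
dv_per_state = 0.5e-3   # volts to move between adjacent states

# Sweeping from the lowest to the highest state crosses 499 gaps.
full_swing_mv = (n_states - 1) * dv_per_state * 1e3
print(f"full swing ~ {full_swing_mv:.1f} mV")   # ~249.5 mV

# Relative switching energies: ~1,000x below a silicon transistor,
# yet ~10,000x above a biological synapse, so silicon sits roughly
# seven orders of magnitude above biology on this scale.
silicon_over_device = 1_000
device_over_biology = 10_000
silicon_over_biology = silicon_over_device * device_over_biology
print(f"silicon/biology ~ {silicon_over_biology:.0e}")
```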

More than 20 peer-reviewed publications

“Energy per computation has been reduced by a factor greater than one trillion since the first vacuum tube-based computers,” says Matt Marinella (5268), device lead on the HAANA project. “However, continuing this exponential trend in energy reduction is one of the greatest challenges in modern computing. We believe these brain-inspired devices are one of the most promising avenues to achieve this goal.”

The HAANA project is focused on developing neural-inspired algorithms and hardware architectures for imaging and cybersecurity applications. Now in its third year, HAANA has more than 20 peer-reviewed publications on a range of topics including deep learning, spiking digital architectures, and resistive memory devices. The project’s microelectronic hardware thrust, led by Matt, is tasked with designing and fabricating devices that are specifically tuned for neural-inspired algorithms that require time-consuming and power-hungry training with example data.

Polymer fabrication of the ENODe was led by Stanford’s Salleo. Testing was done by Alec and by Elliot Fuller (8342), a Sandia postdoc who also did much of the fabrication of the Sandia devices. Modeling was handled by Matt and by Sapan Agarwal (8956). Other researchers include Francois Leonard (8342), Robin Jacobs-Gedrim (1768), and Steven Plimpton (1444).

“These are currently only benchtop devices,” says Alec. “Polymers are cheap to manufacture, but before we go there, we need to demonstrate arrays capable of implementing brain-inspired algorithms, which is what we are currently working on.”