News

June 13, 2014

AREVA building on Sandia’s molten salt expertise

These mirrors at the National Solar Thermal Test Facility, called Compact Linear Fresnel Reflectors, are being used in conjunction with molten salt thermal storage in a pioneering approach to solar thermal energy production.  (Photo by Randy Montoya)

by Stephanie Hobby

A soaring structure on the south side of DOE’s National Solar Thermal Test Facility (NSTTF) combines two cutting-edge technologies in concentrating solar energy: Compact Linear Fresnel Reflectors and molten salt thermal storage. Using them together is a pioneering concept.

Today’s Compact Linear Fresnel systems use water or oil as the thermal fluid to capture heat from solar collectors. The hot fluid then heats water and converts it into superheated steam to drive a turbine connected to a generator that produces electricity.

With significant input from Sandia researchers, AREVA Solar designed the 100-foot-tall A-frame structure and Compact Linear Fresnel Reflectors, which are mirrors arranged in rows at ground level. The goal is to explore a different technology to collect and store heat generated by the reflectors in molten salt. If the system proves to be efficient and effective, AREVA, headquartered in Mountain View, Calif., will consider the technology for its solar plants around the world.

“Our goal is to demonstrate the viability and performance of a Linear Fresnel system that uses molten salt as a working fluid, thus allowing us to offer steam at higher temperatures (up to 585 degrees Celsius) and also deliver a cost-competitive storage solution for concentrating solar power projects,” says Robert Gamble, general manager, North America at AREVA Solar.

AREVA Solar approached Sandia because of its unique Molten Salt Test Loop and Sandia researchers’ accompanying expertise. The $10 million Molten Salt Test Loop, known as MSTL, was completed in late 2012 and is the only test facility in the nation that can provide real concentrating solar power plant conditions and collect data to help companies make commercial decisions. Sandia researchers have been testing components for external customers and have developed the expertise needed to help design and conduct experiments.

“A customer can come to us with an idea, and we have the knowledge to help them shape that idea into a working test,” says engineer Bill Kolb (6123). “In the world of molten salt, this is where you come for expertise.”

Compact Linear Fresnel Reflectors are attractive because they can generate a large amount of heat cost-effectively on a comparatively small land area. The mirrors are aligned to focus the sun’s reflected light on the top of the structure, which houses stainless steel receiver tubes. Molten salt is pumped through the tubes and returned to a hot storage tank, where it can be drawn on later to produce electricity. The receiver tubes sit at the focal point of the series of mirrors, and an additional set of mirrors across the top of the tubes captures and refocuses any sunlight that doesn’t hit the tubes directly, taking advantage of all available sunlight.
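The aiming follows directly from the law of reflection: each mirror strip is tilted so that its surface normal bisects the direction to the sun and the direction to the receiver tubes at the top of the structure. A minimal sketch of that geometry in Python, with the tower height, mirror position, and sun angle chosen purely for illustration (this is not AREVA’s design code):

    import numpy as np

    def mirror_tilt_deg(mirror_x, receiver_height, sun_elevation_deg):
        # Tilt from horizontal for a flat Fresnel mirror strip at ground level,
        # mirror_x meters from the base of the receiver tower, so that it
        # reflects sunlight onto a receiver at receiver_height meters (2-D case).
        a = np.radians(sun_elevation_deg)
        to_sun = np.array([np.cos(a), np.sin(a)])       # unit vector toward the sun
        to_recv = np.array([-mirror_x, receiver_height])
        to_recv = to_recv / np.linalg.norm(to_recv)     # unit vector toward the receiver
        normal = to_sun + to_recv                       # law of reflection: normal bisects the two
        normal = normal / np.linalg.norm(normal)
        return abs(90.0 - np.degrees(np.arctan2(normal[1], normal[0])))

    # Example: a strip 20 m from the tower base, a 100-foot (~30 m) receiver, noon sun overhead
    print(round(mirror_tilt_deg(20.0, 30.0, 90.0), 1))  # about 17 degrees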

“This really is based on an industry need for thermal storage, so what we have here is a proof-of-concept demonstration project, aimed at an industry need. The idea is all the feedback and lessons we learn will be fed into our optimized design for the power industry,” says AREVA’s lead project engineer Antoine Bera.

In the early days of concentrating solar power, the industry was focused on generating steam to turn turbines, and there was not much demand for thermal storage. Today, as the technology evolves, more companies are incorporating thermal storage into their designs. Molten salt is increasingly the medium of choice because it is affordable and abundant and can store thermal energy for long periods of time, providing greater flexibility for the electric grid.
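The storage itself is ordinary sensible heat: the energy held in a tank of salt is the salt mass times its specific heat times the temperature swing between the hot and cold tanks. A rough, purely illustrative calculation in Python (the salt properties, tank temperatures, and inventory below are assumptions, not figures from the project):

    # Sensible-heat storage in molten salt: Q = m * cp * (T_hot - T_cold)
    cp = 1.5e3                      # J/(kg*K), approximate specific heat of nitrate "solar salt"
    t_hot, t_cold = 565.0, 290.0    # assumed hot- and cold-tank temperatures, deg C
    mass = 1.0e6                    # assumed salt inventory: 1,000 metric tons
    q_joules = mass * cp * (t_hot - t_cold)
    print(f"~{q_joules / 3.6e9:.0f} MWh of thermal storage")  # roughly 115 MWh (thermal)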

 “This is enabling technology, and is providing a path to DOE’s SunShot goals. Essentially, molten salt allows dispatchable electrical energy and reduces the levelized cost of energy, which is the advantage of using molten salt technology,” says Concentrating Solar Technology department manager Subhash Shinde (6123).

Turning to Sandia was an easy choice, Gamble says. “Sandia’s first-of-a-kind Molten Salt Test Loop, along with leading molten salt expertise, made it an obvious choice. Shared lessons learned and expert reviews from Sandia’s molten salt experience in the fields of circulating molten salt, testing valves, instruments, and freeze/thaw cycles have helped drive decisions in AREVA’s research and design for molten salt and have proved the great value of this partnership,” he says.

The construction portion of the project was led by Sandia Facilities Project Manager Scott Rowland (4822) and his team. “This was a big undertaking, from a technical and contractual standpoint,” Scott says. “The customer, AREVA, was also the design team for the project. Our goal was to take molten salt from the facility and create a piping system 100 feet in the air with a football field of mirrors below to heat it even further. AREVA’s previous installations had been done with water and steam so this was a design modification to their system.”

The construction, commissioning, start-up, and initial testing have been completed, with further on-sun testing scheduled soon.


-- Stephanie Hobby


New class of terahertz detectors promises imaging breakthrough

François Léonard, left, and Alec Talin (both 8656) measure the thermal properties of a carbon nanotube terahertz detector using an infrared camera. Further improvements to the properties of the carbon nanotube material, François says, will lead to an even more effective design and better performance than the terahertz detector he and his collaborators have already achieved. (Photo by Dino Vournas)

by Mike Janes

Sandia researchers, along with collaborators from Rice University and the Tokyo Institute of Technology, are developing new terahertz detectors based on carbon nanotubes that could lead to significant improvements in medical imaging, airport passenger screening, food inspection, and other applications.

A paper to appear in the journal Nano Letters, “Carbon Nanotube Terahertz Detector,” debuted in the May 29 edition of the publication’s “Just Accepted Manuscripts” section. The paper describes a technique that uses carbon nanotubes to detect light in the terahertz frequency range without cooling.

Historically, the terahertz frequency range — which falls between the more conventional ranges used for electronics on one end and optics on the other — has presented great promise along with vexing challenges for researchers, says François Léonard (8656), one of the authors.

“The photonic energy in the terahertz range is much smaller than for visible light, and we simply don’t have a lot of materials to absorb that light efficiently and convert it into an electronic signal,” says François. “So we need to look for other approaches.”

Technology offers hope in medicine, other applications

Researchers need to solve this technical problem to take advantage of the many beneficial applications for terahertz radiation, says co-author Junichiro Kono of Rice University. Terahertz waves, for example, can easily penetrate fabric and other materials and could provide less intrusive ways for security screenings of people and cargo. Terahertz imaging also could be used in food inspection without adversely impacting food quality.

Perhaps the most exciting application offered by terahertz technology, says Kono, is as a potential replacement for magnetic resonance imaging (MRI) technology in screening for cancer and other diseases.

“The potential improvements in size, ease, cost, and mobility of a terahertz-based detector are phenomenal,” he says. “With this technology, you could conceivably design a hand-held terahertz detection camera that images tumors in real-time with pinpoint accuracy. And it could be done without the intimidating nature of MRI technology.”

Carbon nanotubes may help bridge the technical gap

Sandia, its collaborators, and François, in particular, have been studying carbon nanotubes and related nanomaterials for years. In 2008, François authored “The Physics of Carbon Nanotube Devices,” which looks at the experimental and theoretical aspects of carbon nanotube devices. (See the April 25, 2008, issue of Sandia Lab News.)

Carbon nanotubes are long, thin cylinders composed entirely of carbon atoms. While their diameters are in the 1- to 10-nanometer range, they can be up to several centimeters long. The carbon-carbon bond is very strong, so it resists any kind of deformation.

The scientific community has long been interested in the terahertz properties of carbon nanotubes, says François, but virtually all of the research to date has been theoretical or based on computer models. A handful of papers have investigated terahertz sensing using carbon nanotubes, but those have focused mainly on a single nanotube or a single bundle of nanotubes.

The problem, François says, is that terahertz radiation typically requires an antenna to achieve coupling into a single nanotube because of the relatively large size of terahertz waves. The Sandia, Rice University, and Tokyo Institute of Technology research team, however, found a way to create a small but visible-to-the-naked-eye detector, developed by Rice researcher Robert Hauge and graduate student Xiaowei He, that uses carbon nanotube thin films without requiring an antenna. The technique is thus amenable to simple fabrication and represents one of the team’s most important achievements, François says.
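The scale mismatch is easy to see with a back-of-envelope estimate: at 1 THz the free-space wavelength is about 0.3 mm, roughly five orders of magnitude larger than a single nanotube’s diameter, which is why coupling into an individual tube ordinarily needs an antenna while a macroscopic film can absorb the radiation directly. A short Python check with illustrative numbers only:

    # Rough scale comparison between a terahertz wavelength and a carbon nanotube
    c = 3.0e8                   # speed of light, m/s
    f = 1.0e12                  # 1 THz
    wavelength = c / f          # ~3e-4 m, i.e. about 0.3 mm
    nanotube_diameter = 2e-9    # ~2 nm, within the 1- to 10-nanometer range cited above
    print(f"wavelength / diameter ~ {wavelength / nanotube_diameter:.0e}")  # ~2e5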

“Carbon nanotube thin films are extremely good absorbers of electromagnetic light,” he says. In the terahertz range, it turns out that thin films of these nanotubes will soak up all of the incoming terahertz radiation. Nanotube films have even been called “the blackest material” for their ability to absorb light effectively.

The researchers were able to wrap together several nanoscopic-sized tubes to create a macroscopic thin film that contains a mix of metallic and semiconducting carbon nanotubes.

“Trying to do that with a different kind of material would be nearly impossible, since a semiconductor and a metal couldn’t coexist at the nanoscale at high density,” says Kono. “But that’s what we’ve achieved with the carbon nanotubes.”

The technique is key, he says, because it combines the superb terahertz absorption properties of the metallic nanotubes and the unique electronic properties of the semiconducting carbon nanotubes. This allows researchers to achieve a photodetector that does not require power to operate, with performance comparable to existing technology.

A clear path to performance improvement

The next step for researchers, François says, is to improve the design, engineering, and performance of the terahertz detector.

For instance, they need to integrate an independent terahertz radiation source with the detector for applications that require a source, François says. The team also needs to incorporate electronics into the system, and to further improve properties of the carbon nanotube material.

“We have some very clear ideas about how we can achieve these technical goals,” says François, adding that new collaborations with industry or government agencies are welcome.

“Our technical accomplishments open up a new path for terahertz technology, and I am particularly proud of the multidisciplinary and collaborative nature of this work across three institutions,” he says.

In addition to Sandia, Rice, and the Tokyo Institute, the project received contributions from researchers taking part in NanoJapan, a 12-week summer program that enables freshman and sophomore physics and engineering students from US universities to complete nanoscience research internships in Japan focused on terahertz nanoscience.


-- Mike Janes


Get ready for the computers of the future

PREPARING FOR FUTURE COMPUTING — François Léonard (8656) holds a wire mesh cylinder similar in design to a carbon nanotube that might form the basis for future computing technology. Experts at Sandia are exploring what computers of the future might look like — new types of machines that would do more while using less energy. (Photo by Randy Wong)

by Sue Major Holmes

Computing experts at Sandia have launched an effort to help discover what computers of the future might look like, from next-generation supercomputers to systems that learn on their own — new machines that do more while using less energy.

“We think that by combining capabilities in microelectronics and computer architecture, Sandia can help initiate the jump to the next technology curve sooner and with less risk,” says Div. 1400 Director Rob Leland. He has outlined a major Research Challenge into next-generation computing called Beyond Moore Computing, part of Sandia’s overall work on future computing.

For decades, the computer industry operated under Moore’s Law, named for Intel Corp. co-founder Gordon Moore, who in 1965 postulated that it was economically feasible to improve the density, speed, and power of integrated circuits exponentially over time. But speed has plateaued, the energy required to run systems is rising sharply, and industry can’t indefinitely cram more transistors onto chips.
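In its commonly cited form, the observation amounts to an exponential: the transistor count N after t years is roughly N(t) ≈ N0 · 2^(t/T), with a doubling period T of about two years. A small illustrative calculation in Python (the starting count is an arbitrary example, not a figure from the article):

    # Transistor count under a Moore's-Law-style doubling, for illustration only
    def transistors(n0, years, doubling_period_years=2.0):
        return n0 * 2 ** (years / doubling_period_years)

    # Starting from 1 million transistors, ten years of doubling every two years
    print(f"{transistors(1e6, 10):,.0f}")  # about 32 million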

The plateauing of Moore’s Law is driving up energy costs for modern scientific computers to the point that, if current trends hold, more powerful future supercomputers would become impractical due to enormous energy consumption.

Solving that conundrum will require new computer architecture that reduces energy costs, which are principally associated with moving data, Rob says. Eventually, computing also will need new technology that uses less energy at the transistor device level, he adds.

Sandia experts expect multiple computing device-level technologies in the future rather than one dominant architecture. About a dozen possible next-generation candidates exist, including tunnel FETs (field effect transistors, in which the output current is controlled by a variable electric field), carbon nanotubes, superconductors, and fundamentally new approaches such as quantum computing and brain-inspired computing.

Sandia’s facilities will play key role in researching future computing technology

Sandia is well-positioned to work on future computing technology due to its broad and long history in supercomputers, from architecture to algorithms to applications. Rob says Sandia can play a key role because of that background and two key facilities: the Microsystems and Engineering Sciences Applications (MESA) complex, which performs multidisciplinary microsystems research and development and fabricates chips to test ideas; and the Center for Integrated Nanotechnology (CINT), a DOE Office of Science national user facility operated by Sandia and Los Alamos national laboratories.

No one is sure what tomorrow’s high performance computers will look like. “We have some ideas, of course, and we have different camps of opinion about what it might look like, but we’re really right in the midst of figuring that out,” Rob says.

Erik DeBenedictis (1425) says Sandia can play an important role in creating breakthroughs that are not simply variations of transistors — developments such as computers that learn or technologies that move data from one part of the computer to another more efficiently. Those are crucial for big data problems.

What ultimately prevails might well be something not yet invented, Rob says.

“That’s the first challenge, to figure out what the new device technology is, then work through what the implications of that are, what sort of computer architecture is required to assemble that device into components and subsystems and systems,” he says.

New technology must be broadly adopted to drive improvements

Sandia needs both capability computing, which means finer resolution and more accuracy, and capacity computing, or running many different jobs simultaneously.

“So what does efficiency buy you? It allows you to have a bigger computer or more computers with the same amount of operating expense — paying your power bill,” says manager John Aidun (1425). “There’s no limit to the amount of efficiency we would like to achieve because really there’s no limit to the amount of computing we would like to do.”

Whatever technology comes next must be broadly adopted so it will drive continual improvements, similar to the way the 1947 invention of the transistor transformed society. It’s not enough to have a device that’s fast; it has to be something that can be built into a complete computer system, John says.

Thus, new technology must have commercial uses. “There will have to be some industrial base that supports it and produces it and that can be used to assemble a large number of these into a system that can be deployed for national security,” Rob says. “What we’d really like to do is figure out how to advance the state of the art for national security in a way that is more broadly deployable across society.”

The computer industry is exploring technologies that in essence are drop-in replacements for transistors with improved characteristics: different designs such as the FinFET, a 3-D rather than a flat configuration on a computer chip, John says. While the design would be moderately disruptive for industry, it’s still compatible with standard silicon fab technology and opens the potential for generations of ever-smaller FinFETs on a chip, he says.

While industry views a beyond-transistor technology as something far off, Sandia’s national security interests anticipate bigger changes will be needed sooner than industry would develop them on its own, John says. He estimates Sandia could have a prototype new technology within a decade.

Identifying best computer designs can help accelerate innovation

To accelerate the process, Sandia wants to identify computer designs that could take advantage of new device technologies and to demonstrate key components or fabrication steps that would lower the risk for industry by establishing technological feasibility.

“We’d be doing it with an eye toward helping industry give due attention to national security needs in computing,” John says.

The numerical capability developed in computers in World War II remains valuable today for such tasks as nuclear weapons simulations. But the modern era’s largest computing development, the Internet, deals with text and demands computing functions called integer calculation, also used in mobile computing.

Improving mobile computing could allow much more efficient and rapid data processing aboard satellites, so less data would need to be sent to Earth for processing.

“The mobility we see in cell phones and tablets is the closest match for the mobility needs of UAVs and satellites,” Erik says. “The energy and time required to transmit data to the ground, process it there, and send the answer back is a bottleneck, and it can be more resource-intensive than just computing on the device.”

He also suggested turning more programming over to cognitive computers to help programmers manage ever-faster computers. “While computers have gotten millions of times faster, programmers and analysts are pretty much as efficient as they’ve always been,” he says.

Cognitive computing can play role in pattern recognition

Cognitive computers might be able to do more to recognize patterns in satellite imagery, for example. People would still make the judgments, but computers would help by recognizing some lower-level patterns, he says. Up to now, programmers have created ways for computers to recognize images; computers didn’t learn on their own. A cognitive computer, however, would learn to identify patterns, Erik says.

“A computer can learn to recognize images pretty well. Humans assisted by a computer recognizing images could improve the ability significantly,” he says.

Researchers also must determine what hardware and software changes are needed so new devices are possible to manufacture and practical to operate. “You have to design over all those different considerations,” Rob says. “That’s what makes this a particularly challenging problem.”

Today’s computer systems rely on huge, longstanding investments in massive amounts of software.

“So we are strongly motivated to develop computers that will run old software that was optimized for traditional computer architectures that are not used today,” Erik says. “To break out of that, we have to find different architectures that are more energy-efficient at running old code and are more easily programmed for new code, or architectures that can learn some behaviors that once required programming.”

Since the software of today won’t unleash the full capabilities of the hardware of tomorrow, he expects computers in about a decade that can run both today’s software and new software. New software “would learn or would process information in fundamentally different ways, and become the most powerful aspect of the computer over time,” he says.


-- Sue Major Holmes


The brain: Key to a better computer

INSPIRED BY THE BRAIN — Sandia researchers are drawing inspiration from neurons in the brain, such as these green fluorescent protein-labeled neurons in mouse neocortex, with the aim of developing neuro-inspired computing systems. (Photo by Frances Chance, courtesy of Janelia Farm Research Campus)

by Sue Major Holmes

Your brain is incredibly well suited to handling whatever comes along, plus it’s tough and operates on little energy. Those attributes — dealing with real-world situations, resiliency, and energy efficiency — are precisely what might be possible with neuro-inspired computing.

“Today’s computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they’re horrible at just using common sense, seeing new patterns, dealing with ambiguity, and making smart decisions,” says manager John Wagner (1462).

In contrast, the brain is “proof that you can have a formidable computer that never stops learning, operates on the power of a 20-watt light bulb, and can last a hundred years,” he says.

Although brain-inspired computing is in its infancy, Sandia has included it in a long-term research project whose goal is future computer systems. Neuro-inspired computing seeks to develop algorithms that would run on computers that function more like a brain than a conventional computer.

“We’re evaluating what the benefits would be of a system like this and considering what types of devices and architectures would be needed to enable it,” says microsystems researcher Murat Okandan (1719).

Sandia’s facilities and past research make the Laboratories a natural for this work: its Microsystems and Engineering Sciences Applications (MESA) complex, a fabrication facility that can build massively interconnected computational elements; its computer architecture group and its long history of designing and building supercomputers; strong cognitive neurosciences research, with expertise in such areas as brain-inspired algorithms; and its decades of work on nationally important problems, John says.

New technology often is spurred by a particular need. Early conventional computing grew from the need for neutron diffusion simulations and weather prediction. Today, big data problems and remote autonomous and semiautonomous systems need far more computational power and better energy efficiency.

Ideal for robots, remote sensors

Neuro-inspired computers would be ideal for operating such systems as unmanned aerial vehicles, robots, and remote sensors, for solving big data problems such as those the cyber world faces, and for analyzing transactions whizzing around the world, “looking at what’s going where and for what reason,” Murat says.

Such computers would be able to detect patterns and anomalies, sensing what fits and what doesn’t. Perhaps the computer wouldn’t find the entire answer, but could wade through enormous amounts of data to point a human analyst in the right direction, Murat says.

“If you do conventional computing, you are doing exact computations and exact computations only. If you’re looking at neurocomputation, you are looking at history, or memories in your sort of innate way of looking at them, then making predictions on what’s going to happen next,” he says. “That’s a very different realm.”

Modern computers are largely calculating machines with a central processing unit and memory that stores both a program and data. They take a command from the program and data from the memory to execute the command, one step at a time, no matter how fast they run. Parallel and multicore computers can do more than one thing at a time but still use the same basic approach, and remain very far removed from the way the brain routinely handles multiple problems concurrently.

The architecture of neuro-inspired computers would be fundamentally different, uniting processing and storage in a network architecture “so the pieces that are processing the data are the same pieces that are storing the data, and the data will be processed with all nodes functioning concurrently,” John says.  “It won’t be a serial step-by-step process; it’ll be this network processing everything all at the same time. So it will be very efficient and very quick.”
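As a very loose software analogy (a toy sketch only, not Sandia’s architecture), the difference can be pictured as every node holding its own piece of state and updating it from its neighbors in the same step, rather than a central processor fetching data from a separate memory one instruction at a time:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                  # a toy network of 100 nodes
    weights = rng.normal(0.0, 0.1, (n, n))   # connection strengths between nodes
    state = rng.random(n)                    # each node stores its own data

    def step(state):
        # Storage and processing live in the same place: every node updates
        # concurrently from the states of the nodes connected to it.
        return np.tanh(weights @ state)

    for _ in range(10):
        state = step(state)
    print(state[:5])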

Unlike today’s computers, neuro-inspired computers would inherently use the critical notion of time. “The things that you represent are not just static shots, but they are preceded by something and there’s usually something that comes after them,” creating episodic memory that links what happens when. This requires massive interconnectivity and a unique way of encoding information in the activity of the system itself, Murat says.

More possibilities

Each neuron in a neural structure can have connections coming in from about 10,000 neurons, which in turn can connect to 10,000 other neurons in a dynamic way. Conventional computer transistors, on the other hand, connect on average to four other transistors in a static pattern.
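Rough numbers make the wiring gap concrete. Using the figures above, plus a commonly cited estimate of the brain’s neuron count and an assumed billion-transistor chip (neither figure comes from the article):

    # Back-of-envelope connectivity comparison (illustrative figures only)
    neurons, fan_in = 8.6e10, 1e4         # ~86 billion neurons, ~10,000 inputs each
    transistors, fan_out = 1e9, 4         # an assumed billion-transistor chip, ~4 links each
    print(f"synaptic connections:   ~{neurons * fan_in:.0e}")      # ~9e14
    print(f"transistor connections: ~{transistors * fan_out:.0e}") # ~4e9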

Computer design has drawn from neuroscience before, but an explosion in neuroscience research in recent years opens more possibilities. While it’s far from a complete picture, Murat says what’s known offers “more guidance in terms of how neural systems might be representing data and processing information” and clues about replicating those tasks in a different structure to address problems impossible to solve on today’s systems.

Brain-inspired computing isn’t the same as artificial intelligence, although a broad definition of artificial intelligence could encompass it.

“Where I think brain-inspired computing can start differentiating itself is where it really truly tries to take inspiration from biosystems, which have evolved over generations to be incredibly good at what they do and very robust against a component failure. They are very energy efficient and very good at dealing with real-world situations. Our current computers are very energy inefficient, they are very failure prone due to components failing, and they can’t make sense of complex data sets,” Murat says.

Computers today do required computations without any sense of what the data is — it’s just a representation chosen by a programmer.

“Whereas if you think about neuro-inspired computing systems, the structure itself will have an internal representation of the datastream that it’s receiving and previous history that it’s seen, so ideally it will be able to make predictions on what the future states of that datastream should be, and have a sense for what the information represents,” Murat says.

He estimates a project dedicated to brain-inspired computing will develop early examples of a new architecture in the first several years, but says higher levels of complexity could take decades, even with the many efforts around the world working toward the same goal.

“The ultimate question is, ‘What are the physical things in the biological system that let you think and act, what’s the core essence of intelligence and thought?’ That might take just a bit longer,” he says.

-- Sue Major Holmes

