Sandia LabNews

Defense-scale supercomputing comes to alternative energy research

ROB LELAND, right, discusses the capabilities of the just-dedicated Red Mesa supercomputer with Enterprise Transformation Div. 9000 VP Joe Polito, left, and National Renewable Energy Laboratory Director Dan Arvizu. Rob is director of Computation, Computers, Information, and Mathematics Center 1400. (Photo by Randy Montoya)

Improved energy extraction from sun, wind and other renewable resources could take decades if researchers had to rely solely on physical testing. Instead, Sandia and DOE’s National Renewable Energy Laboratory (NREL) on April 7 formally dedicated the 180-teraflop, highly efficient Red Mesa supercomputer to simulate and model a number of these problems.

Welcoming 40 visitors to the ribbon-cutting event in Tech Area 1, Joe Polito, VP of Enterprise Transformation Div. 9000, congratulated Golden, Colo.-based NREL, Sandia, Sun/Oracle, Intel, and the DOE/Sandia Site Office on creating “a state-of-the-art computing platform to address pressing energy problems for the country, using the most energy-efficient supercomputer in the country.”

Red Mesa, when combined with Red Sky, its architecturally similar Sandia parent, reaches a LINPACK speed of 500 teraflops, making it the 10th fastest computer in the world.

In just six weeks, NREL researchers solved a cornstalk-to-energy problem on Red Mesa that formerly would have taken six months.  “We need supercomputing,” said Steve Hammond, director of NREL’s Computational Science Center, “to help us learn to transform forestry and agricultural by-products into fuels and energy more rapidly and economically. We also need to better understand the fuel-injection atomization process, and thermochemical conversion technologies in general. And we need to learn how to minimize waste products like tar, which are expensive to clean up in the biomass gasification process and that we shouldn’t be creating in the first place.”

“Let’s get the job done”

“The country faces great energy challenges,” NREL Director Dan Arvizu said. “Their complexity and long-term nature will require huge private and public investments. Helping that happen is part of the mission of NREL. Meanwhile, lab partnerships are difficult; we know that. But we’ve progressed [in aligning the respective expertise of the two labs] because of a feeling of ‘Let’s get the job done.’ With good researchers, you don’t have to force a collaboration. It happens naturally.”

Said Rob Leland, director of Computation, Computers, Information, and Mathematics Center 1400, “We’re at the end of the machine-development stage and at the early stages of starting the science-and-discovery journey for this partnership we’ve put together. That’s exciting to me.”

The congressional directive to DOE put the situation clearly: “The Department is directed to use $12,000,000 . . . to execute an existing memorandum of agreement with Sandia National Laboratories for supercomputing equipment and capacity to support the National Renewable Energy Laboratory’s Energy Efficiency and Renewable Energy-based mission needs. Numerical simulations on high-performance computers enable the study of complex engineering systems and natural phenomena that would be too expensive, or even impossible, to study by direct experimentation. This resource will be located at Sandia to take advantage of the [Labs’] more than 20 years of experience with high-performance computing hardware and software development. The Committee expects both laboratories to contribute in their respective areas to science and energy excellence.”

The committee’s decision was seconded in a dedication speech by former Sandian and current Intel chief technology officer for high-performance computing Bill Camp, who led the design of Sandia’s Red Storm supercomputer, the most oft-copied supercomputer in the world.

“Even though other labs may have more money to spend on computing,” he said, “when [leaders] in our industry chose an innovative national lab to work with, they consistently chose Sandia Labs.” Sandia worked with Caltech and nCUBE, Camp recalled, to develop the first parallel processing computer. Sandia also designed the first teraflop computer, ASCI Red. “This is an area where you have to eat your own product,” he said. That is, “you can’t grow high-performance computing without using previous high-performance computing.” Red Mesa, he implied, was in that tradition.

Operational innovations make Red Mesa a kind of “green” machine, said John Zepper (9326). “Typically at a supercomputer,” he said, “standing on one side of it, you need to wear a bathing suit due to the hot air, and on the other, a parka due to the cold air.” Because rectifying huge cooling inequities produces huge power bills, an innovation used on Red Mesa produced the Glacial Door — a door capping each cabinet that keeps cooling mechanisms within a few inches of the heat source. Witnesses to the ribbon cutting, test-strolling the aisles of the supercomputer, detected no change in temperature. With the new improved airflow system, air exiting the array of supercomputer cabinets is actually slightly cooler than when it came in.

Changes save millions of dollars

Other improvements included a better electrical power distribution system that allowed for easier installation and removal of electrical wiring. The Red Mesa machine is configured with an all-optical, connector-based InfiniBand network.

“Our changes, both in software and hardware, will save millions of dollars over the life of this machine,” John told the group.

These changes only came about, noted Rob Leland, because “vendors were willing to take technical and economic risks that permitted us to deploy a dozen significant innovations. This [off-the-shelf computer and its accompanying innovations] represented quite a big risk and vendors were willing to go on this journey with us because they saw strategic value to their business. And so we got price points that were remarkable, which means value to the taxpayer.”

Mark Hamilton of Oracle concurred. “We made a complete solution out of off-the-shelf but best-of-breed components integrated from multiple sources,” he said, “creating one of the fastest computers in its hardware, cable, switching, storage, and software.”

The benefit to Oracle: The company, having proven out the innovations on Red Mesa, is introducing the same innovations in smaller Oracle machines.

Said Margie Tatro, director of Energy Systems Center 6200, “Dan Arvizu and Rick Stulen signed an MOU to bring high-performance computing to the renewable energy mission. I want to thank DOE, as well as the urgency and relevancy of our partners in the private sector, for helping Sandia and NREL overcome obstacles and make this happen.”

Capturing hearts and minds

Megan McCluer, DOE program manager for wind and hydropower technologies, brought up another subject to model: energy transmission. “When the source isn’t at the load center, how do you get supply to demand when it comes at different frequencies, voltages, and phases, while avoiding congestion issues and optimizing for lowest cost?

 “We have to capture the public’s hearts and minds by investing in resources that make a difference to the consumer,” she said.

“We look forward to many years of productive collaboration working together on nationally important energy supply problems,” said Joe Polito, summarizing the partnership.

NREL is DOE’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for DOE by The Alliance for Sustainable Energy, LLC.