By Will Keener
The path to the hydrogen economy leads through some familiar territory. Although there are many long-term options for providing hydrogen as a future fuel, coal is a leading contender in the near term.
That’s the view of Chris Shaddix (8367), principal investigator for clean coal combustion at Sandia’s Combustion Research Facility. While someday we may be able to produce hydrogen by splitting water molecules with high-temperature heat from nuclear power reactors, or through renewable energy technologies, right now the most cost-effective way to produce hydrogen is with coal, Chris says.
Chris and his colleagues are involved in a number of experiments to optimize coal combustion for maximum energy and minimum pollution. While traditional coal combustion produces many harmful emissions, modern plants can meet environmental regulations for burning coal cleanly, Chris says. This can be costly to utility companies, but the costs of competing fuels, particularly natural gas, have climbed to the point where burning clean coal is competitive.
Figure in the possible benefits of sequestration of carbon dioxide emissions from the stacks (see Lab News, Jan. 20 stories beginning on page 1) and coal looks very promising for generating both electricity and hydrogen to provide a bridge to that future technology. “Utilities are starting to invest in coal,” says Chris.
Two different approaches to burning coal are now under study. The first, called oxy-combustion, burns coal in pure oxygen and is driven by concern over emissions of CO2 and other pollutants. The second, called gasification, burns coal only partially to create a fuel gas. With current knowledge, burning coal in oxygen is a near-term solution that can produce exhaust streams of nearly pure CO2, says Chris. Harmful pollutants like nitrogen oxides, sulfur compounds, and mercury are virtually eliminated.
The oxy-combustion approach is favored by companies in Japan, Canada, Germany, and elsewhere where pilot plants are under construction. “Because the US didn’t sign the Kyoto accord, companies here are not as interested,” says Chris. “They tend to favor gasification technologies, which offer higher efficiency and low pollution formation.”
One of these technologies, called steam reformation, combines the coal with steam in a hot environment to produce a “syngas,” composed mostly of carbon monoxide (CO) and hydrogen. Once the syngas is produced, it can be burned directly in a combustor, such as a turbine, to produce power. Or the syngas can be reacted with more steam to shift the remaining CO to CO2 and produce more hydrogen. The CO2 can be sequestered, and the hydrogen can be put to many uses: powering a car through an engine or fuel cell, driving a turbine to produce electricity, or propelling an aircraft.
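The two reaction steps described here, steam gasification followed by the water-gas shift, can be written out as textbook idealizations (real coal chemistry involves many additional species and side reactions):

```latex
% Steam gasification: carbon and steam form synthesis gas
\mathrm{C} + \mathrm{H_2O} \longrightarrow \mathrm{CO} + \mathrm{H_2}
% Water-gas shift: remaining CO reacts with additional steam,
% yielding CO2 (for sequestration) and more hydrogen
\mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2}
```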
DOE has already demonstrated this process in two pilot projects. The next step is for the US to combine coal gasification with hydrogen production and CO2 sequestration, says Chris. At the same time, several commercial proposals are afoot in the US for private utilities to build these plants without government support.
Working with the National Energy Technology Lab, Morgantown, W. Va., the CRF is focused on understanding the chemistry and physics of coal combustion, using its state-of-the-art diagnostic capabilities and modeling expertise. “We apply computational models of reacting particles to the data to understand why we see the results we see,” says Chris.
Alejandro Molina (8367), a Sandia postdoc working with Chris, lights a flat-flame burner plate in the CRF’s small-scale lab for coal studies and adjusts the amount of coal particles fed through the burner. A two-foot-tall chimney around the burner protects against disturbances inside the lab and enables researchers to analyze the combustion.
He shows a visitor a bright zone, just above the burner, where initial combustion occurs. A longer vertical track of flame is known as the char oxidation zone. “To optimize coal combustion for carbon sequestration, it is very important to understand how fast it burns and releases energy,” Alejandro says. Burning coal with air, which is predominantly (79 percent) nitrogen, creates the problem of separating CO2 and nitrogen before sequestration. “If you use pure oxygen instead of air, you get water and CO2, so you only have to condense the water and you have 100 percent CO2.”
One problem with this oxygen approach has been a high flame temperature, he continues, which could rapidly destroy the metal burner materials. One solution is to recycle cooler CO2 into the burner to cool the flame temperature. “The question is: what is the right proportion of oxygen and CO2?”
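The nitrogen-dilution point can be illustrated with a back-of-the-envelope stoichiometry sketch. This is purely illustrative: coal is approximated as pure carbon, and the water vapor and minor species present in a real flue gas are ignored.

```python
# Illustrative stoichiometry: dry flue-gas composition for air-fired
# versus oxygen-fired combustion, with coal approximated as pure carbon.

def flue_gas(n2_per_o2: float) -> dict:
    """Burn 1 mol of carbon in 1 mol of O2 plus accompanying N2.

    C + O2 -> CO2; the nitrogen passes through as an inert diluent.
    Returns mole fractions of the dry flue gas.
    """
    co2 = 1.0
    n2 = n2_per_o2  # mol of N2 carried along per mol of O2
    total = co2 + n2
    return {"CO2": co2 / total, "N2": n2 / total}

# Air is roughly 79% N2 / 21% O2 by volume: 79/21 mol N2 per mol O2.
air_fired = flue_gas(79.0 / 21.0)
oxy_fired = flue_gas(0.0)  # pure oxygen: no nitrogen diluent

print(air_fired)  # CO2 is only about a fifth of the dry flue gas
print(oxy_fired)  # CO2 is essentially all of it once water condenses out
```

Under these assumptions the air-fired flue gas is only about 21 percent CO2, which is why separating CO2 from nitrogen dominates the cost of sequestration in conventional plants.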
Alejandro has been working on these experiments for about two years in the small-scale lab, but work is now under way to bring two other CRF facilities into the research. A gasification lab will help the researchers study the behavior of coal gas under pressure. And while the small-scale work focused on particle behavior and fundamental-scale measurements, says Chris, a large-scale lab will focus on gas issues within the reactor.
Two new reactors
The gasification lab, expected to be operational by this summer, includes a two-inch tube within a pressure vessel. “Gasification is slower than combustion, so it is done under pressure to increase the reaction rates,” Chris explains. The new apparatus is instrumented for laser diagnostics and sample collection and includes electrical heaters to preheat the gases so they flow through a vertical center section where data can be collected.
The third reactor is a two-story flow reactor that will help the team study oxygen-coal combustion with recycled CO2. The unit includes a six-inch-diameter reactor tube running downward below a 75-kilowatt thermal heater. Specially designed hardware injects highly refined coal particles into the top of the reactor tube. As the reaction moves down the tube, equipment allows sampling and laser diagnostic testing.

A key effort will be to measure the concentrations of ammonia and hydrogen cyanide, precursors to nitric oxide formation, says Chris. The coal “char” phase of burning can eliminate nitric oxide, creating the possibility that in actual operations more NO is consumed than is produced, leading ultimately to a commercial application. Large-scale tests in this reactor are expected to begin in a few months.
-- Will Keener
His invention, the Counter Rotating Ring Receiver Reactor Recuperator (CR5, for short), splits water into hydrogen and oxygen, using a simple, two-step thermochemical process.
The CR5 is a stack of rings made of a reactive ferrite material, consisting of iron oxide mixed with a metal oxide such as cobalt, magnesium, or nickel oxide. Adjacent rings rotate in opposite directions. Concentrated solar heat is reflected through a small hole onto one side of the stack of rings. The side of the rings in the sunlit area is hot, while the other side is relatively cold. As the counter-rotating rings pass each other between these regions, the hot rings warm the cooler rings and the cooler rings cool the hot ones. This exchange recuperates heat within the system, limiting the energy input required from the sunlight.
Steam runs past the rings on the cooler side, triggering a chemical reaction in which the ferrite material grabs oxygen out of the water, leaving hydrogen. The hydrogen is then pumped out and compressed for use.
A separate chemical reaction that drives off the oxygen occurs where the sunlight directly illuminates the ferrite material at the solar receiving end. This is needed to regenerate the rings so they can react with more water during the next cycle.
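The two-step cycle can be sketched with the simplest iron-oxide chemistry; the actual mixed cobalt, magnesium, or nickel ferrites behave analogously but have different compositions:

```latex
% Solar (hot-side) step: concentrated sunlight drives oxygen out of
% the ferrite, regenerating the reactive material
\mathrm{Fe_3O_4} \longrightarrow 3\,\mathrm{FeO} + \tfrac{1}{2}\,\mathrm{O_2}
% Cool-side step: the reduced ferrite strips oxygen from steam,
% releasing hydrogen
3\,\mathrm{FeO} + \mathrm{H_2O} \longrightarrow \mathrm{Fe_3O_4} + \mathrm{H_2}
```

The net result of one full cycle is water split into hydrogen and oxygen, with concentrated sunlight supplying the energy.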
“This is out-of-the-box thinking,” says Rich, principal investigator of the internally funded Laboratory Directed Research and Development (LDRD) project. “We are combining a mechanical engine with a chemical producing device — something not done before to produce hydrogen.”
And it’s something that probably only Rich could have contrived because of his unique background. He has knowledge of splitting water using high-temperature solar techniques — the theme of his PhD dissertation at the University of Minnesota — and of concentrated solar gained from his 15 years working with Stirling engine solar collector systems at Sandia.
Stirling dishes (named after Robert Stirling, who invented the engine at their heart in 1816) generate electricity by focusing the sun’s rays onto a receiver, which transmits the heat energy to an engine. The engine is a sealed system filled with hydrogen, and as the gas heats and cools, its pressure rises and falls. The change in pressure drives the pistons inside the engine, producing mechanical power. The mechanical power in turn drives a generator and makes electricity. The key to a Stirling engine’s high efficiency is heat recuperation, the same principle the CR5 exploits.
Instead of making electricity like the Stirling systems, Rich’s invention will produce hydrogen.
Rich envisions fields of large mirror dish collector systems making hydrogen, which would be stored and sent to stations where hydrogen-electric hybrid vehicles could “fill up.”
He and co-collaborator Jim Miller (1815), a chemical engineer, have been testing materials at the University of New Mexico’s Advanced Materials Laboratory to determine which will be best for attracting oxygen in the cool stage and releasing it in the hot stage.
“This invention calls for a new type of material,” Rich says. “We have to come up with one that is black and absorbs heat from the sun and which has the right oxidation reaction.”
Through the tests at the Advanced Materials Laboratory, Rich and Jim have shown that suspending the ferrite material in zirconia, a refractory oxide that withstands high temperatures, produces a high yield of hydrogen “quickly and repeatedly,” even after the mixture is formed into complex solid shapes. Without the zirconia, the ferrite material doesn’t hold together well; it essentially forms a slag and stops reacting.
The ferrite/zirconia structures are laid line-by-line using robocasting, a method developed and perfected by other team members that relies on robotics for computer-controlled deposition of materials through a syringe. The materials flow like toothpaste and are deposited in thin sequential layers onto a base to build up complex shapes.
A near-future step will be to build a prototype of the CR5, Rich says. Rather than constructing large dish mirrors to collect the concentrated solar, as is his ultimate goal, the initial tests will be done in an indoor solar furnace (see front-page photo) using a heliostat at the DOE-owned, Sandia-operated National Solar Thermal Test Facility.
Rich says the problem he and Jim are attempting to solve is extremely difficult. “The water molecule (in the steam) is a tough nut to crack,” Rich says. “There is no guaranteed success. But that’s the spirit of an LDRD. It allows you to take a chance. I am grateful for this opportunity. We are putting different things together in ways other people haven’t thought of before. It’s long-term stuff but ultimately can result in a clean alternative to pulling oil out of the ground.” -- Chris Burroughs
By Nancy Garcia
The µChemLab project began with a problem: how to detect trace explosives with a compact, field-portable device. The device relied on miniaturizing a standard laboratory technique, chromatography, in which mixtures are separated into their components as they move through a column under an electric field. But the hair-thin chromatographic columns, shrunk and coiled onto a microfluidic chip, suffered from a “racetrack” effect at the turns: particles on the outside of a curve had farther to travel, which smeared the sharp peaks needed for identification.
Unfortunately, the full transport problem was too complex to model even on supercomputers. But Eric Cummings, a principal investigator during the 10-year development funded through Laboratory Directed Research and Development projects, saw that the velocity field could be represented by a simple equation with a few constraints, leading to a theory of ideal electrokinetic flow.
Mathematicians Stuart Griffiths (8700) and Robert Nilson (8764), meanwhile, simulated the transport of chemicals in channels in a six-month effort that led to patented turns and bends that minimized dispersion.
Still, Eric believed optimizing any given case should be shorter. “In the back of my mind it seemed there had to be a general solution that was very simple because the theory behind it was so simple,” he says.
He had been trying to verify ideal electrokinesis experimentally. Particles moved differently at boundaries, so the team expanded the number of boundaries by creating channels with posts in the middle, allowing them to study the behavior there. The posts made moving and sorting particles akin to having racers navigate a forest instead of an open plain, albeit a very small forest: the channels run 50 to 100 microns wide and 5 to 50 microns deep. The posts also made it possible to move particles effectively at an applied voltage low enough not to heat, and possibly harm, the sample.
Working with Anup Singh (8321), Eric investigated using fluorescent liposomes to track the progress of particles through the channels. Their conductance, however, made the liposomes behave poorly as markers for electrokinetic mobility: they tended to stream along the post arrays in a phenomenon known as dielectrophoresis, particularly if the applied field was at an angle to the array, and to concentrate in certain regions over time.
That unforeseen development led to the creation of a sorting and trapping technique known as insulating dielectrophoresis, or iDEP (because the posts are made of a material, such as silica or plastic, that is electrically insulating).
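Dielectrophoresis is not spelled out quantitatively in the story; for reference, the standard textbook expression for the time-averaged force on a small spherical particle is:

```latex
% Time-averaged dielectrophoretic force on a sphere of radius r
% suspended in a medium of permittivity \varepsilon_m:
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m r^3 \, \mathrm{Re}[K(\omega)] \, \nabla |\mathbf{E}|^2
% K(\omega) is the Clausius-Mossotti factor, which depends on the
% particle and medium properties; the insulating posts create the
% field gradients \nabla |\mathbf{E}|^2 that drive trapping.
```

Because the force scales with the gradient of the field squared, the non-uniform fields around the insulating posts, not the posts themselves, do the sorting and trapping.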
Trapping by tilting the array created non-uniform fields. So that overall flow would not be affected, the team conducted a quick analysis to find the conditions needed to keep the fields uniform within the channel regions.
The analysis indicated that varying the channel depths by interspersing deeper regions with more shallow “spillways” could not only provide uniform fields on each side of the junction, it also allowed designers to incorporate angles in the shallower regions to turn the flow without causing dispersion. This ability to turn the channels permitted creating networks by using calculations simple enough to perform on a calculator.
This breakthrough, several years after the research started, now offered a general solution to the initial dispersion problem.
Experimental investigations of microfluidic channels containing cross-channel ridges designed using Eric’s spatially uniform field approach resulted in two publications in the fall of 2005 in Analytical Chemistry. One demonstrated the influence of manufacturing limitations on fluid flow in ideally designed channels and the other demonstrated continuous separation and concentration of bacterial cells. The work was conducted by postdoctoral researchers Andrew Skulan and Louise Barrett in Microfluidics Dept. 8324 under principal investigator Greg Fiechtner.
Eric’s approach has also led to other devices. One sorts particles into parallel streams by their volume and conductivity and is referred to as a particle spectrometer.
The team has found it useful to design these microfluidic spectrometers with arrays of ridges patterned through photolithography.
“Once you’ve paid for one ridge you can have more,” Eric says. They call this latest approach a corduroy design methodology.
With it, they have concentrated materials by a factor of 6,000 in 16 seconds, although the upper limit is just a function of how much flow can be pushed through. They have also broadened the concept to concentrate, mix, purify, filter, or sort molecules by perturbing the flow in a variety of ways besides using an obstacle such as a ridge or valley.
The advantages are that the separations can occur at dramatically higher rates than conventional methods, and they can be based on mechanical or electrical properties not previously exploited by other methods. The speed comes about because separations occur throughout the entire breadth of the microfluidic channel, since they rely on bulk behavior that occurs in a gradient, rather than surface phenomena requiring interaction with a boundary. Theoretically, a large protein might be separated in 10 milliseconds — five orders of magnitude quicker than through conventional chromatography.
A further advantage is that to handle a greater volume, the process could be carried out continuously and in parallel.
The team members believe they are heading toward near-instantaneous separations and manipulations of cells, proteins, and other molecules that can aid research in genetics, proteomics, or sorting and preparation of novel materials; development of medical diagnostic devices; and rapid detection of biological or chemical incidents, among anticipated applications. About such applications Andrew notes, “You can really let your imagination run wild. It’s a very simple design, but there is just a wealth of different behaviors we can obtain by varying the conditions we apply.”

The researchers were thrilled to take advantage of the anomalous behavior they observed by employing their broad range of expertise to create better devices to support national security and related missions. “I’m like a kid in a candy store,” Andrew says. -- Nancy Garcia