News

August 23, 2013

New initiative will bolster infrastructure for hydrogen vehicles

by Mike Janes

The broad public and government interest in renewable energy and the hope for a zero-emission transportation future seem to be at an all-time high. So what is the remaining hurdle to overcome before we see widespread adoption of clean, hydrogen-powered vehicles on the road?

In a word: infrastructure. For hydrogen-based vehicles, very little infrastructure currently exists. But that could change soon, and Sandia’s Center for Infrastructure Research and Innovation (CIRI) on the Livermore Valley Open Campus (LVOC) hopes to contribute in a big way.

Daniel Dedrick (8367), the Labs’ hydrogen program manager, calls CIRI a “collaboration facility” modeled on the success of the Combustion Research Facility (CRF). For years, the CRF has played a critical role in partnering with industry to provide a science base for fine-tuning the internal combustion engine and making it cleaner and more efficient.

“CIRI is a partnership-based RD&D facility focused on hydrogen infrastructure,” Daniel says. “More specifically, it will be a coordination of critical materials, science, and engineering research capabilities at Sandia that are needed to improve performance and reduce costs associated with hydrogen infrastructure.”

CIRI will incorporate new resources, including a full-scale test facility where equipment manufacturers can test their hardware, and existing capabilities such as a material mechanics lab that analyzes, characterizes, and predicts the behavior of various materials. It will also include a testbed refueling station where industry and research partners can run experiments to better understand refueling dynamics.

CIRI’s partners, Daniel says, will include industry members up and down the supply chain, from companies that sell gases and chemicals to those that manufacture important components like compressors, tanks, and tubing. Testing and optimizing those components in a systems environment — another feature in the works for CIRI — will be critical in developing hydrogen refueling stations and making them more economically and technically feasible for consumers, fuel providers, and station operators.

Industry, of course, will be key to CIRI’s success, and Daniel points out that the nearby CRF already has a long history of partnerships with all of the US automakers. He hints that “significant support” for CIRI has already been assured from a number of critical companies.

The long-term vision for CIRI also includes research into integrating vehicle systems with the electrical grid, to understand how hydrogen can help address issues associated with renewables integration, energy storage, and distributed generation.

Hydrogen on the upswing

Since the introduction more than 100 years ago of the modern, now-ubiquitous internal combustion engine that operates on gasoline, the nation has developed, honed, and successfully deployed tens of thousands of fueling stations to power our vehicles. The driving lifestyles Americans have been accustomed to over the decades wouldn’t be possible without that network of fueling stations.

A hydrogen fuel cell-based infrastructure, however, is limited, largely because the technology is relatively immature in the consumer environment and requires a different approach compared to the existing liquid-based fueling infrastructure.

Still, despite the technical challenges, inconsistent government investments, and competing technologies, the automotive industry has stayed the course, says Daniel.

“The automakers have stayed very consistent with their business models these past few years,” he says. “They understand the benefits of hydrogen fuel cell vehicles and have continued to move the technology forward.” Hydrogen fuel cell vehicles, Daniel asserts, compare favorably to battery electric vehicles, mainly because fewer high-cost materials are necessary with fuel cell vehicles.

Recent news reports would appear to confirm the notion that automakers remain bullish on hydrogen fuel cell vehicles.

According to a July report from Bloomberg, General Motors and Honda Motor Co. are now partnering to bring hydrogen fuel cell vehicles into the marketplace.

In November, Toyota Motor Corp. is expected to unveil its own fuel cell sedan, one that is expected to go on the market in 2015. Hyundai Motor Co. and Mercedes-Benz are also planning commercial availability of hydrogen vehicles in 2015.

In fact, virtually every major automobile manufacturer has designed and produced hydrogen fuel cell vehicles. In May, DOE Undersecretary Dave Danielson announced H2USA, a public/private partnership focused on hydrogen infrastructure and designed to help coordinate the nationwide rollout of hydrogen fueling stations. Sandia became a member of H2USA this month.

Daniel and Sandia’s transportation energy experts, along with their federal sponsors, understand the role that fuel cell electric vehicles can play in a zero-emission transportation future. The Labs’ vision, Daniel says, is for CIRI to help industry with the all-important hydrogen infrastructure part of the equation.

Reducing cost, improving performance of hydrogen systems

Sandia’s own hydrogen and fuel cells program has already significantly informed and influenced the hydrogen transportation community on key technical issues, and Daniel says researchers now have data demonstrating the need for further R&D on infrastructure.

And what are the specific infrastructure issues that need to be addressed? Daniel points to three areas: high-pressure compression, storage, and delivery.

Compressed hydrogen is stored at high pressures (10,000 psi) in hydrogen vehicles, and the hardware used with that technology is relatively immature in the consumer environment. This leads to deficiencies in reliability and efficiency that need to be addressed. More research into how hydrogen is compressed, stored, and dispensed is required to make the infrastructure components more robust and dependable.
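
For readers who think in SI units, that storage pressure translates as follows; this is just a quick conversion sketch in Python, and the comparison to a nominal 700-bar onboard standard is a commonly cited industry figure rather than a number from the article:

# Quick unit conversion for the onboard storage pressure mentioned above.
# The psi-to-pascal factor is standard; the 700-bar comparison is the commonly
# cited nominal service pressure for fuel cell vehicles, noted only for context.
PSI_TO_PA = 6894.757          # pascals per psi

pressure_psi = 10_000
pressure_mpa = pressure_psi * PSI_TO_PA / 1e6    # megapascals
pressure_bar = pressure_psi * PSI_TO_PA / 1e5    # bar

print(f"{pressure_psi} psi is about {pressure_mpa:.0f} MPa, or {pressure_bar:.0f} bar")
# Output: roughly 69 MPa (about 690 bar), close to the nominal 700-bar standard.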

Second, Daniel says, the current capital cost of a hydrogen fueling station is higher than that of a traditional gasoline station and must be lowered significantly. A typical hydrogen station currently costs roughly $1-2 million from start to finish, while a gasoline station today runs around $200,000 to $300,000.

“A main reason for this is the high cost of materials used for a hydrogen station,” Daniel explains. High-nickel steels, in particular, are expensive, so new research is needed to find lower-cost classes of materials that will perform well and last long. “We also need to understand how the entire [hydrogen fueling station] system behaves so that we can refine it, introduce new technologies as needed, and shrink the system’s footprint,” Daniel adds.

For now, Daniel’s focus is on finalizing partnerships with industry and other national laboratories, with a grand opening of CIRI tentatively planned for early next year.

“If we’re successful with CIRI, we’ll be able to say that we enabled industry by providing them with a technical, scientific basis for producing cost-competitive hydrogen fueling stations that can be deployed anywhere, whether it’s downtown San Francisco, Washington, D.C., Livermore, or Albuquerque,” says Daniel.

 

-- Mike Janes


Size matters when monitoring reactors from a distance

FLUX CAPACITOR? — No, but as shown in this image of antineutrino fluxes around the world, WATCHMAN will have plenty to watch. (Image courtesy of Glenn Jocher and John Learned, University of Hawaii)

by Patti Koning

 At a mine in Virginia, researchers from Sandia and Lawrence Livermore National Laboratory (LLNL) are hard at work helping NNSA meet a goal set in its 2011 Strategic Plan: to demonstrate remote monitoring capabilities for reactor operations by 2016.

“This is a really hard problem to solve,” says Peter Marleau (8132), the project lead for Sandia’s part of the project. “Our ultimate goal is to create a detector that can find or exclude hidden 10-megawatt reactors at distances up to hundreds of kilometers.”

A reactor of this size could potentially be hidden from other detection methods while still breeding plutonium in quantities that could be of concern for nuclear weapons proliferation.

About five years ago, a collaboration between Sandia and LLNL proved the effectiveness of an antineutrino detector for reactor monitoring. Nuclear decay produces large quantities of antineutrinos — the antiparticles of neutrinos, fast-moving elementary particles with minuscule mass that pass through ordinary matter undisturbed. An antineutrino detector tracks the particles as they emanate from a reactor to measure operational status and thermal power.

But that technology is limited by range; to pick up antineutrino signatures, the detector must be very close to the core of the reactor. To monitor from further away — tens to hundreds of kilometers — you need a really big detector, Peter says. And by really big, he means millions of kilograms of detector material.

One of the most viable options for scaling to these sizes is water lightly doped with gadolinium. “This is a path that science hasn’t gone down yet,” says Peter. “It’s pretty exciting research. We’re pushing technology that could be very useful for physics detectors in the future.”

Sandia and LLNL are teaming with six universities — the University of Hawaii, Hawaii Pacific University, Virginia Tech, UC Berkeley, UC Davis, and UC Irvine — on a project funded by NNSA’s Office of Defense Nuclear Nonproliferation Research and Development. The project is called WATCHMAN, for WATer CHerenkov Monitor of ANtineutrinos.

There is some precedent for a detector of this scale. Japan’s Super-Kamiokande detector, or Super-K, has a massive 50-kiloton tank that stands 136 feet high. But as a neutrino observatory searching for proton decay, neutrinos, and cosmic rays, Super-K is not sensitive to reactor antineutrinos. Super-K also uses ultra-clean water, not the gadolinium-doped water proposed for the WATCHMAN detector.

Because the WATCHMAN detector will break so much new ground, the team is starting with basic research to optimize the design of a smaller, kiloton-scale gadolinium-doped water detector and the location of its demonstration site. They are about halfway through a two-year scoping study.

Understanding background at shallow depths

One challenge for a detector of this size is background radiation. The bigger the detector, the more background radiation it is going to pick up. One solution to this, says Peter, is to place the detector underground.

“At a certain depth, cosmic radiation disappears,” he explains. “But that kind of excavation would be very costly and time-consuming for a detector of this scale. We’re looking at what happens at shallow depths. Is there a more reasonable depth of a few hundred feet where background radiation drops off to a more manageable level? That’s something we don’t know because there just isn’t much research on background radiation at shallow depths, especially with a water-based detector. We are working in a low signal-to-noise regime, so understanding background is huge for optimizing the detector’s effectiveness.”

In particular, the study is looking at background radiation created by muons that can mimic antineutrinos. Muons are fundamental particles created when cosmic rays collide with molecules in the upper atmosphere. Every second of the day, muons are hurtling down to earth; the rule of thumb, says Peter, is that a muon passes through your thumbnail every minute. They eventually attenuate when they are absorbed or deflected by other atoms.
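
That rule of thumb follows from the commonly quoted sea-level muon flux of roughly one muon per square centimeter per minute. Here is a minimal back-of-the-envelope check in Python; the flux and thumbnail area are textbook approximations, not project measurements:

# Rough check of the "one muon per thumbnail per minute" rule of thumb.
# Both numbers below are approximate, commonly quoted values.
SEA_LEVEL_MUON_FLUX = 1.0     # muons per square centimeter per minute (approximate)
THUMBNAIL_AREA_CM2 = 1.0      # a thumbnail is roughly one square centimeter

muons_per_minute = SEA_LEVEL_MUON_FLUX * THUMBNAIL_AREA_CM2
muons_per_day = muons_per_minute * 60 * 24

print(f"About {muons_per_minute:.0f} muon per minute, or {muons_per_day:,.0f} per day")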

“Muons are very high energy particles that are not easily stopped. When they interact with rock, they basically rip apart the nuclei in the rock and create a shower of particles that can look like an antineutrino signal in the detector,” says Peter. A muon can also pass through a water-based detector and create radionuclides with decay particles that look like antineutrinos.

To understand how these phenomena affect background radiation at shallow depths, the WATCHMAN team is conducting a series of fast neutron measurements at the Kimballton Underground Research Facility (KURF), a science facility operated by the Virginia Tech Neutrino Science Center.

Splitting the problem

Early into the project, it became clear to the WATCHMAN team that a single detector could not encompass all of the muon interactions, so they split the problem. LLNL took on radionuclides while Sandia focused on the neutron spectrum, specifically muon-induced fast neutrons with energies between 100 and 200 MeV.

LLNL’s radionuclide detector, still in development, will be similar to the kiloton detector but on a much smaller scale, about 3 meters across, with a tank that will hold gadolinium-doped water. “There has been no measurement of the production of radionuclides from muons interacting with water, so it’s not a well-bounded problem,” says Peter.

Over the past year, the Sandia team members designed and constructed a one-of-a-kind detector that was delivered to the KURF site in early June. That detector, which the team dubbed MARS for Multiplicity and Recoil Spectrometer, combines two established modes of neutron detection to cover the entire energy spectrum of interest.

Think of it as a detector within a detector within many more smaller detectors. MARS consists of a 20-centimeter-thick pile of lead sandwiched between layers of plastic scintillator, with photomultiplier tubes to read out the scintillation light. The entire package is wrapped in paddle detectors to tag muons that enter MARS.

“We want to measure what is coming out of the walls of the cavern, not neutrons that are created within our detector by muon interaction,” explains Peter.

The lead acts as a multiplier for high-energy neutrons, so there are two modes of detecting neutrons. A high-energy neutron can interact directly with the scintillator, causing a large pulse of light that is captured by the photomultiplier tubes — this is the recoil. Neutrons at even higher energies interact with the lead, producing many more low-energy neutrons — the multiplicity. The number of low-energy neutrons produced is correlated with the energy of the incident neutron.
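
As a rough illustration of how those two signatures might be combined in an offline analysis, consider the minimal Python sketch below; the event fields, thresholds, and the multiplicity-to-energy calibration are hypothetical placeholders for illustration, not the actual MARS analysis:

# Hypothetical sketch of combining MARS's two detection modes.
# Field names, thresholds, and the calibration constant are illustrative only.

def classify_event(recoil_pulse_pe, capture_multiplicity,
                   recoil_threshold_pe=100, multiplicity_threshold=3):
    """Label an event by which neutron signature dominates.

    recoil_pulse_pe      -- prompt scintillation pulse size (photoelectrons)
    capture_multiplicity -- number of delayed low-energy neutron captures
    """
    if capture_multiplicity >= multiplicity_threshold:
        return "multiplicity"   # high-energy neutron multiplied in the lead
    if recoil_pulse_pe >= recoil_threshold_pe:
        return "recoil"         # neutron scattered directly in the scintillator
    return "background"

def energy_from_multiplicity(capture_multiplicity, mev_per_capture=10.0):
    """Toy calibration: assume neutron energy scales linearly with capture count."""
    return capture_multiplicity * mev_per_capture

# Example: a small prompt pulse but a dozen delayed captures
print(classify_event(recoil_pulse_pe=40, capture_multiplicity=12))   # -> "multiplicity"
print(energy_from_multiplicity(12), "MeV (toy estimate)")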

The antineutrino detector developed by Sandia and LLNL in 2008 found new life in MARS. “We were able to reuse the plastic scintillators with layers of neutron-capture gadolinium from two detectors that were deployed at the San Onofre nuclear power plant,” says Peter. “We added more photomultiplier tubes to increase light collection.”

MARS was then placed in a trailer so that it can be moved to three different locations within the mine to collect data at different depths. It is currently sitting at 600 feet; in a few months it will be moved to 350 feet and will finish the year of testing at 150 feet. This will be the first-ever continuous measurement of this background as a function of depth.
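
One simple way to summarize such a depth scan is to fit the measured rate to an exponential attenuation with overburden, as in the toy Python sketch below; the depths match the planned campaign, but the rates are invented placeholders, not MARS data:

# Toy fit of muon-induced neutron rate versus depth. An exponential
# attenuation with overburden is a common first approximation; the
# rates below are invented placeholders.
import math

depths_ft = [150, 350, 600]      # planned MARS measurement depths
rates_hz = [5.0, 1.2, 0.25]      # hypothetical rates at each depth

# Least-squares fit of log(rate) = log(r0) - depth / L
n = len(depths_ft)
sx = sum(depths_ft)
sy = sum(math.log(r) for r in rates_hz)
sxx = sum(d * d for d in depths_ft)
sxy = sum(d * math.log(r) for d, r in zip(depths_ft, rates_hz))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
attenuation_length_ft = -1.0 / slope

print(f"Toy attenuation length: about {attenuation_length_ft:.0f} feet")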

Peter expects many in the scientific community to pay close attention to the WATCHMAN project as it develops. He describes the project as a good testbed for basic scientific research with the potential to make measurements of great interest to the scientific community.

“The idea of creating a detector based on gadolinium-doped water has been kicked around for some time, but no one has pursued it. We don’t have a choice — for long-range detection we need a large detector and gadolinium-doped water is possibly the only feasible option for a detector of that size,” says Peter.

“In addition, there is no antineutrino detector in North America today. It’s good to have several well-separated independent detectors with different systematic uncertainties observing the same phenomena as a cross check. WATCHMAN could be America’s contribution to neutrino astrophysics.”

 

-- Patti Koning


Uncovering the mechanisms of virulence in pathogens

Zach Bent, in the foreground with Steve Branda, manipulates a pathogen capture reaction under high temperature to maintain stringent hybridization conditions so that binding specificity remains high, thus enhancing efficiency of the capture. (Photo by Dino Vournas)

by Patti Koning

The original aim of the RapTOR (Rapid Threat Organism Recognition) Grand Challenge was to identify “unknown unknowns” — dangerous, virulent pathogens of unknown origin. Zach Bent (8623) and Steve Branda (8621) have now applied molecular biology capture approaches developed by the RapTOR team to a different problem: understanding how known pathogens become virulent. Until recently, determining what made pathogens dangerous was too expensive and time-consuming, but the capture methods overcome that barrier and could help researchers develop diagnostics and therapeutics to better combat drug-resistant pathogens.

Analyzing pathogens’ gene expression during infection has been difficult because pathogen RNA molecules (“signal”) are generally outnumbered by host-derived RNA (“background”) in the sample (e.g., infected blood) by about 100-fold. RapTOR began to address this signal-to-background ratio problem with a negative capture method that uses affinity probes to capture and discard nucleic acids not of interest (e.g., host-derived) prior to sample analysis via Second Generation Sequencing (SGS). To amplify the pathogen signal, Zach turned to capture probes that selectively bind and recover pathogen nucleic acids for sequencing, a strategy referred to as pathogen capture.

“Our approach is very minimalist. It’s fast and cheap, which is important because it requires a large excess of probe, about 100 times more probe than sample,” says Steve. “We finely chop and tag pathogen nucleic acids to generate the probes. It’s crude, but it gets the job done. This method uses the entire genome, so every possible transcript can be captured and sequenced.”

A simple concept

The concept is simple: Saturate your sample with pathogen-specific probes; give the probes every opportunity to find their complementary nucleic acid partners; recover the hybridized probes on a column; wash extensively to disrupt non-specific interactions; and release the pathogen nucleic acids from the column for sequence analysis.

“It sounds straightforward, but carrying it out is tricky because the conditions have to be exact for the probe to find the target, grab it, and remain stable enough to isolate it from the sample,” says Steve. “Efficiently separating the target from the probe is key as well. You don’t want to waste time and money inadvertently sequencing the probe.”

Another challenge is making the probe discriminating enough to grab only nucleic acids from the pathogen, avoiding those from the host. This increases the degree of pathogen enrichment, which is critical to getting sufficient sequence coverage of the pathogen’s entire transcriptome at a reasonable cost. Zach and Steve’s goal was at least 100-fold enrichment.

Getting it right took about two years of painstaking research; Zach and Steve are grateful to the many interns and technologists who have helped on the project. Most improvements were incremental, but their breakthrough came with precise definition of the column washing conditions.

“We significantly raised the ionic strength of the washing buffer to increase the stringency of the wash. Suddenly almost all of the nonspecific binding we were getting went away. That brought our enrichment up from 10-fold to over 100-fold in a single step,” says Zach.
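
Fold enrichment here is simply the ratio of the pathogen read fraction after capture to the fraction before, as in the minimal Python sketch below; the read counts are hypothetical, not numbers from the Sandia experiments:

# Illustrative calculation of fold enrichment from sequencing read counts.
# The counts are made up; only the arithmetic reflects the standard definition.

def fold_enrichment(pathogen_reads_pre, total_reads_pre,
                    pathogen_reads_post, total_reads_post):
    """Ratio of the pathogen read fraction after capture to the fraction before."""
    frac_pre = pathogen_reads_pre / total_reads_pre
    frac_post = pathogen_reads_post / total_reads_post
    return frac_post / frac_pre

# Example: pathogen reads rise from about 1 percent of the library to 60 percent
print(f"{fold_enrichment(100_000, 10_000_000, 6_000_000, 10_000_000):.0f}-fold")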

Seeing pathogens in high fidelity

Zach and Steve use pathogen capture in tandem with other molecular biology methods developed for the RapTOR project, and the results are quite amazing — the ability to see details of what the pathogen is doing, due to the sheer number and variety of pathogen gene transcripts that can be sequenced.

Zach puts the advance in perspective. “I spent all of my time in graduate school looking at the in vivo expression of four different genes. This required an intense amount of work to get results of any significance,” he says. “With our new technique, instead of looking at a few genes, I can look at expression of all of the pathogen’s genes, several thousand of them, at once.”

This technique mitigates a significant barrier to research — the cost of sequencing. While sequencing remains a fixed cost, the cost of prepping the sample is small (about $20 to prepare and capture a sample), and the sample itself is rich in genetic material. By enriching the sample for transcripts of interest, the researchers can load multiple samples on the sequencer and still get appropriate coverage depth. This yields better bang for the buck, in terms of sequencing costs, and also saves time because multiple samples are analyzed in a single sequencing run.
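
The short Python sketch below gives a rough sense of why enrichment makes multiplexing pay off; the run size, read length, transcriptome size, and enrichment factor are assumed round numbers for illustration, not figures from the paper:

# Toy calculation: per-sample coverage of a pathogen transcriptome when
# several samples share one sequencing run, with and without capture.
# All numbers are illustrative assumptions.

READS_PER_RUN = 200_000_000      # total reads in a hypothetical sequencing run
READ_LENGTH_BP = 100             # bases per read
TRANSCRIPTOME_BP = 2_000_000     # rough size of a bacterial transcriptome
PATHOGEN_FRACTION_RAW = 0.01     # about 1 percent pathogen reads, unenriched
ENRICHMENT = 100                 # assumed 100-fold enrichment from capture

def pathogen_coverage(samples_per_run, pathogen_fraction):
    reads_per_sample = READS_PER_RUN / samples_per_run
    pathogen_bases = reads_per_sample * pathogen_fraction * READ_LENGTH_BP
    return pathogen_bases / TRANSCRIPTOME_BP

for samples in (1, 10):
    raw = pathogen_coverage(samples, PATHOGEN_FRACTION_RAW)
    enriched = pathogen_coverage(samples, min(1.0, PATHOGEN_FRACTION_RAW * ENRICHMENT))
    print(f"{samples:>2} sample(s) per run: {raw:7.1f}x raw vs {enriched:8.1f}x enriched")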

Zach and Steve tested the efficacy of their pathogen capture technique by analyzing in vitro infections. In a paper titled “Enriching pathogen transcripts from infected samples: A capture-based approach to enhanced host-pathogen RNA sequencing”, published in Analytical Biochemistry in March 2013, they reported 10- to 100-fold enrichment of reads mapping to the pathogen’s transcriptome, relative to untreated controls. This enrichment greatly increased the diversity of pathogen transcripts sequenced, as well as the coverage depth at which each transcript was sequenced.

They then closely studied the pathogen Francisella tularensis at two critical transitions during infection: when the pathogen escapes from an internal membrane-bound compartment within the host cell, and when it begins replicating within the cytosol of the host cell.

“We saw what we expected: Two distinct gene expression profiles at those two transitions,” says Steve. Their work also revealed several hundred transcripts of unknown function, many of which are up-regulated (switched on) and therefore potentially important for pathogen survival and proliferation within the host cell. These results will be reported in a PLoS ONE article that is currently in press.

“We don’t have the resources right now to study those particular unknown genes,” says Zach. “Just the number of unknown genes we found is quite interesting, because Francisella has been studied extensively for two decades. The number of unknown, up-regulated genes we found in just this initial experiment shows that we really haven’t been seeing the whole picture.”

Understanding host/pathogen wargames

Moving forward, the researchers would like to study infections in even greater detail. Zach likens host/pathogen interactions to an arms race on a microscopic level. “The pathogen invades, the host responds, the pathogen counterattacks, and this goes on until one side wins,” he says. “Being able to look so closely at expression of all of the host and pathogen genes will let us figure out those key critical moments when the host is overpowered and an infection spreads.”

They want to expand their research to include animal studies to track the dynamics of infection at different physical locations, a particularly under-studied dimension of infection. The speed and low cost of the pathogen capture-based technique would enable tracking of an infection as it spreads to different parts of the body.

“By being able to sample many different physical locations, you can see how different body defenses are mounting a reaction, and what the pathogen is doing to survive,” says Steve. “Those have been really difficult experiments to carry out, and we don’t at all understand what it means for bacteria to survive in the liver versus a lung, for example. This is a completely different facet on the whole interaction between a pathogen and a host.”

In the future, pathogen capture could enable researchers to study bacterial gene function during infection with such precision that they can develop new countermeasures that are effective against drug-resistant strains. Therapeutics could be made to target specific bacterial pathways and infection mechanisms entirely separate from those targeted by broad-spectrum antibiotics. By hitting multiple pathways at once, says Steve, a therapeutic could conceivably stop a pathogen in its tracks, preventing it from developing resistance to the new drugs.

Zach is also the principal investigator of an Exploratory Express LDRD to analyze the Yersinia enterocolitica transcriptome at very early time points in infection using pathogen capture technology. “We aren’t quite finished analyzing all of the data, but we have already made several exciting discoveries that shed light on the roles of its various virulence mechanisms,” says Zach.

In October, Zach and Steve will begin working on an LDRD-funded project led by Robert Meagher (8621) to build an automated device to perform pathogen capture on 96 samples simultaneously. “Right now, we have a very specialized setup and it still relies on a lot of manual labor,” says Zach. “We want this to be accessible and compatible with a typical lab setup, so that other groups can take advantage of our technique.”

Developing the automated device will take a year or so, but during that time Zach and Steve will conduct more studies to prove the technology. “With each new development, you can do more experiments than you could in the past, and different types of experiments as well,” says Steve. “It’s been a long time in the making, and there’s still much more work to be done, but we’ll continue to get exciting results along the way.”

-- Patti Koning

