
SANDIA LAB NEWS

Lab News -- March 3, 2006


Lab News 03/03/2006 PDF (650 KB)

Sandia and partners work together to build prototype electromagnetic mortar launcher for future armies

By John German

Sandia and a team of government and university labs are building a prototype mortar launcher that could alter the way armies have launched projectiles at their enemies for a thousand years.

As part of a two-year electromagnetic mortar project for the Defense Advanced Research Projects Agency (DARPA), the research team is building a prototype electromagnetic (EM) gun and demonstrating electromagnetic launch of mortar-class munitions. Full-scale field testing is scheduled for this fall at Sandia.

Partners in the project include the Institute of Advanced Technology at the University of Texas; the Munitions Development Division of the US Army Armaments Research, Development, and Engineering Center; and capacitor film manufacturing company TPL, Inc.

The DARPA-funded project focuses on low-cost, high-fire-rate munitions. A complementary joint Sandia project with Lockheed Martin is demonstrating EM-launched missile systems (Lab News, Jan. 21, 2005).

In conventional mortar-firing operations, crews determine mortar range by the amount of propellant (the number of individually packaged propellant charges behind the round), barrel attitude, and external factors such as terrain features and wind direction.

The science is essentially unchanged since 11th-century Chinese combatants used the first crude gunpowder-propelled projectiles to decimate enemy lines. Later in Europe, 15th-century armies perfected the use of cannonballs to pulverize castle walls.

“Although today’s mortar crews become very good over time, launching mortars is by today’s standards an inexact science, and it is constrained by the incremental degree of control offered by propellant rings,” says Bob Turman, Senior Manager for Directed Energy Systems Org. 5440.

In an electromagnetic launcher, coils stacked along the gun’s barrel are subjected to precisely timed current pulses, one after the other, creating a magnetic wave that moves quickly up the barrel and pushes the mortar and armature along with it.
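
For the technically curious, here is a minimal Python sketch of that timing idea, assuming (purely for illustration) constant acceleration along the barrel; only the 50-coil count comes from the article, while the barrel length and exit velocity are made-up placeholders, not project parameters.

```python
# Illustrative only: when each coil of a coil-type EM launcher would fire if the
# projectile were accelerated uniformly along the barrel. The 50-coil count is
# from the article; barrel length and exit velocity are hypothetical.
import math

NUM_COILS = 50               # full-scale prototype coil count (from the article)
BARREL_LENGTH_M = 2.0        # hypothetical barrel length
MUZZLE_VELOCITY_MPS = 300.0  # hypothetical "dialed" exit velocity

# Constant acceleration that yields the requested exit velocity: v^2 = 2*a*L
accel = MUZZLE_VELOCITY_MPS**2 / (2.0 * BARREL_LENGTH_M)

# Fire each coil as the projectile reaches it: x = 0.5*a*t^2  ->  t = sqrt(2x/a)
for i in range(NUM_COILS):
    x = (i + 1) * BARREL_LENGTH_M / NUM_COILS
    t_ms = math.sqrt(2.0 * x / accel) * 1e3
    print(f"coil {i + 1:2d} at x = {x:5.3f} m fires at t = {t_ms:6.3f} ms")
```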

No propellant is necessary, eliminating not only a safety hazard for soldiers and a logistics headache for the military, but also a major source of imprecision in conventional mortar guns. A very slight variation in propellant quality, temperature, or quantity can result in a mortar missing its target.

“There is only so much fidelity you can get with propellant rings,” says project manager Ron Kaye (5445).

The barrel-end velocity of an EM-launched projectile, on the other hand, when timed by computer, can be very precisely controlled, he says. The intensity with which individual coils are fired in succession can also be adjusted on the fly to make slight adjustments, literally while the mortar is traversing the barrel.

“This will allow the warfighter to essentially dial a range,” says Ron. “It will allow for a new degree of control.”
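
As a textbook-level aside (not the project’s actual fire-control model), the idealized, drag-free range of a round leaving the barrel at speed v and elevation angle θ is

```latex
R = \frac{v^{2}\sin(2\theta)}{g},
```

so precise, repeatable control of the exit velocity v translates directly into precise control of the range R.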

Because no propellant loading is necessary, fire rates can rise from the 10-rounds-per-minute maximum of a skilled mortar crew to an estimated 16 to 24 rounds per minute, Bob says, limited only by the time required to reload the mortar and recharge the energy-storage capacitors. Eliminating the propellant also opens the door to fully automated, robotically reloaded EM mortar guns that might achieve even faster fire rates.
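
The arithmetic behind that estimate can be sketched in a few lines; the two component times below are hypothetical placeholders, and only the 16-to-24-rounds-per-minute range comes from the article.

```python
# Back-of-the-envelope fire-rate estimate: each shot's cycle time is the mortar
# reload time plus the capacitor recharge time. Component times are hypothetical.
def rounds_per_minute(reload_s: float, recharge_s: float) -> float:
    """Sustained fire rate implied by a given per-round cycle time."""
    return 60.0 / (reload_s + recharge_s)

# Hypothetical component times bracketing the article's 16-24 rounds/min estimate
print(f"{rounds_per_minute(2.0, 1.75):.1f} rounds/min")  # ~16
print(f"{rounds_per_minute(1.5, 1.0):.1f} rounds/min")   # ~24
```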

EM guns produce almost no muzzle flash and a fraction of the muzzle report associated with traditional artillery. In essence, the mortar departs the barrel with a swoosh rather than a bang. In today’s world of space- and aircraft-based reconnaissance sensing, reducing the optical and acoustic signal will make it more difficult for the enemy to pinpoint the source of artillery fire, says Ron.

Sandia’s DARPA project focuses on land-based army munitions, and the Sandia team has built a nonfunctional replica of a turret containing an EM gun that could sit atop a Future Combat System vehicle or Bradley fighting vehicle.

A full-scale, 50-coil EM gun prototype has been designed and is being built in Area 4.

Projectile interaction with the EM gun barrel components has been modeled on Sandia computers and validated using data from a four-coil mock-up gun. Laboratory tests on the full-scale prototype are scheduled for this fall.

For vehicle applications, a portable electrical power generation and capacitance-based storage system would be necessary; Bob believes it need be no larger or heavier than the turrets on current military platforms.

DARPA is considering EM mortar launchers as a potential component of the US military’s Future Combat System. If hybrid electric vehicles are adopted, the EM launcher could, essentially, share an on-board power plant with its host vehicle, says Ron.

The Sandia-led project falls in the category of applied development and goes well beyond the research projects conducted at Sandia in the early ’90s that resulted in demonstration of an EM-launched projectile across Coyote Canyon, says Bob.

“DARPA has provided the specifications and the parameters and asked us to build them a mortar demonstration using existing 120 mm mortar ammunition,” he says. “We’re getting close to a working, full-scale gun.” -- John German



Sandia’s Z machine exceeds two billion degrees Kelvin

By Neal Singer

Ions produced by Sandia’s Z machine have exceeded 2 billion degrees Kelvin, 10 times hotter than any fusion experiment on Earth and hotter than the interiors of stars.

The reaction, if it could be harnessed, presents the possibility of eventually building smaller, hotter nuclear fusion plants to produce the same amount of energy as larger, cooler plants.

A description of the achievement and its theoretical explanation — which appeared in the Feb. 24 Physical Review Letters — also could serve to describe how astrophysical entities like solar flares maintain extreme temperatures.

The temperatures, first recorded approximately 18 months ago, puzzled Sandia researchers because they implied roughly four times the estimated kinetic energy of the machine’s implosion phase, in which a magnetic field smashes ions together to release heat in the form of X-rays.

According to the conventional view of Z-pinches, the emitted heat should be less — not more — than the kinetic energy from which it came.

“Dave LePell [1646] measured the temperature,” says team leader Chris Deeney (1640), “which prompted the question: how can it be that high for so long?”

“Long,” in this case, was 10 nanoseconds.

Because the team was concerned about possible errors in measurement, they did not report the readings at first but instead — coordinating with computer models created by John Apruzese at the Naval Research Laboratory — did additional experiments.

Measurements were possible even at these extreme temperatures because the emerging X-rays create spectral lines on a spectrometer. The width of a line establishes the temperature of the emitting ions.

“Depending on how hot they are,” Chris says, “the rays move with a given velocity that produces a [Doppler] red or blue shift that widens the spectral line.” But, he says, “Other phenomena could cause the line to broaden: for example, plasma opacity that would cause emitted X-rays to be reabsorbed, depressing the center.”
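
For readers who want the arithmetic behind that statement, here is a minimal sketch (not the team’s analysis code) of the standard thermal-Doppler relation between line width and ion temperature; the 6.7 keV iron line energy below is a hypothetical stand-in chosen only for illustration.

```python
# Thermal-Doppler broadening: dE/E = sqrt(8*ln(2) * k_B*T / (M*c^2)) for a line
# at energy E emitted by ions of mass M at temperature T. Illustrative only.
import math

K_B = 1.380649e-23                  # Boltzmann constant, J/K
C = 2.99792458e8                    # speed of light, m/s
M_FE = 55.845 * 1.66053906660e-27   # mass of an iron ion, kg (stainless steel is mostly iron)

def doppler_fwhm_ev(line_energy_ev: float, ion_temp_k: float) -> float:
    """Thermal-Doppler full width at half maximum of a spectral line, in eV."""
    return line_energy_ev * math.sqrt(8.0 * math.log(2.0) * K_B * ion_temp_k / (M_FE * C**2))

# Example: a hypothetical 6.7 keV iron line at 2 billion kelvin
print(f"{doppler_fwhm_ev(6700.0, 2.0e9):.1f} eV FWHM")
```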

When no error was found, Chris and Dave, along with Christine Coverdale (1344) and Brent Jones (1646), who helped plan and lead the shots, turned to Sandia consultant Malcolm Haines, well-known for his work in Z pinches as a physics professor at the Imperial College in London.

Haines theorized that the rapid conversion of magnetic energy to a very high ion plasma temperature was achieved by magnetohydrodynamic instabilities at stagnation, when the ions and electrons could travel no further. At this point, all the kinetic energy should have been used up and the plasma collapsed. But some unknown energy was still pushing back against the magnetic field.

The surprising explanation theorizes that Z’s magnetic energies create microturbulences that increase the kinetic energies of ions caught in the field’s grip. The extra jolt of kinetic energy then produces increased heat, as ions and their accompanying electrons release energy even after they should have been exhausted.

High temperatures previously had been assumed to be produced entirely by the kinetic flight and intersection of ions and electrons, unaided by accompanying microturbulent fields.

In these experiments, the work was done by magnetically imploding ions from stainless steel wire arrays 55 to 80 mm in diameter.

The Z machine’s magnetic field is created by an electrical current of 20 million amperes. The current burns out the wires like a short circuit in an automobile burns out a fuse.

The temperatures are produced in unadorned, flat-roofed Bldg. 983 — about the size and shape of an aging high-school gymnasium — in Sandia Technical Area 4.

This work has already prompted other studies at Sandia and the University of Nevada, Reno.

-- Neal Singer



Red Storm is ranked the world’s most efficient supercomputer in two of six categories

By Neal Singer

 

A new series of measurements, the next step in the evolution of criteria to more accurately determine the efficiency of supercomputers, has rated Sandia’s Red Storm computer best in the world in two of six new categories and very high in two other important categories. Red Storm had already been judged sixth fastest in the world on the older but more commonly accepted Linpack test.

The two first-place benchmarks measure the efficiency of keeping track of individual data (the “random access” benchmark) and of communicating data between processors. This is the equivalent of how well a good basketball team works its offense, rapidly passing the ball many times to score against a tough opponent.

NNSA Administrator Linton Brooks got an advance peek at the results and touted them publicly during his recent Sandia visit (Lab News, Feb. 17). Sandia president Tom Hunter proudly noted them in his talks to Sandians and the community last week (see pages 1 and 5).

To understand why success in the new categories is more definitive than the more easily understandable measurement of mere speed (and is not just shopping for a hard-to-understand test that gives the most favorable result for the home team), it’s probably worth a moment to examine how technical ratings were established in the first place.

In the mid-19th century, researchers had all they could do to figure out the speed with which electricity traversed a simple wire.

Much more complicated in the late 20th and early 21st centuries were measurements made of currents flowing along the intricate circuitry of a computer chip.

Basic task of supercomputer

Still more difficult was to arrive at a meaningful number describing the information flowing electrically between many chips intended to work together like an orchestra — each instrument coming in at exactly the right time to solve small portions of large problems, and then pass along that information to the next set of chips waiting to continue the symphony.

This is the basic task of a modern supercomputer.

The only way to know whether all the pieces are playing together would be to check the output of each chip, and there are thousands and thousands of chips in computers processing information in parallel circuits. Such tests would be expensive and time-consuming.

Thus, in the early 1990s, supercomputer manufacturers distinguished the capabilities of their products by announcing Theoretical Peak numbers, says Sudip Dosanjh (1420). This figure represented how fast a computer with many chips in parallel circuits could run if all processors worked perfectly and in unison. The number was best considered a hopeful estimate.

Next came the Linpack benchmark, which provided a real but relatively simple series of algorithms for a supercomputer to solve. Since 1993, that part of the world interested in supercomputers has watched for the new Linpack numbers, published every six months, to determine the 500 fastest computers in the world, and which entrant is the fastest of them all. For several years, this was the Sandia ASCI Red supercomputer.
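
For a concrete, desktop-scale picture of what Linpack measures, the sketch below times the solution of a dense system of linear equations and reports a flop rate; this is not the HPL benchmark code itself, and the problem size is chosen only for illustration.

```python
# Toy illustration of the Linpack idea: time a dense linear solve and report a
# flop rate. The real benchmark (HPL) runs the same kind of operation across
# many processors at vastly larger scale.
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3   # leading-order operation count for an LU-based solve
print(f"{flops / elapsed / 1e9:.2f} Gflop/s on this toy problem")
```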

More recently, the limitations of this approach have encouraged the Linpack founders, in conjunction with supercomputer manufacturers, to develop still more realistic tests. These indicate how well supercomputers handle essential functions like the passing between processors of large amounts of data necessary to solve real-world problems.

It is in this revised series of tests, called the High Performance Computing Challenge (HPCC) test suite, that Sandia’s Red Storm supercomputer — funded by NNSA’s Advanced Simulation & Computing (ASC) program — has done extremely well.

Rob Leland, director of Computing and Network Services Center 4300, offers this example of a complicated problem: “Suppose your computer is modeling a car crash,” he told the Lab News. “You’re doing calculations about when the windshield is going to break. And then the hood goes through it. This is a very discontinuous event: Out of the blue, something else enters the picture dramatically.”

“You have to remesh every point [of your visualization],” agrees John Zepper (4320).

Fundamental problem solved

Continues Rob, “This is the fundamental problem that Sandia solved in Red Storm: how to monitor what’s coming at you, in every stage of your calculations. You need very good communications infrastructure, because the information is concise, very intense. You need a lot of bandwidth and low latency [to be able to transmit a lot of information with minimum delays], and because the incoming information is very unpredictable, you have to be good [read, ‘aware’] in every direction.”

Rob gives particular credit to Steve Plimpton (1412) and Courtenay Vaughan (1422) for their contributions to solving these problems.

David Womble, acting director of Computation, Computers, and Math Dept. 1400, uses another metaphor. “The question,” he says, “is how much traffic can you move how fast through crowded city streets.” Red Storm, he says, does so well because it has “a balance that doesn’t exist in other machines between communication bandwidth [the ability of a processor to get data it needs from anywhere in the machine quickly] and floating point computation [how fast each processor can do the additions, multiplications, and other operations it needs to do in solving problems].”

More technically, Red Storm posted 1.8 TB/sec (1.8 trillion bytes per second) on one HPCC test: an interconnect bandwidth challenge called PTRANS, for parallel matrix transpose. This test, requiring repeated “reads,” “stores,” and communications among processors, is a measure of the total communication capacity of the internal interconnect. Sandia’s achievement in this category represents 40 times more communications power per teraflop (trillion floating point operations per second) than the PTRANS result posted by IBM’s Blue Gene system, which has more than 10 times as many processors. Red Storm is the first computer to surpass the 1 terabyte-per-second (1 TB/sec) performance mark for communications among processors, a measure that indicates the capacity of the network to communicate when dealing with the most complex situations.

Random access benchmark

The “random access” benchmark checks performance in moving individual data rather than large arrays of data. Moving individual data quickly and well means that the computer can handle chaotic situations efficiently.
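
A toy, single-process sketch of that access pattern (not the HPCC RandomAccess code itself) looks like this: scattered read-modify-write updates at unpredictable table locations, which reward a machine that can fetch and store individual items quickly. Table size and update count are placeholders.

```python
# GUPS-style toy: update a large table at effectively random locations. The real
# benchmark sizes the table to a large fraction of memory across all processors.
import random

TABLE_BITS = 20
TABLE_SIZE = 1 << TABLE_BITS         # ~1 million entries (placeholder)
table = [0] * TABLE_SIZE

rng = random.Random(42)
for _ in range(1_000_000):
    i = rng.getrandbits(TABLE_BITS)  # random index into the table
    table[i] ^= 0x5DEECE66D          # tiny read-modify-write at that location
```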

The computer has already modeled how much explosive power it would take to destroy an asteroid targeting Earth, how a raging fire would affect critical machinery, and elements of Earth’s atmosphere, in addition to the basic stockpile calculations the machine is designed to address.

It would be effective in visualizing complex defense-related events like the crash of an aircraft carrying nuclear weapons, says Jim Tomkins (1420).

Red Storm also did very well in categories it did not win, says Courtenay, finishing second in the world behind Blue Gene in FFT (“Fast Fourier Transform,” a method of transforming data into frequencies or logarithmic forms easier to work with) and third behind Purple and Blue Gene in the “streams” category (total memory bandwidth measurement). Higher memory bandwidth helps prevent processors from being starved for data.
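
For the “streams” idea, a toy version (not the STREAM benchmark itself) is a simple triad over arrays too large to fit in cache, where the limit is how fast data moves to and from memory rather than arithmetic speed; the array size here is a placeholder.

```python
# Toy memory-bandwidth measurement in the spirit of the "streams" category:
# a = b + s*c over large arrays, reported as bytes moved per second.
import time
import numpy as np

n = 20_000_000                      # placeholder size, large enough to spill out of cache
b = np.ones(n)
c = np.ones(n)
s = 3.0

start = time.perf_counter()
a = b + s * c
elapsed = time.perf_counter() - start

bytes_moved = 3 * n * 8             # read b and c, write a (8-byte doubles)
print(f"{bytes_moved / elapsed / 1e9:.1f} GB/s effective bandwidth")
```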

The two remaining tests involve the effectiveness of individual chips, rather than overall computer design.

In a normalization of benchmarks, which involves dividing them by the Linpack speed, Jim found that Red Storm had the best ratio. That is, Red Storm — of all the supercomputers — was best balanced to do real work.
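
A sketch of that normalization, with made-up numbers standing in for actual machine results: divide each HPCC result by the machine’s Linpack speed, so the comparison reflects balance per teraflop rather than sheer size.

```python
# Normalize benchmark results by Linpack speed, as described in the article.
# All values below are hypothetical placeholders, not real machine figures.
def per_teraflop(benchmark_result: float, linpack_tflops: float) -> float:
    """Benchmark result per teraflop of Linpack speed (a balance measure)."""
    return benchmark_result / linpack_tflops

machines = {
    "machine A": {"ptrans_tb_per_s": 1.5, "linpack_tflops": 40.0},
    "machine B": {"ptrans_tb_per_s": 4.0, "linpack_tflops": 250.0},
}

for name, r in machines.items():
    ratio = per_teraflop(r["ptrans_tb_per_s"], r["linpack_tflops"])
    print(f"{name}: {ratio:.4f} TB/s per Tflop")
```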

An unusual feature of Red Storm’s architecture, says Jim, is that the computer can do both classified and unclassified work with the physical throw of a secure switch, similar to the way a railroad switch can divert a train from one track to another. “That’s important at Sandia because we have a whole community here that does science. We can allocate part or even the whole machine to a science problem, and then move to DOE interests and do secure work.” The secure transfer does not require any movement of disks; there are no hard drives in any Red Storm cabinet.

“We get the value of a big machine that can do classified and unclassified,” says John. The capability of the machine to put its entire computing weight behind science jobs enabled one Sandia researcher to get an entire year’s worth of calculations done in a month, he says.

Red Storm’s architecture was designed by Jim and Bill Camp. The pair’s work has already helped Sandia partner Cray Inc. sell 15 copies of the supercomputer in various sizes to US government agencies and universities, to customers in Canada, and to overseas customers in England, Switzerland, and Japan.

Cray holds Sandia licenses to reproduce Red Storm architecture and some system software, says Jim. “The operating system was written here, but the IO [input-output] is Cray’s.”

Sandia is paid by Cray on a per-node basis for those Cray installations elsewhere. -- Neal Singer



Labs-developed coatings can immobilize, clean up radiation-contaminated surfaces

By John German

 

Labs researchers have developed two spray-on/peel-off coatings that could be used by federal emergency response teams to prevent the further spread of radiological contamination and, later, cleanse radionuclides from contaminated surfaces following a dirty bomb attack or other radiation incident.

The first, a containment coating, is impervious to weather once it dries and can be used by the earliest federal responders to ensure radionuclides stay in place until evidence is gathered and cleanup begins, says project manager Joe Jones of Radiological Consequence Management & Response Dept. 6874.

The second, a hydrogel for decontamination of porous surfaces, can help restore radiation-contaminated construction materials — such as concrete, brick, marble, and granite — to usable condition, says principal investigator and inventor Bob Moore of Advanced Nuclear Concepts Dept. 6872. The decontamination hydrogel was developed jointly with Mark Tucker (6245). Chemical getters in the liquid hydrogel solution quickly grab onto radionuclides in the pores of the materials and hold the contaminants in their molecular structures until the hydrogel dries.

Once dried, both coatings can be peeled off the surface and disposed of as radiological waste.

Chemically, both coatings contain advanced water-soluble polymers with an oxidative crosslinking additive such as sodium borate. The crosslinker allows the polymers to remain as a liquid — individual molecules suspended in a water-solvent solution — until activated by, for example, the oxygen in open air.

Once activated, the polymers begin to chemically join into strands (for the containment coating) or balls (for the hydrogel). A network of these polymer chains or balls forms as the water and solvents evaporate from the structure, leaving a hardened, water-insoluble plastic that can be peeled off a surface.

Both coatings go on like paint and dry like the latex of a party balloon. Drying times for both coatings can be tailored to need, from less than an hour to more than a day.

Sandia developed the containment coating as part of a Department of Homeland Security program to secure a scene following a radiation incident.

Sandia initially developed the hydrogel as part of a DARPA call for proposals in 2004.

Although DARPA has chosen not to move forward with the hydrogel, the Sandia researchers say both technologies are ripe for commercialization.

“Basically the first responder community has said they want more tools in their tool boxes to deal with a broader range of threats, including a dirty bomb incident,” says Bob.

“We needed something that dries fast and is easily removable and meets the needs of the earliest federal responders,” says Joe. “In laboratory tests, both coatings have been effective for their intended purposes.”

Ideally, adds Bob, the two technologies could be combined into a single containment-decon coating product.

The Sandia team has sought to minimize costs by using inexpensive, off-the-shelf chemicals as constituents.

Patent applications are being prepared for both coatings. -- John German
