
SANDIA LAB NEWS

Lab News -- February 1, 2008

International scientists weigh new definition for kilogram

By Michael Padilla

The kilogram is losing weight, and many international scientists agree that it’s time to redefine it.

Scientists are hoping to redefine the kilogram by basing it on standards of universal constants rather than on an artifact standard.

The International Prototype Kilogram (IPK), or “Le Grand K,” made in the 1880s, is a cylinder of platinum-iridium alloy kept in a vault near Paris.

“The idea is to replace the single master kilogram with something based on physical constants, rather than an artifact that could be damaged accidentally,” says mechanical engineer Hy Tran (2541), a project leader at the Primary Standards Laboratory (PSL) at Sandia.

Of the seven base units of measurement in the International System, or SI, the kilogram is the only one still defined by a physical object. In addition, official copies of the kilogram have changed over time, gaining or losing mass compared with the standard kilogram.

The purpose of redefining the kilogram is risk reduction, says Hy.

“In the long term, the redefinition — especially if performed correctly — is beneficial because of risk reduction and because it may enable better measurements in the future,” he says.

Definition only thing to change

By replacing the master kilogram, Le Grand K, with a unit based on physical constants, researchers at multiple laboratories and at national measurement institutes could establish traceability, he says.

Hy says the kilogram will remain the kilogram; it’s only the way it will be defined that will change. He says the earliest the kilogram would be redefined is 2011.

“If and when the redefinition takes place, it will be done in such a fashion as to have minimal or no practical impact on other measured quantities,” Hy says. “In other words, if it is redefined so as to ensure better than 10 parts per billion agreement — rather than 20 parts per billion agreement — then we will see no major changes immediately.”

Based on the current formal definition of the kilogram (the mass of the 1 kilogram prototype) and experimental dissemination to standards labs, the uncertainty (95 percent confidence) in PSL’s kilogram is about 40 parts per billion, compared to the IPK.

One part per billion is about the ratio of the area of a square 3/32 inch on a side to the area of a regulation NFL football field, including the end zones (120 yards by 53-1/3 yards), Hy says.
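As a rough check of that analogy, using the dimensions quoted above (a back-of-the-envelope sketch, not a PSL figure):

$$
\frac{(3/32\ \text{in})^2}{120\ \text{yd}\times 53\tfrac{1}{3}\ \text{yd}}
= \frac{8.8\times10^{-3}\ \text{in}^2}{6400\ \text{yd}^2\times 1296\ \text{in}^2/\text{yd}^2}
\approx 1.1\times10^{-9},
$$

or about one part per billion. On the same scale, 40 parts per billion of a kilogram amounts to roughly 40 micrograms.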

The target originally proposed by the Bureau International des Poids et Mesures (International Bureau of Weights and Measures) was for one of the alternative realizations of the kilogram, such as the watt balance’s experimental measurement of force or the counting of atoms in a silicon sphere, to agree with experimental measurements of the prototype kilogram to within 20 parts per billion.

Resolving the issues

Sandia physicist Harold Parks (2542) agrees that the redefinition of the kilogram is inevitable but says a couple of issues need to be resolved first.

“The watt balance method of defining the kilogram makes the most sense for those of us in electrical metrology and so far it is the most accurate,” he says. “But other proposals, such as those based on counting the number of atoms in a silicon crystal, are being considered.”

The watt balance is based on comparing electrical and mechanical power to high accuracy, he says.
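In schematic form (a simplified statement of the principle rather than the full experimental procedure), the balance equates electrical and mechanical power:

$$
U I = m g v ,
$$

where $U$ and $I$ are the measured voltage and current, $m$ is the test mass, $g$ is the local acceleration of gravity, and $v$ is the velocity of the balance coil. Because $U$ and $I$ can be measured in terms of the Josephson and quantum Hall effects, the kilogram would effectively be tied to Planck’s constant rather than to an artifact.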

Conflicts between the results of the watt balance and the atom counting experiments will also need to be resolved, Parks says.

“The NIST (National Institute of Standards and Technology) watt balance experiment has achieved the accuracy needed to redefine the kilogram, but the experiment will need to be confirmed by other groups in order for the results to be fully accepted,” he says.

Impact on the Sandia community

Hy says redefining the kilogram will have little impact on the Primary Standards Lab or the broader nuclear weapons complex. The lab develops and maintains primary standards traceable to national standards and calibrates and certifies customer reference standards.

“It should not affect PSL or the complex if the international metrology community ensures that they fully consider the uncertainties, the necessary experimental apparatus to realize the kilogram, and implementation issues prior to agreeing to the redefinition,” Hy says.

In preparation for the change, PSL staff members are staying up to date on research in metrology and standards practices. Staff also participate in standards activities to ensure that any transition would be smooth. -- Michael Padilla



One million trillion computations per second envisioned by Sandia and Oak Ridge researchers

By Neal Singer

Ten years ago, people worldwide were astounded at the emergence of a teraflop supercomputer — that would be Sandia’s ASCI Red — able in one second to perform a trillion mathematical operations.

More recently, bloggers seem stunned that a machine capable of petaflop computing — a thousand times faster than a teraflop — could soon break the next barrier of a thousand trillion mathematical operations a second.

Now, almost without taking a breath, and before the world has actually achieved a petaflop supercomputer, a joint Institute for Advanced Architectures, newly launched at Sandia and Oak Ridge national laboratories, is charged with laying the groundwork for an exascale computer.

A thousand times faster than a petaflop, it would perform a million trillion arithmetic calculations per second.
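In round numbers, the progression the article describes is:

$$
1\ \text{exaflops} = 10^{18}\ \text{operations per second}
= 10^{3}\times 1\ \text{petaflops}
= 10^{6}\times 1\ \text{teraflops}.
$$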

A million trillion

What is the need for a machine to do that many calculations that fast?

Says Sandia Center 1400 Director James Peery, “An exascale computer is essential to perform more accurate simulations that, in turn, support solutions for emerging science and engineering challenges in national defense, energy assurance, advanced materials, climate, and medicine.”

Such machines would be better detectives of real-world conditions, able to help researchers more closely examine the interactions of larger numbers of particles over time periods divided into smaller segments.

Supported by NNSA and DOE’s Office of Science, the institute — a DOE Center of Excellence — is funded in FY08 by congressional mandate at $7.4 million.

The idea behind the institute — itself under consideration for a year and a half prior to its opening — is “to close critical gaps between theoretical peak performance and actual performance on current supercomputers,” says Sandia project lead Sudip Dosanjh (1420).

“We believe this can be done by developing novel and innovative computer architectures.”

One aim, he says, is to reduce or eliminate the growing mismatch between data movement and processing speeds.

Processing speed refers to the rapidity with which a processor can manipulate data to solve its part of a larger problem. Data movement refers to the act of getting data from a computer’s memory to its processing chip and then back again. The larger the machine, the farther away from a processor the data may be stored and the slower the movement of data.

“In an exascale computer, data might be tens of thousands of processors away from the processor that wants it,” says Sandia computer architect Doug Doerfler (1422). “But until that processor gets its data, it has nothing useful to do. One key to scalability is to make sure all processors have something to work on at all times.”
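One common software tactic for that problem is to overlap communication with computation, so a processor works on data it already has while remote data is still in flight. The sketch below illustrates the idea in Python with the mpi4py library; the array names and sizes are hypothetical, chosen only to show the pattern, and this is not code from either laboratory.

```python
# Illustrative sketch: keep a processor busy while remote data is in flight
# by using non-blocking sends/receives and doing local work before waiting.
# Assumes mpi4py and NumPy; all names and sizes here are hypothetical.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local = np.random.rand(1_000_000)   # this rank's share of the problem
halo = np.empty(1_000)              # buffer for a neighbor's boundary data

left = (rank - 1) % size
right = (rank + 1) % size

# Start the data movement, but do not wait for it yet.
requests = [comm.Isend(local[:1_000], dest=left),
            comm.Irecv(halo, source=right)]

interior = local[1_000:].sum()      # useful work while the messages travel

MPI.Request.Waitall(requests)       # the neighbor's data has now arrived
boundary = (local[:1_000] * halo).sum()

print(f"rank {rank}: local result = {interior + boundary:.3f}")
```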

Splitting processors, increasing speed

Compounding the problem is new technology that has enabled designers to split a processor into first two, then four, and now eight cores on a single die. Some special-purpose processors have 24 or more cores on a die. Sudip suggests there might eventually be hundreds operating in parallel on a single chip.

“In order to continue to make progress in running scientific applications at these [very large] scales,” says Jeff Nichols, who heads the Oak Ridge branch of the institute, “we need to address our ability to maintain the balance between the hardware and the software. There are huge software and programming challenges and our goal is to do the critical R&D to close some of the gaps.”

Operating in parallel means that each core can work its part of the puzzle simultaneously with the other cores on a chip, greatly increasing the speed at which a processor operates on data. The method does not require faster clock speeds, measured in gigahertz, which would generate unmanageable amounts of heat as well as increased current leakage.
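As a toy illustration of that idea (a sketch only, with a made-up workload and sizes, not a Sandia code), one job can be split across however many cores a chip provides, with each core working on its piece at the same time:

```python
# Toy sketch: split one job into independent pieces so every core can work
# on its piece simultaneously. The workload and sizes are hypothetical.
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    """Each core's share of the puzzle: sum the squares in one range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    cores = cpu_count()                        # e.g. 2, 4, or 8 cores on a die
    step = n // cores
    chunks = [(c * step, (c + 1) * step) for c in range(cores)]
    chunks[-1] = (chunks[-1][0], n)            # last chunk absorbs the remainder

    with Pool(processes=cores) as pool:        # one worker per core
        parts = pool.map(partial_sum, chunks)  # pieces run in parallel

    print(f"{cores} cores, total = {sum(parts)}")
```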

(As a side note, the new method bolsters the continued relevance of Moore’s Law, the 1965 observation of Intel cofounder Gordon Moore that the number of transistors placed on a single computer chip will double approximately every two years.)

Power considerations

Another challenge for the institute is reducing the amount of power needed to run a future exascale computer.

“The electrical power needed with today’s technologies would be many tens of megawatts — a significant fraction of a power plant. A megawatt can cost as much as a million dollars a year,” says Sudip. “We want to bring that down.”
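At the figures Sudip cites, the arithmetic is straightforward; a hypothetical 20-megawatt machine, for example, would run to roughly

$$
20\ \text{MW}\times \$1\ \text{million per megawatt-year} \approx \$20\ \text{million per year}
$$

in electricity alone.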

Sandia and Oak Ridge will work together on these and other problems, he says. “Although all of our efforts will be collaborative, in some areas Sandia will take the lead and Oak Ridge may lead in others, depending on who has the most expertise in a given discipline.” In addition, a key component of the institute will be the involvement of industry and universities.

Wide interest in faster computing was evident in the response to an invitation-only workshop, “Memory Opportunities for High-Performing Computing,” sponsored in January by the institute.

Workshop organizers James Ang, Richard Murphy, and Arun Rodrigues (all 1422) planned for 25 participants but nearly 50 attended. Attendees represented the national labs, DOE, the National Science Foundation, the National Security Agency, the Defense Advanced Research Projects Agency, and leading manufacturers of processors and supercomputing systems. Robert Meisner (NNSA) and Fred Johnson (Office of Science) served on the program committee. -- Neal Singer
