Sandia LabNews

'Human presence detector' device fails controlled tests conducted by Sandia


A Sandia double-blind test of an instrument that its manufacturer said could detect the presence of human beings at a distance through any material found no evidence that it could do so.

The test results showed that the DKL LifeGuard Model 2 "human presence detector" failed to meet its published specifications and its performance was no better than random chance.

DOE’s Office of Safeguards and Security asked Sandia to evaluate the performance of the DKL LifeGuard, from DielectroKinetic Laboratories, LLC. The company advertised that some models of the device could detect living human beings at distances of up to 500 meters through any material. (The model tested has a published range of 20 meters.)

Such a capability, if demonstrated, could be a tremendous help in search and rescue, law enforcement, and security. The device had attracted attention in government circles. Its list price ranges from $6,000 to $15,000, depending on the model.

The question posed to Sandia was simply: "Does this device work?"

Sandians Dale Murray, Floyd Spencer, and Debra Spencer designed a double-blind test to determine whether the device would perform as advertised. Dale is an electrical engineer in Entry Control and Contraband Detection Technology Dept. 5848. Floyd is a statistician in Statistics and Human Factors Dept. 12323. Debra is an analyst in Advanced Systems Integration Dept. 5861.

"It was not your usual test," Dale says of the DOE request. On the other hand, it was related to the responsibilities he has as project leader for Sandia’s and DOE’s entry-control project. "We work in the area of access control, and other departments in our center deal with intrusion-detection sensors," he says.

Dale says the device consists essentially of a black rectangular box about 3 inches tall, 1 inch thick, and 8 inches long. Out of the bottom pops a handle so that the box swings freely. There’s also an antenna, a small laser like that in a lecture pointer, and a red LED light. There are some electronics inside.

Controlled test in remote area

A formal test protocol was first established. The Sandians conducted the test March 20 in a remote area (the old NATO site) south of Sandia. Five large plastic packing crates were set up in a line at 30-foot intervals. The goal was to see if the test operator, using the instrument, could detect in which of the five crates a human being was hiding. The operator was provided by the company and was a high-ranking member of DKL management.

The test was double-blind and random. Neither the instrument operator nor the three Sandia investigators (it is a fine point, but Debbie and Floyd were not present during the actual testing) would know which crate had the human test subject in it until after the results were tabulated.

The test set-up manager used a sealed, randomly generated test schedule to direct the test target (the human being) into one of the five containers. Using the device, the test operator would then attempt to determine which container the target was in. He "scanned" the crates from a distance of 50 feet, well within the device's published 20-meter range.
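The article does not spell out how the sealed schedule was produced. Purely as an illustration, a randomized assignment of the kind described could be generated along these lines (the function name, seed, and trial count here are assumptions for the sketch, not details from the Sandia protocol):

    import random

    def make_test_schedule(trials, crates=5, seed=None):
        # Assign the human target to one crate (numbered 1..crates) for each trial.
        # The finished list would be sealed before testing began, so that neither
        # the operator nor the investigators could see the assignments.
        rng = random.Random(seed)
        return [rng.randrange(1, crates + 1) for _ in range(trials)]

    # Example: a 25-trial schedule over the five crates
    print(make_test_schedule(trials=25, seed=1))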

First, a baseline evaluation was done to see if the instrument was operating correctly. The test operator and the investigators were all allowed to see which container the human test subject entered.

Under this uncontrolled condition – when the instrument operator already knew which crate contained the human – he was quickly successful 10 times in 10 trials. The same was true, of course, for everyone else present.

All subsequent tests were controlled. The test operator was not allowed to view which crate the test subject entered. Nor could the investigators. (They were confined inside a nearby instrument trailer during that time.) No one learned the full results until the entire set of tests was completed.

For each trial, one human test target was inside one of the five crates. The operator had a one-in-five probability of success by chance alone. Under these conditions, say the Sandians, the test operator began taking a much longer time to "scan" the crates.

This series of tests was spread over about four-and-a-half hours of total set-up and test time. There were six successes in 25 trials.
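For readers who want to check how six hits in 25 trials compares with guessing, the exact binomial arithmetic is short. This calculation is ours, not part of the Sandia report:

    from math import comb

    def prob_at_least(k, n, p):
        # P(X >= k) for X ~ Binomial(n, p): the chance of k or more hits by guessing alone
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n, k, p = 25, 6, 1 / 5      # 25 trials, 6 correct, 1-in-5 chance per trial
    print("hits expected by chance:", n * p)
    print("P(6 or more hits by chance): %.2f" % prob_at_least(k, n, p))

Six hits is barely above the five expected from pure guessing, and a showing at least that good comes up by chance roughly 40 percent of the time – which is what "no better than random chance" means in practice.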

A third series of tests was similar but a little more complicated. Multiple human targets could be randomly hidden in crates – but again each crate had a probability of 1/5 of having an occupant. Once again the results were consistent with random chance.

A number of rationalizations

The Sandians say the test operator offered a number of rationalizations for the difficulty of detecting the test subject and for the chance test results under controlled conditions. He said, for example, that the "sharp edges" of the crate were distorting the field and were interfering with the detection. However, the Sandia investigators point out that the manufacturer’s published capabilities for the device include statements such as "penetrates all forms of camouflaging" and "no known countermeasures."

Although the empirical test was the core of the Sandia analysis, the Sandia team also briefly examined the DKL product literature about the advertised physics behind its operation. The product literature says the instrument antenna detects the electrical field generated by the beating human heart, but the Sandia team found the idea put forth for that process "clearly wrong."

The Sandians also point out that the heart beats at a rate of 1.2 to 2.0 hertz and that the wavelength of a two-hertz signal is 93,150 miles. "The 15-inch antenna on the LifeGuard is entirely inadequate for receiving signals of that wavelength," they report.
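That wavelength figure follows directly from the relation wavelength = speed of light / frequency. A quick check (our arithmetic, using the vacuum speed of light, which may differ slightly from whatever value the report's authors used):

    # Wavelength of an electromagnetic signal at heartbeat frequencies: lambda = c / f
    C_M_PER_S = 299_792_458       # speed of light in vacuum, meters per second
    METERS_PER_MILE = 1609.344

    for freq_hz in (1.2, 2.0):
        wavelength_miles = C_M_PER_S / freq_hz / METERS_PER_MILE
        print(f"{freq_hz} Hz -> about {wavelength_miles:,.0f} miles")

The two-hertz case reproduces the roughly 93,000-mile figure in the report, underscoring how mismatched a 15-inch antenna is to a signal of that wavelength.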

"These points about the physics of the device support the conclusion of the results of the empirical tests," they say. "The device cannot perform any better than chance."

Dale, Floyd, and Debra have written and submitted to DOE a 10-page report titled "Double-Blind Evaluation of the DKL LifeGuard Model 2."

The Sandians came to a clear conclusion:

"Our evaluation of the DKL LifeGuard, although brief, leads us to conclude that the device performs no better than random chance. Although we only had time to evaluate the device with one test operator, that test operator was from the DKL organization, was selected by the manufacturer to perform that evaluation, and spent considerable time trying to use the device to the best of his ability. Thus, we conclude that no other test operator would be able to establish a better performance of the instrument except by chance."