Whole lotta shakin’ goin’ on
It took decades for technology to catch up with the math Sandia researcher David Smallwood worked out to control vibration table shakers.
David, a retiree who consults at the Labs, knew that shaking in all directions at once was the key to realistic parts testing. Now Sandia is putting the algorithms he developed more than 30 years ago to the test by shaking up nuclear weapon components.
Vibration machines are crucial to test the forces that make things fall apart in the bumpy real world, from small components to complete systems like airplanes or nuclear weapons. The machines are important to the aerospace and automotive industries, and have been in use since their invention in Germany in the late 1920s.
Large, high-frequency vibration machines that shake things in several directions simultaneously are relatively new. David’s algorithms made them possible, along with developments in digital controls, sophisticated sensors, faster computers with more memory, and better mechanical designs.
The standard vibration machine has a single axis that shakes things in one direction at a time. But parts sometimes fail when the real world bounces them from multiple directions: east-west, north-south, up-down, and rotations around each of those axes, what’s known as six degrees of freedom or 6DOF.
“If you tested it in each direction separately, you could get a totally different kind of failure,” says Sandia systems engineer Davinia Rizzo (1557), part of a team working on test specifications for a large high-frequency 6DOF vibration machine installed at Sandia last year, one of only two in the US.
Think of 6DOF and single-axis in the context of the pat-your-head, rub-your-stomach exercise for kids. They can all pat their heads or rub their stomachs separately. “But when you combine them, you discover an undetected failure — they can’t do one or the other or the timing is off or they rub their head and pat their stomach,” Davinia says. “It’s the same with single-axis and 6DOF. You move in one direction and the test unit appears fine. You move in the other and it appears fine. But when you move all directions at once, you discover an issue. We’ve demonstrated this behavior in the lab.”
6DOF could revolutionize testing
Sandia wants to use 6DOF to qualify weapons components and revolutionize the way it does mechanical testing. Better tests could discover currently unknown paths to failure and reduce test time and cost.
“We’re mimicking rides on airplanes, rockets, or in the back of a truck to ensure components or systems that we’re testing are going to survive their environment before we fly them,” says Kevin Cross (1521), who’s in charge of Sandia’s vibration lab. “It’s one of our tools to prove reliability standards that we have to meet for our components.”
David, who started doing vibration testing five decades ago at White Sands Missile Range as a New Mexico State University undergraduate, says researchers recognized long ago that single-axis vibration testing wasn’t enough. “It did not represent the real world,” says David, who consults with the 6DOF team.
Multi-axis shaking was the goal from the earliest days of testing. Norman Hunter, another consultant to the team, worked on Sandia’s pioneering efforts in the late 1960s and early 1970s to run two shakers concurrently using analog controls.
That didn’t work at higher frequencies. “Things kind of fell apart,” David says. “I used to joke that Norm would sit there with his hand on the abort button so when the system went unstable he could stop it.”
David developed breakthrough algorithms
Sandia researchers began exploring early versions of digital controllers. In 1978, David developed algorithms outlining digital control of vibration on multiple shakers, the first publication of the math needed to do that. His concept remains the foundation for today’s multi-axis vibration controllers.
The breakthrough came when he figured out how to derive multiple correlated or partially correlated signals in real time. “That’s what we had to do for a control system for a shaker,” David says. “You can’t put something out, wait to do some calculations, and then put something else out. The system insists that you have continuous output.”
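The core idea of producing multiple signals with a prescribed correlation can be illustrated with a standard linear-algebra technique: mix independent noise channels through the Cholesky factor of a target correlation matrix. This is a generic sketch of that idea, not Smallwood’s published control algorithm; the function name and example values are invented for illustration.

```python
import numpy as np

def correlated_signals(corr, n_samples, seed=0):
    """Mix independent Gaussian noise channels through the Cholesky
    factor of a target correlation matrix, so the output channels'
    sample correlations approximate the target."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)               # corr = L @ L.T
    white = rng.standard_normal((corr.shape[0], n_samples))
    return L @ white                           # each row is one channel

# Two drive channels with a target correlation of 0.7
target = np.array([[1.0, 0.7],
                   [0.7, 1.0]])
x = correlated_signals(target, 200_000)
print(np.corrcoef(x))   # off-diagonal terms come out near 0.7
```

A real shaker controller does this per frequency line on a cross-spectral density matrix and must, as David notes, keep the output stream continuous; the sketch only shows the correlation step.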
Sandia built a system in the early 1980s to drive two digitally controlled shakers. It worked, but wasn’t practical because computers of the time were too slow.
Seattle-based Team Corp. came up with a 6DOF shaker design about a decade ago. “I looked at it and said, ‘That might actually work,’” David recalls. The company built a small 6DOF machine as a demonstration and research tool. After getting feedback, it developed its large 6DOF machine, capable of testing items up to 50 pounds.
The machine has 12 barrel-like electrodynamic shakers, four on each side for the horizontal X and Y axes and four underneath for the Z, or vertical, axis. Using the various shakers together in different configurations achieves rotations around each axis. The shakers, which exert 4,000 pounds of force per axis, drive a 30- by 30- by 14-inch rectangular block in the center where a test piece sits.
The machine is meant for component- or subsystem-level tests. It doesn’t have enough force for very large items, and augments rather than replaces Sandia’s larger single-axis shakers. Single-axis machines run separate tests along individual axes, and experimentalists combine those results to approximate multiple-direction behavior.
New technology brings new challenges
The very newness of 6DOF poses challenges. “There are tons of questions about how to use 6DOF in our testing philosophy, what to use for specifications, and how to control 6DOF machines,” Davinia says. She’s part of the team studying minimum drive, an approach David developed.
“The idea behind minimum drive is that nature likes minimum-energy solutions,” David says. The team wants to find the minimum inputs needed to accelerate the system to required levels at various frequencies. “The assumption is this is close to what nature would do,” he says. “We are trying to maximize the capability of the shaker system by mimicking nature.”
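Minimum-energy input problems of this general kind are often posed, at each frequency line, as an underdetermined least-squares problem: given a response matrix mapping drive inputs to control-point accelerations, the smallest-norm drive that achieves a specified response is given by the Moore-Penrose pseudoinverse. The sketch below is a generic illustration of that principle with invented numbers, not the team’s actual minimum-drive method.

```python
import numpy as np

# Hypothetical response matrix at one frequency line:
# 3 control accelerometers driven by 4 shaker inputs (underdetermined).
H = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.3, 1.2, 0.4, 0.2],
              [0.1, 0.3, 0.9, 0.6]])

target = np.array([1.0, 0.5, 0.8])   # required accelerations

# Minimum-norm ("minimum-energy") drive reproducing the target exactly
drive = np.linalg.pinv(H) @ target

print(np.allclose(H @ drive, target))   # target achieved
# Any other exact solution differs from `drive` by a null-space
# component, and therefore has a strictly larger norm.
```

With more shakers than control points there are infinitely many drives that hit the target; the pseudoinverse picks the one with the least input energy, which is the flavor of “what nature would do” that the quote describes.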
Norman says the challenge is replicating real-world environments for multiple directions and developing specifications for minimum drive. “I think we also really need to learn a lot about the quirks of controlling these multiple degrees of freedom simultaneously. We’re still fairly new at that,” says Norman, who spent decades doing vibration testing at Sandia and Los Alamos national laboratories.
Sandia has performed two experimental 6DOF tests of nuclear weapons components, one for the B61-12 and one for the W88 ALT 370, says Laura Jacobs (1521), 6DOF research lead.
“These tests are a much better representation of what happens in the field so we can create better computational models and we can have more confidence in our designs,” she says.
The 6DOF system is unlike any other, so it requires different ways of specifying tests — sometimes without all the data needed from the field, Laura says. “A big part of preparing for the test is determining what we need, and then how to achieve what we need.”
Using computational modeling to help fill gaps
Some field tests haven’t been done; others don’t capture everything that could happen. The team wants to determine how to run a successful test when they lack complete field test information. Team members have turned to computational modeling to figure out from the existing data exactly what the tests need to achieve, Laura says.
Kevin says researchers have begun combining field test data from the X, Y, and Z axes for simultaneous directional testing. But rotational data doesn’t exist, and without it, no one’s sure how to design a rotational test, he says.
Still, he says, “we can prove that just doing three axes together is a better representation of a real-world environment even without the rotations.”
The latest international standards for vibration tests list what Davinia describes as a generic placeholder for multi-axis testing. “At least it recognizes multi-axis testing as an acceptable type of test to do, which is a big breakthrough,” she says.
“It’s going to be cool to develop this capability and have the research and the math and the evidence behind it to prove how this partners with single-axis testing and takes us to richer understanding,” Davinia says. Then she laughs. “Until 50 years from now when they come up with new technology and we go through all of this again.”
-- Sue Major Holmes
The eyes have it: Sandia teams with industry to improve human-data interaction
Intelligence analysts working to identify national security threats in war zones, airports, or elsewhere often flip through multiple images to create a video-like effect. They also may toggle between images at lightning speed, pan across images, zoom in and out, or view videos or other moving records.
These dynamic images demand software and hardware tools that will help intelligence experts analyze the images more effectively and efficiently extract useful information from vast amounts of quickly changing data, says Laura McNamara (5346), an applied anthropologist at Sandia who has studied how certain analysts perform their jobs.
“Our core problem is designing computational information systems that make people better at getting meaningful information from those data sets, which are large and diverse and coming in quickly in high-stress environments,” Laura says.
A first step toward technological solutions for government agencies and industry grappling with this problem is a Cooperative Research and Development Agreement that Sandia has signed with EyeTracking Inc., a San Diego small business that specializes in eye tracking data collection and analysis.
“Both Sandia and EyeTracking are being helped by a direct link between each other,” says EyeTracking president James Weatherhead. “The hope is for both sides to come out with these tools and feed solutions back to different government agencies.”
Eye tracking monitors gazes, measures workload
In general, eye tracking measures the eyes’ activity by monitoring a viewer’s gaze on a computer screen, noting where viewers look, what they ignore, and when they blink. Current tools work well for analyzing static images, like the children’s picture book Where’s Waldo, and for video images where researchers anticipate content of interest, for example the placement of a product in a movie.
Sandia researcher Laura Matzen (1463) says such eye tracking data has been used in laboratory environments to study how people reason and differences between the ways experts and novices use information, but now the Labs needs to study real-world, or dynamic, environments.
If EyeTracking and Sandia can figure out ways to provide improved data analysis for dynamic images, Laura Matzen says researchers can:
- design enhanced experiments or field studies using dynamic images;
- compare how people or groups of people interact with dynamic visual data;
- advance cognitive science research to explore how expertise affects visual cognition, which could be used to create more effective training programs; and
- inform new system designs, for example, to help scale up certain types of surveillance by partially automating some analyst steps or highlighting anomalies to help analysts notice them or make sense of them more quickly.
EyeTracking provides hardware, software, Sandia offers access to analysts
EyeTracking is the exclusive distributor of the FOVIO Eye Tracker, a camera about the size of a soda can that’s placed under a computer monitor to track viewers’ eye movements. The company was started by Sandra Marshall, a cognitive psychologist from San Diego State University, who has worked with colleagues to develop software packages for collecting and analyzing eye tracking data.
Under the agreement, researchers Dan Morrow (5346) and Mike Haass (1461) are working with EyeTracking to figure out how to capture within tens of milliseconds the content beneath the point on a screen where a viewer is looking.
“How soon does the analyst look at the target region? How long do they linger there? Do they ever get there?” Dan asks. “If they are dwelling in another area, then we might go back after the fact to figure out why they are doing that.”
Until now, eye tracking research has shown how viewers react to stimuli on the screen. For example, a bare, black tree against a snow-covered scene will naturally attract attention. This type of bottom-up visual attention, where the viewer is reacting to stimuli, is well understood, Laura Matzen says.
But what if the viewer is looking at the scene with a task in mind, like finding a golf ball in the snow? They might glance at the tree quickly, but then their gaze goes to the snow to search for the golf ball. This type of top-down visual cognition is not well understood, and Sandia hopes to develop models that predict where analysts will look, she says.
Sandia researchers have worked with intelligence analysts to better understand how they do their jobs. In one experiment, they filmed them and asked them to describe their thought processes at points in the video, but because their visual task strategies had become automatic over the course of their careers, they couldn’t accurately describe how they did their jobs, Laura McNamara says.
“We know a lot about information processing, the physiology and neuroscience of visual processing,” she says. “How do we take that and apply it in these highly dynamic and real-world environments? The technologies are developed around a laboratory model as opposed to these real-world task environments.”
Partnership could lead to software designs that keep end user in mind
Laura McNamara says researchers need to anticipate analysts’ decisions in real-world environments to create a model of top-down visual decision-making. “We want to understand how fixation on something leads to analyst decisions, such as detouring to get information from a different source,” she says. “Right now, there’s no way to do that kind of complex information foraging modeling and incorporate eye tracking. You can’t do it, unless you want to go back and hand-code every single fixation.”
That’s “incredibly tedious,” Dan says, so he and Mike are exploring how to match time-stamped data with the content the viewer is focused on as they toggle, zoom, or pan through their work day.
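One generic way to frame that matching problem: log every viewport change (toggle, pan, zoom) with a timestamp, then map each gaze sample through whichever viewport was active at that instant to recover image-space coordinates. A minimal sketch under those assumptions, with invented data structures that are not EyeTracking’s actual software:

```python
import bisect
from dataclasses import dataclass

@dataclass
class Viewport:
    """Viewport state in effect from time `t` onward (hypothetical log entry)."""
    t: float        # timestamp, seconds
    x0: float       # image x-coordinate at the screen origin
    y0: float       # image y-coordinate at the screen origin
    zoom: float     # screen pixels per image pixel

def gaze_to_image(viewports, t, sx, sy):
    """Map a screen-space gaze sample (sx, sy) at time t to image
    coordinates, using the most recent viewport change before t."""
    times = [v.t for v in viewports]
    i = bisect.bisect_right(times, t) - 1
    v = viewports[max(i, 0)]
    return (v.x0 + sx / v.zoom, v.y0 + sy / v.zoom)

# Viewer pans at t=2.0, then zooms in at t=5.0
log = [Viewport(0.0, 0, 0, 1.0),
       Viewport(2.0, 100, 50, 1.0),
       Viewport(5.0, 100, 50, 2.0)]

print(gaze_to_image(log, 1.0, 30, 40))   # (30.0, 40.0)
print(gaze_to_image(log, 6.0, 30, 40))   # (115.0, 70.0)
```

Once each fixation resolves to a stable image-space location, the hand-coding step the researchers describe can, in principle, be automated.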
“You might build this great radar, for example, but if you haven’t thought through how that data is interpreted, it’s not going to be successful because it’s the whole system including the human analyst that creates mission success,” Dan says.
The CRADA and several other projects at Sandia aim to strengthen the connections between humans and technology and to design systems with the end user in mind, Laura McNamara says.
“Where this could end up going is ensuring that as we invest money on information and analysis environments for intelligence analysts who are facing this firehose of information, we don’t give them software that increases their cognitive and perceptual load or that they just can’t use,” she says.
-- Heather Clark
‘Everyone needs a space to work’: Sandia gains Arctic airspace to further research in many fields
The Coast Guard, oil companies, climate researchers, and unmanned aircraft and robotic vehicle manufacturers all share an interest in the changing Arctic.
Helping further those interests, Sandia researcher Mark Ivey (6913) and Sandia colleagues worked closely with DOE program managers in the Office of Science to secure formal approval for a block of Federal Aviation Administration (FAA) special-use air space. On May 28, FAA published notice of W-220, a new Warning Area in which Sandia-approved participants can gather data on clouds and atmospheric constituents, practice search-and-rescues at sea, and track the northward movement of retreating sea ice.
“Everyone needs a space to work,” says Mark.
The 40-mile-wide, 700-mile-long air space stretches from just offshore at Oliktok Point, the northern end of the road system on the North American continent, to about 400 miles short of the North Pole.
The authorization by FAA rewarded roughly five years of patient effort by Mark, his team, and their program managers who supported the efforts. “In 2004, we were granted a 4-mile-diameter Restricted Area around Oliktok Point, our base of operations for atmospheric measurements,” Mark says. Restricted areas are established in US airspace; warning areas apply to international airspace. “That was renewed in 2010. But we saw the possibility of more extensive, ongoing experiments, with renewed interest in operating offshore.”
Careful consideration by FAA
Still, it was clear to everyone involved that tethered, sensor-laden balloons hanging invisibly in a low cloud would present a danger to aircraft, particularly the many small and privately operated aircraft in Alaska.
So Mark, along with his predecessor, retired Sandia climate researcher Bernie Zak, applied and reapplied, revised, and reapplied again on behalf of DOE to be granted the larger air space, which extends over international waters and provides a significant safety margin to experimenters by warning pilots who might be considering entering the area.
The FAA’s hesitations, Mark says, weren’t arbitrary. “The FAA looks at these requests pretty carefully because if they didn’t, the whole country could soon be carved up into restricted air space, making flying a nightmare.”
Barrow, the other hub of activity for Mark’s team on the North Slope of Alaska, was too busy an aviation area to be considered for special-use air space. But even at remote Oliktok Point, pilots protested at a meeting hosted by the FAA. “They said there’s roughly 1,000 miles of coastline in the Alaskan Arctic with only two instrumented airports, Deadhorse and Barrow. ‘Look what you’re doing,’ they said, ‘these two instrumented airports are on either side of your proposed warning area. Suppose a pilot has a problem and one airport is fogged in; how does the pilot get to the other airport?’”
A smaller set of fences
To solve the problem, the warning area was divided into 16 areas: eight horizontal areas, each with two vertical layers. One vertical layer extends from zero to 2,000 feet and the other from 2,000 feet to 10,000 feet. To make it easier for pilots to transit the area, the eight horizontal subdivisions extend south to north. “That created a smaller set of fences that people can get around if they need to,” Mark says. “We also worked out ways that pilots could contact us to find out where current research activities were located.”
Then the question was raised in committee as to whether the warning area boundaries should follow longitudinal lines, so it would get smaller as it approached the Pole, dovetailing with Canada and Russia. After discussion, the area was set at 40 miles across from north to south. The area therefore stops short of the Pole to avoid intruding on the airspace of other countries.
“I saw gaining the warning area as a big win for DOE’s Atmospheric Radiation Measurement (ARM) facilities, because it opens up the Arctic for new Office of Science research efforts,” says Mark. “It makes it possible to do things out there it wouldn’t be possible to do otherwise.”
The facility, according to a description from DOE, “is hosting a research campaign designed to demonstrate how small, low-cost, unmanned aerial systems can be used to study and measure clouds and aerosols in the cold and harsh Arctic atmosphere.”
Sandia, which started the Arctic effort, now manages Oliktok Point for the Office of Science and the ARM Program, because, Mark says, “the FAA awards special-use airspace only to other federal agencies.”
But other users have other interests. The first project to make use of the new airspace is the Coast Guard Research and Development Center’s (RDC) Arctic Technology Evaluation 2015 search-and-rescue exercise (SAREX), a Cooperative Research and Development Agreement (CRADA) initiative involving the oil company ConocoPhillips and the RDC. Other partners include Insitu/Boeing, Era Helicopters, the National Oceanic and Atmospheric Administration, and multiple operational Coast Guard units. All these entities are working with Sandia on a joint effort involving interoperability between manned and unmanned aircraft systems (UAS), sometimes referred to as drones, to conduct search-and-rescue operations.
Looking for Thermal Oscar
“The Coast Guard is concerned about search-and-rescue in the Arctic; they haven’t had a year-round presence there but they’re interested,” says Mark. “What’s changed in recent years is a lot of near-shore oil exploration and production activity, including helicopter operations.”
For the exercise, Coast Guard Cutter Healy deployed a six-person life raft and Thermal Oscar, an RDC-developed floating dummy outfitted with a heat source that makes it visible to infrared sensors, as search-and-rescue (SAR) targets for the UAS to locate. The UAS launched from land at Oliktok Point and transited out to the special-use airspace via an altitude reservation established by the FAA. Control of the UAS was passed to operators on CGC Healy to execute a search action plan to locate the SAR targets. Once the UAS was over the targets, CGC Healy passed their position to manned aircraft on shore and vectored the aircraft in for recovery efforts.
“It’s testing rescue communications, among other things. Here on the North Slope, we don’t have the satellite coverage or the infrastructure of the lower 48,” says Mark. “Somewhere offshore, Insitu/Boeing passed control of ScanEagle to someone on the Healy, which is a big deal in the drone world, takes nontrivial technology, and could be important in future search-and-rescues.”
The technology and practices implemented during Arctic Shield will provide useful information for future ARM/DOE UAS research activities, says Mark. “For example, electronics technologist Todd Houchens (6913) monitored radar scopes at NORAD during Arctic Shield to check air traffic near Oliktok Point prior to UAS launch, and two FAA representatives on site at Oliktok provided helpful suggestions for future operations.
“And the whole event is taking place as safely as possible within this new warning area,” Mark says.
For more information about ARM, see www.arm.gov.
-- Neal Singer