Publications


LIDAR for heliostat optical error assessment

AIP Conference Proceedings

Little, Charles; Small, Daniel E.; Yellowhair, Julius

This project extends Sandia's experience in Light Detection and Ranging (LiDAR) to understand the abilities and limits of 3D laser scanning for capturing the relative canting angles between heliostat mirror surfaces in 3D space to an accuracy sufficient to measure canting errors. To the authors' knowledge, this approach has not previously been developed or implemented for this purpose. The goal is to automatically perform a 3D scan, retrieve the data, and use computational geometry and a priori mechanical knowledge of the heliostats (facet arrangement and size) to filter and isolate the facets and fit planar models to the facet surfaces. FARO FocusS70 laser range scanners are used, which provide dense coverage of the scan area in the form of a 3D point cloud. Each point carries the 3D coordinates of the surface position illuminated by the device as it sweeps the laser beam over an area in azimuth and elevation; a single scan can contain millions of points. The initial plan was to rely primarily on the opaque back side of the heliostat, since high-quality data were not expected from the reflective front side. Surprisingly, the front side did yield surface data, a consequence of soiling (collected dust) on the mirror surface. Typical point counts are 10k–100k points per facet, depending on facet area and scan point density. From the collected facet surface points, an individual planar fit is calculated per facet; the fitted normals correlate directly with the facet pointing angles, and comparisons with neighboring facets yield the canting angles. The process includes software that automatically: 1) controls the LiDAR scanner and downloads the resulting scan data, 2) isolates the heliostat data from the full scan, 3) filters the points associated with each individual facet, and 4) calculates the planar fit and relative canting angles for each facet. The goal of this work has been to develop this system to measure heliostat canting errors to better than 0.25 mrad accuracy, with processing time under 5 minutes per heliostat. A future goal is to place this or a comparable sensor, along with the software system, on an autonomous platform to collect and analyze heliostats in the field for tracking and canting errors in real time. This work complements Sandia's strategic thrust in autonomy for CSP collector systems.
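
As a minimal sketch of steps 3–4 of the pipeline described above (a planar fit per facet, then relative canting angles from the fitted normals), the Python fragment below shows one common way to do the fit with a singular value decomposition. The function names and the per-facet point arrays are hypothetical; this is an illustration, not the project's actual processing code.

```python
import numpy as np

def fit_facet_plane(points):
    """Least-squares plane fit to an (N, 3) array of facet surface points.

    Returns the unit normal of the best-fit plane (smallest singular vector
    of the mean-centered points); the normal corresponds to the facet
    pointing direction.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    # Orient consistently (e.g., toward +z) so angle comparisons are stable.
    return normal if normal[2] >= 0 else -normal

def relative_canting_angle_mrad(normal_a, normal_b):
    """Angle between two facet normals, in milliradians."""
    cos_angle = np.clip(np.dot(normal_a, normal_b), -1.0, 1.0)
    return np.arccos(cos_angle) * 1000.0

# Hypothetical usage: facet_points maps facet IDs to (N, 3) arrays of points
# already isolated from the full scan.
# normals = {fid: fit_facet_plane(pts) for fid, pts in facet_points.items()}
# canting = relative_canting_angle_mrad(normals[0], normals[1])
```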


Biologically Inspired Interception on an Unmanned System

Chance, Frances S.; Little, Charles; McKenzie, Marcus M.; Dellana, Ryan A.; Small, Daniel E.; Gayle, Thomas R.; Novick, David K.

Borrowing from nature, neural-inspired interception algorithms were implemented onboard a vehicle. To maximize the chance of success, work was conducted in parallel in a simulated environment and on physical hardware. The intercept vehicle used only optical imaging to detect and track the target. The key outcome is a proof-of-concept demonstration of a neural-inspired algorithm autonomously guiding a vehicle to intercept a moving target. This work sought to establish the key parameters for the intercept algorithm (sensors and vehicle) and to expand the knowledge and capability needed to implement neural-inspired algorithms in simulation and on hardware.
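
The abstract does not describe the interception algorithm itself; as a loosely related, heavily simplified illustration of optical-only interception guidance, the sketch below applies a proportional-navigation-style steering law that turns the vehicle in proportion to the target's measured bearing rate in the camera frame. All names, the gain, and the actuator call are hypothetical and are not taken from this work.

```python
import numpy as np  # kept for consistency with the other sketches

def steering_command(bearing, prev_bearing, dt, gain=3.0):
    """Turn-rate command (rad/s) from successive optical bearings to the target.

    bearing / prev_bearing: target azimuth in the camera frame (radians),
    e.g. derived from the target's horizontal pixel offset and the camera's
    field of view. A proportional-navigation-style law commands a turn rate
    proportional to the measured bearing rate, which tends to hold the target
    at a fixed position in the image and produces an intercept course.
    """
    bearing_rate = (bearing - prev_bearing) / dt
    return gain * bearing_rate

# Hypothetical usage inside a 20 Hz vision/control loop:
# cmd = steering_command(bearing_now, bearing_prev, dt=0.05)
# vehicle.set_turn_rate(cmd)   # placeholder actuator call
```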


LiDAR For Heliostat Optical Assessment (Final Report)

Small, Daniel E.; Little, Charles

This project has sought to develop new uses for surveying-quality Light Detection and Ranging (LiDAR) 3D scanning sensors in the automatic/autonomous assessment of optical errors in large-scale concentrating solar power heliostat fields. Past experiments demonstrated the ability of a 3D LiDAR to acquire highly accurate point-cloud measurements across several Sandia NSTTF heliostats. The goal of this project is to expand upon that work and determine whether and how it can be applied in large commercial heliostat fields.


Eyes On the Ground (Final Report)

Brost, Randolph B.; Little, Charles; McDaniel, Michael M.; Peter-Stein, Natacha P.; Wade, James R.

This report summarizes the work performed under the Sandia LDRD project "Eyes on the Ground: Visual Verification for On-Site Inspection." The goal of the project was to develop methods and tools to assist an IAEA inspector in assessing visual and other information encountered during an inspection. Effective IAEA inspections are key to verifying states' compliance with nuclear non-proliferation treaties. In the course of this work we developed a taxonomy of candidate inspector assistance tasks, selected key tasks to focus on, identified hardware and software solution approaches, and made progress in implementing them. In particular, we demonstrated the use of multiple types of 3-d scanning technology in simulated inspection environments, and implemented a preliminary prototype of a novel inspector assistance tool. This report summarizes the project's major accomplishments and gathers the abstracts and references for the publications and reports that were prepared as part of this work. We then describe work in progress that is not yet ready for publication. Approved for public release; further dissemination unlimited.


Eyes On the Ground: Path Forward Analysis

Brost, Randolph B.; Little, Charles; Peter-Stein, Natacha P.; Wade, James R.

A previous report assessed our progress to date on the Eyes On the Ground project and reviewed lessons learned. In this report, we address the implications of those lessons in defining the most productive path forward for the remainder of the project. We propose two main concepts: Interactive Diagnosis and Model-Driven Assistance. Of the two, the Model-Driven Assistance concept appears the more promising. The Model-Driven Assistance concept is based on an approximate but useful model of a facility, which provides a unified representation for storing, viewing, and analyzing data that is known about the facility. This representation provides value to both inspectors and IAEA headquarters, and facilitates communication between the two. The concept further includes a lightweight, portable field tool to aid the inspector in executing a variety of inspection tasks, including capture of images and 3-d scan data. We develop a detailed description of this concept, including its system components, functionality, and example use cases. The envisioned tool would provide value by reducing inspector cognitive load, streamlining inspection tasks, and facilitating communication between the inspector and teams at IAEA headquarters. We conclude by enumerating the top implementation priorities to pursue in the remaining limited time of the project. Approved for public release; further dissemination unlimited.
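
One possible reading of the unified facility representation, assumed for illustration and not taken from the report, is sketched below as a small data model linking approximate facility elements to captured observations such as images and 3-d scans. The class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Observation:
    """A single piece of inspection evidence attached to a facility element."""
    kind: str                 # e.g. "image", "scan", "note"
    uri: str                  # path to the captured file or text
    timestamp: datetime
    inspector: str

@dataclass
class FacilityElement:
    """An approximate model element (room, equipment item, declared area)."""
    element_id: str
    label: str
    approx_position: tuple    # coarse (x, y, z) in a facility frame
    observations: List[Observation] = field(default_factory=list)

@dataclass
class FacilityModel:
    """Unified, shareable representation of what is known about a facility."""
    facility_id: str
    elements: List[FacilityElement] = field(default_factory=list)

    def attach(self, element_id: str, obs: Observation) -> None:
        """Link new field data to the model so headquarters sees it in context."""
        for el in self.elements:
            if el.element_id == element_id:
                el.observations.append(obs)
                return
        raise KeyError(element_id)
```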


Eyes On the Ground: Year 2 Assessment

Brost, Randolph B.; Little, Charles; McDaniel, Michael M.; McLendon, William C.; Wade, James R.

The goal of the Eyes On the Ground project is to develop tools to aid IAEA inspectors. Our original vision was to produce a tool that would take three-dimensional measurements of an unknown piece of equipment, construct a semantic representation of the measured object, and then use the resulting data to infer possible explanations of equipment function. We report our tests of a 3-d laser scanner to obtain 3-d point cloud data, and subsequent tests of software to convert the resulting point clouds into primitive geometric objects such as planes and cylinders. These tests successfully identified pipes of moderate diameter and planar surfaces, but the extracted data also contained significant noise. We also investigated the IAEA inspector task context, and learned that task constraints may present significant obstacles to using 3-d laser scanners. We further learned that equipment scale and enclosing cases may confound our original goal of equipment diagnosis. Meanwhile, we also surveyed the rapidly evolving field of 3-d measurement technology, and identified alternative sensor modalities that may prove more suitable for inspector use in a safeguards context. We conclude with a detailed discussion of lessons learned and the resulting implications for project goals. Approved for public release; further dissemination unlimited.
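
As an illustration of converting point clouds into primitive geometric objects, the sketch below shows a generic RANSAC plane fit over an (N, 3) point array. It is a stand-in for the kind of software evaluated, not the specific tool tested in this work, and the iteration count and distance threshold are hypothetical.

```python
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.01, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0 supported by
    the most points within dist_thresh. Remaining points could be passed to
    a cylinder fit or a second plane fit to extract further primitives.
    """
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```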


Enhanced physical security through a command-intent driven multi-agent sensor network

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Love, Joshua; Amai, Wendy; Blada, Timothy; Little, Charles; Neely, Jason; Buerger, Stephen B.

Sandia’s Intelligent Systems, Robotics, and Cybernetics group (ISRC) created the Sandia Architecture for Heterogeneous Unmanned System Control (SAHUC) to demonstrate how heterogeneous multi-agent teams could be used for tactical operations including the protection of high-consequence sites. Advances in multi-agent autonomy and unmanned systems have provided revolutionary new capabilities that can be leveraged for physical security applications. SAHUC applies these capabilities to produce a command-intent driven, autonomously adapting, multi-agent mobile sensor network. This network could enhance the security of high-consequence sites; it can be quickly and intuitively re-tasked to rapidly adapt to changing security conditions. The SAHUC architecture, GUI, autonomy layers, and implementation are explored. Results from experiments and a demonstration are also discussed.


The Sandia architecture for heterogeneous unmanned system control (SAHUC)

Proceedings of SPIE - The International Society for Optical Engineering

Love, Joshua; Amai, Wendy; Blada, Timothy; Little, Charles; Neely, Jason; Buerger, Stephen B.

The Sandia Architecture for Heterogeneous Unmanned System Control (SAHUC) was produced as part of a three-year internally funded project performed by Sandia's Intelligent Systems, Robotics, and Cybernetics group (ISRC). ISRC created SAHUC to demonstrate how teams of Unmanned Systems (UMS) can be used for small-unit tactical operations as part of the protection of high-consequence sites. Advances in Unmanned Systems have provided crucial autonomy capabilities that can be leveraged and adapted for physical security applications. SAHUC applies these capabilities to provide a distributed ISR network for site security. This network can be rapidly re-tasked to respond to changing security conditions. The SAHUC architecture contains multiple levels of control. At the highest level a human operator inputs objectives for the network to accomplish. The heterogeneous unmanned systems automatically decide which agents can perform which objectives and then determine the best global assignment. The assignment algorithm is based upon coarse metrics that can be produced quickly; responsiveness was deemed more crucial than optimality for responding to time-critical physical security threats. Lower levels of control take the assigned objective, perform online path planning, execute the desired plan, and stream data (LIDAR, video, GPS) back for display on the user interface. SAHUC also retains an override capability, allowing the human operator to modify all autonomous decisions whenever necessary. SAHUC has been implemented and tested with UAVs, UGVs, and GPS-tagged blue/red force actors. The final demonstration illustrated how a small fleet, commanded by a remote human operator, could aid in securing a facility and responding to an intruder.
