Publications

Results 1–25 of 39

LIDAR for heliostat optical error assessment

AIP Conference Proceedings

Small, Daniel E.; Little, Charles

This project extends Sandia's experience in Light Detection And Ranging (LiDAR) to understand the capabilities and limits of using 3D laser scanning to capture the relative canting angles between heliostat mirror surfaces with an accuracy sufficient to measure canting errors. To the authors' knowledge, this approach has never been developed or implemented for this purpose. The goal is to automatically perform a 3D scan, retrieve the data, use computational geometry and a priori mechanical knowledge of the heliostats (facet arrangement and size) to filter and isolate the facets, and fit planar models to the facet surfaces. FARO FocusS70 laser range scanners are used, which provide dense coverage of the scan area in the form of a 3D point cloud. Each point records the 3D coordinates of the surface position illuminated as the device sweeps the laser beam over an area in azimuth and elevation. These scans can contain millions of points in total. The initial plan was to capture the mirror primarily from the back side of the heliostat (the back side being opaque), since high-quality data was not expected from the reflective front side. Surprisingly, the front side did yield surface data, a result attributable to the soiling, or collected dust, on the mirror surface. Typical point counts on the mirror facets fall between 10k and 100k points per facet, depending on the facet area and the scan point density. The collected facet surface points are used to calculate an individual planar fit per facet, whose normal corresponds directly to the facet pointing angle. Comparisons with neighboring facets yield the canting angles. The process includes software that automatically: 1) controls the LiDAR scanner and downloads the resultant scan data, 2) isolates the heliostat data from the full scan, 3) filters the points associated with each individual facet, and 4) calculates the planar fit and relative canting angles for each facet. The aim of this work has been to develop this system to measure heliostat canting errors with an accuracy better than 0.25 mrad, with processing time under 5 minutes per heliostat. A future goal is to place this or a comparable sensor on an autonomous platform, along with the software system, to collect and analyze heliostat data in the field for tracking and canting errors in real time. This work complements Sandia's strategic thrust in autonomy for CSP collector systems.
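
As a rough sketch of the planar-fit and canting-angle step described above, the fragment below fits a least-squares plane to each facet's points via an SVD and reports the angle between two facet normals in milliradians. It assumes the facet points have already been segmented into numpy arrays; the function names are illustrative, not the project's code.

```python
# Illustrative sketch (not the project's software): fit a plane to each
# facet's points and compare facet normals to get a relative canting angle.
import numpy as np

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Least-squares plane fit to an (N, 3) point array; returns the unit normal."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def relative_canting_mrad(facet_a: np.ndarray, facet_b: np.ndarray) -> float:
    """Angle between two facet normals, in milliradians."""
    na, nb = fit_plane_normal(facet_a), fit_plane_normal(facet_b)
    cos_angle = np.clip(abs(np.dot(na, nb)), 0.0, 1.0)  # abs() ignores normal sign
    return float(np.arccos(cos_angle) * 1000.0)
```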

Biologically Inspired Interception on an Unmanned System

Chance, Frances S.; Little, Charles; Mckenzie, Marcus; Dellana, Ryan A.; Small, Daniel E.; Gayle, Thomas R.; Novick, David K.

Borrowing from nature, we implemented neural-inspired interception algorithms onboard a vehicle. To maximize the chance of success, work proceeded in parallel in a simulated environment and on physical hardware. The intercept vehicle used only optical imaging to detect and track the target. The key outcome is a proof-of-concept demonstration of a neural-inspired algorithm autonomously guiding a vehicle to intercept a moving target. The work also sought to establish the key parameters of the intercept algorithm, sensors, and vehicle, and to expand the knowledge and capability needed to implement neural-inspired algorithms in simulation and on hardware.
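
The abstract does not specify the guidance law, so purely as a point of comparison (this is a classical baseline, not the paper's neural-inspired algorithm), the sketch below shows a proportional-navigation command driven by successive optical bearing measurements, the same kind of bearing-only information the intercept vehicle relied on. All names and gains are hypothetical.

```python
# Classical proportional-navigation baseline, NOT the neural-inspired algorithm
# from the paper: steer so that the target's line-of-sight (optical bearing)
# stops rotating, which places the vehicle on an intercept course.
def pro_nav_command(bearing_now: float, bearing_prev: float,
                    dt: float, closing_speed: float, gain: float = 3.0) -> float:
    """Lateral acceleration command from two successive bearing measurements (radians)."""
    los_rate = (bearing_now - bearing_prev) / dt   # line-of-sight rotation rate, rad/s
    return gain * closing_speed * los_rate         # classic PN law: a = N * Vc * d(lambda)/dt

# Example: bearing drifting by 0.005 rad over 0.5 s with a 20 m/s closing speed.
accel = pro_nav_command(0.105, 0.100, 0.5, closing_speed=20.0)
```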

Eyes On the Ground (Final Report)

Brost, Randolph B.; Little, Charles; McDaniel, Michael M.; Peter-Stein, Natacha P.; Wade, James R.

This report summarizes the work performed under the Sandia LDRD project "Eyes on the Ground: Visual Verification for On-Site Inspection." The goal of the project was to develop methods and tools to assist an IAEA inspector in assessing visual and other information encountered during an inspection. Effective IAEA inspections are key to verifying states' compliance with nuclear non-proliferation treaties. In the course of this work we developed a taxonomy of candidate inspector assistance tasks, selected key tasks to focus on, identified hardware and software solution approaches, and made progress in implementing them. In particular, we demonstrated the use of multiple types of 3-d scanning technology applied to simulated inspection environments, and implemented a preliminary prototype of a novel inspector assistance tool. The report summarizes the project's major accomplishments and gathers the abstracts and references for the publications and reports that were prepared as part of this work. We then describe work in progress that is not yet ready for publication. Approved for public release; further dissemination unlimited.

Eyes On the Ground: Path Forward Analysis

Brost, Randolph B.; Little, Charles; Peter-Stein, Natacha P.; Wade, James R.

A previous report assessed our progress to date on the Eyes On the Ground project and reviewed lessons learned. In this report, we address the implications of those lessons in defining the most productive path forward for the remainder of the project. We propose two main concepts: Interactive Diagnosis and Model-Driven Assistance. Of the two, the Model-Driven Assistance concept appears the more promising. It is based on an approximate but useful model of a facility, which provides a unified representation for storing, viewing, and analyzing data that is known about the facility. This representation provides value to both inspectors and IAEA headquarters, and facilitates communication between the two. The concept further includes a lightweight, portable field tool to aid the inspector in executing a variety of inspection tasks, including capture of images and 3-d scan data. We develop a detailed description of this concept, including its system components, functionality, and example use cases. The envisioned tool would provide value by reducing inspector cognitive load, streamlining inspection tasks, and facilitating communication between the inspector and teams at IAEA headquarters. We conclude by enumerating the top implementation priorities to pursue in the remaining limited time of the project. Approved for public release; further dissemination unlimited.
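
As a purely hypothetical illustration of what a unified facility representation of this kind might look like, the sketch below defines a minimal data model in which photos, 3-d scans, and notes attach to named facility areas. The class and field names are our own invention, not the project's design.

```python
# Hypothetical sketch of a unified facility representation: inspection
# evidence (photos, scans, notes) is filed under named areas of the facility
# so that inspectors and headquarters share one view of what is known.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Observation:
    """A single piece of inspection evidence tied to a location in the model."""
    kind: str                 # e.g. "photo", "3d_scan", "note"
    uri: str                  # where the raw data (image, point cloud) is stored
    timestamp: datetime
    area_id: str              # which facility area it belongs to

@dataclass
class FacilityArea:
    """An approximate but useful piece of the facility (room, hall, yard)."""
    area_id: str
    name: str
    footprint_m2: Optional[float] = None
    observations: List[Observation] = field(default_factory=list)

@dataclass
class FacilityModel:
    """Unified store that both inspectors and headquarters can view and annotate."""
    facility_name: str
    areas: List[FacilityArea] = field(default_factory=list)

    def attach(self, obs: Observation) -> None:
        """File an observation under its facility area."""
        for area in self.areas:
            if area.area_id == obs.area_id:
                area.observations.append(obs)
                return
        raise KeyError(f"unknown area: {obs.area_id}")
```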

Eyes On the Ground: Year 2 Assessment

Brost, Randolph B.; Little, Charles; McDaniel, Michael M.; McLendon, William C.; Wade, James R.

The goal of the Eyes On the Ground project is to develop tools to aid IAEA inspectors. Our original vision was to produce a tool that would take three-dimensional measurements of an unknown piece of equipment, construct a semantic representation of the measured object, and then use the resulting data to infer possible explanations of equipment function. We report our tests of a 3-d laser scanner to obtain 3-d point cloud data, and subsequent tests of software to convert the resulting point clouds into primitive geometric objects such as planes and cylinders. These tests successfully identified pipes of moderate diameter and planar surfaces, but also incurred significant noise. We also investigated the IAEA inspector task context, and learned that task constraints may present significant obstacles to using 3-d laser scanners. We further learned that equipment scale and enclosing cases may confound our original goal of equipment diagnosis. Meanwhile, we also surveyed the rapidly evolving field of 3-d measurement technology, and identified alternative sensor modalities that may prove more suitable for inspector use in a safeguards context. We conclude with a detailed discussion of lessons learned and the resulting implications for project goals. Approved for public release; further dissemination unlimited.
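
The abstract does not name the fitting software used; as one common open-source approach to the plane-extraction step it describes, the sketch below applies RANSAC plane segmentation in Open3D to peel dominant planes out of a scan, leaving residual points that could then be tested for cylinders such as pipes. The file name and thresholds are illustrative.

```python
# Illustrative open-source approach (not necessarily the software tested in
# this project): RANSAC plane extraction from a scanned point cloud with Open3D.
import open3d as o3d

def extract_planes(pcd, max_planes=5, dist=0.01):
    """Peel off up to max_planes dominant planes; returns (plane_eq, inlier_cloud) pairs."""
    planes, remaining = [], pcd
    for _ in range(max_planes):
        if len(remaining.points) < 100:
            break
        model, inliers = remaining.segment_plane(distance_threshold=dist,
                                                 ransac_n=3, num_iterations=1000)
        planes.append((model, remaining.select_by_index(inliers)))
        remaining = remaining.select_by_index(inliers, invert=True)
    return planes

scan = o3d.io.read_point_cloud("inspection_scan.pcd")   # hypothetical file name
for eq, patch in extract_planes(scan):
    print(f"plane {eq} supported by {len(patch.points)} points")
```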

Irreversibility of Image Transform using Feature Descriptors

Little, Charles; Tucker, James D.; Wilson, Christopher W.; Weber, Thomas M.

Our work in radiographic image matching has centered on the use of SURF (Speeded Up Robust Features) for feature detection, and FLANN (Fast Library for Approximate Nearest Neighbors) for feature matching. We found that while the SURF process does return location, scale, and rotation information for each detected feature, this information is not essential for image matching. The remaining feature descriptor data does not appear to contain enough information to reconstruct any useful portion of an image, and therefore does not permit recovery of the original image. This led us to wonder whether we had, in fact, discovered an irreversible process: one in which the original image cannot be reconstructed from the retained feature data. Additional detail on the derivation of the image processing and matching algorithms and on the irreversibility hypothesis is available in the final SAND report documenting our previous LDRD work (SAND2015-9665, “Processing Radiation Images Behind an Information Barrier for Automatic Warhead Authentication,” Little, Wilson, Weber, and Novick, 2015).
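
As a minimal sketch of the general SURF-plus-FLANN pipeline discussed above (not the authors' implementation, which is documented in the cited SAND report), the fragment below detects SURF features in two images and keeps distinctive FLANN matches via Lowe's ratio test. It assumes an opencv-contrib build with the non-free SURF module enabled, and the file names are hypothetical. Note that SURF descriptors summarize local gradient statistics rather than pixel values, which is consistent with the irreversibility argument above.

```python
# Sketch of SURF feature detection plus FLANN-based matching with OpenCV;
# requires an opencv-contrib build with the non-free SURF module enabled.
import cv2

def match_features(img_a, img_b, ratio=0.7):
    """Return 'good' SURF matches between two grayscale images."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_a, desc_a = surf.detectAndCompute(img_a, None)
    kp_b, desc_b = surf.detectAndCompute(img_b, None)

    # FLANN with a KD-tree index, the usual choice for SURF's float descriptors.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(desc_a, desc_b, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    return [m for m, n in matches if m.distance < ratio * n.distance]

a = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
b = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE)
print(f"{len(match_features(a, b))} good matches")
```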
