Publications

Eyes On the Ground: Year 2 Assessment

Brost, Randolph B.; Little, Charles; McDaniel, Michael M.; McLendon, William C.; Wade, James R.

The goal of the Eyes On the Ground project is to develop tools to aid IAEA inspectors. Our original vision was to produce a tool that would take three-dimensional measurements of an unknown piece of equipment, construct a semantic representation of the measured object, and then use the resulting data to infer possible explanations of equipment function. We report our tests of a 3-d laser scanner to obtain 3-d point cloud data, and subsequent tests of software to convert the resulting point clouds into primitive geometric objects such as planes and cylinders. These tests successfully identified pipes of moderate diameter and planar surfaces, but also incurred significant noise. We also investigated the IAEA inspector task context, and learned that task constraints may present significant obstacles to using 3-d laser scanners. We further learned that equipment scale and enclosing cases may confound our original goal of equipment diagnosis. Meanwhile, we also surveyed the rapidly evolving field of 3-d measurement technology, and identified alternative sensor modalities that may prove more suitable for inspector use in a safeguards context. We conclude with a detailed discussion of lessons learned and the resulting implications for project goals. Approved for public release; further dissemination unlimited.
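
The conversion step described above, reducing a 3-d point cloud to primitive geometric objects such as planes and cylinders, is commonly done with robust model fitting. The following is a minimal sketch of RANSAC plane extraction in Python; the function name, tolerances, and synthetic test data are illustrative assumptions, not the project's actual scanner pipeline.

    import numpy as np

    def ransac_plane(points, n_iters=500, inlier_tol=0.02, seed=None):
        """Fit one plane to an (N, 3) point cloud with RANSAC.

        Returns (unit normal, offset d, inlier indices) for the plane
        n . x + d = 0 with the most inliers found.
        """
        rng = np.random.default_rng(seed)
        best_inliers = np.array([], dtype=int)
        best_model = None
        for _ in range(n_iters):
            # Sample three distinct points and form a candidate plane.
            p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-12:            # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal.dot(p0)
            # Points within inlier_tol of the plane count as inliers.
            inliers = np.nonzero(np.abs(points @ normal + d) < inlier_tol)[0]
            if len(inliers) > len(best_inliers):
                best_inliers, best_model = inliers, (normal, d)
        return best_model[0], best_model[1], best_inliers

    # Synthetic scan: a flat surface with sensor noise, plus scattered clutter.
    pts = np.vstack([
        np.c_[np.random.rand(500, 2), 0.01 * np.random.randn(500)],
        np.random.rand(100, 3),
    ])
    normal, d, inliers = ransac_plane(pts, seed=0)
    print(normal, round(d, 3), len(inliers))

Cylinder extraction follows the same pattern with a different minimal model; given the noise observed in the tests above, the inlier tolerance and minimum inlier count matter a great deal in practice.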

Path Network Recovery Using Remote Sensing Data and Geospatial-Temporal Semantic Graphs

McLendon, William C.; Brost, Randolph B.

Remote sensing systems produce large volumes of high-resolution images that are difficult to search. The GeoGraphy (pronounced Geo-Graph-y) framework [2, 20] encodes remote sensing imagery into a geospatial-temporal semantic graph representation to enable high-level semantic searches. Scene objects such as buildings and trees typically yield compact, block-like shapes with few holes, but shapes generated from path networks tend to have many holes and can span a large geographic region because of their connectedness. For example, we have a dataset covering the city of Philadelphia in which a single road-network node spans a 6 mile x 8 mile region. Even a simple question such as "find two houses near the same street" might give unexpected results. More generally, nodes arising from networks of paths (roads, sidewalks, trails, etc.) require additional processing to make them useful for searches in GeoGraphy. We have assigned the term Path Network Recovery to this process. Path Network Recovery is a three-step process involving (1) partitioning the network node into segments, (2) repairing broken path segments interrupted by occlusions or sensor noise, and (3) adding path-aware search semantics into GeoQuestions. This report covers the path network recovery process, how it is used, and some example use cases of the current capabilities.
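
As a rough sketch of step (2), the fragment below joins path segments whose endpoints fall within a small gap distance, so that occlusion-broken pieces end up in the same connected component. The data layout, node naming, and threshold are illustrative assumptions, not the GeoGraphy implementation.

    import itertools
    import math

    import networkx as nx

    def bridge_gaps(segments, max_gap=5.0):
        """Join path segments whose endpoints lie within max_gap of each other.

        segments: list of polylines, each a list of (x, y) points.
        Returns a graph whose connected components are the recovered paths.
        """
        G = nx.Graph()
        endpoints = []
        for i, seg in enumerate(segments):
            # One edge per segment, between its two endpoints.
            G.add_edge(("seg", i, "start"), ("seg", i, "end"), kind="segment")
            endpoints.append((("seg", i, "start"), seg[0]))
            endpoints.append((("seg", i, "end"), seg[-1]))
        # Bridge endpoints of different segments that are close enough, e.g.
        # where a tree canopy or overpass occluded the path surface.
        for (na, pa), (nb, pb) in itertools.combinations(endpoints, 2):
            if na[1] != nb[1] and math.dist(pa, pb) <= max_gap:
                G.add_edge(na, nb, kind="bridge")
        return G

    # Two road pieces separated by a 3-unit occlusion gap become one recovered path.
    roads = [[(0, 0), (10, 0)], [(13, 0), (25, 0)]]
    print(nx.number_connected_components(bridge_gaps(roads)))   # -> 1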

A computational framework for ontologically storing and analyzing very large overhead image sets

Proceedings of the 3rd ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, BigSpatial 2014

Brost, Randolph B.; Rintoul, Mark D.; McLendon, William C.; Strip, David R.; Parekh, Ojas D.; Woodbridge, Diane W.

We describe a computational approach to remote sensing image analysis that addresses many of the classic problems associated with storage, search, and query. This process starts by automatically annotating the fundamental objects in the image data set that will be used as a basis for an ontology, including both the objects (such as building, road, water, etc.) and their spatial and temporal relationships (is within 100 m of, is surrounded by, has changed in the past year, etc.). Data sets that can include multiple time slices of the same area are then processed using automated tools that reduce the images to the objects and relationships defined in an ontology based on the primitive objects, and this representation is stored in a geospatial-temporal semantic graph. Image searches are then defined in terms of the ontology (e.g. find a building greater than 10³ m² that borders a body of water), and the graph is searched for such relationships. This approach also enables the incorporation of non-image data that is related to the ontology. We demonstrate through an initial implementation of the entire system on large data sets (10⁹-10¹¹ pixels) that this system is robust against variations in different image collection parameters, provides a way for analysts to query data sets in a more natural way, and can greatly reduce the memory footprint of the search.
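
A toy illustration of the kind of ontology-based query quoted above (a building larger than 10³ m² that borders a body of water), written here against a networkx graph with illustrative attribute names rather than the system's actual graph store:

    import networkx as nx

    # Toy semantic graph: nodes are scene objects with an ontology class and
    # attributes; edges carry spatial relationships.
    G = nx.Graph()
    G.add_node("b1", kind="building", area_m2=1500.0)
    G.add_node("b2", kind="building", area_m2=400.0)
    G.add_node("w1", kind="water")
    G.add_edge("b1", "w1", relation="borders")
    G.add_edge("b2", "w1", relation="is_within_100m_of")

    def large_buildings_bordering_water(graph, min_area=1e3):
        """Find buildings larger than min_area m^2 that border a water body."""
        hits = []
        for n, attrs in graph.nodes(data=True):
            if attrs.get("kind") != "building" or attrs.get("area_m2", 0) <= min_area:
                continue
            for nbr in graph.neighbors(n):
                if (graph.nodes[nbr].get("kind") == "water"
                        and graph.edges[n, nbr].get("relation") == "borders"):
                    hits.append(n)
                    break
        return hits

    print(large_buildings_bordering_water(G))   # -> ['b1']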

Encoding and analyzing aerial imagery using geospatial semantic graphs

Rintoul, Mark D.; Watson, Jean-Paul W.; McLendon, William C.; Parekh, Ojas D.

While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses, and these templates can be used to automatically and efficiently search large graph databases for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.
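
As an illustration of the first advantage, a template stated in the analyst's domain language, e.g. "two houses adjacent to one street", can be matched against the semantic graph with attribute-aware subgraph isomorphism. The sketch below uses networkx for this; the labels and schema are assumptions made for the example, not the report's encoding.

    import networkx as nx
    from networkx.algorithms import isomorphism

    # Scene graph produced from imagery (illustrative labels only).
    scene = nx.Graph()
    scene.add_node("h1", kind="house")
    scene.add_node("h2", kind="house")
    scene.add_node("s1", kind="street")
    scene.add_node("t1", kind="tree")
    scene.add_edges_from([("h1", "s1"), ("h2", "s1"), ("t1", "h1")])

    # Analyst template: two houses adjacent to one street.
    template = nx.Graph()
    template.add_node("A", kind="house")
    template.add_node("B", kind="house")
    template.add_node("S", kind="street")
    template.add_edges_from([("A", "S"), ("B", "S")])

    # Subgraph matching, requiring the ontology class of matched nodes to agree.
    matcher = isomorphism.GraphMatcher(
        scene, template, node_match=lambda a, b: a["kind"] == b["kind"])
    print(list(matcher.subgraph_isomorphisms_iter()))
    # e.g. [{'h1': 'A', 'h2': 'B', 's1': 'S'}, {'h1': 'B', 'h2': 'A', 's1': 'S'}]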

LDRD final report:

McLendon, William C.; Brost, Randolph B.

Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
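
One simple way to carry temporal information alongside spatial relationships is to stamp each edge with the interval over which the relationship was observed, so a single query can constrain both. A minimal sketch with illustrative node and attribute names, not the report's actual graph structure:

    import networkx as nx

    # Edges carry the time interval over which the relationship held, so one
    # graph supports combined spatial and temporal search queries.
    G = nx.MultiGraph()
    G.add_node("bldg_7", kind="building")
    G.add_node("lot_3", kind="parking_lot")
    G.add_edge("bldg_7", "lot_3", relation="adjacent_to", start=2009, end=2012)
    G.add_edge("bldg_7", "lot_3", relation="adjacent_to", start=2013, end=2014)

    def related_during(graph, u, v, year, relation):
        """True if u and v had `relation` in any edge interval containing `year`."""
        if not graph.has_edge(u, v):
            return False
        return any(
            d["relation"] == relation and d["start"] <= year <= d["end"]
            for d in graph.get_edge_data(u, v).values())

    print(related_during(G, "bldg_7", "lot_3", 2010, "adjacent_to"))  # True
    print(related_during(G, "bldg_7", "lot_3", 2015, "adjacent_to"))  # False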

Brief announcement: Subgraph Isomorphism on a MultiThreaded shared memory architecture

Annual ACM Symposium on Parallelism in Algorithms and Architectures

Ralph, Claire C.; Leung, Vitus J.; McLendon, William C.

Graph algorithms tend to suffer poor performance due to irregular access patterns within general graph data structures; the resulting poor data locality translates to high memory latency. Consequently, advances in high-performance solutions for graph algorithms are most likely to come through advances in both architectures and algorithms. Specialized massively multithreaded (MMT) shared memory machines offer a potentially transformative environment in which to approach the problem. Here, we explore the challenges of implementing Subgraph Isomorphism (SI) algorithms based on the Ullmann and VF2 algorithms in the Cray XMT environment, where issues of memory contention, scheduling, and compiler parallelizability must be addressed.
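
For reference, the core of an Ullmann-style subgraph isomorphism search is a backtracking loop over candidate assignments with consistency pruning. The serial Python sketch below shows that loop on ordinary data structures; mapping such a search onto the Cray XMT's threads and memory system is exactly the hard part the paper addresses, and nothing here reflects that implementation.

    import networkx as nx

    def subgraph_isomorphisms(pattern, target):
        """Yield injective mappings of pattern nodes onto target nodes such
        that every pattern edge maps to a target edge (Ullmann-style
        backtracking with degree-based candidate pruning).
        """
        p_nodes = list(pattern.nodes)
        # Initial candidates: target nodes with at least the pattern node's degree.
        candidates = {
            p: {t for t in target.nodes if target.degree(t) >= pattern.degree(p)}
            for p in p_nodes
        }

        def extend(mapping, depth):
            if depth == len(p_nodes):
                yield dict(mapping)
                return
            p = p_nodes[depth]
            for t in candidates[p]:
                if t in mapping.values():
                    continue
                # Every already-mapped pattern neighbor must map to a target neighbor.
                if all(mapping[q] in target[t] for q in pattern[p] if q in mapping):
                    mapping[p] = t
                    yield from extend(mapping, depth + 1)
                    del mapping[p]

        yield from extend({}, 0)

    # Find all triangle embeddings in a small graph.
    target = nx.cycle_graph(4)
    target.add_edge(0, 2)
    pattern = nx.complete_graph(3)
    print(len(list(subgraph_isomorphisms(pattern, target))))  # 12 (2 triangles x 6 orderings)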

SNL software manual for the ACS Data Analytics Project

Stearley, Jon S.; Robinson, David G.; Hooper, Russell H.; Stickland, Michael S.; McLendon, William C.; Rodrigues, Arun

In the ACS Data Analytics Project (also known as 'YumYum'), a supercomputer is modeled as a graph of components and dependencies, jobs and faults are simulated, and component fault rates are estimated using the graph structure and job pass/fail outcomes. This report documents the successful completion of all SNL deliverables and tasks, describes the software written by SNL for the project, and presents the data it generates. Readers should understand what the software tools are, how they fit together, and how to use them to reproduce the presented data and additional experiments as desired. The SNL YumYum tools provide the novel simulation and inference capabilities desired by ACS. SNL also developed and implemented a new algorithm, which provides faster estimates, at finer component granularity, on arbitrary directed acyclic graphs.
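
A toy version of the simulate-then-estimate loop described above: each job exercises a few components, pass/fail outcomes are recorded, and per-component fault rates are estimated from those outcomes. The estimator here is a deliberately naive per-job attribution and the component names are invented; this is not the SNL algorithm or the YumYum tools.

    import random
    from collections import defaultdict

    random.seed(0)

    # Toy system: each component has a hidden per-job fault probability.
    true_rates = {"node_0": 0.02, "node_1": 0.10, "switch_A": 0.01}
    components = list(true_rates)

    # Simulate jobs; a job fails if any component it depends on faults.
    jobs = []
    for _ in range(20000):
        used = random.sample(components, k=2)
        failed = any(random.random() < true_rates[c] for c in used)
        jobs.append((used, failed))

    # Naive estimate: a component's rate ~ share of its jobs that failed,
    # ignoring that blame is shared among the job's components.
    ran, bad = defaultdict(int), defaultdict(int)
    for used, failed in jobs:
        for c in used:
            ran[c] += 1
            bad[c] += failed

    for c in components:
        print(c, round(bad[c] / ran[c], 3), "vs true", true_rates[c])

The naive estimator systematically over-attributes failures to healthy components that happen to share jobs with faulty ones, which is why inference that exploits the dependency graph structure, as described above, is needed.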

Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016

Schoenwald, David A.; Richardson, Bryan T.; Riehm, Andrew C.; Wolfenbarger, Paul W.; Adams, Brian M.; Reno, Matthew J.; Hansen, Clifford H.; Oldfield, Ron A.; Stamp, Jason E.; Stein, Joshua S.; Hoekstra, Robert J.; Munoz-Ramos, Karina M.; McLendon, William C.; Russo, Thomas V.; Phillips, Laurence R.

Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, because of the computational demands of simulating that many nodes at full fidelity. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) development of models to predict variability of solar resources at locations where few or no ground-based measurements are available.

Graph algorithms in the Titan toolkit

McLendon, William C.

Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need to make these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically, we describe the design and implementation of an open source toolkit for graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
