Enhanced Physical Security through a Command-Intent Driven Multi-Agent Sensor Network
Proceedings of SPIE - The International Society for Optical Engineering
The widespread adoption of aerial, ground and sea-borne unmanned systems (UMS) for national security applications provides many advantages; however, effectively controlling large numbers of UMS in complex environments with modest manpower is a significant challenge. A control architecture and associated control methods are under development to allow a single user to control a team of multiple heterogeneous UMS as they conduct multi-faceted (i.e. multi-objective) missions in real time. The control architecture is hierarchical, modular and layered and enables operator interaction at each layer, ensuring the human operator is in close control of the unmanned team at all times. The architecture and key data structures are introduced. Two approaches to distributed collaborative control of heterogeneous unmanned systems are described, including an extension of homogeneous swarm control and a novel application of distributed model predictive control. Initial results are presented, demonstrating heterogeneous UMS teams conducting collaborative missions. Future work will focus on interacting with dynamic targets, integrating alternative control layers, and enabling a deeper and more intimate level of real-time operator control. © 2012 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
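As a concrete illustration of the second approach, the sketch below shows one way distributed model predictive control can coordinate a small heterogeneous team: each agent repeatedly optimizes its own short-horizon plan against the trajectories its teammates broadcast on the previous round. The dynamics, horizon, weights, and two-agent scenario are illustrative assumptions, not the architecture or solver described in the paper.

import numpy as np
from scipy.optimize import minimize

HORIZON, DT = 8, 0.5  # planning horizon (steps) and time step, assumed values

def plan(agent_pos, goal, max_speed, other_plans):
    """Plan a horizon of 2D velocity commands for one agent (single-integrator model)."""
    def cost(u):
        vel = u.reshape(HORIZON, 2)
        pos = agent_pos + np.cumsum(vel * DT, axis=0)
        c = np.sum((pos - goal) ** 2)                     # reach the goal
        c += 0.1 * np.sum(vel ** 2)                       # control effort
        for other in other_plans:                         # soft separation from teammates
            d = np.linalg.norm(pos - other, axis=1)
            c += 50.0 * np.sum(np.maximum(0.0, 1.0 - d) ** 2)
        return c
    bounds = [(-max_speed, max_speed)] * (2 * HORIZON)    # heterogeneity via speed limits
    u = minimize(cost, np.zeros(2 * HORIZON), bounds=bounds).x.reshape(HORIZON, 2)
    return agent_pos + np.cumsum(u * DT, axis=0)          # broadcast predicted positions

# Two agents with different speed limits swap positions over a few negotiation rounds.
positions = [np.array([0.0, 0.0]), np.array([5.0, 0.1])]
goals = [np.array([5.0, 0.0]), np.array([0.0, 0.0])]
speeds = [2.0, 1.0]
plans = [np.tile(p, (HORIZON, 1)) for p in positions]
for _ in range(3):                                        # communication rounds
    plans = [plan(positions[i], goals[i], speeds[i],
                  [plans[j] for j in range(2) if j != i]) for i in range(2)]
print(plans[0][-1], plans[1][-1])                         # end-of-horizon positions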
This Laboratory Directed Research and Development (LDRD) project sought to develop technology that enhances scenario construction speed, entity behavior robustness, and scalability in Live-Virtual-Constructive (LVC) simulation. We investigated issues in both simulation architecture and behavior modeling. We developed path-planning technology that improves the ability to express intent in the planning task while still permitting an efficient search algorithm. An LVC simulation demonstrated how this enables 'one-click' layout of squad tactical paths, as well as dynamic re-planning for simulated squads and for real and simulated mobile robots. We identified human response latencies that can be exploited in parallel/distributed architectures. We conducted an experimental study to determine where parallelization would be productive in Umbra-based force-on-force (FOF) simulations. We developed and implemented a data-driven simulation composition approach that solves entity class hierarchy issues and supports assurance of simulation fairness. Finally, we proposed a flexible framework to enable integration of multiple behavior modeling components that model working memory phenomena with different degrees of sophistication.
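The sketch below illustrates the general idea of expressing intent inside a standard search: a grid A* whose edge cost is inflated on cells flagged as exposed, so the same efficient algorithm produces covered tactical routes. The grid, penalty weight, and 4-connected moves are illustrative assumptions, not the planner developed in this project.

import heapq
from itertools import count

def plan(grid, exposed_penalty, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks exposed (but passable) cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = count()                                             # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from, cost_so_far = {}, {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:
            break
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + exposed_penalty * grid[nr][nc]   # intent folded into the edge cost
                ng = g + step
                if ng < cost_so_far.get((nr, nc), float("inf")):
                    cost_so_far[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    path, node = [], goal
    while node is not None:                                   # walk parent links back to start
        path.append(node)
        node = came_from[node]
    return path[::-1]

# Middle column is exposed, so the planner detours around it rather than crossing it.
grid = [[0, 1, 0], [0, 1, 0], [0, 0, 0]]
print(plan(grid, exposed_penalty=5.0, start=(0, 0), goal=(0, 2)))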
This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
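A minimal sketch of the kind of speed-up described above, under the assumption that both faces have been rasterized into a common 2D range-image grid: a Euclidean distance transform of the template is computed once, after which the directed Hausdorff distance of a probe needs only one lookup per probe pixel (O(N)) instead of an all-pairs comparison (O(N²)). This is a generic illustration of the technique, not the algorithm's actual formulation.

import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_hausdorff_2d(template_mask, probe_pixels):
    """template_mask: bool image, True where template samples fall.
    probe_pixels: (N, 2) integer (row, col) coordinates of probe samples."""
    # Distance from every grid cell to the nearest template sample, computed once.
    dist_to_template = distance_transform_edt(~template_mask)
    rows, cols = probe_pixels[:, 0], probe_pixels[:, 1]
    return dist_to_template[rows, cols].max()     # one lookup per probe pixel

# Toy example: the template occupies a small square; the probe is the same square shifted.
template = np.zeros((64, 64), dtype=bool)
template[20:30, 20:30] = True
probe = np.argwhere(np.roll(template, shift=3, axis=1))
print(directed_hausdorff_2d(template, probe))     # ~3.0, the pixel shift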
Proceedings - International Carnahan Conference on Security Technology
Two-dimensional facial recognition (FR) has traditionally been an attractive biometric; however, its accuracy is limited and often insufficient when confronted with large numbers of people to screen and identify and with the many appearances a 2D face can exhibit. In an effort to overcome many of the issues limiting 2D FR technology, researchers are beginning to focus their attention on 3D FR technology. In this paper, an analysis of a 3D FR system being developed at Sandia National Laboratories is performed. The study involves 200 subjects on whom verification (one-to-one) matches are performed using a single probe database (one correct match per subject) and 30 subjects on whom identification matches are performed. The system is evaluated in terms of probability of detection (Pd) and false accept rate (FAR). The results presented will aid in providing an initial understanding of the performance of 3D FR. © 2004 IEEE.
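The sketch below shows how the two reported figures of merit are typically computed from match scores: sweep an acceptance threshold and measure the fraction of genuine (mated) comparisons accepted (Pd) and the fraction of impostor comparisons accepted (FAR). The score distributions here are synthetic placeholders, not the study's data.

import numpy as np

def pd_far(genuine_scores, impostor_scores, thresholds):
    """Probability of detection and false accept rate at each threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    pd = [(genuine >= t).mean() for t in thresholds]    # genuine pairs accepted
    far = [(impostor >= t).mean() for t in thresholds]  # impostor pairs accepted
    return pd, far

# Synthetic scores; higher score = better match (assumed convention).
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 200)     # one correct match per subject
impostor = rng.normal(0.4, 0.1, 2000)
thresholds = [0.5, 0.6, 0.7]
for t, p, f in zip(thresholds, *pd_far(genuine, impostor, thresholds)):
    print(f"threshold={t:.1f}  Pd={p:.3f}  FAR={f:.3f}")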
Proceedings - IEEE International Conference on Robotics and Automation
The need to register data is abundant in applications such as world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.
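A minimal 2D sketch of the hybrid idea under stated assumptions: plain iterative-closest-point steps (nearest neighbors plus an SVD pose solve) supply speed, while random annealing perturbations that are kept only when they lower a robust Huber error supply resistance to outliers and local minima. The point sets, cooling schedule, and Huber width are illustrative, not the paper's implementation.

import numpy as np
from scipy.spatial import cKDTree

def huber(r, delta=0.5):
    """Robust error that grows linearly, not quadratically, for large residuals."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def best_rigid(src, dst):
    """Least-squares 2D rotation and translation mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def error(src, R, t, tree):
    d, _ = tree.query(src @ R.T + t)
    return huber(d).mean()

def hybrid_register(src, dst, iters=50, temp=0.2):
    tree = cKDTree(dst)
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # ICP step: nearest-neighbor correspondences, then closed-form pose.
        _, idx = tree.query(src @ R.T + t)
        R, t = best_rigid(src, dst[idx])
        # Annealing step: random pose perturbation, kept only if the robust error drops.
        ang = np.random.normal(0, temp)
        Rp = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]]) @ R
        tp = t + np.random.normal(0, temp, 2)
        if error(src, Rp, tp, tree) < error(src, R, t, tree):
            R, t = Rp, tp
        temp *= 0.95                       # cool the perturbation schedule
    return R, t

# Toy data: the same arc rotated and translated, plus a few gross outliers.
theta = np.linspace(0, np.pi, 100)
dst = np.c_[np.cos(theta), np.sin(theta)]
ang, t_true = 0.4, np.array([0.3, -0.2])
Rtrue = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
src = (dst - t_true) @ Rtrue               # so that src @ Rtrue.T + t_true == dst
src[:5] += 3.0                             # outliers the robust error should down-weight
R, t = hybrid_register(src, dst)
print(np.round(R, 2), np.round(t, 2))      # should be close to Rtrue and t_true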
Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.
This project visualizes characterization data in a 3D setting, in real time. Real time in this sense means collecting the data and presenting it before it delays the user, and processing faster than the acquisition systems so that no bottlenecks occur. The goals have been to build a volumetric viewer to display 3D data, demonstrate projecting other data, such as images, onto the 3D data, and display both the 3D and projected images as fast as the data became available. The authors have examined several ways to display 3D surface data. The most effective was generating polygonal surface meshes. They have created surface maps from a continuous stream of 3D range data, fused image data onto the geometry, and displayed the data with a standard 3D rendering package. In parallel with this, they have developed a method to project real-time images onto the surface created. A key component is mapping the data onto the correct surfaces, which requires a priori positional information along with accurate calibration of the camera and lens system.
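A minimal sketch of the projection step described above, assuming a simple pinhole camera with made-up intrinsics and pose: each 3D surface vertex is projected into the live image, and the pixel it lands on supplies that vertex's color. Lens distortion, occlusion handling, and the real calibration procedure are omitted.

import numpy as np

def project_vertices(verts_world, K, R, t, image):
    """Return one RGB color per vertex by projecting it into `image` (H, W, 3)."""
    cam = (R @ verts_world.T).T + t                 # world frame -> camera frame
    in_front = cam[:, 2] > 0                        # only vertices in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective divide
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    colors = image[v, u].copy()
    colors[~in_front] = 0                           # no texture for vertices behind the camera
    return colors

# Toy example: a flat patch two meters in front of a camera at the world origin.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])   # assumed intrinsics
R, t = np.eye(3), np.zeros(3)                                               # assumed camera pose
verts = np.array([[x, y, 2.0] for x in (-0.5, 0.0, 0.5) for y in (-0.5, 0.0, 0.5)])
image = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)       # stand-in camera frame
print(project_vertices(verts, K, R, t, image))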
This work builds upon established Sandia intelligent systems technology to develop a unique approach for integrating intelligent system control into US highway and urban transportation systems. The Sandia-developed concept of the COPILOT controller integrates a human driver with computer control to increase human performance while reducing reliance on detailed driver attention. This research extends Sandia expertise in sensor-based, real-time control of robotic systems to high-speed transportation systems. Knowledge in the form of maps and vehicle performance characteristics provides the automatic decision-making intelligence needed to plan optimum routes, maintain safe driving speeds and distances, avoid collisions, and conserve fuel.
The ability to successfully use and interact with a computerized world model depends on the ability to create an accurate world model. The goal of this project was to develop a prototype system to remotely deploy sensors into a workspace, collect surface information, and rapidly build an accurate world model of that workspace. A key consideration was that the workspace areas are typically hazardous environments that are difficult or impossible for humans to enter. Therefore, the system needed to be fully remote, with no external connections. To accomplish this goal, an electric, mobile platform with battery power sufficient for both the platform and sensor electronics was procured, and 3D range sensors were deployed on the platform to capture surface data within the workspace. A radio Ethernet connection was used to provide communications to the vehicle and all on-board electronics. Video from on-board cameras was also transmitted to the base station and used to teleoperate the vehicle. Range data generated by the on-board 3D sensors was transformed into surface maps, or models. Registering the sensor location to a consistent reference frame as the platform moved through the workspace allowed construction of a detailed 3D world model of the extended workspace.
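A minimal sketch of the registration step described above: each range scan, captured in the sensor frame, is pushed through the sensor's mounting offset and the platform's pose (assumed known from odometry or tracking) into a single world frame, so successive scans accumulate into one model. The poses, mounting offset, and scans below are made-up examples.

import numpy as np

def pose_matrix(x, y, yaw):
    """Homogeneous transform for a planar platform pose (x, y, heading) in the world frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def scan_to_world(points_sensor, platform_pose, sensor_offset):
    """Transform (N, 3) points from the sensor frame into the world frame."""
    pts = np.c_[points_sensor, np.ones(len(points_sensor))]    # homogeneous coordinates
    return (platform_pose @ sensor_offset @ pts.T).T[:, :3]

sensor_offset = pose_matrix(0.2, 0.0, 0.0)                     # sensor mounted 20 cm ahead of platform origin
scans = [np.random.rand(5, 3), np.random.rand(5, 3)]           # two stand-in range scans
poses = [pose_matrix(0.0, 0.0, 0.0), pose_matrix(1.0, 0.0, np.pi / 2)]   # platform poses at capture time
world_model = [scan_to_world(s, p, sensor_offset) for s, p in zip(scans, poses)]
print(np.vstack(world_model).shape)                            # accumulated surface points in one frame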
The ability to use an interactive world model, whether for robotics simulation or most other virtual graphical environments, relies on the user's ability to create an accurate world model. Typically this is a tedious process, requiring many hours to create 3D CAD models of the surfaces within a workspace. The goal of this ongoing project is to develop usable methods to rapidly build world models of real-world workspaces. This brings structure to an unstructured environment and allows graphics-based robotic control to be accomplished in a reasonable time frame when traditional CAD modeling is not enough. To accomplish this, 3D range sensors are deployed to capture surface data within the workspace. This data is then transformed into surface maps, or models. A 3D world model of the workspace is built quickly and accurately, without ever having to put people in the environment.