Our goal was to create an immersive, texture-rich 3-D model within minutes of entering a room using nothing more than a laptop and a consumer color-and-depth camera (e.g., the Microsoft Kinect or ASUS sensor). The sensor generates a noisy cloud of 300,000 3-D points thirty times per second, which must be converted into a textured polygonal mesh and registered against prior readings. The interface needs to be quick, simple, and interactive so that rapid model building becomes a standard tool in the investigator's toolkit.
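The first stage of such a pipeline is back-projecting each depth frame into a 3-D point cloud. The sketch below illustrates the standard pinhole-camera model for this step; the intrinsic parameters (fx, fy, cx, cy) and the 640x480 resolution are illustrative assumptions, not calibrated values for any particular sensor, and this is not the system's actual implementation.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth image (meters) into an (H*W, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    fx, fy, cx, cy are assumed intrinsics, not values from a real calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a flat wall 2 m away seen by a hypothetical 640x480 depth sensor.
depth = np.full((480, 640), 2.0)
points = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)  # (307200, 3) -- roughly the 300,000 points per frame
```

At 640x480 resolution a single frame yields 307,200 points, which matches the roughly 300,000 points per frame cited above; thirty such frames per second is what makes real-time meshing and registration demanding.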
Forensic analysis, crime scene investigation, emergency response, and building restoration all require a scene model. The model provides 3-D context for sensor measurements, length and volume calculations, operational planning, and coverage analysis. In the past, models were derived from simple floor plans or created with basic sketch-up tools, but these lacked the visual detail needed to properly describe a room. Three-dimensional survey equipment is available, but it is extremely expensive, and the time required to generate a model is prohibitive. Our goal is to develop technology so that every response team can afford the equipment and generate models without requiring a specialized modeling team.
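As a minimal illustration of the volume calculations such a model enables (not the system's own code): for a closed, consistently oriented triangle mesh, the enclosed volume follows from the divergence theorem as V = (1/6) Σ p0 · (p1 × p2) over the triangles.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh.

    Sums the signed volumes of tetrahedra formed by each triangle and the
    origin (divergence theorem); abs() makes the result orientation-neutral.
    """
    p0 = vertices[faces[:, 0]]
    p1 = vertices[faces[:, 1]]
    p2 = vertices[faces[:, 2]]
    signed = np.einsum('ij,ij->i', p0, np.cross(p1, p2))
    return np.abs(signed.sum()) / 6.0

# Sanity check on a unit tetrahedron, whose analytic volume is 1/6.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[1, 2, 3], [0, 2, 1], [0, 1, 3], [0, 3, 2]])
vol = mesh_volume(verts, faces)
print(vol)  # 0.16666...
```

The same mesh produced for visualization can therefore double as a measurement artifact, which is what makes a textured polygonal model more useful than a floor plan for coverage and planning work.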
Features & Benefits
The ASUS/Kinect sensor costs less than $200 and provides positioning accuracy within centimeters. The software can be deployed on any laptop with a high-end graphics card, and a room can be modeled within a few minutes. Because acquisition is fast and inexpensive, models can be created upon initial entry, producing a detailed snapshot of a scene before objects are moved.
3-D model building is being developed both to enhance the capabilities of Sandia's Building Restoration Operations Optimization Model (BROOM) and to support robot-based navigation and planning.