Numerous security applications require the detection and assessment of unmanned aerial systems (UAS) with a high probability of detection and a low nuisance alarm rate. Currently available solutions rely on exploiting electronic signals emitted by the UAS. While these methods may provide some degree of security, they fail to address the emerging domain of autonomous UAS that neither transmit nor receive information during the course of a mission. We examine frequency analysis of pixel fluctuation over time to exploit the temporal frequency signature present in imagery of UAS. This signature is present for both autonomous and remotely controlled multirotor UAS and enables detection at lower pixels-on-target. The methodology also serves as a means of assessment, because UAS frequency signatures are distinct from those of standard nuisance sources such as birds or non-UAS electronic signal emitters. The temporal frequency analysis is paired with machine learning algorithms to demonstrate a UAS detection and assessment method that requires minimal human interaction. The machine learning algorithm allows each human assessment that is still required to increase the likelihood of future autonomous assessment, improving system performance over time.
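A minimal sketch of the temporal frequency idea, assuming a stack of registered grayscale frames; the frame rate, candidate rotor band, and array shapes below are illustrative assumptions, not values from the study.

```python
import numpy as np

def temporal_frequency_map(frames, fps, band=(60.0, 200.0)):
    """Per-pixel FFT over time; returns energy in a candidate rotor band.

    frames : (T, H, W) array of registered grayscale frames
    fps    : frame rate in Hz
    band   : (low, high) Hz -- hypothetical rotor-flicker band
    """
    frames = frames - frames.mean(axis=0)           # remove static background
    spectrum = np.abs(np.fft.rfft(frames, axis=0))  # temporal FFT per pixel
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum(axis=0)            # (H, W) band-energy map

# Pixels with strong in-band energy are candidate UAS detections; the
# band-energy values can also serve as features for a downstream classifier.
```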
Deep learning techniques have demonstrated the ability to perform a variety of object recognition tasks using visible imager data; however, deep learning has not been implemented as a means to autonomously detect and assess targets of interest in a physical security system. We demonstrate the use of transfer learning on a convolutional neural network (CNN) to significantly reduce training time while maintaining high detection accuracy for targets relevant to physical security. Unlike many detection algorithms employed by video analytics within physical security systems, this method does not rely on temporal data to construct a background scene; targets of interest can halt motion indefinitely and still be detected by the implemented CNN. A key advantage of using deep learning is the ability of a network to improve over time: periodic retraining can lead to better detection and higher confidence rates. We investigate training data size versus CNN test accuracy using physical security video data. Due to the large number of visible imagers, the significant volume of data collected daily, and the currently deployed human-in-the-loop ground truth data, physical security systems present a unique environment that is well suited for analysis via CNNs. This could lead to the creation of an algorithmic element that reduces human burden and decreases human-analyzed nuisance alarms.
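A minimal transfer-learning sketch in the spirit described above, assuming PyTorch/torchvision and a generic ImageNet-pretrained backbone; the backbone choice, class count, and frozen-layer policy are illustrative, not the configuration used in the study.

```python
import torch.nn as nn
import torchvision.models as models

def build_transfer_model(num_classes=2):
    """Reuse a pretrained CNN, retraining only a new classification head."""
    model = models.resnet18(pretrained=True)   # ImageNet-pretrained backbone
    for param in model.parameters():           # freeze the feature extractor
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

# Only the new head's parameters are updated during training, which is what
# makes retraining on modest amounts of physical-security imagery fast.
```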
The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m × 3 m × 3 m (L × W × H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships among typical optical quality metrics, MOR values, and the total number of fog particles are described using data obtained from the fog chamber over a series of three repeated tests.
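For context, MOR is conventionally tied to the atmospheric extinction coefficient through the Koschmieder relation at a 5% contrast threshold; a small sketch of that standard relation, with an illustrative extinction value rather than a chamber measurement:

```python
import math

def mor_from_extinction(sigma):
    """Meteorological optical range from the extinction coefficient sigma (1/m),
    using the 5% contrast threshold: MOR = -ln(0.05) / sigma ~= 3 / sigma."""
    return -math.log(0.05) / sigma

print(mor_from_extinction(0.06))  # ~50 m for an illustrative dense fog
```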
Computational imagers fundamentally enable new optical hardware through the use of both physical and algorithmic elements. We report on the creation of a static lensless computational imaging system enabled by this paradigm.
We report on the design of a refracting prism array element for use in a computational lensless imaging system. The technique discussed enables creation of a refracting element that maximizes signal on a detector region.
The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.
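As a hedged illustration of end-to-end tolerancing, the sketch below perturbs a nominal parameter set and scores each realization with a placeholder simulate_and_score function standing in for the raytrace-plus-reconstruction pipeline; the parameter names, tolerances, and scoring are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nominal parameters and 1-sigma tolerances for the scattering element
nominal = {"element_tilt_deg": 0.0, "element_to_sensor_mm": 5.0}
sigma   = {"element_tilt_deg": 0.1, "element_to_sensor_mm": 0.05}

def simulate_and_score(params):
    """Placeholder for the end-to-end simulation (raytrace, calibrate,
    reconstruct) returning an image-quality score; not the actual model."""
    return -abs(params["element_tilt_deg"]) - abs(params["element_to_sensor_mm"] - 5.0)

scores = []
for _ in range(1000):  # Monte Carlo over manufacturing/alignment perturbations
    trial = {k: rng.normal(nominal[k], sigma[k]) for k in nominal}
    scores.append(simulate_and_score(trial))

print(np.percentile(scores, [5, 50, 95]))  # spread of image quality over tolerances
```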
Lensless imaging systems have the potential to provide new capabilities at lower size and weight than traditional imaging systems. Lensless imagers frequently utilize computational imaging techniques, which move the complexity of the system away from optical subcomponents and into a calibration process whereby the measurement matrix is estimated. We report on the design, simulation, and prototyping of a lensless imaging system that utilizes a 3D printed optically transparent random scattering element. The development of end-to-end system simulations, which include the calibration process as well as the data processing algorithm used to generate an image from the raw data, is presented. These simulations utilize GPU-based raytracing software and parallelized minimization algorithms to bring complete system simulation times down to the order of seconds. Hardware prototype results are presented, and practical lessons such as the effect of sensor noise on reconstructed image quality are discussed. System performance metrics are proposed and evaluated to discuss image quality in a manner that is relatable to traditional image quality metrics. Various hardware instantiations are discussed.
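A minimal sketch of the measurement-matrix viewpoint, assuming the calibrated linear model y = A x and a ridge-regularized least-squares reconstruction; the matrix, scene size, noise level, and regularization weight are synthetic stand-ins, not the prototype's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_scene, n_pix = 256, 1024                 # toy scene and sensor dimensions
A = rng.standard_normal((n_pix, n_scene))  # stand-in for the calibrated measurement matrix
x_true = rng.random(n_scene)               # unknown scene
y = A @ x_true + 0.01 * rng.standard_normal(n_pix)  # noisy raw sensor data

# Ridge-regularized least squares: x_hat = argmin ||A x - y||^2 + lam ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative reconstruction error
```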
This report contains analysis of unmanned aerial systems as imaged by visible, short-wave infrared, mid-wave infrared, and long-wave infrared passive devices. Testing was conducted at the Nevada National Security Site (NNSS) during the week of August 15, 2016. Target images in all spectral bands are shown and contrast versus background is reported. Calculations are performed to determine estimated pixels-on-target for detection and assessment levels, and the number of pixels needed to cover a hemisphere for detection or assessment at defined distances. Background clutter challenges are qualitatively discussed for different spectral bands, and low contrast scenarios are highlighted for long-wave infrared imagers.
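A sketch of the pixels-on-target geometry underlying such calculations, assuming a simple small-angle model; the target size, range, and instantaneous field of view (IFOV) are illustrative, not values from the NNSS collection.

```python
import math

def pixels_on_target(target_extent_m, range_m, ifov_rad):
    """Approximate pixels spanning a target of the given extent at the given range."""
    return target_extent_m / (range_m * ifov_rad)

def pixels_to_cover_hemisphere(ifov_rad):
    """Pixels needed to tile a 2*pi steradian hemisphere at one IFOV per pixel."""
    return 2.0 * math.pi / ifov_rad**2

ifov = 0.5e-3  # 0.5 mrad, illustrative
print(pixels_on_target(0.5, 500.0, ifov))   # ~2 pixels across a 0.5 m UAS at 500 m
print(pixels_to_cover_hemisphere(ifov))     # ~2.5e7 pixels for full hemispherical coverage
```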
Three-dimensional (3D) information is a highly useful discriminator in a physical security system. Two-dimensional data from an imaging system fail to provide target distance and a three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information-gathering capability are discussed.
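The report's specific ranging principle is not detailed in this summary; purely as a generic illustration of how a simple illumination source plus an imager can yield snapshot depth, the sketch below applies basic active-illumination triangulation with an assumed baseline and ray angles (all values hypothetical).

```python
import math

def depth_from_triangulation(baseline_m, proj_angle_rad, cam_angle_rad):
    """Depth z of an illuminated point for a camera at the origin and a light
    source offset by baseline_m along x, with both ray angles measured from
    the optical (z) axis and the rays converging toward each other."""
    return baseline_m / (math.tan(cam_angle_rad) + math.tan(proj_angle_rad))

# Illustrative numbers: 10 cm baseline, 5 deg camera ray, 2 deg projector ray
print(depth_from_triangulation(0.10, math.radians(2.0), math.radians(5.0)))  # ~0.82 m
```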