Open Set Recognition of Aircraft in Aerial Imagery using Synthetic Template Models
Proceedings of SPIE - The International Society for Optical Engineering
Fast, accurate, and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an effectively infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed-set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer-generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). Both algorithms use histograms of oriented gradients (HOG) as features, along with artificial augmentation of both real and synthetic image chips to compensate for minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, some knowledge of the real target is still required in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
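The combination described above, HOG features with a one-class SVM that scores inputs as target-like or unknown, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the "synthetic aircraft chips" are toy cross-shaped images, and the decision threshold is simply the SVM's default boundary rather than a calibrated score distribution.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def make_chip(target=True, size=32):
    """Toy stand-in for a synthetic aircraft chip: a bright cross on noise."""
    img = rng.normal(0.2, 0.05, (size, size))
    if target:
        img[size // 2 - 2:size // 2 + 2, 4:-4] += 0.8   # "fuselage"
        img[6:-6, size // 2 - 2:size // 2 + 2] += 0.8   # "wings"
    return np.clip(img, 0, 1)

def features(img):
    """HOG descriptor of an image chip."""
    return hog(img, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Train on synthetic "target" chips only -- the one-class, open-set setting:
# no non-target class is ever shown to the model.
X_train = np.array([features(make_chip(True)) for _ in range(50)])
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)

# Score held-out targets and clutter; low decision values map to "unknown".
target_scores = [clf.decision_function([features(make_chip(True))])[0]
                 for _ in range(20)]
clutter_scores = [clf.decision_function([features(make_chip(False))])[0]
                  for _ in range(20)]
print(np.mean(target_scores), np.mean(clutter_scores))
```

In this sketch the clutter chips score well below the target chips, which is the separation an open set recognizer needs before any synthetic-to-real score calibration is applied.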
Human Vision and Electronic Imaging 2016, HVEI 2016
In this study, eye tracking metrics and visual saliency maps were used to assess analysts' interactions with synthetic aperture radar (SAR) imagery. Participants with varying levels of experience with SAR imagery completed a target detection task while their eye movements and behavioral responses were recorded. The resulting gaze maps were compared with maps of bottom-up visual saliency and with maps of automatically detected image features. The results showed striking differences between professional SAR analysts and novices in terms of how their visual search patterns related to the visual saliency of features in the imagery. They also revealed patterns that reflect the utility of various features in the images for the professional analysts. These findings have implications for system design and for the design and use of automatic feature classification algorithms.
Proceedings of SPIE - The International Society for Optical Engineering
Coherent change detection (CCD) provides a way for analysts and detectors to find ephemeral features that would otherwise be invisible in traditional synthetic aperture radar (SAR) imagery. However, CCD can produce false alarms in regions of the image with low signal-to-noise ratio (SNR) and dense vegetation. The proposed method seeks to eliminate these false-alarm regions by creating a mask that can then be applied to change products. It does so by utilizing both the magnitude and coherence feature statistics of a scene. For each feature, the image is segmented into groups of similar pixels called superpixels. During a training phase, the method models each terrain type that the user deems capable of supporting change, then statistically compares superpixels in the image to the modeled terrain types. Finally, the method combines the features using probabilistic fusion to create a mask that a user can threshold and apply to a change product for human analysis or automatic feature detection.