Automatic Recognition of Malicious Intent Indicators
Abstract not provided.
This paper explores the possibility of separating and classifying remotely sensed multispectral data from rocks and minerals into seven geological rock-type groups. These groups are drawn from the general categories of metamorphic, igneous, and sedimentary rocks. The study is performed under ideal conditions in which the data are generated from laboratory hyperspectral data for the members, which are, in turn, passed through the Multispectral Thermal Imager (MTI) filters, yielding 15 bands. The main challenge to separability is the small size of the training data sets, which initially did not permit direct application of Bayesian decision theory. To enable Bayesian classification, the original training data are linearly perturbed by the addition of minerals, vegetation, soil, water, and other valid impurities. As a result, the size of the training data is significantly increased and accurate estimates of the covariance matrices are obtained. In addition, a reduced set of five linearly extracted canonical features that are optimal in conveying the most important information about the data is determined. An alternative nonlinear feature-selection method is also employed, based on spectral indices comprising a small subset of all possible ratios between bands. By applying three optimization strategies, combinations of two and three ratios are found that provide reliable separability and classification among all seven groups according to the Bhattacharyya distance. To establish a benchmark against which the MTI capability in rock classification can be compared, an optimization strategy is also applied to the selection of optimal multispectral filters other than the MTI filters, and an improvement in classification is predicted.
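To make the two key ingredients of this approach concrete, the sketch below shows (i) a linear-mixture perturbation that enlarges a small training set with impurity spectra so that class covariance matrices can be estimated, and (ii) the Bhattacharyya distance between two Gaussian class models, which is the separability criterion cited in the abstract. This is a minimal sketch, not the authors' code: the function names, the maximum impurity mixing fraction, and the random-mixing scheme are assumptions introduced here for illustration.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class models
    (mean vectors and covariance matrices over the 15 MTI bands)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # slogdet is used for numerical stability with band-space covariances.
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

def perturb_training_set(spectra, impurities, n_new, max_fraction=0.2, seed=None):
    """Enlarge a small training set of rock spectra by linearly mixing each
    randomly chosen spectrum with a randomly chosen impurity spectrum
    (minerals, vegetation, soil, water).  max_fraction is an assumed bound
    on the impurity abundance, not a value taken from the paper."""
    rng = np.random.default_rng(seed)
    base = spectra[rng.integers(len(spectra), size=n_new)]
    imp = impurities[rng.integers(len(impurities), size=n_new)]
    alpha = rng.uniform(0.0, max_fraction, size=(n_new, 1))
    return (1.0 - alpha) * base + alpha * imp
```

Under such a Gaussian model, pairs of rock-type groups with a larger Bhattacharyya distance would be expected to be more reliably separated by a Bayesian classifier, which is how the band-ratio combinations are ranked in the abstract.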
In this paper we investigate the applicability of the feature extraction mechanisms found in the neurophysiology of mammals to the problem of object recognition in synthetic aperture radar (SAR) imagery. Our approach is to present multiple views of the objects to be recognized to a two-stage self-organizing neural network architecture. The first stage, a two-layer Neocognitron, performs feature extraction in each layer. The resulting feature vectors are presented to the second stage, an ART-2A self-organizing neural network classifier, which clusters the features into multiple object categories. The feature extraction operators resulting from the self-organization process are compared to the feature extraction mechanisms found in the neurophysiology of vision. In a previous paper, the Neocognitron was trained on raw SAR imagery. The architecture was able to recognize a simulated vehicle at arbitrary azimuthal orientations at a single depression angle while rejecting clutter as well as other vehicles. Feature extraction on raw imagery yielded features that were robust but very difficult to interpret. In this paper we report the results of new experiments in which the self-organization process is applied separately to shadow and bright returns from the objects to be recognized. Feature extraction on shadow returns yields oriented contrast edge operators suggestive of the bipartite simple cells observed in the striate cortex of mammals. Feature extraction on the specularity patterns in bright returns yields a collection of operators resembling a two-dimensional Haar basis set. We compare the performance of the earlier two-stage neural network trained on raw imagery with a modified network using the new feature set.
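To illustrate the second-stage clustering, the following is a minimal, simplified ART-2A sketch: feature vectors produced by the first stage are L2-normalized, matched against category prototypes by dot product, and either absorbed into the best-matching prototype (when it passes the vigilance test) or used to commit a new category. The class name, vigilance value, learning rate, and streamlined update rule are assumptions made for illustration, and the Neocognitron feature-extraction stage is omitted entirely.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x) + eps)

class ART2A:
    """Simplified ART-2A clusterer: normalized feature vectors are matched
    against category prototypes; a new category is committed when no
    prototype passes the vigilance test (illustrative parameters)."""

    def __init__(self, vigilance=0.9, learning_rate=0.1):
        self.rho = vigilance      # similarity threshold for resonance
        self.beta = learning_rate # how far a winning prototype moves
        self.prototypes = []

    def present(self, features):
        x = l2_normalize(np.asarray(features, dtype=float))
        if self.prototypes:
            sims = [float(w @ x) for w in self.prototypes]
            j = int(np.argmax(sims))
            if sims[j] >= self.rho:
                # Resonance: nudge the winning prototype toward the input.
                self.prototypes[j] = l2_normalize(
                    (1.0 - self.beta) * self.prototypes[j] + self.beta * x)
                return j
        # Mismatch with every existing prototype: commit a new category.
        self.prototypes.append(x)
        return len(self.prototypes) - 1
```

In a usage pattern matching the abstract, each view's feature vector from the Neocognitron stage would be passed to present() in turn, and the returned index would identify the self-organized object category for that view.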