Publications

A likelihood ratio test for shrinkage covariance estimators

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Dylan Z.; Vanderlaan, John D.

In this paper, we develop a nested chi-squared likelihood ratio test for selecting among shrinkage-regularized covariance estimators for background modeling in hyperspectral imagery. Critical to many target and anomaly detection algorithms is the modeling and estimation of the underlying background signal present in the data. This is especially important in hyperspectral imagery, wherein the signals of interest often represent only a small fraction of the observed variance, for example when targets of interest are subpixel. This background is often modeled by a local or global multivariate Gaussian distribution, which necessitates estimating a covariance matrix. Maximum likelihood estimation of this matrix often overfits the available data, particularly in high-dimensional settings such as hyperspectral imagery, yielding subpar detection results. Instead, shrinkage estimators are often used to regularize the estimate. Shrinkage estimators linearly combine the overfit covariance with an underfit shrinkage target, thereby producing a well-fit estimator. These estimators introduce a shrinkage parameter, which controls the relative weighting between the covariance and the shrinkage target. Many methods have been proposed for setting this parameter, but comparing these methods and shrinkage values is often done with a cross-validation procedure, which can be computationally expensive and highly sample-inefficient. Drawing from Bayesian regression methods, we compute the degrees of freedom of a covariance estimate using eigenvalue thresholding and employ a nested chi-squared likelihood ratio test for comparing estimators. This likelihood ratio test requires no cross-validation procedure and enables direct, computationally efficient comparison of different shrinkage estimates.
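
As a rough illustration of the estimator family being compared, the sketch below builds a linear shrinkage estimate and runs a chi-squared likelihood ratio test between two candidates. The scaled-identity target, the eigenvalue-count proxy for degrees of freedom, and the test bookkeeping are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import stats

def shrinkage_covariance(X, alpha, target=None):
    """Linear shrinkage estimate: (1 - alpha) * S + alpha * T.
    X is (n_samples, n_bands); alpha in [0, 1] weights the overfit
    sample covariance S against an underfit target T (here a scaled
    identity, one common choice)."""
    S = np.cov(X, rowvar=False)
    if target is None:
        target = (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
    return (1.0 - alpha) * S + alpha * target

def effective_dof(Sigma, threshold):
    """Degrees-of-freedom proxy: count eigenvalues above a threshold.
    (Illustrative; the paper's exact computation may differ.)"""
    return int(np.sum(np.linalg.eigvalsh(Sigma) > threshold))

def nested_chi2_lrt(X, Sigma_full, Sigma_reduced, dof_full, dof_reduced):
    """Chi-squared LRT between nested Gaussian background models,
    where the 'full' model has more effective degrees of freedom."""
    def loglik(Sigma):
        return np.sum(stats.multivariate_normal.logpdf(X, mean=X.mean(axis=0), cov=Sigma))
    lam = 2.0 * (loglik(Sigma_full) - loglik(Sigma_reduced))
    p_value = stats.chi2.sf(lam, df=dof_full - dof_reduced)
    return lam, p_value
```

Under this reading, two shrinkage values would be compared by treating the less-shrunk estimate (more effective degrees of freedom) as the full model and the more-shrunk estimate as the reduced one.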

Data Fusion via Neural Network Entropy Minimization for Target Detection and Multi-Sensor Event Classification

Linville, Lisa; Anderson, Dylan Z.; Michalenko, Joshua J.; Garcia, Jorge A.

Broadly applicable solutions to multimodal and multisensory fusion problems across domains remain a challenge because effective solutions often require substantive domain knowledge and engineering. The chief questions that arise for data fusion are when to share information from different data sources and how to accomplish that integration. The solutions explored in this work remain agnostic to input representation and terminal decision fusion approaches by sharing information through the learning objective as a compound objective function. The objective function this work uses assumes a one-to-one learning paradigm within a one-to-many domain, which allows the assumption that consistency can be enforced across the one-to-many dimension. The domains and tasks we explore in this work include multi-sensor fusion for seismic event location and multimodal hyperspectral target discrimination. We find that our domain-informed consistency objectives are challenging to train stably and successfully because of interactions between inherent data complexity and practical parameter optimization. While multimodal hyperspectral target discrimination was not enhanced across a range of different experiments by the fusion strategies put forward in this work, seismic event location benefited substantially, but only in label-limited scenarios.
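
A minimal sketch of what such a compound objective can look like, assuming per-sensor classification networks with a shared label space. The consensus-KL consistency term and its weight are illustrative stand-ins for the paper's domain-informed objectives.

```python
import torch
import torch.nn.functional as F

def compound_fusion_loss(logits_per_sensor, labels, consistency_weight=0.1):
    """Hypothetical compound objective: each sensor's network is trained
    against the shared label (one-to-one task loss), while a consistency
    term penalizes disagreement across the one-to-many sensor dimension."""
    # Per-sensor supervised task loss on the shared label.
    task_loss = sum(F.cross_entropy(lg, labels) for lg in logits_per_sensor)
    probs = torch.stack([F.softmax(lg, dim=-1) for lg in logits_per_sensor])
    mean_prob = probs.mean(dim=0)  # consensus prediction across sensors
    # Consistency: mean KL divergence of each sensor's prediction from the consensus.
    consistency = sum(
        F.kl_div(mean_prob.log(), p, reduction="batchmean") for p in probs
    ) / len(logits_per_sensor)
    return task_loss + consistency_weight * consistency
```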

Data Fusion of Very High Resolution Hyperspectral and Polarimetric SAR Imagery for Terrain Classification

West, Roger D.; Yocky, David A.; Foulk, James W.; Anderson, Dylan Z.; Redman, Brian J.

Performing terrain classification with data from heterogeneous imaging modalities is a very challenging problem. The challenge is further compounded by very high spatial resolution (in this paper, much less than a meter). At very high resolution many additional complications arise, such as geometric differences between imaging modalities and heightened pixel-by-pixel variability due to inhomogeneity within terrain classes. In this paper we consider the fusion of very high resolution hyperspectral imaging (HSI) and polarimetric synthetic aperture radar (PolSAR) data. We introduce a framework that utilizes the probabilistic feature fusion (PFF) one-class classifier for data fusion and demonstrate the effect of making pixelwise, superpixel, and pixelwise-voting (within a superpixel) terrain classification decisions. We show that fusing the two imaging modalities' data sets, combined with pixelwise voting within the spatial extent of superpixels, yields a robust terrain classification framework that strikes a good balance between quantitative and qualitative results.
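
The PFF classifier itself is not reproduced here, but decision-level fusion of per-pixel class probabilities from the two modalities can be sketched generically. The weighted product-of-experts rule below is one simple instance, not the authors' formulation.

```python
import numpy as np

def fuse_class_probabilities(prob_hsi, prob_sar, weights=(0.5, 0.5)):
    """Illustrative fusion of per-pixel class probabilities from HSI and
    PolSAR (a generic stand-in for the PFF classifier): a weighted
    log-linear (product-of-experts) combination, renormalized per pixel.

    prob_hsi, prob_sar: (n_pixels, n_classes) arrays of class probabilities.
    """
    log_fused = (weights[0] * np.log(prob_hsi + 1e-12)
                 + weights[1] * np.log(prob_sar + 1e-12))
    fused = np.exp(log_fused)
    return fused / fused.sum(axis=1, keepdims=True)
```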

Semisupervised learning for seismic monitoring applications

Seismological Research Letters

Linville, Lisa; Anderson, Dylan Z.; Galasso, Jennifer; Michalenko, Joshua J.; Draelos, Timothy J.

The impressive performance that deep neural networks demonstrate on a range of seismic monitoring tasks depends largely on the availability of event catalogs that have been manually curated over many years or decades. However, the quality, duration, and availability of seismic event catalogs vary significantly across the range of monitoring operations, regions, and objectives. Semisupervised learning (SSL) enables learning from both labeled and unlabeled data and provides a framework to leverage the abundance of unreviewed seismic data for training deep neural networks on a variety of target tasks. We apply two SSL algorithms (mean-teacher and virtual adversarial training) as well as a novel hybrid technique (exponential average adversarial training) to seismic event classification to examine how unlabeled data with SSL can enhance model performance. In general, we find that SSL can perform as well as supervised learning with fewer labels. We also observe in some scenarios that almost half of the benefits of SSL are the result of the meaningful regularization enforced through SSL techniques and may not be attributable to unlabeled data directly. Lastly, the benefits from unlabeled data scale with the difficulty of the predictive task when we evaluate the use of unlabeled data to characterize sources in new geographic regions. In geographic areas where supervised model performance is low, SSL significantly increases the accuracy of source-type classification using unlabeled data.
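
Of the two SSL algorithms applied, mean teacher is the simplest to sketch. Below is a minimal version with an EMA teacher and a consistency loss on unlabeled data; model definitions, input augmentation, and the consistency-weight ramp-up typically used in practice are omitted.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    """Mean-teacher update: teacher weights track an exponential moving
    average of the student's weights (Tarvainen & Valpola, 2017)."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def ssl_loss(student, teacher, x_labeled, y_labeled, x_unlabeled,
             consistency_weight=1.0):
    """Supervised loss on labeled events plus a consistency loss that
    pulls student predictions toward the teacher's on unlabeled data."""
    supervised = F.cross_entropy(student(x_labeled), y_labeled)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlabeled), dim=-1)
    student_probs = F.softmax(student(x_unlabeled), dim=-1)
    consistency = F.mse_loss(student_probs, teacher_probs)
    return supervised + consistency_weight * consistency
```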

Optical and Polarimetric SAR Data Fusion Terrain Classification Using Probabilistic Feature Fusion

International Geoscience and Remote Sensing Symposium (IGARSS)

West, Roger D.; Yocky, David A.; Redman, Brian J.; Foulk, James W.; Anderson, Dylan Z.

Deciding on an imaging modality for terrain classification can be a challenging problem. A given sensing modality may discriminate some terrain classes well but perform poorly on other classes that a different sensor can easily separate. The most effective terrain classification will utilize the abilities of multiple sensing modalities. The challenge of utilizing multiple sensing modalities is then determining how to combine the information in a meaningful and useful way. In this paper, we introduce a framework for effectively combining data from optical and polarimetric synthetic aperture radar sensing modalities. We demonstrate the fusion framework on two vegetation classes and two ground classes and show that fusing data from both imaging modalities has the potential to improve terrain classification over either modality alone.

Multimodal Data Fusion via Entropy Minimization

International Geoscience and Remote Sensing Symposium (IGARSS)

Michalenko, Joshua J.; Linville, Lisa; Anderson, Dylan Z.

The use of gradient-based data-driven models to solve a range of real-world remote sensing problems can in practice be limited by the uniformity of available data. Using data from disparate sensor types, resolutions, and qualities typically requires compromises based on assumptions that are made prior to model training and may not be optimal given overarching objectives. For example, while deep neural networks (NNs) are state-of-the-art in a variety of target detection problems, training them typically requires either limiting the training data to a subset over which uniformity can be enforced or training independent models that subsequently require additional score fusion. The method we introduce here seeks to leverage the benefits of both approaches by allowing correlated inputs from different data sources to co-influence preferred model solutions, while maintaining flexibility over missing and mismatched data. In this paper, we propose a new data fusion technique for gradient-updated models based on entropy minimization and experimentally validate it on a hyperspectral target detection dataset. We demonstrate superior performance compared to currently available techniques and highlight the value of the proposed method for data regimes with missing data.
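
A minimal sketch of an entropy-minimization fusion term of this general flavor, with a mask to accommodate missing modalities. The tensor shapes and fusion-by-averaging are illustrative assumptions rather than the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def entropy_fusion_loss(logits_list, available_mask):
    """Hypothetical entropy-minimization fusion term: average the class
    posteriors of whichever modalities are present for each sample, then
    penalize the Shannon entropy of the fused prediction.

    logits_list: list of (batch, n_classes) logits, one per modality.
    available_mask: (n_modalities, batch) bool tensor, True where a
    sample has data for that modality (handles missing inputs).
    """
    probs = torch.stack([F.softmax(lg, dim=-1) for lg in logits_list])  # (M, B, C)
    mask = available_mask.unsqueeze(-1).float()                         # (M, B, 1)
    fused = (probs * mask).sum(dim=0) / mask.sum(dim=0).clamp(min=1.0)  # (B, C)
    entropy = -(fused * torch.log(fused + 1e-12)).sum(dim=-1)           # (B,)
    return entropy.mean()
```

Minimizing this term pushes the modalities toward confident, mutually reinforcing predictions without requiring every sample to carry every modality.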

Multimodal Data Fusion via Entropy Minimization

Linville, Lisa; Michalenko, Joshua J.; Anderson, Dylan Z.

The use of gradient-based data-driven models to solve a range of real-world remote sensing problems can in practice be limited by the uniformity of available data. Using data from disparate sensor types, resolutions, and qualities typically requires compromises based on assumptions that are made prior to model training and may not be optimal given overarching objectives. For example, while deep neural networks (NNs) are state-of-the-art in a variety of target detection problems, training them typically requires either limiting the training data to a subset over which uniformity can be enforced or training independent models that subsequently require additional score fusion. The method we introduce here seeks to leverage the benefits of both approaches by allowing correlated inputs from different data sources to co-influence preferred model solutions, while maintaining flexibility over missing and mismatched data. In this work, we propose a new data fusion technique for gradient-updated models based on entropy minimization and experimentally validate it on a hyperspectral target detection dataset. We demonstrate superior performance compared to currently available techniques across a range of realistic data scenarios in which the available data have limited spatial overlap and resolution.

Robust terrain classification of high spatial resolution remote sensing data employing probabilistic feature fusion and pixelwise voting

Proceedings of SPIE - The International Society for Optical Engineering

West, Roger D.; Redman, Brian J.; Yocky, David A.; Foulk, James W.; Anderson, Dylan Z.

There are several factors that should be considered for robust terrain classification. We address the issue of high pixelwise variability within terrain classes from remote sensing modalities when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, then makes a superpixel-level terrain classification decision by the majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
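
The voting step itself is straightforward; a minimal sketch, assuming integer per-pixel PFF decisions and a precomputed superpixel segmentation (e.g., from SLIC):

```python
import numpy as np

def superpixel_vote(pixel_labels, superpixel_ids):
    """Assign each superpixel the majority vote of its pixels' labels.

    pixel_labels: (n_pixels,) int array of per-pixel class decisions.
    superpixel_ids: (n_pixels,) int array of superpixel membership.
    Returns a (n_pixels,) array where every pixel carries the winning
    label of its superpixel.
    """
    voted = np.empty_like(pixel_labels)
    for sp in np.unique(superpixel_ids):
        members = superpixel_ids == sp
        voted[members] = np.bincount(pixel_labels[members]).argmax()
    return voted
```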

Sandia-UT Academic Alliance Project Summary

Anderson, Dylan Z.

This project seeks to leverage various hyperspectral tensor products for the purposes of target classification, detection, and prediction. In addition to hyperspectral imagery, these products may include images, time series, geometries, or other modalities. The scenarios in which the targets of interest must be identified typically involve remote sensing platforms such as satellites. As such, there are numerous real-world constraints that drive algorithmic formulation: the cost, complexity, and feasibility of the algorithm must all be considered. Targets of interest are exceedingly rare, and collecting many data samples is prohibitively expensive. Furthermore, model interpretability is paramount due to the application space. The goal of this project is to develop a constrained supervised tensor factorization framework for use on hyperspectral data products. Supervised tensor factorizations already exist in the literature, although they have not seen widespread adoption in the remote sensing domain. The novelty of this project will be the formulation and inclusion of constraints that account for mission considerations and physics-based limits, yielding a factorization that is both physically interpretable and mission deployable. This will represent a new contribution to the field of remote sensing for performing supervised learning tasks with hyperspectral data.
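
As a rough illustration of the kind of factorization the project targets, the sketch below fits a CP (PARAFAC) decomposition of a 3-way tensor by alternating least squares with a nonnegativity projection, nonnegativity being one simple physics-motivated constraint (spectra and abundances cannot be negative). The project's actual mission-driven constraints and supervision terms are not specified here.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of (I, R) and (J, R) matrices."""
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (C-order flattening)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nonneg_cp(T, rank, n_iters=100):
    """Rank-`rank` CP decomposition with a nonnegativity constraint,
    fit by projected alternating least squares. Clipping after each
    least-squares solve is a crude projection; a proper nonnegative
    ALS would solve an NNLS subproblem instead."""
    rng = np.random.default_rng(0)
    factors = [rng.random((s, rank)) for s in T.shape]
    for _ in range(n_iters):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])  # matches unfolding order
            sol, *_ = np.linalg.lstsq(kr, unfold(T, mode).T, rcond=None)
            factors[mode] = np.clip(sol.T, 0.0, None)  # project onto constraint
    return factors
```

The per-mode factor matrices are what carry the interpretability the summary emphasizes, e.g., one mode's columns can be read as spectral signatures.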

Spectral and polarimetric remote sensing for CBRNE applications

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Dylan Z.; Appelhans, Leah; Craven, Julia M.; Lacasse, Charles F.; Vigil, Steve; Dzur, Robert; Briggs, Trevor; Miller, Elizabeth; Schultz-Fellenz, Emily

Optical remote sensing has become a valuable tool in many application spaces because it is unobtrusive, searches large areas efficiently, and is increasingly accessible through commercially available products and systems. In the application space of chemical, biological, radiological, nuclear, and explosives (CBRNE) sensing, optical remote sensing can be an especially valuable tool because it enables data to be collected from a safe standoff distance. Data products and results from remote sensing collections can be combined with results from other methods to offer an integrated understanding of the nature of activities in an area of interest and may be used to inform in-situ verification techniques. This work overviews several independent research efforts focused on developing and leveraging spectral and polarimetric sensing techniques for CBRNE applications, including system development efforts, field deployment campaigns, and data exploitation and analysis results. While this body of work has primarily focused on the application spaces of chemical and underground nuclear explosion detection and characterization, the developed tools and techniques may have applicability to the broader CBRNE domain.
