Publications

Data Fusion via Neural Network Entropy Minimization for Target Detection and Multi-Sensor Event Classification

Linville, Lisa L.; Anderson, Dylan Z.; Michalenko, Joshua J.; Garcia, Jorge A.

Broadly applicable solutions to multimodal and multisensory fusion problems across domains remain a challenge because effective solutions often require substantive domain knowledge and engineering. The chief questions that arise for data fusion are when to share information from different data sources and how to accomplish that integration. The solutions explored in this work remain agnostic to input representation and terminal decision fusion approaches by sharing information through the learning objective as a compound objective function. The objective function this work uses assumes a one-to-one learning paradigm within a one-to-many domain, which allows the assumption that consistency can be enforced across the one-to-many dimension. The domains and tasks we explore in this work include multi-sensor fusion for seismic event location and multimodal hyperspectral target discrimination. We find that our domain-informed consistency objectives are challenging to implement in a way that yields stable and successful learning because of intersections between inherent data complexity and practical parameter optimization. While the fusion strategies put forward in this work did not enhance multimodal hyperspectral target discrimination across a range of experiments, seismic event location benefited substantially, though only in label-limited scenarios.
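
As a hedged illustration of the compound objective described above, the sketch below combines a per-sensor supervised loss with a consistency penalty across the one-to-many dimension, pulling each sensor's prediction for an event toward the event-mean prediction. The function name, divergence choice, and weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def compound_objective(logits_per_sensor, labels, consistency_weight=0.1):
    """Hypothetical sketch of a compound fusion objective: a supervised
    task loss per sensor plus a consistency penalty that pulls the
    per-sensor predictions for the same event toward each other.

    logits_per_sensor: list of (batch, n_classes) tensors, one per sensor
    labels:            (batch,) integer class labels shared across sensors
    """
    # Supervised task loss, averaged over sensors (the one-to-one part).
    task_loss = torch.stack(
        [F.cross_entropy(logits, labels) for logits in logits_per_sensor]
    ).mean()

    # Consistency across the one-to-many dimension: divergence between
    # the event-mean prediction and each sensor's prediction.
    probs = [F.softmax(l, dim=-1) for l in logits_per_sensor]
    mean_prob = torch.stack(probs).mean(dim=0)
    consistency = torch.stack(
        [F.kl_div((p + 1e-12).log(), mean_prob, reduction="batchmean")
         for p in probs]
    ).mean()

    return task_loss + consistency_weight * consistency
```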


Data Fusion of Very High Resolution Hyperspectral and Polarimetric SAR Imagery for Terrain Classification

West, Roger D.; Yocky, David A.; Laros, James H.; Anderson, Dylan Z.; Redman, Brian J.

Performing terrain classification with data from heterogeneous imaging modalities is a very challenging problem. The challenge is further compounded by very high spatial resolution. (In this paper we consider very high spatial resolution to be much less than a meter.) At very high resolution many additional complications arise, such as geometric differences between imaging modalities and heightened pixel-by-pixel variability due to inhomogeneity within terrain classes. In this paper we consider the fusion of very high resolution hyperspectral imaging (HSI) and polarimetric synthetic aperture radar (PolSAR) data. We introduce a framework that utilizes the probabilistic feature fusion (PFF) one-class classifier for data fusion and demonstrate the effect of making pixelwise, superpixel, and pixelwise voting (within a superpixel) terrain classification decisions. We show that fusing imaging modality data sets, combined with pixelwise voting within the spatial extent of superpixels, yields a robust terrain classification framework that strikes a good balance between quantitative and qualitative results.
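
The pixelwise-voting decision rule lends itself to a compact sketch. The helper below is a hypothetical illustration, assuming non-negative integer per-pixel class decisions (e.g., from the PFF classifier) and a precomputed superpixel label map.

```python
import numpy as np

def superpixel_vote(pixel_labels, superpixels):
    """Hypothetical sketch of pixelwise voting within superpixels:
    each superpixel takes the majority class of its member pixels.

    pixel_labels: (H, W) non-negative integer per-pixel class decisions
    superpixels:  (H, W) integer superpixel ids
    """
    voted = np.empty_like(pixel_labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        # Majority vote among the pixel-level decisions in this superpixel.
        voted[mask] = np.bincount(pixel_labels[mask]).argmax()
    return voted
```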


Semisupervised learning for seismic monitoring applications

Seismological Research Letters

Linville, Lisa L.; Anderson, Dylan Z.; Galasso, Jennifer G.; Michalenko, Joshua J.; Draelos, Timothy J.

The impressive performance that deep neural networks demonstrate on a range of seismic monitoring tasks depends largely on the availability of event catalogs that have been manually curated over many years or decades. However, the quality, duration, and availability of seismic event catalogs vary significantly across the range of monitoring operations, regions, and objectives. Semisupervised learning (SSL) enables learning from both labeled and unlabeled data and provides a framework to leverage the abundance of unreviewed seismic data for training deep neural networks on a variety of target tasks. We apply two SSL algorithms (mean-teacher and virtual adversarial training) as well as a novel hybrid technique (exponential average adversarial training) to seismic event classification to examine how unlabeled data with SSL can enhance model performance. In general, we find that SSL can perform as well as supervised learning with fewer labels. We also observe in some scenarios that almost half of the benefits of SSL are the result of the meaningful regularization enforced through SSL techniques and may not be attributable to unlabeled data directly. Lastly, the benefits from unlabeled data scale with the difficulty of the predictive task when we evaluate the use of unlabeled data to characterize sources in new geographic regions. In geographic areas where supervised model performance is low, SSL significantly increases the accuracy of source-type classification using unlabeled data.
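
Mean teacher is the better-known of the two SSL algorithms applied here, and its update rule is easy to sketch. The following is a minimal, hedged rendition of the standard recipe (Tarvainen & Valpola, 2017); the architectures, noise injection, and hyperparameters are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def mean_teacher_step(student, teacher, x_lab, y_lab, x_unlab,
                      optimizer, ema_decay=0.99, consistency_weight=1.0):
    """One mean-teacher update: supervised loss on labeled waveforms plus
    a consistency loss against an exponential-moving-average teacher."""
    # Supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(student(x_lab), y_lab)

    # Consistency loss: the student's predictions on unlabeled data should
    # match the EMA teacher's predictions (teacher gets no gradients).
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlab), dim=-1)
    student_probs = F.softmax(student(x_unlab), dim=-1)
    cons_loss = F.mse_loss(student_probs, teacher_probs)

    loss = sup_loss + consistency_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Teacher weights are an exponential moving average of the student's.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
    return loss.item()
```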


Optical and Polarimetric SAR Data Fusion Terrain Classification Using Probabilistic Feature Fusion

International Geoscience and Remote Sensing Symposium (IGARSS)

West, Roger D.; Yocky, David A.; Redman, Brian J.; Laros, James H.; Anderson, Dylan Z.

Deciding on an imaging modality for terrain classification can be a challenging problem. A given sensing modality may discriminate some terrain classes well but perform poorly on other classes that a different sensor can easily separate. The most effective terrain classification will utilize the abilities of multiple sensing modalities. The challenge of utilizing multiple sensing modalities is then determining how to combine the information in a meaningful and useful way. In this paper, we introduce a framework for effectively combining data from optical and polarimetric synthetic aperture radar sensing modalities. We demonstrate the fusion framework for two vegetation classes and two ground classes and show that fusing data from both imaging modalities has the potential to improve terrain classification over either modality alone.
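
PFF itself is defined in the paper; as a hedged stand-in, the sketch below shows generic probability-level fusion of the two modalities via the product rule, which captures the idea that a terrain class must be plausible under both sensors to score well.

```python
import numpy as np

def fuse_class_probabilities(optical_probs, polsar_probs, eps=1e-12):
    """Hedged sketch of probability-level fusion for two modalities via
    the product rule (PFF's actual formulation follows the paper, not
    this code).

    optical_probs, polsar_probs: (n_pixels, n_classes) per-modality
    class probabilities for the same pixels.
    """
    # Multiply per-modality likelihoods in log space and renormalize, so
    # a class must be plausible under both sensors to score well.
    log_fused = np.log(optical_probs + eps) + np.log(polsar_probs + eps)
    fused = np.exp(log_fused - log_fused.max(axis=1, keepdims=True))
    return fused / fused.sum(axis=1, keepdims=True)
```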

Multimodal Data Fusion via Entropy Minimization

International Geoscience and Remote Sensing Symposium (IGARSS)

Michalenko, Joshua J.; Linville, Lisa L.; Anderson, Dylan Z.

The use of gradient-based data-driven models to solve a range of real-world remote sensing problems can in practice be limited by the uniformity of available data. Use of data from disparate sensor types, resolutions, and qualities typically requires compromises based on assumptions that are made prior to model training and may not necessarily be optimal given overarching objectives. For example, while deep neural networks (NNs) are state-of-the-art in a variety of target detection problems, training them typically requires either limiting the training data to a subset over which uniformity can be enforced or training independent models which subsequently require additional score fusion. The method we introduce here seeks to leverage the benefits of both approaches by allowing correlated inputs from different data sources to co-influence preferred model solutions, while maintaining flexibility over missing and mismatching data. In this paper, we propose a new data fusion technique for gradient-updated models based on entropy minimization and experimentally validate it on a hyperspectral target detection dataset. We demonstrate superior performance compared to currently available techniques and highlight the value of the proposed method for data regimes with missing data.
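
A minimal sketch of the entropy-minimization idea, assuming the fused prediction is an average of the available modalities' softmax outputs; the masking scheme for missing modalities is an illustrative assumption rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy_fusion_loss(logits_per_modality, mask=None):
    """Hedged sketch of fusion via entropy minimization: average the
    available modalities' predictions, then penalize the Shannon entropy
    of the fused prediction so the modalities co-influence a confident,
    shared solution.

    logits_per_modality: (n_modalities, batch, n_classes)
    mask:                optional (n_modalities, batch) 0/1 availability
                         flags for missing-data scenarios
    """
    probs = F.softmax(logits_per_modality, dim=-1)
    if mask is not None:
        w = mask.unsqueeze(-1)
        fused = (probs * w).sum(dim=0) / w.sum(dim=0).clamp(min=1.0)
    else:
        fused = probs.mean(dim=0)
    # Shannon entropy of the fused prediction, averaged over the batch.
    entropy = -(fused * torch.log(fused + 1e-12)).sum(dim=-1)
    return entropy.mean()
```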


Multimodal Data Fusion via Entropy Minimization

Linville, Lisa L.; Michalenko, Joshua J.; Anderson, Dylan Z.

The use of gradient-based data-driven models to solve a range of real-world remote sensing problems can in practice be limited by the uniformity of available data. Use of data from disparate sensor types, resolutions, and qualities typically requires compromises based on assumptions that are made prior to model training and may not necessarily be optimal given overarching objectives. For example, while deep neural networks (NNs) are state-of-the-art in a variety of target detection problems, training them typically requires either limiting the training data to a subset over which uniformity can be enforced or training independent models which subsequently require additional score fusion. The method we introduce here seeks to leverage the benefits of both approaches by allowing correlated inputs from different data sources to co-influence preferred model solutions, while maintaining flexibility over missing and mismatching data. In this work we propose a new data fusion technique for gradient-updated models based on entropy minimization and experimentally validate it on a hyperspectral target detection dataset. We demonstrate superior performance compared to currently available techniques using a range of realistic data scenarios, where the available data have limited spatial overlap and resolution.


Robust terrain classification of high spatial resolution remote sensing data employing probabilistic feature fusion and pixelwise voting

Proceedings of SPIE - The International Society for Optical Engineering

West, Roger D.; Redman, Brian J.; Yocky, David A.; Laros, James H.; Anderson, Dylan Z.

There are several factors that should be considered for robust terrain classification. We address the issue of high pixelwise variability within terrain classes from remote sensing modalities when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, then makes a superpixel-level terrain classification decision by the majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
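
The decision pipeline can be sketched end to end. In this sketch, SLIC from scikit-image stands in for whatever superpixel segmentation the paper uses, and `pixel_classifier` is a hypothetical stand-in for a trained PFF classifier.

```python
import numpy as np
from skimage.segmentation import slic

def classify_with_superpixel_voting(image, pixel_classifier, n_segments=500):
    """Hedged sketch of the pipeline: segment into superpixels, classify
    each pixel, then take a majority vote per superpixel.

    image:            (H, W, bands) data cube
    pixel_classifier: callable mapping the cube to (H, W) integer labels
    """
    superpixels = slic(image, n_segments=n_segments, compactness=10.0,
                       channel_axis=-1)
    pixel_labels = pixel_classifier(image)
    voted = np.empty_like(pixel_labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        voted[mask] = np.bincount(pixel_labels[mask]).argmax()
    return voted
```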


Sandia-UT Academic Alliance Project Summary

Anderson, Dylan Z.

This project seeks to leverage various hyperspectral tensor products for the purposes of target classification, detection, and prediction. In addition to hyperspectral imagery, these products may include images, time series, geometries, or other modalities. The scenarios in which the targets of interest must be identified typically involve remote sensing platforms such as satellites. As such, there are numerous real-world constraints that drive algorithmic formulation: the cost, complexity, and feasibility of the algorithm should all be considered. Targets of interest are exceedingly rare, and collecting many data samples is prohibitively expensive. Furthermore, model interpretability is paramount due to the application space. The goal of this project is to develop a constrained supervised tensor factorization framework for use on hyperspectral data products. Supervised tensor factorizations already exist in the literature, although they have not seen widespread adoption in the remote sensing domain. The novelty of this project will be the formulation and inclusion of constraints that account for mission considerations and physics-based limits, in order to learn a factorization that is both physically interpretable and mission deployable. This will represent a new contribution to the field of remote sensing for performing supervised learning tasks with hyperspectral data.
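
A constrained supervised tensor factorization can be sketched as a projected-gradient loop over a CP model. Everything below is illustrative: nonnegativity stands in for the project's mission- and physics-motivated constraints, and a linear classifier on the sample-mode factor is one possible supervised term.

```python
import torch
import torch.nn.functional as F

def supervised_cp_step(X, A, B, C, W, y, optimizer, lam=0.1):
    """Hedged sketch of one projected-gradient step for a constrained,
    supervised CP factorization (illustrative only; the project's actual
    constraints are mission- and physics-specific).

    X: (I, J, K) data tensor; A, B, C: factor matrices (requires_grad);
    W: (rank, n_classes) classifier weights on the sample-mode factor A;
    y: (I,) integer class labels for the samples along mode 0.
    """
    # CP reconstruction: X_hat[i, j, k] = sum_r A[i, r] B[j, r] C[k, r].
    X_hat = torch.einsum("ir,jr,kr->ijk", A, B, C)
    recon = (X - X_hat).pow(2).mean()

    # Supervised term: sample-mode factors should predict the labels.
    sup = F.cross_entropy(A @ W, y)

    loss = recon + lam * sup
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Example physically motivated constraint: project the factors onto
    # the nonnegative orthant after each gradient step.
    with torch.no_grad():
        for M in (A, B, C):
            M.clamp_(min=0.0)
    return loss.item()
```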


Spectral and polarimetric remote sensing for CBRNE applications

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Dylan Z.; Appelhans, Leah A.; Craven, Julia M.; LaCasse, Charles F.; Vigil, Steven R.; Dzur, Robert; Briggs, Trevor; Miller, Elizabeth; Schultz-Fellenz, Emily

Optical remote sensing has become a valuable tool in many application spaces because it can be unobtrusive, search large areas efficiently, and is increasingly accessible through commercially available products and systems. In the application space of chemical, biological, radiological, nuclear, and explosives (CBRNE) sensing, optical remote sensing can be an especially valuable tool because it enables data to be collected from a safe standoff distance. Data products and results from remote sensing collections can be combined with results from other methods to offer an integrated understanding of the nature of activities in an area of interest and may be used to inform in-situ verification techniques. This work will overview several independent research efforts focused on developing and leveraging spectral and polarimetric sensing techniques for CBRNE applications, including system development efforts, field deployment campaigns, and data exploitation and analysis results. While this body of work has primarily focused on the application spaces of chemical and underground nuclear explosion detection and characterization, the developed tools and techniques may have applicability to the broader CBRNE domain.


Paired neural networks for hyperspectral target detection

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Dylan Z.; Zollweg, Joshua D.; Smith, Braden J.

Spectral matched filtering and its variants (e.g., the Adaptive Coherence Estimator, or ACE) rely on strong assumptions about target and background distributions. For instance, ACE assumes a Gaussian background distribution and an additive target model. In practice, natural spectral variation, due to effects such as material Bidirectional Reflectance Distribution Function, non-linear mixing with surrounding materials, or material impurities, degrades the performance of matched filter techniques and requires an ever-increasing library of target templates measured under different conditions. In this work, we employ the contrastive loss function and paired neural networks to create data-driven target detectors that do not rely on strong assumptions about target and background distributions. Furthermore, by matching spectra to templates in a highly nonlinear fashion via neural networks, our target detectors exhibit improved performance and greater resiliency to natural spectral variation; this performance improvement comes with no increase in target template library size. We evaluate and compare our paired neural network detector to matched filter-based target detectors on a synthetic hyperspectral scene and the well-known Indian Pines AVIRIS hyperspectral image.
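
The paired-network construction the abstract describes couples a shared encoder with the standard contrastive loss of Hadsell et al. (2006). The sketch below assumes a small fully connected encoder; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralEncoder(nn.Module):
    """Small shared encoder mapping a spectrum to an embedding (a sketch;
    the paper's architecture details are assumptions)."""
    def __init__(self, n_bands, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_target, margin=1.0):
    """Contrastive loss (Hadsell et al., 2006): pull matched
    spectrum/template embeddings together, push mismatched pairs at
    least `margin` apart. same_target is 1.0 for matched pairs, else 0.0."""
    d = F.pairwise_distance(z1, z2)
    return (same_target * d.pow(2)
            + (1 - same_target) * F.relu(margin - d).pow(2)).mean()
```

At test time, a hypothetical detection score for a pixel is the negative embedding distance between its spectrum and the nearest target template, so no growth in the template library is needed to absorb spectral variation.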
