Space-based and airplane-based synthetic aperture radar (SAR) can monitor ground height using interferometric SAR (InSAR) collections. However, fielding an airplane-based SAR is expensive, and coordinating the frequency and timing of ground experiments with space-based SAR is challenging. This research explored whether a small, mobile unmanned aerial vehicle (UAV)-based SAR could provide a quick and inexpensive InSAR option for the Source Physics Experiment (SPE) Phase III project. First, a local feasibility collection using a UAV-based SAR showed that InSAR products and height measurements were possible, but that in-scene fiducials were needed to assist in digital elevation model (DEM) construction. Second, an InSAR collection was planned and executed over the SPE Phase III site using the same platform configuration. We found that the SAR manufacturer's image formation process introduces discontinuities and that noise degraded the generation and accuracy of height maps. These processing artifacts need to be overcome to generate an accurate height map.
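The height measurements above rest on the standard repeat-pass InSAR height-sensitivity relation. The sketch below, in Python, converts an unwrapped, flattened interferometric phase map to relative height; the wavelength, baseline, and geometry values are illustrative placeholders, not the parameters of the UAV system described above.

```python
import numpy as np

def insar_height(unwrapped_phase, wavelength, slant_range, incidence_angle, perp_baseline):
    """Convert unwrapped, flattened interferometric phase to relative height.

    Standard repeat-pass InSAR height sensitivity:
        h = (lambda * R * sin(theta)) / (4 * pi * B_perp) * phi
    """
    return (wavelength * slant_range * np.sin(incidence_angle)
            / (4.0 * np.pi * perp_baseline)) * unwrapped_phase

# Illustrative parameters (not from the UAV system described above)
phase = np.random.uniform(-np.pi, np.pi, (512, 512))   # stand-in for an unwrapped phase map
dem = insar_height(phase, wavelength=0.031, slant_range=1500.0,
                   incidence_angle=np.deg2rad(45.0), perp_baseline=2.0)
```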
Performing terrain classification with data from heterogeneous imaging modalities is a very challenging problem. The challenge is further compounded by very high spatial resolution. (In this paper we consider very high spatial resolution to be much less than a meter.) At very high resolution many additional complications arise, such as geometric differences between imaging modalities and heightened pixel-by-pixel variability due to inhomogeneity within terrain classes. In this paper we consider the fusion of very high resolution hyperspectral imaging (HSI) and polarimetric synthetic aperture radar (PolSAR) data. We introduce a framework that utilizes the probabilistic feature fusion (PFF) one-class classifier for data fusion and demonstrate the effect of making pixelwise, superpixel, and pixelwise voting (within a superpixel) terrain classification decisions. We show that fusing the imaging modality data sets, combined with pixelwise voting within the spatial extent of superpixels, yields a robust terrain classification framework that strikes a good balance between quantitative and qualitative results.
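The PFF classifier itself is not spelled out here, so the sketch below only illustrates the general shape of feature-level fusion: co-registered per-pixel features from the two modalities are stacked and scored by a simple Gaussian one-class model standing in for PFF. The array shapes and the scoring rule are assumptions for illustration.

```python
import numpy as np

def fuse_features(hsi_pixels, polsar_pixels):
    """Stack co-registered per-pixel feature vectors from the two modalities.

    hsi_pixels:    (N, B) hyperspectral band values per pixel
    polsar_pixels: (N, P) polarimetric features per pixel (e.g., decomposition parameters)
    """
    return np.concatenate([hsi_pixels, polsar_pixels], axis=1)

def one_class_score(features, class_mean, class_cov):
    """Gaussian log-likelihood used here as a stand-in for the PFF one-class classifier."""
    diff = features - class_mean
    inv = np.linalg.inv(class_cov)
    return -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)

# Pixels whose score exceeds a per-class threshold are accepted as in-class.
```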
We present a deep learning image reconstruction method called AirNet-SNL for sparse-view computed tomography. It combines iterative reconstruction and convolutional neural networks with end-to-end training. Our model reduces the streak artifacts of filtered back-projection with limited data, and it trains on randomly generated shapes. This work shows promise for generalizing learned image reconstruction.
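The AirNet-SNL architecture details are not given here; the following is a minimal PyTorch sketch of the general unrolled-iteration idea it builds on, alternating a data-consistency gradient step with a learned CNN refinement. The layer sizes, iteration count, step size, and the forward/adjoint operators A and At are assumptions.

```python
import torch
import torch.nn as nn

class UnrolledCT(nn.Module):
    """Minimal unrolled reconstruction sketch: alternate a data-consistency
    gradient step with a learned CNN refinement. Not the published AirNet-SNL
    architecture; layer sizes, iteration count, and the operators are assumptions."""

    def __init__(self, A, At, n_iters=5, step=1e-3):
        super().__init__()
        self.A, self.At, self.n_iters, self.step = A, At, n_iters, step
        self.cnn = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_iters))

    def forward(self, x0, sinogram):
        x = x0  # e.g., a filtered back-projection initialization
        for k in range(self.n_iters):
            grad = self.At(self.A(x) - sinogram)   # data-fidelity gradient
            x = x - self.step * grad               # gradient step toward measurements
            x = x + self.cnn[k](x)                 # learned residual refinement
        return x
```

Training end-to-end then amounts to back-propagating an image-domain loss through all unrolled iterations.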
Phase I of the Source Physics Experiment (SPE) series involved six underground chemical explosions, all of which were conducted at the same experimental pad. Research from the sixth explosion of the series (SPE-6) demonstrated that polarimetric synthetic aperture radar (PolSAR) is a viable technology for monitoring an underground chemical explosion when the geologic structure is a Cretaceous granitic intrusive. It was shown that a durable signal is measurable by the H/A/alpha polarimetric decomposition parameters. After the SPE-6 experiment, the SPE program moved to the Phase II location, which is composed of dry alluvium geology (DAG). The loss of wavefront energy is greater through dry alluvium than through granite. In this article, we compare the SPE-6 analysis to the second DAG (DAG-2) experiment. We hypothesize that, even though the geology at the DAG site is more challenging than at the Phase I location and the DAG-2 experiment had a scaled depth of burial 3.37 times deeper than SPE-6, a durable nonprompt signal is still measurable by a PolSAR sensor. We compare the PolSAR time-series measures derived from VideoSAR frames of the SPE-6 and DAG-2 experiments with accelerometer data. We show which PolSAR measures are invariant to the two types of geology and which are geology dependent. We compare a coherent change detection (CCD) map from the DAG-2 experiment with data from a fiber-optic distributed acoustic sensor to show the connection between the spatial extent of coherence loss in CCD maps and spallation caused by the explosion. Finally, we also analyze the spatial extent of the PolSAR measures from both explosions.
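For reference, the H/A/alpha parameters mentioned above follow from the standard Cloude-Pottier eigen-decomposition of the 3x3 polarimetric coherency matrix; a minimal sketch of that computation is shown below.

```python
import numpy as np

def h_a_alpha(T):
    """H/A/alpha parameters from a 3x3 polarimetric coherency matrix T (Hermitian).

    Standard Cloude-Pottier definitions: pseudo-probabilities from the eigenvalues,
    entropy H, anisotropy A, and the mean alpha angle from the eigenvectors.
    """
    eigvals, eigvecs = np.linalg.eigh(T)            # ascending order
    eigvals = np.clip(eigvals[::-1], 1e-12, None)   # descending, guard against zeros
    eigvecs = eigvecs[:, ::-1]
    p = eigvals / eigvals.sum()
    H = -np.sum(p * np.log(p) / np.log(3.0))                    # entropy in [0, 1]
    A = (eigvals[1] - eigvals[2]) / (eigvals[1] + eigvals[2])   # anisotropy
    alpha_i = np.arccos(np.abs(eigvecs[0, :]))                  # per-eigenvector alpha angle
    alpha = np.degrees(np.sum(p * alpha_i))                     # mean alpha (degrees)
    return H, A, alpha
```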
The Source Physics Experiment (SPE) Phase I conducted six underground chemical explosions at the same experimental pad with the goal of characterizing underground explosions to enhance the United States (U.S.) ability to detect and discriminate underground nuclear explosions (UNEs). A fully polarimetric synthetic aperture radar (PolSAR) collected imagery in VideoSAR mode during the fifth and sixth explosions in the series (SPE-5 and SPE-6). Previously, we reported the prompt PolSAR surface changes caused by the SPE-5 and SPE-6 explosions within seconds or minutes of the underground chemical explosions, including a drop in spatial coherence and polarimetric scattering changes. Therein it was hypothesized that surface changes occurred when surface particles experienced upward acceleration greater than 1 g. Because the SPE site was instrumented with surface accelerometers, we explore that hypothesis and report our findings in this article. We equate explosion-caused prompt surface expressions measured by PolSAR to the prompt surface movement measured by accelerometers. We tie these findings to UNE detection by comparing the PolSAR and accelerometer results to empirical ground motion predictions derived from accelerometer recordings of UNEs collected prior to the cessation of U.S. nuclear testing. We find that the single-threshold (greater than 1 g) hypothesis is not correct, as it does not explain the PolSAR results. Our findings show that the spatial extent of the PolSAR surface coherence change is highly correlated with surface velocity, both measured and predicted, and that the resulting surface deformation extent is corroborated by accelerometer records and the predicted lateral spall extent. The PolSAR scattering changes measured during SPE-6 are created by the prompt surface displacement being larger than the spall gap.
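As an illustration of how the greater-than-1-g hypothesis can be tested against a surface accelerometer record, the sketch below extracts peak upward acceleration (in g) and peak vertical velocity from a vertical-channel trace; the simple integration scheme and sampling rate are illustrative, not the processing used in the study.

```python
import numpy as np

G = 9.80665  # standard gravity, m/s^2

def prompt_motion_metrics(accel_z, fs):
    """Peak upward acceleration (in g) and peak vertical velocity from a
    vertical accelerometer trace (m/s^2, sampled at fs Hz). Illustrative only;
    assumes a detrended trace starting from rest."""
    peak_up_g = np.max(accel_z) / G
    velocity = np.cumsum(accel_z) / fs          # simple rectangular-rule integration
    return peak_up_g, np.max(np.abs(velocity))

# Under the original hypothesis, a station would be flagged if peak_up_g > 1.0.
```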
X-ray phase contrast imaging (XPCI) is a nondestructive evaluation technique that enables high-contrast detection of low-attenuation materials that are largely transparent in traditional radiography. Extending a grating-based Talbot-Lau XPCI system to three-dimensional imaging with computed tomography (CT) imposes two motion requirements: the analyzer grating must translate transverse to the optical axis to capture image sets for XPCI reconstruction, and the sample must rotate to capture angular data for CT reconstruction. The choice of acquisition algorithm determines the order of movement and positioning of the two stages and is instrumental to collecting high-fidelity data for reconstruction. We investigate how data acquisition influences XPCI CT by comparing two simple data acquisition algorithms and determine that capturing a full phase-stepping image set for a CT projection before rotating the sample results in higher quality data.
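The preferred ordering, capturing the complete phase-stepping set at each CT angle before rotating the sample, can be written as a simple pair of nested loops. The detector and motion-stage objects in the sketch below are hypothetical placeholders for the instrument interfaces, not an actual control API.

```python
def acquire_phase_steps_per_projection(detector, rotation_stage, grating_stage,
                                       n_projections, n_phase_steps, step_size):
    """Acquisition order found preferable above: capture the full phase-stepping
    image set at each CT angle before rotating the sample. The detector and
    stage objects are hypothetical placeholders for the instrument interfaces."""
    data = []
    for p in range(n_projections):
        rotation_stage.move_to(p * 360.0 / n_projections)   # set the CT angle
        steps = []
        for s in range(n_phase_steps):
            grating_stage.move_to(s * step_size)            # translate the analyzer grating
            steps.append(detector.capture())
        data.append(steps)
    return data

# The alternative ordering swaps the loops: step through all CT angles at each
# fixed grating position, revisiting each angle n_phase_steps times.
```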
Deciding on an imaging modality for terrain classification can be a challenging problem. For some terrain classes a given sensing modality may discriminate well, but it may not have the same performance on other classes that a different sensor may easily separate. The most effective terrain classification will utilize the abilities of multiple sensing modalities. The challenge of utilizing multiple sensing modalities is then determining how to combine the information in a meaningful and useful way. In this paper, we introduce a framework for effectively combining data from optical and polarimetric synthetic aperture radar sensing modalities. We demonstrate the fusion framework for two vegetation classes and two ground classes and show that fusing data from both imaging modalities has the potential to improve terrain classification relative to either modality alone.
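One simple way to realize such a combination, shown below as a hedged alternative rather than the published framework, is decision-level fusion: per-pixel class scores from each modality are merged with a weighted rule before thresholding. The weights and the linear rule are illustrative choices.

```python
import numpy as np

def fuse_decisions(optical_score, polsar_score, w_optical=0.5):
    """Decision-level alternative to feature stacking: combine per-pixel class
    scores from each modality with a weighted sum. The weight and the linear
    rule are illustrative, not the published fusion framework."""
    return w_optical * np.asarray(optical_score) + (1.0 - w_optical) * np.asarray(polsar_score)

# Pixels whose fused score exceeds a per-class threshold are accepted as in-class.
```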
There are several factors that should be considered for robust terrain classification. We address the issue of high pixelwise variability within terrain classes in remote sensing modalities when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, and then makes a superpixel-level terrain classification decision by the majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
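A minimal sketch of the superpixel voting step is shown below, using SLIC segmentation from scikit-image and taking pre-computed per-pixel class labels as a stand-in for the PFF decisions; the segment count and segmentation parameters are illustrative.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_vote(image, pixel_labels, n_segments=500):
    """Majority-vote each superpixel's terrain class from its per-pixel labels.

    image:        (H, W, C) feature image used only to drive the segmentation
    pixel_labels: (H, W) integer class decisions from a per-pixel classifier
                  (a stand-in here for the PFF decisions described above)
    """
    segments = slic(image, n_segments=n_segments, start_label=0, channel_axis=-1)
    out = np.zeros_like(pixel_labels)
    for s in np.unique(segments):
        mask = segments == s
        votes = np.bincount(pixel_labels[mask])
        out[mask] = np.argmax(votes)    # assign the majority class to the whole superpixel
    return out
```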
In practical applications of automated terrain classification from high-resolution polarimetric synthetic aperture radar (PolSAR) imagery, different terrain types may inherently contain a high level of internal variability, as when a broadly defined class (e.g., 'trees') contains elements arising from multiple subclasses (pine, oak, and willow). In addition, real-world factors such as the time of year of a collection, the moisture content of the scene, the imaging geometry, and the radar system parameters can all increase the variability observed within each class. Such variability challenges the ability of classifiers to maintain a high level of sensitivity in recognizing diverse elements that are within-class, without sacrificing their selectivity in rejecting out-of-class elements. To gauge the degree to which classifiers respond robustly to intraclass variability and generalize to untrained scenes and conditions, we compare the performance of a suite of classifiers across six broad terrain categories from a large set of PolSAR image sets. The main contributions of this article are as follows: 1) an analysis of the robustness of a variety of current state-of-the-art classification algorithms to the intraclass variability found in PolSAR image sets, and 2) the associated PolSAR image and feature data that Sandia is releasing to the research community with this publication. The analysis of the classification algorithms we provide will serve as a performance benchmark for future PolSAR terrain classification algorithm research and development enabled by the released image sets and data. By sharing our analysis and high-resolution fully polarimetric Sandia data with the research community, we enable others to develop and assess a new generation of robust terrain classification algorithms for PolSAR.
Sandia National Laboratories flew its Facility for Advanced RF and Algorithm Development X-Band (9.6-GHz center frequency), fully polarimetric synthetic aperture radar (PolSAR) in VideoSAR mode to collect complex-valued SAR imagery before, during, and after the sixth Source Physics Experiment's (SPE-6) underground explosion. The VideoSAR products generated from the data sets include 'movies' of single- and quad-polarization coherence maps, magnitude imagery, and polarimetric decompositions. Residual defocus, due to platform motion during data acquisition, was corrected with a digital elevation model-based autofocus algorithm. We generated and exploited the VideoSAR image products to characterize the surface movement effects caused by the underground explosion. Unlike seismic sensors, which measure local-area seismic waves using sparse spacing and subterranean positioning, these VideoSAR products captured high-spatial-resolution, 2-D, time-varying surface movement. The results from the fifth SPE (SPE-5) used single-polarimetric VideoSAR data. In this paper, we present single-polarimetric and fully polarimetric VideoSAR results from monitoring the SPE-6 underground chemical explosion. We show that fully polarimetric VideoSAR imaging provides a unique, coherent, time-varying measure of the surface expression of the SPE-6 underground chemical explosion. We include new surface characterization results from the measured PolSAR SPE-6 data via the H/A/α polarimetric decomposition.
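The coherence-map frames referenced above are built from the windowed complex coherence between co-registered single-look complex (SLC) images; a minimal sketch of that estimate is given below, with the averaging window size as an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence_map(slc1, slc2, window=5):
    """Windowed complex coherence magnitude between two co-registered
    single-look complex images; the quantity behind coherence-map frames.
    The window size is an illustrative choice."""
    cross = slc1 * np.conj(slc2)
    num = uniform_filter(np.real(cross), window) + 1j * uniform_filter(np.imag(cross), window)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, window)
                  * uniform_filter(np.abs(slc2) ** 2, window))
    return np.abs(num) / np.maximum(den, 1e-12)
```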
High-quality image products in an X-ray phase contrast imaging (XPCI) system can be produced with proper system hardware and data acquisition. However, it may be possible to further increase the quality of the image products by addressing subtleties and imperfections in both the hardware and the data acquisition process. Because addressing these issues entirely in hardware and data acquisition may not be practical, a more prudent approach is to determine the balance between how the apparatus may reasonably be improved and what can be accomplished with image post-processing techniques. Given a proper signal model for XPCI data, image processing techniques can be developed to compensate for many of the image quality degradations associated with higher-order hardware and data acquisition imperfections. However, processing techniques also have limitations and cannot entirely compensate for sub-par hardware or inaccurate data acquisition practices. Understanding the limitations of the system and of the image processing techniques enables balancing hardware, data acquisition, and image post-processing. In this paper, we present some of the higher-order image degradation effects we have found to be associated with subtle imperfections in both hardware and data acquisition. We also discuss and demonstrate how a combination of hardware, data acquisition processes, and image processing techniques can increase the quality of XPCI image products. Finally, we assess the requirements for high-quality XPCI images, propose reasonable system hardware modifications, and discuss the limits of certain image processing techniques.
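As an example of the kind of signal model referred to above, the sketch below applies the standard first-order Fourier analysis of a Talbot-Lau phase-stepping curve to retrieve transmission, differential-phase, and dark-field images from sample and reference stacks; it is a generic retrieval, not the authors' exact processing chain.

```python
import numpy as np

def phase_stepping_retrieval(stack, ref_stack):
    """Retrieve transmission, differential-phase, and dark-field images from
    phase-stepping stacks (n_steps, H, W) for a sample scan and a reference scan,
    using the standard first-order Fourier analysis of the stepping curve."""
    def fourier(s):
        F = np.fft.fft(s, axis=0)
        a0 = np.abs(F[0]) / s.shape[0]          # mean intensity of the stepping curve
        a1 = 2.0 * np.abs(F[1]) / s.shape[0]    # first-harmonic amplitude
        phi = np.angle(F[1])                    # first-harmonic phase
        return a0, a1, phi

    a0_s, a1_s, phi_s = fourier(stack)
    a0_r, a1_r, phi_r = fourier(ref_stack)
    transmission = a0_s / a0_r
    diff_phase = np.angle(np.exp(1j * (phi_s - phi_r)))     # wrapped phase difference
    dark_field = (a1_s / a0_s) / (a1_r / a0_r)              # relative visibility reduction
    return transmission, diff_phase, dark_field
```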