Over the last 15 years, compressive sensing techniques have been developed that have the potential to greatly reduce the amount of data collected by sensing systems while preserving the amount of information obtained. A cost of this efficiency is that a computationally intensive optimization routine must be used to put the sensed data into a form that a person can interpret. At the same time, machine learning techniques have experienced tremendous growth. Machines have demonstrated the ability to learn to perform tasks such as detection and classification at speeds much faster than humanly possible. Our goal in this project was to study the feasibility of using compressive sensing systems "at the edge." That is, how can compressive sensing sensors be deployed such that information is created at the remote sensor rather than raw data being sent to a central processing location? Studies were performed to analyze whether machine learning could be done on compressively sensed data in its raw form. If a machine is performing the task, is it possible to do so without putting the data into a human-interpretable form? We show that this is possible for some systems, in particular a compressive sensing snapshot imaging spectrometer. Machine learning tasks were demonstrated to be more effective and more robust to noise when the learning algorithm worked on data in its raw form, and this system is shown to outperform a traditional spectrometer. Techniques for reducing the complexity of the reconstruction routine were also analyzed: data regularization, deep neural networks, and matrix completion were studied and shown to have benefits over traditional reconstruction techniques. In this project we showed that compressive sensing sensors are indeed feasible at the edge. As always, sensors and algorithms must be carefully tuned to work in the constrained environment.
In this project we developed tools and techniques to enable those analyses.
A Compressive Sensing Snapshot Imaging Spectrometer (CSSIS) and its performance are described. The number of spectral bins recorded in a traditional tiled-array spectrometer is limited to the number of filters. By properly designing the filters and leveraging compressive sensing techniques, more spectral bins can be reconstructed. Simulation results indicate that closely spaced spectral sources that are not resolved with a traditional spectrometer can be resolved with the CSSIS. The nature of the filters used in the CSSIS enables higher signal-to-noise ratios in measured signals: the filters are spectrally broad relative to the narrow-line filters used in traditional systems, and hence more light reaches the imaging sensor. This enables the CSSIS to outperform a traditional system in a classification task in the presence of noise. Simulation results on classifying in the compressive domain are also shown; classifying there obviates the need for the computationally intensive spectral reconstruction algorithm.
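The compressive-domain classification idea can be sketched with a toy model: random broadband filter responses stand in for the CSSIS filter array, and a nearest-template rule classifies the raw filter measurements without ever reconstructing the spectrum. The dimensions, filter responses, and class spectra below are illustrative assumptions, not the actual CSSIS design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 spectral bins compressed to 16 filter measurements.
n_bands, n_filters = 64, 16

# Broad, random filter responses stand in for the CSSIS filter array.
Phi = rng.uniform(0.0, 1.0, size=(n_filters, n_bands))

def gaussian_line(center, width=2.0):
    """Toy emission-line spectrum: a Gaussian peak at the given band index."""
    lam = np.arange(n_bands)
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

# Two spectral classes: emission lines at different wavelengths.
classes = {"A": gaussian_line(20), "B": gaussian_line(44)}

# Class templates in the compressive domain: t_c = Phi @ x_c.
templates = {c: Phi @ x for c, x in classes.items()}

def classify_compressed(y):
    """Nearest-template classification directly on raw measurements y."""
    return min(templates, key=lambda c: np.linalg.norm(y - templates[c]))

# A noisy measurement of class "B" is classified with no reconstruction step.
y = Phi @ classes["B"] + 0.05 * rng.standard_normal(n_filters)
print(classify_compressed(y))
```

Because the filters are broadband, each measurement integrates light from many bands, which is the mechanism behind the SNR advantage described above.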
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g., fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence the robustness of super-resolution image recovery, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
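As a rough illustration of this super-resolution setting, the sketch below uses iterative soft thresholding (ISTA) to solve an l1-regularized deconvolution with an assumed 1-D Gaussian PSF, recovering two point sources separated by less than the PSF width. The PSF model, dimensions, and regularization weight are illustrative assumptions, not the configurations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64

def psf_matrix(sigma):
    """Convolution matrix for an assumed 1-D Gaussian PSF (columns sum to 1)."""
    x = np.arange(n)
    A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    return A / A.sum(axis=0)

A = psf_matrix(3.0)

# Two point sources separated by less than the PSF width (sub-Rayleigh).
x_true = np.zeros(n)
x_true[30], x_true[33] = 1.0, 0.8
y = A @ x_true + 0.01 * rng.standard_normal(n)

def ista(A, y, lam=0.01, n_iter=2000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 with a known PSF matrix A."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
```

The PSF-mismatch question raised above can be probed in this framework by reconstructing with `psf_matrix(sigma)` for a `sigma` different from the one used to generate `y`.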
Compressive sensing shows promise for sensors that collect fewer samples than required by traditional Shannon-Nyquist sampling theory. Recent sensor designs for hyperspectral imaging encode light using spectral modulators such as spatial light modulators, liquid crystal phase retarders, and Fabry-Perot resonators. The hyperspectral imager consists of a filter array followed by a detector array. It encodes spectra with fewer measurements than the number of bands in the signal, making reconstruction an underdetermined problem. We propose a reconstruction algorithm for hyperspectral images encoded through spectral modulators. Our approach constrains pixels to be similar to their neighbors in space and wavelength, as natural images tend to vary smoothly, and it increases robustness to noise. It combines L1 minimization in the wavelet domain to enforce sparsity with total variation in the image domain for smoothness. The alternating direction method of multipliers (ADMM) simplifies the optimization procedure. Our algorithm constrains encoded, compressed hyperspectral images to be smooth in their reconstruction, and we present simulation results to illustrate our technique. This work improves the reconstruction of hyperspectral images from encoded, multiplexed, and sparse measurements.
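The combined wavelet-L1 plus total-variation objective is involved; as a minimal sketch of the ADMM machinery it relies on, the code below applies ADMM to a 1-D total-variation regularizer alone, with random broadband encodings standing in for the spectral modulator. All dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 64, 32
# Random broadband encodings stand in for the spectral modulator (m < n bands).
A = rng.uniform(0.0, 1.0, size=(m, n)) / np.sqrt(m)

# Piecewise-smooth "spectrum": total variation is a natural prior for it.
x_true = np.concatenate([np.full(24, 0.2), np.full(20, 1.0), np.full(20, 0.5)])
y = A @ x_true + 0.01 * rng.standard_normal(m)

# First-difference operator D, so ||Dx||_1 is the 1-D total variation.
D = np.diff(np.eye(n), axis=0)

def admm_tv(A, y, lam=0.05, rho=1.0, n_iter=300):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||z||_1  s.t.  z = Dx."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                      # scaled dual variable
    Q = A.T @ A + rho * D.T @ D              # fixed matrix for the x-update
    for _ in range(n_iter):
        x = np.linalg.solve(Q, A.T @ y + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u += D @ x - z
    return x

x_hat = admm_tv(A, y)
```

The splitting is what makes ADMM attractive here: the x-update is a linear solve and the z-update is a closed-form soft threshold, and a second splitting variable for the wavelet-L1 term extends the same pattern to the full objective.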