Finite Set Statistics Based Multitarget Tracking
Abstract not provided.
Abstract not provided.
IEEE Aerospace Conference Proceedings
A method for tracking streaking targets (targets whose signatures are spread across multiple pixels in a focal plane array) is developed. The outputs of a bank of matched filters are thresholded and then used for measurement extraction. The use of the Deep Target Extractor (DTE, previously called the MLPMHT) allows for tracking in the very low observable (VLO) environment common when a streaking target is present. A definition of moving-target signal-to-noise ratio (MT-SNR) is also presented as a metric for trackability. The extraction algorithm and the DTE are then tested across several variables, including trajectory, MT-SNR, and streak length. The DTE and the measurement extraction process perform remarkably well on these data in this difficult tracking environment.
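The pipeline above — correlate the frame with a bank of streak-shaped matched filters, threshold the outputs, and report peak locations as measurements — can be sketched as follows. The template shape, bank size, and threshold rule here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def streak_template(length, angle_deg, size=15):
    """Unit-energy line template: a streak of `length` pixels at `angle_deg`."""
    t = np.zeros((size, size))
    c = size // 2
    th = np.deg2rad(angle_deg)
    for s in np.linspace(-length / 2.0, length / 2.0, 4 * length):
        r = int(round(c + s * np.sin(th)))
        col = int(round(c + s * np.cos(th)))
        if 0 <= r < size and 0 <= col < size:
            t[r, col] = 1.0
    return t / np.linalg.norm(t)

def extract_measurements(frame, templates, threshold):
    """Slide each template over the frame; report center pixels whose
    best matched-filter output exceeds the threshold."""
    H, W = frame.shape
    best = np.full((H, W), -np.inf)
    for t in templates:
        h, w = t.shape
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                score = float(np.sum(frame[i:i + h, j:j + w] * t))
                r, c = i + h // 2, j + w // 2
                if score > best[r, c]:
                    best[r, c] = score
    return [(int(r), int(c)) for r, c in np.argwhere(best > threshold)]
```

In practice the bank would span a range of streak lengths and orientations, with the threshold set by the desired false-alarm rate at the operating MT-SNR.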
Abstract not provided.
Abstract not provided.
Abstract not provided.
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, it will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied toward detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Chemometrics and Intelligent Laboratory Systems
Multivariate curve resolution (MCR) is a useful and important analysis tool for extracting quantitative information from hyperspectral image data. However, in the case of hyperspectral fluorescence microscope images acquired with CCD-type technologies, cosmic spikes and the presence of detector artifacts in the spectral data can make the extraction of the pure-component spectra and their relative concentrations challenging when applying MCR to the images. In this paper, we present new generalized and automated approaches for preprocessing spectral image data to improve the robustness of the MCR analysis of spectral images. These novel preprocessing steps remove cosmic spikes, correct for the presence of detector offsets and structured noise as well as select spectral and spatial regions to reduce the detrimental effects of detector noise. These preprocessing and MCR analysis techniques incorporate the use of an optical filter to prevent light from impinging on a small number of spectral pixels in the CCD detector. This dark spectral region can be incorporated into any spectral imaging system to enhance modeling of detector offset and structured noise components as well as the automated selection of spatial regions to restrict the analysis to only those regions containing viable spectral information. The success of these automated preprocessing methods combined with new MCR modeling approaches are demonstrated with realistically simulated data derived from spectral images of macrophage cells with green fluorescence protein (GFP). Further, we demonstrate using spectral images from the green alga, Chlorella, approaches for the analyses when fluorescent species with widely different relative spectral intensities are present in the image. 
We believe that the preprocessing and MCR approaches introduced in this paper can be generalized to several other hyperspectral image technologies and can improve the success of automated MCR analyses with little or no a priori information required about the spectral components present in the samples. © 2012 Elsevier B.V.
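As one illustration of the kind of automated preprocessing described above, cosmic spikes can be removed by flagging spectral channels that deviate strongly from a running median. This is a minimal sketch under assumed parameters (window width, robust 5-sigma rule), not the authors' algorithm:

```python
import numpy as np

def remove_cosmic_spikes(spectra, window=5, nsig=5.0):
    """Replace spectral outliers with a running median.
    spectra: (n_pixels, n_channels) array; window: median window width;
    nsig: robust sigma multiple for flagging a spike."""
    out = np.array(spectra, dtype=float)
    half = window // 2
    for i, s in enumerate(out):
        # running median over each spectrum
        med = np.array([np.median(s[max(0, k - half):k + half + 1])
                        for k in range(len(s))])
        resid = s - med
        sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12  # robust scale (MAD)
        spikes = np.abs(resid) > nsig * sigma
        out[i, spikes] = med[spikes]                        # repair flagged channels
    return out
```

A despiking step like this would run before the detector-offset and structured-noise corrections, so that single-channel transients do not bias the MCR factors.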
A considerable amount of research is being conducted on microalgae, since microalgae are becoming a promising source of renewable energy. Most of this research is centered on lipid production in microalgae because microalgae produce triacylglycerol, which is ideal for biodiesel fuels. Although we are interested in research to increase lipid production in algae, we are also interested in research to sustain healthy algal cultures in large-scale biomass production farms or facilities. The early detection of fluctuations in algal health, productivity, and invasive predators must be developed to ensure that algae are an efficient and cost-effective source of biofuel. Therefore, we are developing technologies to monitor the health of algae using spectroscopic measurements in the field. To do this, we have proposed to spectroscopically monitor large algal cultivations using LIDAR (Light Detection And Ranging) remote sensing technology. Before we can deploy this type of technology, we must first characterize the spectral bio-signatures that are related to algal health. Recently, we have adapted our confocal hyperspectral imaging microscope at Sandia to have two-photon excitation capabilities using a Chameleon tunable laser. We are using this microscope to understand the spectroscopic signatures necessary to characterize microalgae at the cellular level prior to using these signatures to classify the health of bulk samples, with the eventual goal of using LIDAR to monitor large-scale ponds and raceways. By imaging algal cultures using a tunable laser to excite at several different wavelengths, we will be able to select the optimal excitation/emission wavelengths needed to characterize algal cultures. To analyze the hyperspectral images generated from this two-photon microscope, we are using Multivariate Curve Resolution (MCR) algorithms to extract the spectral signatures and their associated relative intensities from the data.
For this presentation, I will show our two-photon hyperspectral imaging results on a variety of microalgae species and show how these results can be used to characterize algal ponds and raceways.
Line-of-sight jitter in staring sensor data, combined with scene information, can obscure critical information for change analysis or target detection. Consequently, the jitter effects must be significantly reduced before the data are analyzed. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however, PCA requires image frames that contain the jitter variation that is to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all the variation without the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene. It then computes derivatives of that image in the horizontal and vertical directions. The basis set for estimation of the background and the jitter consists of the image and its derivative factors. This approach has several advantages, including: (1) only a small number of images are required to develop the model, (2) the model can estimate backgrounds with jitter different from that in the input training images, (3) the method is particularly effective for sub-pixel jitter, and (4) the model can be developed from images acquired before the change detection process. In addition, the scores from projecting the factors onto the background provide estimates of the jitter magnitude and direction for registration of the images. In this paper we present a discussion of the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
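The derivative-basis idea can be made concrete with a first-order Taylor expansion: a frame jittered by a sub-pixel shift (dx, dy) is approximately I + dx*dI/dx + dy*dI/dy, so a least-squares projection of a new frame onto the image and its two derivatives yields both a background estimate and the jitter scores. A minimal sketch (function name and formulation are illustrative):

```python
import numpy as np

def jitter_estimate(reference, frame):
    """Least-squares fit of a frame onto the [image, d/dx, d/dy] basis;
    returns the estimated horizontal and vertical jitter in pixels."""
    Iy, Ix = np.gradient(reference)          # axis 0 = vertical, axis 1 = horizontal
    A = np.column_stack([reference.ravel(), Ix.ravel(), Iy.ravel()])
    coef, *_ = np.linalg.lstsq(A, frame.ravel(), rcond=None)
    gain, dx, dy = coef                      # gain ~ 1 for an unchanged scene
    return dx, dy
```

The fitted coefficients on the derivative factors are exactly the "scores" mentioned above: they give the shift magnitude and direction needed to register the frame.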
Abstract not provided.
With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
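The scheme described above can be illustrated with a one-level orthonormal Haar transform: detail (high-pass) coefficients below an amplitude threshold are zeroed, leaving a sparse, low-entropy coefficient set while the low-frequency content survives nearly intact. This is a minimal sketch, not the wavelet family or decomposition depth used in the study:

```python
import numpy as np

def haar_forward(x):
    """One-level orthonormal Haar transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, thresh):
    """Zero the low-amplitude, high-frequency (noise-dominated) coefficients."""
    a, d = haar_forward(x)
    return a, np.where(np.abs(d) < thresh, 0.0, d)
```

The thresholded coefficient arrays, dominated by zeros, are what a subsequent lossless encoder would exploit to reach the higher compression factors reported above.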
Abstract not provided.
Journal of Chemometrics
The combination of hyperspectral confocal fluorescence microscopy and multivariate curve resolution (MCR) provides an ideal system for improved quantitative imaging when multiple fluorophores are present. However, the presence of multiple noise sources limits the ability of MCR to accurately extract pure-component spectra when there is high spectral and/or spatial overlap between multiple fluorophores. Previously, MCR results were improved by weighting the spectral images for Poisson-distributed noise, but additional noise sources are often present. We have identified and quantified all the major noise sources in hyperspectral fluorescence images. Two primary noise sources were found: Poisson-distributed noise and detector-read noise. We present methods to quantify detector-read noise variance and to empirically determine the electron multiplying CCD (EMCCD) gain factor required to compute the Poisson noise variance. We have found that properly weighting spectral image data to account for both noise sources improved MCR accuracy. In this paper, we demonstrate three weighting schemes applied to a real hyperspectral corn leaf image and to simulated data based upon this same image. MCR applied to both real and simulated hyperspectral images weighted to compensate for the two major noise sources greatly improved the extracted pure emission spectra and their concentrations relative to MCR with either unweighted or Poisson-only weighted data. Thus, properly identifying and accounting for the major noise sources in hyperspectral images can serve to improve the MCR results. These methods are very general and can be applied to the multivariate analysis of spectral images whenever CCD or EMCCD detectors are used. Copyright © 2008 John Wiley & Sons, Ltd.
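A minimal sketch of the two-source weighting described above: each spectral-image element is weighted by the inverse standard deviation of its total noise, modeled as detector-read variance plus a Poisson term scaled by the empirically determined EMCCD gain. The variance model here is a simplified assumption (e.g., the EMCCD excess-noise factor is folded into the effective gain):

```python
import numpy as np

def noise_weights(counts, read_var, gain):
    """Inverse-sigma weights for spectral-image data whose noise variance is
    modeled as read_var (detector-read noise, counts^2) plus gain * counts
    (Poisson shot noise scaled by the empirical EMCCD gain factor)."""
    var = read_var + gain * np.maximum(np.asarray(counts, dtype=float), 0.0)
    return 1.0 / np.sqrt(var)
```

The weighted data (weights multiplied element-wise into the spectra) would then be passed to the MCR solver, so that bright, shot-noise-dominated channels do not dominate the fit.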
Abstract not provided.
Journal of Applied Spectroscopy
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
LMPC 2005 - Proceedings of the 2005 International Symposium on Liquid Metal Processing and Casting
A numerical model of the ESR process was used to study the effect of the various process parameters on the resulting temperature profiles, flow field, and pool shapes. The computational domain included the slag and ingot, while the electrode, crucible, and cooling water were considered as external boundary conditions. The model considered heat transfer, fluid flow, solidification, and electromagnetic effects. The predicted pool profiles were compared with experimental results obtained over a range of processing parameters from an industrial-scale 718 alloy ingot. The shape of the melt pool was marked by dropping nickel balls down the annulus of the crucible during melting. Thermocouples placed in the electrode monitored the electrode and slag temperature as melting progressed. The cooling water temperature and flow rate were also monitored. The resulting ingots were sectioned and etched to reveal the ingot macrostructure and the shape of the melt pool. Comparisons of the predicted and experimentally measured pool profiles show excellent agreement. The effect of processing parameters, including the slag cap thickness, on the temperature distribution and flow field are discussed. The results of a sensitivity study of thermophysical properties of the slag are also discussed.
Abstract not provided.
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring near-infrared calibrations between spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and ease of use makes the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
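For context, the classical least squares step underlying the hybrid and augmented methods can be sketched in a few lines: pure-component spectra K are estimated from calibration spectra S and known concentrations C, and prediction projects new spectra back onto K. This is the generic CLS formulation, not the ACLS code developed in the project:

```python
import numpy as np

def cls_calibrate(C, S):
    """Estimate pure-component spectra by least squares.
    C: (n_samples, n_components) concentrations; S: (n_samples, n_channels)
    spectra. Returns K: (n_components, n_channels)."""
    K, *_ = np.linalg.lstsq(C, S, rcond=None)
    return K

def cls_predict(K, S_new):
    """Project new spectra onto the pure-component spectra to predict
    concentrations. Returns (n_samples, n_components)."""
    Chat, *_ = np.linalg.lstsq(K.T, S_new.T, rcond=None)
    return Chat.T
```

ACLS generalizes this scheme by augmenting K with additional spectral shapes (drift, unmodeled components) so the explicit-model advantages of CLS are retained without requiring that every component's concentration be known.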
Abstract not provided.
Abstract not provided.
Proposed for publication in Applied Spectroscopy.
A manuscript describing the work summarized below has been submitted to Applied Spectroscopy. Comparisons of prediction models from the new ACLS and PLS multivariate spectral analysis methods were conducted using simulated data with deviations from the idealized model. Simulated uncorrelated concentration errors and uncorrelated and correlated spectral noise were included to evaluate the methods in situations representative of experimental data. The simulations were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions containing glucose, urea, ethanol, and NaCl in the concentration range from 0 to 500 mg/dL. The statistical significance of differences was evaluated using the Wilcoxon signed rank test. The prediction abilities with nonlinearities present were similar for both calibration methods, although concentration noise, number of samples, and spectral noise distribution sometimes affected one method more than the other. In the case of ideal errors and in the presence of nonlinear spectral responses, the differences between the standard errors of prediction of the two methods were sometimes statistically significant, but the differences were always small in magnitude. Importantly, spectral residual augmented classical least squares (SRACLS) was found to be competitive with PLS when concentrations were known for only a single component. Thus, SRACLS has a distinct advantage over standard CLS methods, which require that all spectral components be included in the model. In contrast to the simulations with ideal error, SRACLS often generated models with superior prediction performance relative to PLS when the simulations were more realistic and included non-uniform and/or correlated errors.
Since the generalized ACLS algorithm is compatible with the PACLS method that allows rapid updating of models during prediction, the powerful combination of PACLS with ACLS is very promising for rapidly maintaining and transferring models for system drift, spectrometer differences, and unmodeled components without the need for recalibration. The comparisons under different noise assumptions in the simulations obtained during this investigation emphasize the need to use realistic simulations when making comparisons between various multivariate calibration methods. Clearly, the conclusions of the relative performance of various methods were found to be dependent on how realistic the spectral errors were in the simulated data. Results demonstrating the simplicity and power of ACLS relative to PLS are presented in the following section.
Abstract not provided.
Abstract not provided.
TMS Annual Meeting
Optimal estimation theory has been applied to the problem of estimating process variables during vacuum arc remelting (VAR), a process widely used in the specialty metals industry to cast large ingots of segregation-sensitive and/or reactive metal alloys. Four state variables were used to develop a simple state-space model of the VAR process: electrode gap (G), electrode mass (M), electrode position (X), and electrode melting rate (R). The optimal estimator consists of a Kalman filter that incorporates the model and uses electrode feed rate and measurement-based estimates of G, M, and X to produce optimal estimates of all four state variables. Simulations show that the filter provides estimates that have error variances between one and three orders of magnitude less than estimates based solely on measurements. Examples are presented that verify this for electrode gap, an extremely important control parameter for the process.
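The estimator structure can be illustrated with a generic linear Kalman predict/update cycle applied to a toy two-state model (gap G and a melt-driven gap rate). The model matrices, constants, and noise levels below are illustrative assumptions, not the four-state VAR model of the paper:

```python
import numpy as np

def kalman_step(x, P, u, z, F, B, H, Q, Rm):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state estimate and covariance; u: control input (e.g., drive speed);
    z: measurement vector; F, B, H: model matrices; Q, Rm: noise covariances."""
    x = F @ x + B * u                  # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + Rm               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # update state
    P = (np.eye(len(x)) - K @ H) @ P   # update covariance
    return x, P
```

In the toy model below, the gap grows with the melt-driven rate and shrinks with the electrode drive speed u; only the noisy gap is measured, yet the filter recovers both states, mirroring how the VAR filter sharpens measurement-based estimates.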
Applied Spectroscopy
Abstract not provided.
Metallurgical and Materials Transactions B
Electrode gap is a very important parameter for the safe and successful control of vacuum arc remelting (VAR), a process used extensively throughout the specialty metals industry for the production of nickel base alloys and aerospace titanium alloys. Optimal estimation theory has been applied to the problem of estimating electrode gap, and a filter has been developed based on a model of the gap dynamics. Taking into account the uncertainty in the process inputs and noise in the measured process variables, the filter provides corrected estimates of electrode gap that have error variances two to three orders of magnitude less than estimates based solely on measurements for the sample times of interest. This is demonstrated through simulations and confirmed by tests on the VAR furnace at Sandia National Laboratories. Furthermore, the estimates are inherently stable against common process disturbances that affect electrode gap measurement and melting rate. This is not only important for preventing (or minimizing) the formation of solidification defects during VAR of nickel base alloys, but also for high-current processing of titanium alloys, where loss of gap control can lead to a catastrophic, explosive failure of the process.
Metallurgical Transactions B
This research involves the measurement of the electrical conductivity (K) of the ESR (electroslag remelting) slag (60 wt.% CaF2 - 20 wt.% CaO - 20 wt.% Al2O3) used in the decontamination of radioactive stainless steel. The electrical conductivity is measured with an improved, high-accuracy height-differential technique that requires no calibration. This method consists of making continuous AC impedance measurements over several successive depth increments of the coaxial cylindrical electrodes in the ESR slag. The electrical conductivity is then calculated from the slope of the plot of inverse impedance versus the depth of the electrodes in the slag. The improvements on the existing technique include an enlarged electrochemical cell geometry and the capability of measuring high-precision depth increments and the associated impedances. These improvements allow the technique to be used for measuring the electrical conductivity of highly conductive slags such as the ESR slag. The volatilization rate and the volatile species of the ESR slag, measured through thermogravimetric (TG) analysis and mass spectrometry, respectively, reveal that the ESR slag composition remains essentially the same throughout the electrical conductivity experiments.
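The slope-based calculation can be sketched directly: for coaxial cylindrical electrodes immersed to depth h, the conductance is G = K * 2*pi*h / ln(r_outer/r_inner), so the conductivity follows from the least-squares slope of 1/Z versus depth with no calibration constant. The geometry values in the example are illustrative assumptions:

```python
import math

def conductivity_from_slope(depths, inv_impedances, r_inner, r_outer):
    """Least-squares slope of 1/Z versus immersion depth, converted to
    conductivity (S/m) using the coaxial-cylinder cell geometry."""
    n = len(depths)
    mx = sum(depths) / n
    my = sum(inv_impedances) / n
    slope = (sum((d - mx) * (y - my) for d, y in zip(depths, inv_impedances))
             / sum((d - mx) ** 2 for d in depths))
    # G = kappa * 2*pi*h / ln(r_o/r_i)  =>  kappa = slope * ln(r_o/r_i) / (2*pi)
    return slope * math.log(r_outer / r_inner) / (2.0 * math.pi)
```

Because only the slope enters, any depth-independent contact or lead impedance cancels, which is why the technique needs no calibration.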
Applied Spectroscopy
A significant improvement to the classical least-squares (CLS) multivariate analysis method has been developed. The new method, called prediction-augmented classical least-squares (PACLS), removes the restriction for CLS that all interfering spectral species must be known and their concentrations included during the calibration. We demonstrate that PACLS can correct inadequate CLS models if spectral components left out of the calibration can be identified and if their 'spectral shapes' can be derived and added during a PACLS prediction step. The new PACLS method is demonstrated for a system of dilute aqueous solutions containing urea, creatinine, and NaCl analytes with and without temperature variations. We demonstrate that if CLS calibrations are performed with only a single analyte's concentrations, then there is little, if any, prediction ability. However, if pure-component spectra of analytes left out of the calibration are independently obtained and added during PACLS prediction, then the CLS prediction ability is corrected and predictions become comparable to that of a CLS calibration that contains all analyte concentrations. It is also demonstrated that constant-temperature CLS models can be used to predict variable-temperature data by employing the PACLS method augmented by the spectral shape of a temperature change of the water solvent. In this case, PACLS can also be used to predict sample temperature with a standard error of prediction of 0.07°C even though the calibration data did not contain temperature variations. The PACLS method is also shown to be capable of modeling system drift to maintain a calibration in the presence of spectrometer drift.
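The PACLS prediction step can be sketched as follows: the calibration-time pure-component matrix is augmented with independently obtained spectral shapes (an unmodeled interferent, a temperature-difference spectrum, or a drift shape) before the least-squares prediction. Function and variable names are illustrative:

```python
import numpy as np

def pacls_predict(K_cal, extra_shapes, S_new):
    """Augment the calibration pure-component spectra with extra spectral
    shapes, then predict by least squares.
    K_cal: (n_cal_components, n_channels); extra_shapes: (n_extra, n_channels);
    S_new: (n_samples, n_channels). Returns coefficients for all components."""
    K_aug = np.vstack([K_cal, extra_shapes])
    coef, *_ = np.linalg.lstsq(K_aug.T, S_new.T, rcond=None)
    return coef.T
```

Because the augmentation happens at prediction time, the original calibration is untouched: the same K_cal can absorb a new interferent or a temperature change simply by stacking in its spectral shape.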
Applied Spectroscopy
The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis: when one or more interfering elements were present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration).
The MWCLS method is found to be vastly superior to partial least squares (PLS) in this case of limited numbers of calibration samples.
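One statistically efficient way to pool per-window results — a plausible reading of the pooling described above, though the exact weighting used is not stated here — is the inverse-variance weighted mean, the minimum-variance unbiased combination of independent estimates:

```python
def pool_windows(estimates, variances):
    """Combine independent per-window concentration estimates by
    inverse-variance weighting.
    estimates[i], variances[i]: estimate and its variance from window i.
    Returns (pooled estimate, pooled variance)."""
    wsum = sum(1.0 / v for v in variances)
    pooled = sum(e / v for e, v in zip(estimates, variances)) / wsum
    return pooled, 1.0 / wsum
```

A window corrupted by a strong interference carries a large variance and is automatically down-weighted, which is how a single pooled value can out-perform any hand-picked window.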
With the demonstration of the viability of using the electroslag remelting process for the decontamination of radionuclides, interest has increased in examining the unique aspects associated with melting steel pipe electrodes. These electrodes consist of several nested pipes, welded concentrically to a top plate. Since these electrodes can be half as dense as a solid electrode, they present unique challenges to the standard algorithms used in controlling the melting process. Naturally, the electrode must be driven down at a dramatically increased speed. However, since the heat transfer is greatly influenced and enhanced by the increased area-to-volume ratio, considerable variation in the melting rate of the pipes has been found. Standard control methods can become unstable as a result of this variation at increased speeds, particularly at shallow immersion depths. The key to good control lies in understanding the melting process. Several experiments were conducted to observe the characteristics of the melting using two different control modes. By using a pressure transducer to monitor the pressure inside the pipes, the venting of the air trapped inside the electrode was observed. The measurements reveal that for a considerable amount of time, the pipes are not completely immersed in the slag, allowing the gas inside to escape without the formation of bubbles. This result has implications for the voltage swing as well as for the decontamination reactions.