Publications

Results 26–50 of 54

Data-driven uncertainty quantification for multisensor analytics

Proceedings of SPIE - The International Society for Optical Engineering

Stracuzzi, David J.; Darling, Michael C.; Chen, Maximillian G.; Peterson, Matthew G.

We discuss uncertainty quantification in multisensor data integration and analysis, including estimation methods and the role of uncertainty in decision making and trust in automated analytics. The challenges associated with automatically aggregating information across multiple images, identifying subtle contextual cues, and detecting small changes in noisy activity patterns are well-established in the intelligence, surveillance, and reconnaissance (ISR) community. In practice, such questions cannot be adequately addressed with discrete counting, hard classifications, or yes/no answers. For a variety of reasons ranging from data quality to modeling assumptions to inadequate definitions of what constitutes "interesting" activity, variability is inherent in the output of automated analytics, yet it is rarely reported. Consideration of these uncertainties can provide nuance to automated analyses and engender trust in their results. In this work, we assert the importance of uncertainty quantification for automated data analytics and outline a research agenda. We begin by defining uncertainty in the context of machine learning and statistical data analysis, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues and discuss methods for data-driven uncertainty quantification in the context of a multi-source image analysis example. We conclude by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics, including sensor tasking and analyst trust in automated analytics.
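The abstract's point that hard counts and yes/no answers hide inherent variability can be illustrated with a small sketch: instead of reporting a single fraction of detections above a decision threshold, bootstrap resampling yields an interval that conveys how much that fraction could vary. The scores, threshold, and function name below are hypothetical illustrations, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_interval(scores, threshold, n_boot=1000, alpha=0.05):
    """Report the fraction of detector scores above a threshold
    together with a bootstrap (1 - alpha) interval, rather than
    a single hard count."""
    scores = np.asarray(scores)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        # Resample the scores with replacement and recompute the fraction
        sample = rng.choice(scores, size=scores.size, replace=True)
        stats[i] = np.mean(sample > threshold)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return np.mean(scores > threshold), (lo, hi)

# Simulated per-object confidences from an automated image analytic
scores = rng.normal(loc=0.6, scale=0.2, size=200)
point, (lo, hi) = bootstrap_interval(scores, threshold=0.5)
```

Reporting `point` alongside `(lo, hi)` gives an analyst the nuance the abstract argues for: a wide interval signals that the analytic's answer should be trusted less.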


Adaptive Self-Tuning of Signal Detection Parameters

Draelos, Timothy J.; Peterson, Matthew G.; Knox, Hunter A.; Lawry, Benjamin J.; Philips-Alonge, Kristin; Ziegler, Abra; Chael, Eric; Young, Christopher J.; Faust, Aleksandra

The quality of automatic detections from sensor networks depends on a large number of data processing parameters that interact in complex ways. Identifying effective parameters is a largely manual, painstaking process that does not guarantee an optimal configuration, yet detection performance depends directly on these settings. We present an automated sensor tuning (AST) system that tunes effective parameter settings for each sensor detector to the current state of the environment by leveraging cooperation within a neighborhood of sensors. After a stabilization period, the AST algorithm can adapt in near real-time to changing conditions and automatically self-tune a signal detector to identify (detect) only signals from events of interest. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Our current work focuses on reducing false signal detections early in the seismic signal processing pipeline, which leads to fewer false events and has a significant impact on reducing analyst time and effort. Applicable both to boosting the performance of existing sensors and to deploying new ones, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections from a seismic sensor network monitoring the Mount Erebus Volcano in Antarctica, we show that AST increases the probability of detection while decreasing false alarms.
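The feedback loop the abstract describes can be sketched in heavily simplified form: a single detector parameter (the trigger threshold) is nudged using labeled ground truth, raised after false alarms and lowered after missed events. The function name, learning rate, and update rule below are illustrative assumptions, not the AST algorithm itself.

```python
import numpy as np

def self_tune_threshold(scores, labels, threshold=6.0, lr=0.05, epochs=20):
    """Adapt a detector's trigger threshold from labeled feedback:
    raise it after a false alarm, lower it after a missed event.
    (Illustrative stand-in for the paper's AST tuning loop.)"""
    for _ in range(epochs):
        for s, is_event in zip(scores, labels):
            triggered = s > threshold
            if triggered and not is_event:     # false alarm: be stricter
                threshold += lr
            elif not triggered and is_event:   # missed event: be looser
                threshold -= lr
    return threshold

# Toy detector scores: low for noise windows, high for real events
scores = np.array([1.0, 1.2, 0.9, 5.0, 4.8])
labels = [False, False, False, True, True]
tuned = self_tune_threshold(scores, labels, threshold=6.0)
```

Starting from a deliberately strict threshold of 6.0, the loop settles between the noise scores and the event scores, so all events trigger and no noise does. The real AST system tunes many interacting parameters and uses neighboring sensors' detections as feedback rather than hand labels.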


Uncertainty Quantification for Machine Learning

Stracuzzi, David J.; Chen, Maximillian G.; Darling, Michael C.; Peterson, Matthew G.; Vollmer, Charles V.

In this paper, we assert the importance of uncertainty quantification for machine learning and sketch an initial research agenda. We define uncertainty in the context of machine learning, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues with an image analysis example. The paper concludes by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics in general.
