Publication Details
Improving and Assessing the Quality of Uncertainty Quantification in Deep Learning
Adams, Jason R.; Baiyasi, Rashad; Berman, Brandon; Darling, Michael C.; Ganter, Tyler; Michalenko, Joshua J.; Patel, Lekha; Ries, Daniel; Liang, Feng; Qian, Christopher; Roy, Krishna
Deep learning (DL) models have enjoyed increased attention in recent years because of their powerful predictive capabilities. Despite many successes, standard deep learning methods suffer from a lack of uncertainty quantification (UQ). While the development of methods for producing UQ from DL models is an active area of research, little attention has been given to the quality of the UQ those methods produce. Deploying DL models in high-consequence applications requires high-quality UQ. This report details the research and development conducted as part of a Laboratory Directed Research and Development (LDRD) project at Sandia National Laboratories. The focus of the project is to develop a framework of methods and metrics for the principled assessment of UQ quality in DL models. The report presents an overview of UQ quality assessment in traditional statistical modeling and describes why this approach is difficult to apply in DL contexts. An assessment using relatively simple simulated data demonstrates that UQ quality can differ greatly between DL models trained on the same data. A method for simulating image data that can then be used for UQ quality assessment is described, along with a general method for simulating realistic data for the purpose of assessing a model’s UQ quality. A Bayesian framework for understanding uncertainty and existing metrics is described. Research from collaborations with two university partners is discussed, along with a software toolkit under development that implements the UQ quality assessment framework and serves as a general guide to incorporating UQ into DL applications.
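For illustration, one standard notion of UQ quality from traditional statistical modeling is the empirical coverage of prediction intervals: a model whose nominal 90% intervals contain the true value roughly 90% of the time is well calibrated, while a model whose intervals cover far less often is overconfident. The sketch below is a minimal, illustrative check only; the Gaussian-interval assumption, the synthetic data, and the function name check_interval_coverage are assumptions for this example and are not taken from the report or its toolkit.

import numpy as np
from scipy import stats

def check_interval_coverage(y_true, y_mean, y_std, nominal=0.90):
    """Compare empirical coverage of Gaussian prediction intervals
    to the nominal level (a simple UQ quality check)."""
    # Half-width multiplier for a central `nominal` interval under a Gaussian assumption
    z = stats.norm.ppf(0.5 + nominal / 2.0)
    lower = y_mean - z * y_std
    upper = y_mean + z * y_std
    # Fraction of true values falling inside their predicted interval
    return np.mean((y_true >= lower) & (y_true <= upper))

# Hypothetical predictive means/std-devs from two models fit to the same data
rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
model_a = {"mean": y_true + rng.normal(scale=0.1, size=1000), "std": np.full(1000, 0.10)}
model_b = {"mean": y_true + rng.normal(scale=0.1, size=1000), "std": np.full(1000, 0.02)}  # overconfident

for name, m in [("model_a", model_a), ("model_b", model_b)]:
    cov = check_interval_coverage(y_true, m["mean"], m["std"])
    print(f"{name}: empirical 90% coverage = {cov:.2f}")

In this toy comparison, both models have similar predictive accuracy, but only the first reports uncertainties whose intervals actually achieve close to the stated coverage, which is the kind of distinction a UQ quality assessment framework is meant to surface.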