Deep learning has been successfully applied to the segmentation of 3D computed tomography (CT) scans. Establishing the credibility of these segmentations requires uncertainty quantification (UQ) to identify untrustworthy predictions. Recent UQ architectures include Monte Carlo dropout networks (MCDNs), which approximate deep Gaussian processes, and Bayesian neural networks (BNNs), which learn a distribution over the weight space. BNNs are advantageous over MCDNs for UQ but have been considered computationally infeasible in high dimensions, and neither architecture has produced interpretable geometric uncertainty maps. We propose a novel 3D Bayesian convolutional neural network (BCNN), the first deep learning method that generates statistically credible geometric uncertainty maps and scales to 3D data. We present experimental results on CT scans of graphite electrodes and laser-welded metals and show that our BCNN outperforms an MCDN on recent uncertainty metrics. The geometric uncertainty maps generated by our BCNN capture distributions of sigmoid values that are interpretable as confidence intervals, which is critical for applications that rely on deep learning for high-consequence decisions.
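To make the uncertainty-map idea concrete, the sketch below (our illustration, not code from the paper) uses Monte Carlo dropout in PyTorch as a stand-in for the stochastic forward passes: repeated sampled predictions yield a per-voxel sigmoid distribution whose mean gives the segmentation and whose standard deviation gives a geometric uncertainty map. The network, layer sizes, dropout rate, and sample count are all hypothetical; the paper's BCNN instead learns variational distributions over the convolution weights.

    import torch
    import torch.nn as nn

    class Tiny3DNet(nn.Module):
        """Toy 3D segmentation network; a BCNN would replace these
        deterministic convolutions with variational (weight-distribution) layers."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Dropout3d(p=0.2),  # stochastic layer, kept active at test time
                nn.Conv3d(8, 1, kernel_size=3, padding=1),  # one logit per voxel
            )

        def forward(self, x):
            return self.body(x)

    @torch.no_grad()
    def uncertainty_map(model, volume, n_samples=20):
        # Keep dropout active so each forward pass is a stochastic sample.
        model.train()
        probs = torch.stack([torch.sigmoid(model(volume)) for _ in range(n_samples)])
        # Mean = segmentation estimate; std = geometric uncertainty map.
        return probs.mean(dim=0), probs.std(dim=0)

    mean_seg, unc = uncertainty_map(Tiny3DNet(), torch.randn(1, 1, 32, 32, 32))

Thresholding unc highlights voxels where the sigmoid distribution is wide, i.e., where a confidence interval on the predicted label would be loose.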
The development of new materials and the predictive capability for component performance hinge on the ability to accurately digitize "as-built" geometries. X-ray computed tomography (CT) offers a non-destructive method of capturing these details, but current methodologies cannot produce the fidelity required for critical component certification. This project focused on discovering the limitations of existing CT reconstruction algorithms and exploring machine learning (ML) methodologies to overcome them. We found that existing CT reconstruction methods are insufficient for Sandia's critical component certification process and that ML algorithms offer a viable path toward improving the quality of CT images.
Deep learning segmentation models are known to be sensitive to the scale, contrast, and distribution of pixel values when applied to computed tomography (CT) images. For material samples, scans are often obtained from a variety of scanning equipment at different resolutions, resulting in domain shift. The ability of segmentation models to generalize to examples from these shifted domains depends on how well the distribution of the training data represents the overall distribution of the target data. We present a method to overcome the challenges posed by domain shift. Our results indicate that a deep learning model trained on one domain can be leveraged to accurately segment similar materials at different resolutions by refining its binary predictions using uncertainty quantification (UQ). We apply this technique to a set of unlabeled CT scans of woven composite materials and observe clear qualitative improvement of the refined binary segmentations over the original deep learning predictions. In contrast to prior work, our technique produces refined segmentations without the additional training time and parameters of deep learning models designed to address domain shift.
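The abstract does not specify the refinement rule, so the following is only a plausible sketch of UQ-based refinement: voxels whose predictive uncertainty (e.g., the per-voxel standard deviation from the previous sketch) exceeds a threshold are relabeled by a local majority vote over their confident neighbors. The threshold, neighborhood radius, and vote rule are all assumptions for illustration.

    import numpy as np
    from scipy import ndimage

    def refine_segmentation(mean_prob, uncertainty, unc_thresh=0.2, vote_radius=2):
        """Relabel high-uncertainty voxels from a local vote of confident voxels.

        mean_prob, uncertainty: per-voxel sigmoid mean and standard deviation
        (e.g., from repeated stochastic forward passes of a segmentation model).
        """
        binary = mean_prob > 0.5
        confident = uncertainty <= unc_thresh
        size = 2 * vote_radius + 1

        # Fraction of confident foreground votes in each voxel's neighborhood.
        fg_votes = ndimage.uniform_filter((binary & confident).astype(float), size=size)
        n_voters = ndimage.uniform_filter(confident.astype(float), size=size)
        neighborhood_label = fg_votes / np.maximum(n_voters, 1e-6) > 0.5

        # Keep confident labels; replace uncertain ones with the local vote.
        return np.where(confident, binary, neighborhood_label)

Because the vote draws only on voxels the model is already confident about, the refinement changes labels only where the original prediction was least trustworthy, leaving confident regions of the segmentation untouched.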