The complex nature of manufacturing processes means that electrodes exhibit high variability and increased heterogeneity during production. X-ray computed tomography imaging has proved critical in visualizing the complicated stochastic particle distribution of as-manufactured electrodes in lithium-ion batteries. However, accurate prediction of their electrochemical performance necessitates precise evaluation of kinetic and transport properties from real electrodes. Image segmentation, which assigns voxels to the particle or pore phase, is often laborious and fraught with subjectivity owing to a myriad of unconstrained parameter choices and filter algorithms. We utilize a Bayesian convolutional neural network to tackle segmentation subjectivity and quantify its pertinent uncertainties. The Otsu inter-class variance and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) are used to assess the relative image quality of grayscale tomograms and thereby evaluate the uncertainty in the derived microstructural attributes. We analyze how image uncertainty correlates with the uncertainties and magnitudes of the kinetic and transport properties of an electrode, further identifying pathways of uncertainty propagation within microstructural attributes. The coupled effect of spatial heterogeneity and microstructural anisotropy on the uncertainty quantification of transport parameters is also examined. This work demonstrates a novel methodology to extract microstructural descriptors from real electrode images by quantifying the associated uncertainties and discerning the relative strength of their propagation, thereby facilitating feedback to manufacturing processes from accurate image-based electrochemical simulations.
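As a concrete illustration of one of the image-quality metrics named above, the following sketch computes Otsu's maximal inter-class variance for a grayscale slice. It is a minimal stand-in, not the authors' pipeline; the function name, bin count, and synthetic test image are hypothetical.

```python
# Illustrative sketch (not the authors' code): Otsu's maximal inter-class
# variance for a grayscale tomogram slice, a simple proxy for how separable
# the particle and pore phases are. All names and data here are hypothetical.
import numpy as np

def otsu_interclass_variance(image, bins=256):
    """Return the Otsu threshold and the maximal inter-class variance of `image`."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()              # probability mass per gray-level bin
    centers = 0.5 * (edges[:-1] + edges[1:])

    w0 = np.cumsum(p)                                # weight of the lower (pore) class
    w1 = 1.0 - w0                                    # weight of the upper (particle) class
    cum_mean = np.cumsum(p * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.maximum(w0, 1e-12)           # mean of the lower class
    mu1 = (mu_total - cum_mean) / np.maximum(w1, 1e-12)

    sigma_b2 = w0 * w1 * (mu0 - mu1) ** 2            # inter-class variance per candidate threshold
    k = np.argmax(sigma_b2)
    return centers[k], sigma_b2[k]

# Synthetic two-phase slice with noise, as a usage example
rng = np.random.default_rng(0)
slice_ = np.where(rng.random((128, 128)) > 0.5, 0.8, 0.2) + 0.05 * rng.standard_normal((128, 128))
threshold, variance = otsu_interclass_variance(slice_)
print(f"Otsu threshold = {threshold:.3f}, inter-class variance = {variance:.4f}")
```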
As the prospect of exceeding the global temperature targets set forth in the Paris Agreement becomes more likely, methods of climate intervention are increasingly being explored. With this increased interest comes a need for an assessment process that can characterize the range of impacts across different scenarios against a set of performance goals in order to support policy decisions. The methodology and tools developed for Performance Assessment (PA) of nuclear waste repositories share many similarities with the needs and requirements of a framework for climate intervention. Using PA, we outline and test an evaluation framework for climate intervention, called Performance Assessment for Climate Intervention (PACI), with a focus on Stratospheric Aerosol Injection (SAI). We define a set of key technical components for the example PACI framework: identifying performance goals, defining the extent of the system, and determining which features, events, and processes are relevant and impactful to calculating model output for the system given those goals. Having identified a set of performance goals, the performance of the system, including uncertainty, can then be evaluated against them. Using the Geoengineering Large Ensemble (GLENS) scenario, we develop a set of performance goals for monthly temperature, precipitation, drought index, soil water, solar flux, and surface runoff. The assessment assumes that targets may be framed in a risk-risk context via a risk ratio: the risk of exceeding the performance goal under the SAI scenario divided by the risk of exceeding the performance goal under the emissions scenario. From regional responses across multiple climate variables, it is then possible to assess which pathway carries lower risk relative to the goals. The assessment is not comprehensive but rather a demonstration of the evaluation of an SAI scenario. Future work is needed to develop a more complete assessment that would provide additional simulations covering parametric and aleatory uncertainty, enable a deeper understanding of impacts, inform scenario selection, and allow further refinement of the approach.
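The risk-ratio framing described above can be made concrete with a short sketch. The variable names, the 2.0 degC goal, and the synthetic samples below are illustrative assumptions and are not outputs of the GLENS analysis.

```python
# Minimal sketch of the risk-ratio framing: risk is estimated as the fraction
# of ensemble samples exceeding a performance goal, and the risk ratio compares
# the SAI scenario against the emissions-only scenario. Data are hypothetical.
import numpy as np

def exceedance_risk(samples, goal):
    """Fraction of samples exceeding the performance goal."""
    return float(np.mean(np.asarray(samples, dtype=float) > goal))

def risk_ratio(sai_samples, emissions_samples, goal):
    """Risk ratio < 1 indicates the SAI scenario carries lower risk for this goal."""
    r_sai = exceedance_risk(sai_samples, goal)
    r_emissions = exceedance_risk(emissions_samples, goal)
    return r_sai / r_emissions if r_emissions > 0 else np.inf

# Hypothetical monthly-mean temperature anomalies (degC) for one region
rng = np.random.default_rng(1)
sai = rng.normal(1.2, 0.4, size=1000)        # stand-in SAI scenario ensemble
emissions = rng.normal(2.5, 0.5, size=1000)  # stand-in high-emissions scenario ensemble
print("risk ratio at a 2.0 degC goal:", risk_ratio(sai, emissions, goal=2.0))
```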
As Machine Learning (ML) continues to advance, it is being integrated into more systems. Often, the ML component represents a significant portion of the system, reducing the burden on the end user or substantially improving task performance. However, the ML component models a complex phenomenon that is learned from collected data rather than explicitly programmed. Despite the improvement in task performance, these models are often black boxes. Evaluating the credibility and vulnerabilities of ML models remains a gap in current test and evaluation practice. For high-consequence applications, the lack of testing and evaluation procedures represents a significant source of uncertainty and risk. To help reduce that risk, here we present considerations, within a red-teaming-inspired methodology, for evaluating systems that embed an ML component. We focus on (1) cyber vulnerabilities to an ML model, (2) evaluating performance gaps, and (3) adversarial ML vulnerabilities.
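As one example of the kind of adversarial-ML check mentioned in item (3), the sketch below applies a fast gradient sign method (FGSM) perturbation to a toy classifier. The model, input, label, and epsilon are placeholders chosen for illustration; they are not drawn from the report.

```python
# Illustrative FGSM probe a red team might run against an ML component.
# Model, data, and epsilon are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in for a real input image
y = torch.tensor([3])                              # stand-in for the true label

# FGSM: perturb the input in the direction that maximally increases the loss
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.05
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

clean_pred = model(x).argmax(dim=1).item()
adv_pred = model(x_adv).argmax(dim=1).item()
print(f"clean prediction: {clean_pred}, adversarial prediction: {adv_pred}")
```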
Physics-Based Reduced Order Models (ROMs) tend to rely on projection-based reduction. This family of approaches uses a series of responses of the full-order model to assemble a suitable basis, which is then employed to formulate a set of equivalent, low-order equations through projection. However, in a nonlinear setting, physics-based ROMs require an additional approximation to circumvent the bottleneck of projecting and evaluating the nonlinear contributions on the reduced space. This scheme is termed hyper-reduction and enables a substantial reduction in computational time. Hyper-reduction implies a trade-off: accuracy in the mapping of the nonlinear terms is sacrificed to achieve rapid or even real-time evaluations of the ROM framework. Since evaluation time is critical, especially for digital twin representations in structural health monitoring applications, the hyper-reduction approximation serves as both a blessing and a curse. Our work scrutinizes the possibility of exploiting machine learning (ML) tools in place of hyper-reduction to derive more accurate surrogates of the nonlinear mapping. By retaining the POD-based reduction and introducing the ML-boosted surrogate(s) directly on the reduced coordinates, we aim to replace the projection and update of the nonlinear terms when integrating forward in time in the low-order space. Our approach is explored in a proof-of-concept case study based on a Nonlinear Auto-Regressive neural network with eXogenous inputs (NARX-NN), aiming to derive a physics-based ROM with superior efficiency, suitable for (near) real-time evaluations. The proposed ML-boosted ROM (N3-pROM) is validated on a multi-degree-of-freedom shear frame under ground-motion excitation featuring hysteretic nonlinearities.
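A minimal sketch of the overall idea, assuming a snapshot-based POD basis and using a linear least-squares map as a stand-in for the NARX-NN surrogate on the reduced coordinates, is given below. The array sizes, lag structure, and synthetic data are hypothetical.

```python
# Conceptual sketch (not the N3-pROM implementation): build a POD basis from
# full-order snapshots and fit a NARX-style one-step predictor on the reduced
# coordinates, using lagged reduced states and the exogenous ground-motion
# input as regressors. Data and dimensions are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_steps, r = 200, 500, 4

U_full = rng.standard_normal((n_dof, n_steps))          # stand-in full-order snapshots
ground_motion = rng.standard_normal(n_steps)            # exogenous input u(t)

# POD basis from the snapshot matrix (truncated SVD)
Phi, _, _ = np.linalg.svd(U_full, full_matrices=False)
Phi_r = Phi[:, :r]                                      # reduced basis, n_dof x r
q = Phi_r.T @ U_full                                    # reduced coordinates, r x n_steps

# NARX-style regression: q[t] ~ f(q[t-1], q[t-2], u[t]); here f is linear
X = np.hstack([q[:, 1:-1].T, q[:, :-2].T, ground_motion[2:, None]])
Y = q[:, 2:].T
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

q_pred = X @ W                                          # one-step-ahead prediction
print("one-step reduced-coordinate RMSE:", np.sqrt(np.mean((q_pred - Y) ** 2)))
```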
This report is the final documentation for the one-year LDRD project 226360: Simulated X-ray Diffraction and Machine Learning for Optimizing Dynamic Experiment Analysis. As Sandia has successfully developed in-house X-ray diffraction (XRD) tools for the study of atomic structure in experiments, it has become increasingly important to develop computational analysis methods to support these experiments. When dynamically compressed lattices and orientations are not known a priori, their identification requires a cumbersome and sometimes intractable search over possible final states. These final states can include phase transitions, deformation, and mixed/evolving states. Our work consists of three parts: (1) development of an XRD simulation tool and use of traditional data science methods to match XRD patterns to experiments; (2) development of ML-based models capable of decomposing and identifying the lattice and orientation components of multicomponent experimental diffraction patterns; and (3) conducting experiments that showcase these new analysis tools in the study of phase transition mechanisms. Our target material has been cadmium sulfide, which exhibits complex orientation-dependent phase transformation mechanisms. In this one-year LDRD, we have begun the analysis of high-quality c-axis CdS diffraction data from DCS and Thor experiments, which had until recently eluded orientation identification.
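For the pattern-matching step in part (1), a simple correlation-based ranking of candidate patterns could look like the sketch below. The Gaussian-peak patterns, candidate labels, and peak positions are synthetic placeholders, not the LDRD simulation tool or the CdS data.

```python
# Illustrative sketch: score candidate simulated XRD patterns against an
# experimental pattern with a normalized correlation metric to rank possible
# phases/orientations. All patterns here are synthetic stand-ins.
import numpy as np

def pattern(two_theta, peak_positions, width=0.3):
    """Synthetic diffraction pattern: a sum of Gaussian peaks."""
    return sum(np.exp(-0.5 * ((two_theta - p) / width) ** 2) for p in peak_positions)

def match_score(experimental, simulated):
    """Normalized correlation in [-1, 1]; higher means a closer match."""
    e = (experimental - experimental.mean()) / experimental.std()
    s = (simulated - simulated.mean()) / simulated.std()
    return float(np.mean(e * s))

two_theta = np.linspace(20, 60, 2000)
experimental = pattern(two_theta, [26.5, 28.2, 43.7]) + 0.02 * np.random.default_rng(3).standard_normal(2000)

# Hypothetical candidate library: {label: peak positions in 2-theta}
candidates = {
    "candidate-phase-A": [26.5, 28.2, 43.7],
    "candidate-phase-B": [30.1, 35.0, 50.4],
}
scores = {name: match_score(experimental, pattern(two_theta, peaks)) for name, peaks in candidates.items()}
print("best match:", max(scores, key=scores.get), scores)
```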
X-ray computed tomography is typically a primary step in the characterization of defective electronic components, but it is generally too slow to screen large lots of components. Super-resolution imaging approaches, in which higher-resolution data are inferred from lower-resolution images, have the potential to substantially reduce collection times for the data volumes accessible via X-ray computed tomography. Here we seek to extend existing two-dimensional super-resolution approaches directly to three-dimensional computed tomography data. Scans spanning half an order of magnitude in resolution were collected for four classes of commercial electronic components to serve as training data for a deep-learning super-resolution network. A modular Python framework for three-dimensional super-resolution of computed tomography data has been developed and trained over multiple classes of electronic components. Initial training and testing demonstrate the promise of these approaches, which could reduce collection time for electronic component screening by more than an order of magnitude.
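A minimal sketch of a 3D super-resolution network of the kind described follows, assuming a trilinear-upsampling-plus-residual-convolution architecture and a 2x scale factor; both are illustrative choices, not the framework's actual design.

```python
# Illustrative 3D super-resolution model: upsample a low-resolution CT
# sub-volume, then refine it with a small residual 3D convolutional stack.
# Architecture, channel counts, and scale factor are hypothetical.
import torch
import torch.nn as nn

class SR3D(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )
        self.upsample = nn.Upsample(scale_factor=scale, mode="trilinear", align_corners=False)

    def forward(self, x):
        # Upsample first, then add a learned residual correction
        x = self.upsample(x)
        return x + self.features(x)

model = SR3D(scale=2)
low_res = torch.rand(1, 1, 32, 32, 32)     # stand-in low-resolution CT sub-volume
high_res = model(low_res)                  # predicted 64 x 64 x 64 volume
print(high_res.shape)
```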
Image-based simulation, the use of 3D images to calculate physical quantities, relies on image segmentation for geometry creation. However, this process introduces image segmentation uncertainty because different segmentation tools (both manual and machine-learning-based) will each produce a unique and valid segmentation. First, we demonstrate that these variations propagate into the physics simulations, compromising the resulting physics quantities. Second, we propose a general framework for rapidly quantifying segmentation uncertainty. By creating and sampling segmentation uncertainty probability maps, we systematically and objectively create uncertainty distributions of the physics quantities. We show that physics-quantity uncertainty distributions can follow a normal distribution but that, in more complicated physics simulations, the resulting uncertainty distribution can be surprisingly nontrivial. We establish that bounding segmentation uncertainty can fail in these nontrivial situations. While our work does not eliminate segmentation uncertainty, it improves simulation credibility by making visible the previously unrecognized segmentation uncertainty plaguing image-based simulation.
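The sampling idea can be sketched as follows, using a synthetic per-voxel probability map and the solid volume fraction as a trivially simple stand-in for a downstream physics quantity; this is illustrative only and not the authors' framework.

```python
# Schematic sketch: draw many candidate segmentations from a per-voxel
# probability map and propagate each through a quantity of interest, here the
# solid volume fraction. Map, sample count, and quantity are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
prob_map = rng.uniform(0.0, 1.0, size=(64, 64, 64))   # stand-in segmentation probability map

def sample_quantity(prob_map, n_samples=200):
    """Distribution of a quantity of interest over sampled segmentations."""
    qois = []
    for _ in range(n_samples):
        segmentation = rng.random(prob_map.shape) < prob_map   # one candidate segmentation
        qois.append(segmentation.mean())                        # e.g. solid volume fraction
    return np.array(qois)

qoi = sample_quantity(prob_map)
print(f"volume fraction: mean={qoi.mean():.4f}, std={qoi.std():.4f}")
```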