Publications

Stability and Convergence of Solutions to Stochastic Inverse Problems Using Approximate Probability Densities

International Journal for Uncertainty Quantification

Yen, Tian Y.; Wildey, Timothy; Butler, Troy; Spence, Rylan

Data-consistent inversion is designed to solve a class of stochastic inverse problems where the solution is a pullback of a probability measure specified on the outputs of a quantity of interest (QoI) map. This work presents stability and convergence results for the case where finite QoI data result in an approximation of the solution as a density. Given their popularity in the literature, separate results are proven for three different approaches to measuring discrepancies between probability measures: f-divergences, integral probability metrics, and Lp metrics. In the context of integral probability metrics, we also introduce a pullback probability metric that is well-suited for data-consistent inversion. This fills a theoretical gap in the convergence and stability results for data-consistent inversion, which have mostly focused on convergence of solutions associated with approximate maps. Numerical results illustrate the key theoretical results with intuitive and reproducible test problems, including a demonstration of convergence in the measure-theoretic "almost" sense.
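
As a minimal, hypothetical illustration of the discrepancy measures discussed above (not code from the paper), the sketch below approximates a push-forward density from finite QoI samples with a kernel density estimate and measures its discrepancy to the exact density with an f-divergence (KL) and an L2 metric; the Gaussian density and sample size are invented for the example.

```python
import numpy as np
from scipy import stats

# "Exact" push-forward density of a QoI map and a finite-data KDE approximation.
exact = stats.norm(loc=0.0, scale=1.0)
samples = exact.rvs(size=200, random_state=0)      # finite QoI data
approx = stats.gaussian_kde(samples)               # density approximation

grid = np.linspace(-5, 5, 2001)
p, q = exact.pdf(grid), approx(grid)

# f-divergence (KL) estimated on the grid; clip to avoid log(0).
kl = np.trapz(p * np.log(np.clip(p, 1e-300, None) / np.clip(q, 1e-300, None)), grid)

# L2 metric between the exact and approximate densities.
l2 = np.sqrt(np.trapz((p - q) ** 2, grid))

print(f"KL(p || q_n) ~ {kl:.4f},  ||p - q_n||_2 ~ {l2:.4f}")
```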

Analysis of the Challenges in Developing Sample-Based Multifidelity Estimators for Nondeterministic Models

International Journal for Uncertainty Quantification

Reuter, Bryan W.; Geraci, Gianluca; Wildey, Timothy

Multifidelity (MF) uncertainty quantification (UQ) seeks to leverage and fuse information from a collection of models to achieve greater statistical accuracy relative to a single-fidelity counterpart, while maintaining an efficient use of computational resources. Despite many recent advancements in MF UQ, several challenges remain that often limit its practical impact in certain application areas. In this manuscript, we focus on the challenges that nondeterministic models introduce for sample-based MF UQ estimators. Nondeterministic models produce different responses for the same inputs, which means their outputs are effectively noisy. This noise complicates MF UQ because many state-of-the-art approaches rely on statistics, e.g., the correlation among models, to optimally fuse information and allocate computational resources. We demonstrate how the statistics of the quantities of interest, which impact the design, effectiveness, and use of existing MF UQ techniques, change as functions of the noise. With this in hand, we extend the unifying approximate control variate framework to account for nondeterminism, providing for the first time a rigorous means of comparing the effect of nondeterminism on different multifidelity estimators and analyzing their performance with respect to one another. Numerical examples are presented throughout the manuscript to illustrate and discuss the consequences of these theoretical results.
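
The sketch below is a toy two-model control-variate estimator (not the paper's approximate control variate framework) showing how model noise erodes the correlation that sample-based MF estimators rely on; the model functions, noise levels, and sample counts are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def hf(x, noise=0.0):   # high-fidelity model; `noise` mimics nondeterminism
    return np.sin(x) + noise * rng.standard_normal(x.shape)

def lf(x, noise=0.0):   # cheap low-fidelity surrogate of hf
    return 0.9 * np.sin(x) + 0.1 * x + noise * rng.standard_normal(x.shape)

n_hf, n_lf = 100, 10_000
x_hf = rng.uniform(0, np.pi, n_hf)      # expensive paired evaluations
x_lf = rng.uniform(0, np.pi, n_lf)      # plentiful low-fidelity evaluations

for noise in (0.0, 0.5):
    y_hf, y_lf_paired = hf(x_hf, noise), lf(x_hf, noise)
    c = np.cov(y_hf, y_lf_paired)
    rho = c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])   # noise erodes correlation
    alpha = c[0, 1] / c[1, 1]
    # control-variate estimator: correct the HF mean with the LF discrepancy
    est = y_hf.mean() - alpha * (y_lf_paired.mean() - lf(x_lf, noise).mean())
    print(f"noise={noise}: rho={rho:.3f}, CV estimate={est:.4f}")
```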

A Stochastic Reduced-Order Model for Statistical Microstructure Descriptors Evolution

Journal of Computing and Information Science in Engineering

Foulk, James W.; Sun, Jing; Liu, Dehao; Wang, Yan; Wildey, Timothy

Integrated computational materials engineering (ICME) models have been a crucial building block for modern materials development, relieving the heavy reliance on experiments and significantly accelerating the materials design process. However, ICME models are also computationally expensive, particularly with respect to time integration for dynamics, which hinders the ability to study statistical ensembles and thermodynamic properties of large systems over long time scales. To alleviate this computational bottleneck, we propose to model the evolution of statistical microstructure descriptors as a continuous-time stochastic process using a nonlinear Langevin equation, where the probability density function (PDF) of the statistical microstructure descriptors, which are also the quantities of interest (QoIs), is modeled by the Fokker-Planck equation. We discuss how to calibrate the drift and diffusion terms of the Fokker-Planck equation from both theoretical and computational perspectives. The calibrated Fokker-Planck equation can then be used as a stochastic reduced-order model to simulate the evolution of the statistical microstructure descriptors' PDF. We demonstrate the proposed methodology with three ICME models: kinetic Monte Carlo, phase field, and molecular dynamics simulations.
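
As a hedged illustration of calibrating drift and diffusion from data (not the paper's implementation), the sketch below generates a trajectory from a known Langevin equation (an Ornstein-Uhlenbeck process standing in for a microstructure descriptor) and recovers binned drift and diffusion terms from conditional moments of the increments, i.e., a simple Kramers-Moyal estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1e-3, 200_000

# Synthetic "descriptor" trajectory from a known Langevin equation:
#   dX = -theta * X dt + sigma dW
theta, sigma = 2.0, 0.5
x = np.empty(n); x[0] = 1.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Kramers-Moyal estimates: conditional moments of the increments give the
# drift a(x) and diffusion b(x) terms of the calibrated Fokker-Planck equation.
dx = np.diff(x)
bins = np.linspace(x.min(), x.max(), 20)
idx = np.digitize(x[:-1], bins)
for k in np.unique(idx)[2:-2]:           # skip sparsely populated edge bins
    m = idx == k
    drift = dx[m].mean() / dt            # should be ~ -theta * x
    diff = dx[m].var() / dt              # should be ~ sigma**2
    print(f"x~{bins[k - 1]:+.2f}: drift~{drift:+.2f}, diffusion~{diff:.3f}")
```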

Microstructure-Sensitive Uncertainty Quantification for Crystal Plasticity Finite Element Constitutive Models Using Stochastic Collocation Methods

Frontiers in Materials

Foulk, James W.; Wildey, Timothy; Lim, Hojun

Uncertainty quantification (UQ) plays a major role in verification and validation for computational engineering models and simulations, and establishes trust in the predictive capability of computational models. In the materials science and engineering context, where the process-structure-property-performance linkage is well known to be the road map from manufacturing to engineering performance, numerous integrated computational materials engineering (ICME) models have been developed across a wide spectrum of length- and time-scales to relieve the burden of resource-intensive experiments. Within the structure-property linkage, crystal plasticity finite element method (CPFEM) models have been widely used because they are one of the few ICME toolboxes that allow numerical predictions, providing the bridge from microstructure to materials properties and performance. Several constitutive models have been proposed over the last few decades to capture the mechanics and plasticity behavior of materials. While some UQ studies have been performed, the robustness and uncertainty of these constitutive models have not been rigorously established. In this work, we apply a stochastic collocation (SC) method, which is mathematically rigorous and widely used in the field of UQ, to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). Our numerical results not only quantify the uncertainty of these constitutive models in the stress-strain response, but also analyze the global sensitivity of the underlying constitutive parameters with respect to the initial yield behavior, which may inform robust constitutive model calibration in future work.
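
To make the stochastic collocation idea concrete, the sketch below propagates two uncertain parameters through a stand-in scalar QoI using a tensor grid of Gauss-Legendre collocation points; the model function, parameter ranges, and uniform densities are invented and far simpler than a CPFEM model.

```python
import numpy as np

def model(tau0, h0):
    # Stand-in for a CPFEM QoI (e.g., initial yield stress) as a smooth
    # function of two constitutive parameters; purely illustrative.
    return tau0 * (1.0 + 0.3 * np.tanh(h0))

# Gauss-Legendre collocation nodes/weights on [-1, 1], tensorized in 2D.
pts, wts = np.polynomial.legendre.leggauss(7)
wts = wts / 2.0                            # weights for a uniform density on [-1, 1]

mean = var = 0.0
for xi, wi in zip(pts, wts):
    for xj, wj in zip(pts, wts):
        # Map nodes to physical parameter ranges (assumed uniform).
        tau0 = 50.0 + 5.0 * xi             # ~ +/-10% around a nominal value
        h0 = 1.0 + 0.5 * xj
        q = model(tau0, h0)
        mean += wi * wj * q
        var += wi * wj * q**2
var -= mean**2
print(f"collocation mean={mean:.3f}, variance={var:.4f}")
```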

Deployment of Multifidelity Uncertainty Quantification for Thermal Battery Assessment Part I: Algorithms and Single Cell Results

Eldred, Michael; Adams, Brian M.; Geraci, Gianluca; Portone, Teresa; Ridgway, Elliott M.; Stephens, John A.; Wildey, Timothy

This report documents the results of an FY22 ASC V&V level 2 milestone demonstrating new algorithms for multifidelity uncertainty quantification. Part I of the report describes the algorithms, studies their performance on a simple model problem, and then deploys the methods to a thermal battery example from the open literature. Part II (restricted distribution) applies the multifidelity UQ methods to specific thermal batteries of interest to the NNSA/ASC program.

aphBO-2GP-3B: a budgeted asynchronous parallel multi-acquisition functions for constrained Bayesian optimization on high-performing computing architecture

Structural and Multidisciplinary Optimization

Foulk, James W.; Wildey, Timothy; Furlan, John M.; Krishnan, Pagalthivarthi; Visintainer, Robert J.; Mccann, Scott

High-fidelity complex engineering simulations are often predictive but computationally expensive, typically requiring substantial computational effort. The computational burden is usually mitigated through parallelism on high-performance computing (HPC) architectures. Optimization for such applications is challenging due to the high computational cost of the high-fidelity simulations. In this paper, an asynchronous parallel constrained Bayesian optimization method is proposed to efficiently solve computationally expensive simulation-based optimization problems on an HPC platform with a budgeted computational resource, where the maximum number of simulations is fixed. The advantages of this method are threefold. First, the efficiency of the Bayesian optimization is improved: multiple input locations are evaluated in parallel and asynchronously to accelerate optimization convergence with respect to physical runtime, so that when the evaluation of one input finishes, another input is queried without waiting for the whole batch to complete. Second, the proposed method handles both known and unknown constraints. Third, the proposed method samples among several acquisition functions based on their rewards using a modified GP-Hedge scheme. The proposed framework is termed aphBO-2GP-3B, for asynchronous parallel hedge Bayesian optimization with two Gaussian processes and three batches. The numerical performance of aphBO-2GP-3B is comprehensively benchmarked on 16 numerical examples, compared against six other parallel Bayesian optimization variants and one parallel Monte Carlo baseline, and demonstrated on two real-world, computationally expensive industrial applications: the first based on finite element analysis (FEA) and the second on computational fluid dynamics (CFD) simulations.
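
The sketch below illustrates the GP-Hedge idea of sampling among acquisition functions by their accumulated rewards; it is a serial, simplified stand-in for the paper's asynchronous batched framework, and the toy objective, reward definition, and hyperparameters are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x          # toy objective (maximize)

X = rng.uniform(-2, 2, (5, 1)); y = f(X).ravel()
cand = np.linspace(-2, 2, 401).reshape(-1, 1)
gains = np.zeros(3)                                    # one accumulated gain per acquisition
eta = 1.0                                              # hedge temperature

for step in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    acqs = [mu + 2.0 * sd,                                  # UCB
            (mu - best) * norm.cdf(z) + sd * norm.pdf(z),   # EI
            norm.cdf(z)]                                    # PI
    nominees = [cand[np.argmax(a)] for a in acqs]
    # Hedge step: pick one nominee with softmax probabilities over the gains.
    p = np.exp(eta * (gains - gains.max())); p /= p.sum()
    k = rng.choice(3, p=p)
    x_new = nominees[k]; y_new = f(x_new).item()
    X = np.vstack([X, x_new.reshape(1, 1)]); y = np.append(y, y_new)
    gains += gp.predict(np.vstack(nominees))           # reward: posterior mean at nominees

print(f"best observed: x={X[np.argmax(y)].item():+.3f}, f={y.max():.3f}")
```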

Solving Stochastic Inverse Problems for Property–Structure Linkages Using Data-Consistent Inversion and Machine Learning

JOM

Foulk, James W.; Wildey, Timothy

Determining process–structure–property linkages is one of the key objectives in materials science, and uncertainty quantification plays a critical role in understanding both process–structure and structure–property linkages. In this work, we seek to learn a distribution of microstructure parameters that are consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model matches a target distribution on materials properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification framework based on push-forward probability measures, which combines techniques from measure theory and Bayes’ rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden in solving both stochastic forward and stochastic inverse problems, we combine this approach with a machine learning Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure–property linkages.
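
A minimal sketch of the push-forward/data-consistent update on a toy problem (the cubic map stands in for the crystal plasticity model, and no GP surrogate is used): samples from the initial density are reweighted by the ratio of the target density to the push-forward density and then accepted via rejection sampling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

Q = lambda lam: lam**3 + lam              # stand-in for the structure-to-property map

# 1) Initial guess on inputs and its push-forward through the map.
lam = rng.normal(0.0, 1.0, 5_000)         # initial density: N(0, 1)
q = Q(lam)
pf = stats.gaussian_kde(q)                # KDE of the push-forward density

# 2) Target density on the materials property (assumed N(0.5, 0.25) here).
target = stats.norm(loc=0.5, scale=0.5)

# 3) Data-consistent update: reweight by the ratio r = target / push-forward.
r = target.pdf(q) / pf(q)
accept = rng.uniform(0, 1, lam.size) < r / r.max()    # rejection sampling
updated = lam[accept]

# The push-forward of the updated samples should now match the target.
print(f"E[r] ~ {r.mean():.3f} (should be ~1)")
print(f"push-forward mean/std: {Q(updated).mean():.3f}/{Q(updated).std():.3f}")
```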

Optimal experimental design for prediction based on push-forward probability measures

Journal of Computational Physics

Wildey, Timothy; Butler, T.; Jakeman, John D.

Incorporating experimental data is essential for increasing the credibility of simulation-aided decision making and design. This paper presents a method which uses a computational model to guide the optimal acquisition of experimental data to produce data-informed predictions of quantities of interest (QoI). Many strategies for optimal experimental design (OED) select data that maximize some utility measuring the reduction in uncertainty of uncertain model parameters, for example, the expected information gain between prior and posterior distributions of these parameters. In this paper, we instead seek to maximize the expected information gain between the push-forward of an initial (prior) density and the push-forward of the updated (posterior) density through the parameter-to-prediction map. The formulation presented is based upon the solution of a specific class of stochastic inverse problems which seeks a probability density that is consistent with the model and the data in the sense that the push-forward of this density through the parameter-to-observable map matches a target density on the observable data. While this stochastic inverse problem forms the mathematical basis for our approach, we develop a one-step algorithm, focused on push-forward probability measures, that leverages inference-for-prediction to bypass constructing the solution to the stochastic inverse problem. A number of numerical results are presented to demonstrate the utility of this optimal experimental design for prediction and to facilitate comparison of our approach with traditional OED.
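
As a simplified stand-in for the paper's utility (expected information gain between push-forwards), the sketch below compares two hypothetical observation maps by the expected posterior variance of a predicted QoI, estimated with likelihood importance weights; the maps, noise level, and prior are invented, and this variance-based utility merely shares the decision structure of the paper's approach.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, noise = 20_000, 0.2
theta = rng.normal(0.0, 1.0, n)              # prior samples of the parameter
q = np.exp(0.5 * theta)                      # parameter-to-prediction map

def expected_posterior_var(obs_map, m=50):
    """Average posterior variance of the prediction q after observing
    obs_map(theta) + noise, estimated with likelihood importance weights."""
    out = 0.0
    for _ in range(m):
        y = obs_map(rng.normal()) + noise * rng.standard_normal()
        w = stats.norm.pdf(y, loc=obs_map(theta), scale=noise)
        w /= w.sum()                         # self-normalized importance weights
        mu = np.sum(w * q)
        out += np.sum(w * (q - mu) ** 2)
    return out / m

# Two candidate experiments (observation maps); pick the more informative one.
designs = {"observe theta": lambda t: t, "observe theta**3": lambda t: t**3}
for name, om in designs.items():
    print(f"{name}: expected posterior predictive variance ~ "
          f"{expected_posterior_var(om):.4f}")
```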

Multi-fidelity machine-learning with uncertainty quantification and Bayesian optimization for materials design: Application to ternary random alloys

Journal of Chemical Physics

Foulk, James W.; Wildey, Timothy; Tranchida, Julien; Thompson, A.P.

We present a scale-bridging approach based on a multi-fidelity (MF) machine-learning (ML) framework that leverages Gaussian processes (GP) to fuse atomistic computational model predictions across multiple levels of fidelity. Through the posterior variance of the MFGP, our framework naturally enables uncertainty quantification, providing estimates of confidence in the predictions. Density functional theory is used for the high-fidelity predictions, while an ML interatomic potential is used for the low-fidelity predictions. Practical efficiency for materials design is demonstrated by reproducing the ternary composition dependence of a quantity of interest (bulk modulus) across the full aluminum-niobium-titanium ternary random alloy composition space. The MFGP is then coupled to a Bayesian optimization procedure, and the computational efficiency of this approach is demonstrated by performing an on-the-fly search for the global optimum of the bulk modulus in the ternary composition space. The framework presented in this manuscript is the first application of MFGP to atomistic materials simulations fusing predictions between density functional theory and classical interatomic potential calculations.
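
A minimal two-stage co-kriging sketch in the Kennedy-O'Hagan style (a simplification of the paper's MFGP): a GP is fit to plentiful low-fidelity data, a scale factor plus a GP on residuals corrects it with scarce high-fidelity data, and the posterior variances supply the uncertainty estimate. The toy functions below stand in for the ML-potential and DFT predictions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
lf = lambda x: np.sin(8 * x)                      # cheap model (ML-potential stand-in)
hf = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x      # expensive truth (DFT stand-in)

X_lf = rng.uniform(0, 1, (40, 1)); y_lf = lf(X_lf).ravel()
X_hf = rng.uniform(0, 1, (6, 1));  y_hf = hf(X_hf).ravel()

# Stage 1: GP on the plentiful low-fidelity data.
gp_lf = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True).fit(X_lf, y_lf)

# Stage 2: least-squares scale factor rho, then a GP on the HF residuals.
mu_lf_at_hf = gp_lf.predict(X_hf)
rho = (mu_lf_at_hf @ y_hf) / (mu_lf_at_hf @ mu_lf_at_hf)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(
    X_hf, y_hf - rho * mu_lf_at_hf)

X_test = np.linspace(0, 1, 5).reshape(-1, 1)
mu_lf, sd_lf = gp_lf.predict(X_test, return_std=True)
mu_d, sd_d = gp_delta.predict(X_test, return_std=True)
mu_mf = rho * mu_lf + mu_d                        # fused multi-fidelity prediction
sd_mf = np.sqrt((rho * sd_lf) ** 2 + sd_d ** 2)   # UQ from the posterior variances
for xt, m, s, t in zip(X_test.ravel(), mu_mf, sd_mf, hf(X_test).ravel()):
    print(f"x={xt:.2f}: MF pred {m:+.3f} +/- {s:.3f} (truth {t:+.3f})")
```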

An active learning high-throughput microstructure calibration framework for solving inverse structure–process problems in materials informatics

Acta Materialia

Foulk, James W.; Mitchell, John A.; Swiler, Laura P.; Wildey, Timothy

Determining a process–structure–property relationship is the holy grail of materials science, where both computational prediction in the forward direction and materials design in the inverse direction are essential. Problems in materials design are often considered in the context of the process–property linkage by bypassing the materials structure, or in the context of the structure–property linkage as in microstructure-sensitive design problems. However, there has been little research effort on materials design problems in the context of the process–structure linkage, which has great implications for reverse engineering. In this work, given a target microstructure, we propose an active learning high-throughput microstructure calibration framework to derive a set of processing parameters that produce an optimal microstructure statistically equivalent to the target microstructure. The proposed framework is formulated as a noisy multi-objective optimization problem, where each objective function measures a deterministic or statistical difference of the same microstructure descriptor between a candidate microstructure and the target microstructure. Furthermore, to significantly reduce wall-clock time, the framework achieves high throughput by adopting asynchronously parallel Bayesian optimization that exploits high-performance computing resources. Case studies in additive manufacturing and grain growth demonstrate the applicability of the proposed framework, where kinetic Monte Carlo (kMC) simulation is used as the forward predictive model, such that for a given target microstructure, the processing parameters that produced it are successfully recovered.
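
The sketch below shows only the asynchronous high-throughput evaluation pattern (consuming results as workers finish rather than in fixed batches) around an invented descriptor-matching objective; the paper's Bayesian optimizer is replaced here by simple random search over candidate processing parameters, and the simulator is a stand-in for a kMC run.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor, as_completed

TARGET_DESCRIPTOR = 3.0        # e.g., a target mean grain size (illustrative units)

def simulate_descriptor(params):
    """Stand-in for a kMC run: maps processing parameters to a statistical
    microstructure descriptor. Noisy, as the abstract emphasizes."""
    temp, time = params
    rng_local = np.random.default_rng()
    return temp * np.log1p(time) + 0.05 * rng_local.standard_normal()

def objective(params):
    return abs(simulate_descriptor(params) - TARGET_DESCRIPTOR), params

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    candidates = [(rng.uniform(0.5, 2.0), rng.uniform(1.0, 20.0)) for _ in range(64)]
    best = (np.inf, None)
    # Asynchronous high-throughput pattern: process each result as soon as a
    # worker finishes, instead of waiting for the whole batch to complete.
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(objective, c) for c in candidates]
        for fut in as_completed(futures):
            err, params = fut.result()
            if err < best[0]:
                best = (err, params)
    print(f"best params {best[1]} with descriptor error {best[0]:.4f}")
```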

Data-consistent inversion for stochastic input-to-output maps

Inverse Problems

Wildey, Timothy; Butler, Troy; Yen, Tian Y.

Data-consistent inversion is a recently developed measure-theoretic framework for solving a stochastic inverse problem involving models of physical systems. The goal is to construct a probability measure on model inputs (i.e., parameters of interest) whose associated push-forward measure matches (i.e., is consistent with) a probability measure on the observable outputs of the model (i.e., quantities of interest). Previous implementations required the map from parameters of interest to quantities of interest to be deterministic. This work generalizes the framework to maps that are stochastic, i.e., that contain uncertainties and variation not explainable by variations in uncertain parameters of interest. Generalizations of previous theorems of existence, uniqueness, and stability of the data-consistent solution are provided, while new theoretical results address the stability of marginals on parameters of interest. A notable aspect of the algorithmic generalization is the ability to query the solution to generate independent identically distributed samples of the parameters of interest without requiring knowledge of the so-called stochastic parameters. This work therefore extends the applicability of the data-consistent inversion framework to a much wider class of problems, including those based on purely experimental and field data where only a subset of conditions is either controllable or documented between experiments, while the underlying physics, measurement errors, and any additional covariates are either uncertain or not accounted for by the researcher. Numerical examples demonstrate the application of this approach to systems with stochastic sources of uncertainty embedded within the modeling of the system, and a numerical diagnostic is summarized that is useful for determining whether a key assumption is satisfied among competing choices of stochastic maps.
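
A toy sketch of data-consistent inversion with a stochastic map, including the diagnostic mentioned above (the mean of the density ratio should be approximately one) and i.i.d. sampling of the updated parameters without access to the hidden stochastic component; the map, noise, and densities are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 5_000
lam = rng.uniform(-1, 1, n)                  # initial samples of the parameters

# Stochastic map: output varies even for fixed lam (unexplained variability).
Q = lambda l: l + 0.2 * rng.standard_normal(l.shape)
q = Q(lam)

observed = stats.norm(loc=0.0, scale=0.35)   # density on the observable outputs
pf = stats.gaussian_kde(q)                   # push-forward of the initial density
r = observed.pdf(q) / pf(q)

# Diagnostic: E[r] under the initial samples should be ~1 if the observed
# density is dominated by (predictable from) the push-forward density.
print(f"E[r] = {r.mean():.3f}")

# i.i.d. samples of the parameters from the updated (data-consistent) density,
# without needing to know the hidden stochastic component:
keep = rng.uniform(0, 1, n) < r / r.max()
print(f"accepted {keep.sum()} updated samples; mean lam = {lam[keep].mean():+.3f}")
```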

sMF-BO-2CoGP: A sequential multi-fidelity constrained Bayesian optimization framework for design applications

Journal of Computing and Information Science in Engineering

Foulk, James W.; Wildey, Timothy; Mccann, Scott

Bayesian optimization (BO) is an effective surrogate-based method that has been widely used to optimize simulation-based applications. While the traditional Bayesian optimization approach only applies to single-fidelity models, many realistic applications provide multiple levels of fidelity with various levels of computational complexity and predictive capability. In this work, we propose a multi-fidelity Bayesian optimization method for design applications with both known and unknown constraints. The proposed framework, called sMF-BO-2CoGP, is built on a multi-level CoKriging method to predict the objective function. An external binary classifier, which we approximate using a separate CoKriging model, is used to distinguish between feasible and infeasible regions. Finally, the sMF-BO-2CoGP method is demonstrated using a series of analytical examples and a flip-chip application for design optimization to minimize the deformation due to warping under thermal loading conditions.
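
A single-fidelity sketch of the constrained acquisition idea (expected improvement weighted by a classifier's probability of feasibility); the paper's CoKriging models are replaced here with an ordinary GP regressor and GP classifier, and the objective and constraint are invented.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

rng = np.random.default_rng(9)
f = lambda x: (x - 0.3) ** 2                 # objective to minimize
feasible = lambda x: x < 0.6                 # unknown constraint, observed as pass/fail

X = np.linspace(0.05, 0.95, 8).reshape(-1, 1)   # initial design with both classes
y = f(X).ravel()
c = feasible(X).ravel().astype(int)

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    clf = GaussianProcessClassifier().fit(X, c)      # feasibility classifier
    cand = rng.uniform(0, 1, (256, 1))
    mu, sd = gp.predict(cand, return_std=True)
    best = y[c == 1].min()                           # best feasible observation
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    p_feas = clf.predict_proba(cand)[:, 1]           # P(feasible) at candidates
    x_new = cand[np.argmax(ei * p_feas)]             # constrained acquisition
    X = np.vstack([X, x_new.reshape(1, 1)])
    y = np.append(y, f(x_new).item())
    c = np.append(c, int(feasible(x_new).item()))

best_i = np.argmin(np.where(c == 1, y, np.inf))
print(f"best feasible x = {X[best_i].item():.3f}, f = {y[best_i]:.4f}")
```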

Reification of latent microstructures: On supervised, unsupervised, and semi-supervised deep learning applications for microstructures in materials informatics

Foulk, James W.; Rodgers, Theron M.; Wildey, Timothy

Machine learning (ML), including deep learning (DL), has become increasingly popular in recent years due to its consistently outstanding performance. In this context, we apply machine learning techniques to "learn" the microstructure using both supervised and unsupervised DL techniques. In particular, we focus (1) on the localization problem bridging (micro)structure to (localized) property using supervised DL and (2) on the microstructure reconstruction problem in latent space using unsupervised DL. The goal of supervised and semi-supervised DL is to replace the crystal plasticity finite element model (CPFEM) that maps (micro)structure to (localized) property, and implicitly the (micro)structure to (homogenized) property relationship, while the goal of unsupervised DL is (1) to represent high-dimensional microstructure images on a nonlinear low-dimensional manifold and (2) to discover a way to interpolate microstructures via a latent space associated with latent microstructure variables. At the heart of this report are applications of several common DL architectures, including convolutional neural networks (CNNs), autoencoders (AEs), and generative adversarial networks (GANs), to multiple microstructure datasets, together with a neural architecture search for optimal DL architectures.
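
As a minimal, hypothetical example of the unsupervised track described above, the sketch below trains a small convolutional autoencoder on placeholder images and interpolates between two latent codes; the architecture, image size, and training setup are invented, not those of the report.

```python
import torch
import torch.nn as nn

# Minimal convolutional autoencoder for (1, 32, 32) "microstructure" images.
class AutoEncoder(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(), nn.Linear(32 * 8 * 8, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())  # 16 -> 32

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(64, 1, 32, 32)          # placeholder for real micrographs
for epoch in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(images), images)
    loss.backward(); opt.step()

# Latent-space interpolation between two microstructures.
z0, z1 = model.encoder(images[:1]), model.encoder(images[1:2])
morph = model.decoder(0.5 * (z0 + z1))      # a "latent microstructure" halfway between
print(morph.shape)                          # torch.Size([1, 1, 32, 32])
```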
