Publications

40 Results

Monotonic Gaussian Process for Physics-Constrained Machine Learning With Materials Science Applications

Journal of Computing and Information Science in Engineering

Tran, Anh; Maupin, Kathryn A.; Rodgers, Theron R.

Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps the most common machine learning method for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three materials datasets: one experimental and two computational. The monotonic GP is compared against the regular GP, and a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonic effect fades as one moves beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy and monotonicity is supported by strong physical evidence.
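
The paper's exact monotonic GP formulation is not reproduced here, but the qualitative effect is easy to illustrate. The sketch below, in Python with synthetic data, approximates a monotonic GP by drawing posterior samples from a regular GP and keeping only the non-decreasing draws; the dataset, kernel, and length-scale are illustrative assumptions.

```python
# Constrained-sampling sketch of a monotonic GP (not the paper's method):
# fit a regular GP, then keep only monotone posterior draws.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(12, 1))
y = np.log1p(5.0 * X[:, 0]) + 0.05 * rng.standard_normal(12)  # noisy monotone data

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3).fit(X, y)

Xg = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
samples = gp.sample_y(Xg, n_samples=2000, random_state=1)   # shape (50, 2000)
keep = np.all(np.diff(samples, axis=0) >= 0.0, axis=0)      # non-decreasing draws only
mono = samples[:, keep]
print(f"kept {mono.shape[1]} of 2000 draws")
# The retained ensemble has a visibly smaller spread than the full posterior,
# mirroring the variance reduction reported in the abstract.
mean_c, std_c = mono.mean(axis=1), mono.std(axis=1)
```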


Microstructure-Sensitive Uncertainty Quantification for Crystal Plasticity Finite Element Constitutive Models Using Stochastic Collocation Methods

Frontiers in Materials

Tran, Anh; Wildey, Tim; Lim, Hojun L.

Uncertainty quantification (UQ) plays a major role in verification and validation for computational engineering models and simulations, and establishes trust in the predictive capability of computational models. In the materials science and engineering context, where the process-structure-property-performance linkage is well known to be the only roadmap from manufacturing to engineering performance, numerous integrated computational materials engineering (ICME) models have been developed across a wide spectrum of length-scales and time-scales to relieve the burden of resource-intensive experiments. Within the structure-property linkage, crystal plasticity finite element method (CPFEM) models have been widely used, since they are one of the few ICME toolboxes that allow numerical predictions bridging from microstructure to materials properties and performance. Several constitutive models have been proposed in the last few decades to capture the mechanics and plasticity behavior of materials. While some UQ studies have been performed, the robustness and uncertainty of these constitutive models have not been rigorously established. In this work, we apply a stochastic collocation (SC) method, which is mathematically rigorous and has been widely used in the field of UQ, to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). Our numerical results not only quantify the uncertainty of these constitutive models in the stress-strain curve, but also analyze the global sensitivity of the underlying constitutive parameters with respect to the initial yield behavior, which may be helpful for robust constitutive model calibration efforts in the future.
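
As a concrete illustration of how stochastic collocation propagates parameter uncertainty, the sketch below pushes two uniformly distributed parameters of a toy yield model through a tensor-product Gauss-Legendre rule; the model, the parameter names (tau0, h0), and their ranges are invented for illustration and are not the paper's constitutive models.

```python
# Tensor-product stochastic collocation on a toy yield model
# (illustrative stand-in for a CPFEM constitutive model).
import numpy as np

def yield_stress(tau0, h0):
    """Hypothetical initial-yield response; not a real constitutive model."""
    return tau0 * (1.0 + 0.1 * h0)

nodes, weights = np.polynomial.legendre.leggauss(5)  # 5-point rule on [-1, 1]
w = weights / 2.0                                    # probability weights for U(-1, 1)

tau0 = 100.0 + 20.0 * nodes   # tau0 ~ U(80, 120), illustrative units
h0 = 2.0 + 1.0 * nodes        # h0 ~ U(1, 3)

Q = np.array([[yield_stress(t, h) for h in h0] for t in tau0])
W = np.outer(w, w)                                   # tensor-product weights
mean = np.sum(W * Q)
var = np.sum(W * Q**2) - mean**2
print(f"mean = {mean:.2f}, std = {np.sqrt(var):.2f}")
```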


Towards Z-Next: The Integration of Theory, Experiments, and Computational Simulation in a Bayesian Data Assimilation Framework

Maupin, Kathryn A.; Tran, Anh; Lewis, William L.; Knapp, Patrick K.; Joseph, V.R.; Wu, C.F.J.; Glinsky, Michael G.; Valaitis, Sonata V.

Making reliable predictions in the presence of uncertainty is critical to high-consequence modeling and simulation activities, such as those encountered at Sandia National Laboratories. Surrogate or reduced-order models are often used to mitigate the expense of performing quality uncertainty analyses with high-fidelity, physics-based codes. However, phenomenological surrogate models do not always adhere to important physics and system properties. This project develops surrogate models that integrate physical theory with experimental data through a maximally-informative framework that accounts for the many uncertainties present in computational modeling problems. Correlations between relevant outputs are preserved through the use of multi-output or co-predictive surrogate models; known physical properties (specifically monotonicity) are also preserved; and unknown physics and phenomena are detected using a causal analysis. By endowing surrogate models with key properties of the physical system being studied, their predictive power is arguably enhanced, allowing for reliable simulations and analyses at a reduced computational cost.


A Stochastic Reduced-Order Model for Statistical Microstructure Descriptors Evolution

Journal of Computing and Information Science in Engineering

Tran, Anh; Sun, Jing S.; Liu, Dehao L.; Wang, Yan W.; Wildey, Timothy M.

Integrated computational materials engineering (ICME) models have been a crucial building block for modern materials development, relieving heavy reliance on experiments and significantly accelerating the materials design process. However, ICME models are also computationally expensive, particularly with respect to time integration for dynamics, which hinders the ability to study statistical ensembles and thermodynamic properties of large systems for long time scales. To alleviate the computational bottleneck, we propose to model the evolution of statistical microstructure descriptors as a continuous-time stochastic process using a non-linear Langevin equation, where the probability density function (PDF) of the statistical microstructure descriptors, which are also the quantities of interest (QoIs), is modeled by the Fokker–Planck equation. In this work, we discuss how to calibrate the drift and diffusion terms of the Fokker–Planck equation from the theoretical and computational perspectives. The calibrated Fokker–Planck equation can be used as a stochastic reduced-order model to simulate the evolution of the PDF of the statistical microstructure descriptors. Treating the statistical microstructure descriptors as the QoIs, we demonstrate the proposed methodology in three ICME models: kinetic Monte Carlo, phase field, and molecular dynamics simulations.
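
Assuming drift and diffusion terms that have already been calibrated, the forward use of the reduced-order model amounts to integrating the nonlinear Langevin equation; the Euler-Maruyama sketch below uses toy placeholder terms and histograms the ensemble to approximate the Fokker-Planck PDF.

```python
# Euler-Maruyama integration of dX = b(X) dt + s(X) dW; b and s are
# toy placeholders for drift/diffusion terms calibrated from ICME data.
import numpy as np

def drift(x):
    return 0.5 * (1.0 - x)                    # hypothetical calibrated drift

def diffusion(x):
    return 0.1 * np.sqrt(np.abs(x) + 1e-8)    # hypothetical calibrated diffusion

rng = np.random.default_rng(0)
dt, n_steps, n_paths = 1e-2, 1000, 5000
x = np.full(n_paths, 0.2)                     # initial descriptor value (scaled)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x += drift(x) * dt + diffusion(x) * dW

# The ensemble histogram approximates the Fokker-Planck PDF at t = n_steps * dt.
pdf, edges = np.histogram(x, bins=50, density=True)
```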


Orthogonal Polynomials Defined by Self-Similar Measures with Overlaps

Experimental Mathematics

Ngai, Sze M.; Tang, Wei; Tran, Anh; Yuan, Shuai

We study orthogonal polynomials with respect to self-similar measures, focusing on the class of infinite Bernoulli convolutions, which are defined by iterated function systems with overlaps, especially those defined by the Pisot, Garsia, and Salem numbers. Using an algorithm of Mantica, we obtain graphs of the coefficients of the three-term recursion relation defining the orthogonal polynomials. We use these graphs to predict whether the singular infinite Bernoulli convolutions belong to the Nevai class. Based on our numerical results, we conjecture that all infinite Bernoulli convolutions with contraction ratios greater than or equal to 1/2 belong to the Nevai class, regardless of the probability weights assigned to the self-similar measures.
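
For context, the polynomials in question are generated by a three-term recursion, and membership in the Nevai class corresponds to convergence of the recursion coefficients. The sketch below evaluates monic orthogonal polynomials from given coefficients (a_k, b_k); the Chebyshev-like values used here are illustrative stand-ins for coefficients that Mantica's algorithm would produce for a Bernoulli convolution.

```python
# Evaluate monic orthogonal polynomials from a three-term recursion
# p_{k+1}(x) = (x - a_k) p_k(x) - b_k p_{k-1}(x), with p_0 = 1, p_{-1} = 0.
import numpy as np

def monic_orthopoly(x, a, b):
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    polys = [p.copy()]
    for a_k, b_k in zip(a, b):
        p_prev, p = p, (x - a_k) * p - b_k * p_prev
        polys.append(p.copy())
    return np.array(polys)

x = np.linspace(-1.0, 1.0, 201)
a = np.zeros(10)              # illustrative Chebyshev-like coefficients;
b = np.full(10, 0.25)         # Mantica's algorithm would supply these
P = monic_orthopoly(x, a, b)  # for a Bernoulli convolution
```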


Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis (V.6.16 User's Manual)

Adams, Brian H.; Bohnhoff, William B.; Dalbey, Keith D.; Ebeida, Mohamed S.; Eddy, John E.; Eldred, Michael E.; Hooper, Russell H.; Hough, Patricia H.; Hu, Kenneth H.; Jakeman, John J.; Khalil, Mohammad K.; Maupin, Kathryn M.; Monschke, Jason A.; Ridgway, Elliott R.; Rushdi, Ahmad A.; Seidl, Daniel S.; Stephens, John A.; Swiler, Laura P.; Tran, Anh; Winokur, Justin W.

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


srMO-BO-3GP: A sequential regularized multi-objective Bayesian optimization for constrained design applications using an uncertain Pareto classifier

Journal of Mechanical Design

Tran, Anh; Eldred, Michael S.; McCann, Scott M.; Wang, Yan W.

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To extend the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective BO formalism, called srMO-BO-3GP, to solve multi-objective optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each of the GPs is assigned a different task. The first GP is used to approximate a single objective computed from the multi-objective definition, the second GP is used to learn the unknown constraints, and the third one is used to learn the uncertain Pareto frontier. At each iteration, a multi-objective augmented Tchebycheff function is adopted to convert the multi-objective problem to a single objective, and a ridge regularization term is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the convergence and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
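
The scalarization step described above is compact enough to sketch directly; the weights, reference point, and coefficient values below are illustrative, and the ridge penalty on the design variables is one plausible reading of the regularization.

```python
# Augmented Tchebycheff scalarization with an optional ridge penalty;
# weights, ideal point, and coefficients are illustrative.
import numpy as np

def aug_tchebycheff(f, w, z_star, rho=0.05, lam=0.0, x=None):
    """Scalarize objective values f against the ideal point z_star.
    rho weighs the augmentation term; lam adds a ridge penalty on the
    design x to smooth the scalarized landscape (one reading of the
    regularization in the abstract)."""
    d = w * np.abs(f - z_star)
    value = np.max(d) + rho * np.sum(d)
    if lam > 0.0 and x is not None:
        value += lam * float(np.dot(x, x))
    return value

f = np.array([1.2, 0.7])           # objective values at a candidate design
w = np.array([0.5, 0.5])           # preference weights
z_star = np.zeros(2)               # ideal (utopia) point
print(aug_tchebycheff(f, w, z_star, lam=0.01, x=np.array([0.3, -0.2])))
```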


Solving Stochastic Inverse Problems for Structure-Property Linkages Using Data-Consistent Inversion

Minerals, Metals and Materials Series

Tran, Anh; Wildey, Tim

Process-structure-property relationships are the hallmark of materials science. Many integrated computational materials engineering (ICME) models have been developed at multiple length-scales and time-scales, where uncertainty quantification (UQ) plays an important role in quality assurance. In this paper, we apply our previous work [39] to learn a distribution of microstructure features that is consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model (CPFEM) matches a target distribution on materials properties, which is given beforehand. To demonstrate the approach, DAMASK and DREAM.3D are employed to construct the Hall-Petch relationship for a twinning-induced plasticity (TWIP) steel, where the average grain size distribution is inferred, given a distribution of offset yield strength.
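
The data-consistent update itself reduces to reweighting initial samples by the ratio of the observed density to the push-forward density of the QoI. The one-dimensional sketch below substitutes a toy Hall-Petch-like map for the CPFEM model; all distributions and constants are invented.

```python
# Data-consistent inversion in 1D: reweight initial samples by the ratio
# of observed to push-forward densities. Q is a toy Hall-Petch-like map,
# not the CPFEM model; all constants are invented.
import numpy as np
from scipy import stats

def Q(grain_size):
    return 100.0 + 50.0 / np.sqrt(grain_size)   # strength vs. grain size

rng = np.random.default_rng(0)
lam_init = rng.uniform(1.0, 25.0, size=5000)    # initial guess on grain size
q_pred = Q(lam_init)

pi_obs = stats.norm(loc=120.0, scale=3.0)       # target distribution on strength
pi_pred = stats.gaussian_kde(q_pred)            # push-forward density estimate

r = pi_obs.pdf(q_pred) / pi_pred(q_pred)        # data-consistent update ratio
lam_post = rng.choice(lam_init, size=2000, p=r / r.sum())  # consistent samples
```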


Solving Inverse Problems for Process-Structure Linkages Using Asynchronous Parallel Bayesian Optimization

Minerals, Metals and Materials Series

Tran, Anh; Wildey, Tim

Process-structure linkage is one of the most important topics in materials science, because virtually all information related to a material, including its manufacturing processes, lies in the microstructure itself. Therefore, to learn more about the process, one must start by thoroughly examining the microstructure. This gives rise to inverse problems in the context of process-structure linkages, which attempt to identify the processes that were used to manufacture a given microstructure. In this work, we present an inverse problem for structure-process linkages, which we solve using asynchronous parallel Bayesian optimization to exploit parallel computing resources. We demonstrate the effectiveness of the method using a kinetic Monte Carlo model for grain growth simulation.
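
A schematic of the asynchronous loop, with a cheap analytic stand-in for the kinetic Monte Carlo objective: a new candidate is proposed as soon as any worker returns, so no worker idles waiting for a full batch. The GP surrogate and lower-confidence-bound acquisition are deliberately minimal.

```python
# Schematic asynchronous-parallel BO: propose a new point whenever any
# worker finishes. The objective is a cheap stand-in for a kMC run.
import numpy as np
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                        # stand-in for a grain-growth simulation
    return float(np.sum((x - 0.3) ** 2))

def propose(gp, rng, n_cand=256):
    C = rng.uniform(size=(n_cand, 2))
    mu, sd = gp.predict(C, return_std=True)
    return C[np.argmin(mu - 2.0 * sd)]   # lower-confidence-bound acquisition

rng = np.random.default_rng(0)
X = rng.uniform(size=(4, 2))
y = np.array([objective(x) for x in X])
with ThreadPoolExecutor(max_workers=4) as pool:
    running = {pool.submit(objective, x): x for x in rng.uniform(size=(4, 2))}
    for _ in range(20):
        done, _ = wait(running, return_when=FIRST_COMPLETED)
        for fut in done:                 # harvest every finished evaluation
            X = np.vstack([X, running.pop(fut)])
            y = np.append(y, fut.result())
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        x_new = propose(gp, rng)
        running[pool.submit(objective, x_new)] = x_new
print(X[np.argmin(y)], y.min())
```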


Solving Stochastic Inverse Problems for Property–Structure Linkages Using Data-Consistent Inversion and Machine Learning

JOM

Tran, Anh; Wildey, Tim

Determining process–structure–property linkages is one of the key objectives in materials science, and uncertainty quantification plays a critical role in understanding both process–structure and structure–property linkages. In this work, we seek to learn a distribution of microstructure parameters that is consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model matches a target distribution on materials properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification framework based on push-forward probability measures, which combines techniques from measure theory and Bayes' rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden in solving both stochastic forward and stochastic inverse problems, we combine this approach with a machine learning Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure–property linkages.


Scalable3-BO: Big data meets HPC - A scalable asynchronous parallel high-dimensional Bayesian optimization framework on supercomputers

Proceedings of the ASME Design Engineering Technical Conference

Tran, Anh

Bayesian optimization (BO) is a flexible and powerful framework that is suitable for computationally expensive simulation-based applications and guarantees statistical convergence to the global optimum. While it remains one of the most popular optimization methods, its capability is hindered by the size of the data, the dimensionality of the considered problem, and the sequential nature of the optimization. These scalability issues are intertwined and must be tackled simultaneously. In this work, we propose the Scalable3-BO framework, which employs a sparse GP as the underlying surrogate model to cope with Big Data and is equipped with a random embedding to efficiently optimize high-dimensional problems with low effective dimensionality. The Scalable3-BO framework is further equipped with an asynchronous parallelization feature, which fully exploits the computational resources on HPC within a computational budget. As a result, the proposed Scalable3-BO framework is scalable in three independent respects: with respect to data size, dimensionality, and computational resources on HPC. The goal of this work is to push the frontiers of BO beyond its well-known scalability issues and minimize the wall-clock waiting time for optimizing high-dimensional, computationally expensive applications. We demonstrate the capability of Scalable3-BO with 1 million data points on 10,000-dimensional problems, using 20 concurrent workers in an HPC environment.
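
The random-embedding component can be sketched in a few lines, in the spirit of REMBO-style methods: draw a random matrix A, optimize in a low-dimensional space y, and evaluate at x = clip(A y). The dimensions and objective are illustrative, and the sparse-GP and asynchronous pieces are omitted.

```python
# Random-embedding trick (in the spirit of REMBO): optimize in a low-dim
# space y and evaluate at x = clip(A @ y). Dimensions are illustrative.
import numpy as np

D, d = 10_000, 10                        # ambient vs. effective dimensionality
rng = np.random.default_rng(0)
A = rng.standard_normal((D, d)) / np.sqrt(d)

def f_high_dim(x):                       # objective with low effective dimension
    return float(np.sum(x[:5] ** 2))

def f_embedded(y):
    x = np.clip(A @ y, -1.0, 1.0)        # map back into the box [-1, 1]^D
    return f_high_dim(x)

# Any low-dimensional optimizer (e.g., BO over a sparse GP) now runs on y:
y0 = rng.uniform(-1.0, 1.0, size=d)
print(f_embedded(y0))
```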


Multi-fidelity machine-learning with uncertainty quantification and Bayesian optimization for materials design: Application to ternary random alloys

Journal of Chemical Physics

Tran, Anh; Wildey, Timothy M.; Tranchida, Julien G.; Thompson, Aidan P.

We present a scale-bridging approach based on a multi-fidelity (MF) machine-learning (ML) framework that leverages Gaussian processes (GP) to fuse atomistic computational model predictions across multiple levels of fidelity. Through the posterior variance of the MFGP, our framework naturally enables uncertainty quantification, providing estimates of confidence in the predictions. Density functional theory is used for the high-fidelity predictions, while an ML interatomic potential provides the low-fidelity predictions. Practical materials-design efficiency is demonstrated by reproducing the ternary composition dependence of a quantity of interest (bulk modulus) across the full aluminum–niobium–titanium ternary random alloy composition space. The MFGP is then coupled to a Bayesian optimization procedure, and the computational efficiency of this approach is demonstrated by performing an on-the-fly search for the global optimum of the bulk modulus in the ternary composition space. The framework presented in this manuscript is the first application of MFGP to atomistic materials simulations fusing predictions between density functional theory and classical interatomic potential calculations.
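
The abstract does not spell out the MFGP construction, but one standard way to fuse two fidelities with GPs is the Kennedy-O'Hagan autoregressive form f_hi ≈ rho · f_lo + delta, fit sequentially with two ordinary GPs; the sketch below uses synthetic data in place of the interatomic-potential and DFT evaluations.

```python
# Two-fidelity fusion in the Kennedy-O'Hagan autoregressive form
# f_hi(x) ~ rho * f_lo(x) + delta(x), fit with two ordinary GPs.
# Synthetic data stand in for interatomic-potential and DFT outputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_lo = lambda X: np.sin(6.0 * X[:, 0])                  # cheap, biased model
f_hi = lambda X: 1.2 * np.sin(6.0 * X[:, 0]) + 0.3 * X[:, 0]

rng = np.random.default_rng(0)
X_lo, X_hi = rng.uniform(size=(40, 1)), rng.uniform(size=(8, 1))

gp_lo = GaussianProcessRegressor(RBF(0.2), alpha=1e-6).fit(X_lo, f_lo(X_lo))
mu_lo = gp_lo.predict(X_hi)
rho = np.polyfit(mu_lo, f_hi(X_hi), 1)[0]               # fidelity scale factor
gp_d = GaussianProcessRegressor(RBF(0.2), alpha=1e-6).fit(
    X_hi, f_hi(X_hi) - rho * mu_lo)                     # discrepancy GP

Xg = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mu_d, sd_d = gp_d.predict(Xg, return_std=True)          # sd_d feeds UQ / BO
mf_mean = rho * gp_lo.predict(Xg) + mu_d                # multi-fidelity prediction
```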


An active learning high-throughput microstructure calibration framework for solving inverse structure–process problems in materials informatics

Acta Materialia

Tran, Anh; Mitchell, John A.; Swiler, Laura P.; Wildey, Tim

Determining a process–structure–property relationship is the holy grail of materials science, where both computational prediction in the forward direction and materials design in the inverse direction are essential. Problems in materials design are often considered in the context of the process–property linkage by bypassing the materials structure, or in the context of the structure–property linkage as in microstructure-sensitive design problems. However, there has been little research on materials design problems in the context of the process–structure linkage, which has important implications for reverse engineering. In this work, given a target microstructure, we propose an active learning high-throughput microstructure calibration framework to derive a set of processing parameters that can produce an optimal microstructure which is statistically equivalent to the target microstructure. The proposed framework is formulated as a noisy multi-objective optimization problem, where each objective function measures a deterministic or statistical difference of the same microstructure descriptor between a candidate microstructure and a target microstructure. Furthermore, to significantly reduce the wall-clock waiting time, we enable the high-throughput feature of the microstructure calibration framework by adopting asynchronous parallel Bayesian optimization that exploits high-performance computing resources. Case studies in additive manufacturing and grain growth are used to demonstrate the applicability of the proposed framework, where kinetic Monte Carlo (kMC) simulation is used as a forward predictive model, such that for a given target microstructure, the processing parameters that produced it are successfully recovered.
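
One concrete way to phrase the calibration objectives described above is as a statistical distance between descriptors of a candidate and the target microstructure; the sketch below uses a Wasserstein distance between grain-size samples. The descriptor choice and the data are illustrative, not taken from the paper.

```python
# Calibration objective as a statistical descriptor distance: here a
# Wasserstein distance between grain-size samples (illustrative choice).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
target = rng.lognormal(mean=2.0, sigma=0.4, size=4000)   # target grain sizes

def objective(candidate):
    """Noisy objective for the calibration loop: smaller = closer to target."""
    return wasserstein_distance(candidate, target)

candidate = rng.lognormal(mean=2.1, sigma=0.5, size=4000)  # e.g., from a kMC run
print(objective(candidate))
```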


sMF-BO-2CoGP: A sequential multi-fidelity constrained Bayesian optimization framework for design applications

Journal of Computing and Information Science in Engineering

Tran, Anh; Wildey, Tim; McCann, Scott

Bayesian optimization (BO) is an effective surrogate-based method that has been widely used to optimize simulation-based applications. While the traditional Bayesian optimization approach only applies to single-fidelity models, many realistic applications provide multiple levels of fidelity with varying computational complexity and predictive capability. In this work, we propose a multi-fidelity Bayesian optimization method for design applications with both known and unknown constraints. The proposed framework, called sMF-BO-2CoGP, is built on a multi-level CoKriging method to predict the objective function. An external binary classifier, which we approximate using a separate CoKriging model, is used to distinguish between feasible and infeasible regions. The sMF-BO-2CoGP method is demonstrated using a series of analytical examples, and a flip-chip application for design optimization to minimize the deformation due to warping under thermal loading conditions.
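
A hedged sketch of the unknown-constraint handling, with the CoKriging classifier simplified to a single-fidelity GP classifier: the expected-improvement acquisition is weighted by the predicted probability of feasibility. All data and the constraint are synthetic.

```python
# Unknown-constraint handling: weight expected improvement by the
# probability of feasibility from a classifier (single-fidelity GP
# classifier here, simplifying the CoKriging classifier).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

def constrained_ei(Xc, gp, clf, y_best):
    mu, sd = gp.predict(Xc, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (y_best - mu) / sd
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))     # expected improvement (minimization)
    return ei * clf.predict_proba(Xc)[:, 1]       # times P(feasible)

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))
y = np.sum((X - 0.5) ** 2, axis=1)                # synthetic objective data
feasible = (X.sum(axis=1) < 1.0).astype(int)      # synthetic constraint labels

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
clf = GaussianProcessClassifier().fit(X, feasible)
Xc = rng.uniform(size=(512, 2))
x_next = Xc[np.argmax(constrained_ei(Xc, gp, clf, y.min()))]
```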


srMO-BO-3GP: A sequential regularized multi-objective constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Tran, Anh; Eldred, Michael S.; McCann, Scott; Wang, Yan

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To extend the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMO-BO-3GP, to solve MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each of the GPs is assigned a different task: the first GP is used to approximate a single objective computed from the MO definition, the second GP is used to learn the unknown constraints, and the third GP is used to learn the uncertain Pareto frontier. At each iteration, an MO augmented Tchebycheff function is adopted to convert the MO problem to a single objective and is extended with a ridge regularization term to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the richness and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.


Data-driven high-fidelity 2D microstructure reconstruction via non-local patch-based image inpainting

Acta Materialia

Tran, Anh; Tran, Hoang

Microstructure reconstruction problems are usually limited to representations with finitely many phases, e.g., binary or ternary. However, microstructure images obtained experimentally, for example by microscopy, are often represented as RGB or grayscale images. Because the phase-based representation is discrete, more rigid, and less flexible for modeling the microstructure than an RGB or grayscale image, information is lost in the conversion. In this paper, a microstructure reconstruction method that produces images at the fidelity of experimental microscopy, i.e., RGB or grayscale, is proposed without introducing any physics-based microstructure descriptor. Furthermore, the image texture is preserved and the microstructure image is represented with continuous variables (as in RGB or grayscale images) instead of binary or categorical variables, which results in a high-fidelity microstructure reconstruction. The advantage of the proposed method is its quality of reconstruction, and it can be applied to any binary or multiphase 2D microstructure. The proposed method can be thought of as a subsampling approach that expands a microstructure dataset while preserving its image texture. Moreover, the size of the reconstructed image is more flexible than in other machine learning microstructure reconstruction methods, where the size must be fixed beforehand. In addition, the proposed method is capable of joining microstructure images taken at different locations to reconstruct a larger microstructure image. A significant advantage of the proposed method is that it remedies the data-scarcity problem in materials science, where experimental data are scarce and hard to obtain. The proposed method can also be applied to generate statistically equivalent microstructures, which has strong implications for microstructure-related uncertainty quantification applications. The proposed microstructure reconstruction method is demonstrated on the UltraHigh Carbon Steel micrograph DataBase (UHCSDB).
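
The non-local patch search at the heart of exemplar-based inpainting can be sketched simply: for a patch containing missing pixels, scan the rest of the image for a complete patch that best matches the known pixels. This is a drastic simplification of the paper's method, shown on synthetic data.

```python
# Non-local patch search for exemplar-based inpainting: match a damaged
# patch against complete patches elsewhere using only its known pixels.
import numpy as np

def best_patch(image, mask, top_left, size=9, stride=4):
    """Return the best source patch for the size x size patch at top_left
    (mask is True where pixels are missing)."""
    i, j = top_left
    H, W = image.shape
    tgt = image[i:i + size, j:j + size]
    known = ~mask[i:i + size, j:j + size]
    best, best_err = None, np.inf
    for a in range(0, H - size, stride):
        for b in range(0, W - size, stride):
            if mask[a:a + size, b:b + size].any():
                continue                           # skip patches touching the hole
            src = image[a:a + size, b:b + size]
            err = np.sum((src[known] - tgt[known]) ** 2)
            if err < best_err:
                best, best_err = src, err
    return best

rng = np.random.default_rng(0)
img = rng.random((64, 64))                         # stand-in grayscale micrograph
hole = np.zeros_like(img, dtype=bool)
hole[30:36, 30:36] = True                          # synthetic missing region
fill = best_patch(img, hole, (28, 28))             # candidate fill for the patch
```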


WearGP: A computationally efficient machine learning framework for local erosive wear predictions via nodal Gaussian processes

Wear

Tran, Anh; Furlan, John M.; Pagalthivarthi, Krishnan V.; Visintainer, Robert J.; Wildey, Tim; Wang, Yan

Computational fluid dynamics (CFD)-based wear predictions are computationally expensive to evaluate, even with a high-performance computing infrastructure. Thus, it is difficult to provide accurate local wear predictions in a timely manner. Data-driven approaches provide a more computationally efficient way to approximate CFD wear predictions without running the actual CFD wear models. In this paper, a machine learning (ML) approach, termed WearGP, is presented to approximate 3D local wear predictions, using numerical wear predictions from steady-state CFD simulations as training and testing datasets. The proposed framework is built on Gaussian processes (GP) and is used to predict wear in a much shorter time. The WearGP framework can be segmented into three stages. In the first stage, the training dataset is built using a number of CFD simulations on the order of O(10^2). In the second stage, the data cleansing and data mining processes are performed, where the nodal wear solutions are extracted from the solution database to build a training dataset. In the third stage, the wear predictions are made using the trained GP models. Two CFD case studies, a 3D slurry pump impeller and a casing, are used to demonstrate the WearGP framework, in which 144 training and 40 testing data points are used to train and test the proposed method, respectively. The numerical accuracy, computational efficiency, and effectiveness of the WearGP framework and the CFD wear model are compared for both slurry pump impellers and casings. It is shown that the WearGP framework can achieve highly accurate results that are comparable with the CFD results, with a relatively small training dataset and a computational time reduction on the order of 10^5 to 10^6.
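
The nodal-GP idea in miniature, with synthetic stand-ins for the CFD training data: one independent GP per mesh node, each mapping operating conditions to that node's wear, so a full wear field can be predicted almost instantly once trained.

```python
# Nodal GPs in miniature: one GP per mesh node mapping operating
# conditions to that node's wear. All data here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
n_train, n_nodes = 40, 200
X = rng.uniform(size=(n_train, 3))     # e.g., flow rate, concentration, particle size
wear = np.abs(X @ rng.random((3, n_nodes))
              + 0.05 * rng.standard_normal((n_train, n_nodes)))

models = [GaussianProcessRegressor(normalize_y=True).fit(X, wear[:, k])
          for k in range(n_nodes)]

x_new = rng.uniform(size=(1, 3))       # a new operating condition
wear_field = np.array([m.predict(x_new)[0] for m in models])  # per-node wear
```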


SBF-BO-2CoGP: A sequential bi-fidelity constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Tran, Anh; Wildey, Tim; McCann, Scott

Bayesian optimization is an effective surrogate-based optimization method that has been widely used for simulation-based applications. However, the traditional Bayesian optimization (BO) method is only applicable to single-fidelity applications, whereas multiple levels of fidelity exist in reality. In this work, we propose a bi-fidelity known/unknown constrained Bayesian optimization method for design applications. The proposed framework, called sBF-BO-2CoGP, is built on a two-level CoKriging method to predict the objective function. An external binary classifier, which is another CoKriging model, is used to distinguish between feasible and infeasible regions. The sBF-BO-2CoGP method is demonstrated using a numerical example and a flip-chip application for design optimization to minimize the warpage deformation under thermal loading conditions.
