Publications

Results 1–25 of 108

Multi-fidelity Uncertainty Quantification for Homogenization Problems in Structure-Property Relationships from Crystal Plasticity Finite Elements

JOM

Laros, James H.; Robbe, Pieterjan; Lim, Hojun L.; Rodgers, Theron R.

The crystal plasticity finite element method (CPFEM) has been an integrated computational materials engineering (ICME) workhorse for studying material behavior and structure-property relationships over the last few decades. These relationships are mappings from the microstructure space to the material property space. Due to the stochastic nature of microstructures, there is always some uncertainty associated with material properties, for example, in homogenized stress-strain curves. For critical applications with strong reliability needs, it is often desirable to quantify the microstructure-induced uncertainty in the context of structure-property relationships. However, this uncertainty quantification (UQ) problem often incurs a large computational cost because many statistically equivalent representative volume elements (SERVEs) are needed. In this article, we apply a multi-level Monte Carlo (MLMC) method to CPFEM to study the uncertainty in stress-strain curves, given an ensemble of SERVEs at multiple mesh resolutions. By using the information from coarse meshes, we show that it is possible to approximate the response at fine meshes at a much reduced computational cost. We focus on problems where the model output is multi-dimensional, which requires tracking multiple quantities of interest (QoIs) at the same time. Our numerical results show that MLMC can accelerate UQ tasks by about 2.23× compared to the classical Monte Carlo (MC) method, widely known as ensemble averaging in the CPFEM literature.

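The multi-level estimator described in this abstract combines many cheap coarse-mesh evaluations with a few expensive fine-mesh corrections through a telescoping sum. Below is a minimal sketch of that idea in Python; run_cpfem and its per-level behavior are hypothetical stand-ins for a CPFEM homogenization run at increasing mesh resolution, not the authors' implementation, and the sample allocation is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_cpfem(level, sample):
    """Hypothetical stand-in for a CPFEM homogenization run on mesh level `level`.
    The fake QoI (e.g., a homogenized stress) has a discretization bias that
    shrinks as the mesh is refined, while the cost (not modeled here) grows."""
    bias = 0.5 ** level
    return 100.0 + bias * sample

def mlmc_estimate(samples_per_level):
    """Standard MLMC telescoping estimator:
    E[Q_L] ~= E[Q_0] + sum_l E[Q_l - Q_{l-1}], each term averaged independently."""
    estimate = 0.0
    for level, n_samples in enumerate(samples_per_level):
        corrections = []
        for _ in range(n_samples):
            xi = rng.standard_normal()  # one microstructure realization (SERVE), shared by both levels
            q_fine = run_cpfem(level, xi)
            q_coarse = run_cpfem(level - 1, xi) if level > 0 else 0.0
            corrections.append(q_fine - q_coarse)
        estimate += np.mean(corrections)
    return estimate

# Many samples on the cheap coarse level, few on the expensive fine levels.
print(mlmc_estimate(samples_per_level=[1000, 100, 10]))
```

Coupling the coarse and fine runs to the same microstructure realization is what makes the correction terms low-variance, so only a handful of fine-mesh solves are needed; for a multi-dimensional QoI such as a stress-strain curve, the same averaging would be applied component-wise.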

Microstructure-Based Modeling of Laser Beam Shaping During Additive Manufacturing

JOM

Moore, Robert M.; Orlandi, Giovanni; Rodgers, Theron R.; Moser, Daniel M.; Murdoch, Heather; Abdeljawad, Fadi

Recent experimental studies suggest the use of spatially extended laser beam profiles as a strategy to control the melt pool during laser powder bed fusion (LPBF) additive manufacturing. However, the linkages connecting laser beam profiles to thermal fields and the resultant microstructures have not been established. Herein, we employ a coupled thermal transport-Monte Carlo model to predict the evolution of temperature fields and grain microstructures during LPBF using Gaussian, ring, and Bessel beam profiles. Simulation results reveal that the ring-shaped beam yields lower temperatures than the Gaussian beam. Owing to the smaller melt pool produced by the Bessel beam, the resulting grains are smaller and more equiaxed than those produced with the Gaussian and ring beams. Our approach provides future avenues for predicting the impact of laser beam shaping on microstructure development during LPBF.

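The three beam shapes compared in this abstract have simple closed-form intensity profiles. The sketch below evaluates illustrative Gaussian, ring (annular Gaussian), and apodized Bessel profiles on a radial grid and normalizes each to the same total power; the functional forms and all parameter values are generic assumptions for illustration, not the heat-source model used in the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j0

r = np.linspace(0.0, 200e-6, 2001)        # radial coordinate [m]
power = 200.0                              # nominal laser power [W] (assumed)
w0, r_ring, k_r = 50e-6, 75e-6, 1.0e5      # beam width [m], ring radius [m], radial wavenumber [1/m] (assumed)

def normalize(intensity):
    """Scale an axisymmetric profile so its radially integrated power equals `power`."""
    total = trapezoid(intensity * 2.0 * np.pi * r, r)
    return intensity * power / total

gaussian = normalize(np.exp(-2.0 * r**2 / w0**2))
ring     = normalize(np.exp(-2.0 * (r - r_ring)**2 / w0**2))
bessel   = normalize(j0(k_r * r)**2 * np.exp(-2.0 * r**2 / (4.0 * w0)**2))  # Gaussian apodization keeps the power finite

for name, profile in [("Gaussian", gaussian), ("ring", ring), ("Bessel", bessel)]:
    print(f"{name:8s} peak intensity: {profile.max():.3e} W/m^2")
```

Profiles of this kind enter a coupled thermal model as the surface heat flux, which is how beam shaping influences the predicted temperature fields and microstructures.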

Calibration of thermal spray microstructure simulations using Bayesian optimization

Computational Materials Science

Montes de Oca Zapiain, David M.; Laros, James H.; Moore, Nathan W.; Rodgers, Theron R.

Thermal spray deposition is an inherently stochastic manufacturing process used to generate thick coatings of metals, ceramics, and composites. The resulting coatings exhibit hierarchically complex internal structures that affect the overall properties of the coating. The deposition process can be adequately simulated using rules-based process simulations. However, for the simulation to accurately model particle spreading upon deposition, a set of predefined rules and parameters must be calibrated to the specific material and processing conditions of interest. The calibration is not trivial because many parameters do not correspond directly to experimentally measurable quantities. This work presents a protocol that automatically calibrates the parameters and rules of a given simulation so that the synthetic microstructures it generates have statistics as close as possible to those of an experimentally produced coating. Specifically, the protocol is developed for tantalum coatings prepared by air plasma spray. It begins by quantifying the internal structure using 2-point statistics and then represents it in a low-dimensional space using Principal Component Analysis. Finally, the protocol leverages Bayesian optimization to determine the parameters that minimize the distance between the synthetic microstructure and the experimental coating in that low-dimensional space.

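The calibration loop described in this abstract reduces each microstructure to low-dimensional statistics and then searches the simulation's parameter space with Bayesian optimization. The sketch below illustrates that pipeline end to end on synthetic binary (pore/solid) images: FFT-based 2-point autocorrelations, a PCA projection, and a scikit-optimize search. The simulate_coating function and its two parameters are hypothetical stand-ins for the rules-based spray simulation, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from skopt import gp_minimize

rng = np.random.default_rng(1)

def two_point_autocorrelation(image):
    """FFT-based 2-point autocorrelation of a binary microstructure image."""
    f = np.fft.fft2(image)
    corr = np.fft.ifft2(f * np.conj(f)).real / image.size
    return np.fft.fftshift(corr).ravel()

def simulate_coating(porosity, splat_size):
    """Hypothetical stand-in for the rules-based spray simulation: smoothed random
    noise thresholded so that roughly `porosity` of the pixels are pores (value 1)."""
    noise = rng.random((64, 64))
    kernel = np.ones((int(splat_size), int(splat_size))) / int(splat_size) ** 2
    smooth = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(kernel, s=noise.shape)))
    return (smooth < np.quantile(smooth, porosity)).astype(float)

# "Experimental" target microstructure (here just another synthetic image).
target = simulate_coating(porosity=0.15, splat_size=5)

# Low-dimensional representation: PCA basis built from an ensemble of candidates.
ensemble = [two_point_autocorrelation(simulate_coating(p, s))
            for p in np.linspace(0.05, 0.3, 6) for s in (3, 5, 7)]
pca = PCA(n_components=3).fit(np.array(ensemble))
target_score = pca.transform(two_point_autocorrelation(target)[None, :])[0]

def objective(params):
    """Distance in PC space between a candidate microstructure and the target."""
    porosity, splat_size = params
    stats = two_point_autocorrelation(simulate_coating(porosity, splat_size))
    return float(np.linalg.norm(pca.transform(stats[None, :])[0] - target_score))

result = gp_minimize(objective, dimensions=[(0.05, 0.3), (2, 10)], n_calls=25, random_state=0)
print("calibrated parameters:", result.x)
```

Because the simulation is stochastic, the objective is noisy; a Gaussian-process-based optimizer tolerates this, which is one reason Bayesian optimization suits this kind of calibration.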

Parallel simulation via SPPARKS of on-lattice kinetic and Metropolis Monte Carlo models for materials processing

Modelling and Simulation in Materials Science and Engineering

Mitchell, John A.; Abdeljawad, Fadi; Battaile, Corbett C.; Garcia-Cardona, Cristina; Holm, Elizabeth A.; Homer, Eric R.; Madison, Jonathan D.; Rodgers, Theron R.; Thompson, Aidan P.; Tikare, Veena; Webb, Ed; Plimpton, Steven J.

SPPARKS is an open-source parallel simulation code for developing and running various kinds of on-lattice Monte Carlo models at the atomic or meso scales. It can be used to study the properties of solid-state materials as well as model their dynamic evolution during processing. The modular nature of the code allows new models and diagnostic computations to be added without modification to its core functionality, including its parallel algorithms. A variety of models for microstructural evolution (grain growth), solid-state diffusion, thin film deposition, and additive manufacturing (AM) processes are included in the code. SPPARKS can also be used to implement grid-based algorithms such as phase field or cellular automata models, to run either in tandem with a Monte Carlo method or independently. For very large systems such as AM applications, the Stitch I/O library is included, which enables only a small portion of a huge system to be resident in memory. In this paper we describe SPPARKS and its parallel algorithms and performance, explain how new Monte Carlo models can be added, and highlight a variety of applications which have been developed within the code.

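SPPARKS itself is driven by input scripts and modular C++ app classes, so rather than guess at that syntax, the sketch below shows the kind of on-lattice Metropolis Monte Carlo model it parallelizes: a small Potts-model grain-growth simulation written as a plain Python loop over random site flips. The lattice size, number of grain labels, and temperature are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(2)

L, Q, kT = 64, 20, 0.5                     # lattice size, number of grain labels, temperature
spins = rng.integers(0, Q, size=(L, L))    # random initial grain structure

def site_energy(spins, i, j):
    """Potts bond energy at one site: the number of unlike nearest neighbors (periodic boundaries)."""
    s = spins[i, j]
    neighbors = (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                 spins[i, (j + 1) % L], spins[i, (j - 1) % L])
    return sum(1 for n in neighbors if n != s)

def metropolis_sweep(spins):
    """One Monte Carlo sweep: L*L random site visits with Metropolis acceptance."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        old_spin = spins[i, j]
        e_old = site_energy(spins, i, j)
        spins[i, j] = rng.integers(0, Q)   # propose a new grain label
        dE = site_energy(spins, i, j) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / kT):
            spins[i, j] = old_spin         # reject: restore the old label
    return spins

for _ in range(20):
    spins = metropolis_sweep(spins)

boundary_energy = sum(site_energy(spins, i, j) for i in range(L) for j in range(L))
print("total boundary energy after 20 sweeps:", boundary_energy)
```

Grain growth shows up as a steady decrease in total boundary energy; SPPARKS runs the same class of models in parallel, with both kinetic and Metropolis solvers, on far larger lattices.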

Monotonic Gaussian Process for Physics-Constrained Machine Learning With Materials Science Applications

Journal of Computing and Information Science in Engineering

Laros, James H.; Maupin, Kathryn A.; Rodgers, Theron R.

Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three different materials datasets: one experimental and two computational. The monotonic GP is compared against the regular GP, and a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonic effect fades as one moves beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy and monotonicity is supported by strong physical evidence.

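Monotonic GPs are usually constructed by conditioning on virtual derivative observations; as a much simpler, self-contained illustration of the idea, the sketch below fits a regular GP to noisy, monotone synthetic data (a stress-strain-like hardening curve) and then projects each posterior draw onto the nearest non-decreasing function with isotonic regression, comparing the pointwise spread of the two ensembles. This is a crude stand-in for the authors' formulation, not a reproduction of it, and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)

# Noisy, monotonically increasing synthetic data (a hardening-like curve).
x_train = np.sort(rng.uniform(0.0, 0.1, size=15))
y_train = 300.0 * np.sqrt(x_train) + rng.normal(0.0, 5.0, size=x_train.size)

kernel = RBF(length_scale=0.02) + WhiteKernel(noise_level=25.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_train[:, None], y_train)

x_test = np.linspace(0.0, 0.12, 200)
draws = gp.sample_y(x_test[:, None], n_samples=200, random_state=0)   # shape (points, draws)

# Project every posterior draw onto the nearest non-decreasing function.
iso = IsotonicRegression(increasing=True)
monotone_draws = np.column_stack([iso.fit_transform(x_test, draws[:, k])
                                  for k in range(draws.shape[1])])

print("mean pointwise std, regular GP  :", draws.std(axis=1).mean())
print("mean pointwise std, monotone GP :", monotone_draws.std(axis=1).mean())
```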

Multiscale analysis in solids with unseparated scales: fine-scale recovery, error estimation, and coarse-scale adaptivity

International Journal of Theoretical and Applied Multiscale Mechanics

Bishop, Joseph E.; Brown, Judith A.; Rodgers, Theron R.

There are several engineering applications in which the assumptions of homogenization and scale separation may be violated, in particular for metallic structures built by additive manufacturing. Instead of resorting to direct numerical simulation of the macroscale system with an embedded fine scale, an alternative approach is to use an approximate macroscale constitutive model, estimate the model-form error using a posteriori error estimation techniques, and then adapt the macroscale model to reduce the error for a given boundary value problem and quantity of interest. Here, we investigate this approach to multiscale analysis in solids with unseparated scales using the example of an additively manufactured metallic structure with a polycrystalline microstructure that is neither periodic nor statistically homogeneous. As a first step toward the general nonlinear case, we focus here on linear elasticity, in which each grain within the polycrystal is linear elastic but anisotropic.

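Since each grain here is linear elastic but anisotropic, the simplest approximate macroscale constitutive model is a homogenized stiffness. As a small worked example (not the paper's error-estimation or adaptivity method), the sketch below assembles a cubic single-crystal stiffness tensor from assumed elastic constants, rotates it through random grain orientations, and forms the Voigt (uniform-strain) average over the polycrystal.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Assumed cubic single-crystal constants in GPa (roughly copper-like), for illustration only.
c11, c12, c44 = 168.0, 121.0, 75.0

# Build the 4th-order cubic stiffness tensor C_ijkl.
d = np.eye(3)
C = (c12 * np.einsum('ij,kl->ijkl', d, d)
     + c44 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))
for a in range(3):
    C[a, a, a, a] += c11 - c12 - 2.0 * c44   # cubic anisotropy correction

def rotate_stiffness(C, R):
    """Rotate the 4th-order stiffness tensor from the crystal frame to the sample frame."""
    return np.einsum('pi,qj,rk,sl,ijkl->pqrs', R, R, R, R, C)

# Voigt (uniform-strain) average over many randomly oriented grains.
n_grains = 500
C_voigt = np.zeros_like(C)
for R in Rotation.random(n_grains, random_state=4).as_matrix():
    C_voigt += rotate_stiffness(C, R)
C_voigt /= n_grains

print("single-crystal C_1111 [GPa]:", C[0, 0, 0, 0])
print("Voigt-average  C_1111 [GPa]:", round(C_voigt[0, 0, 0, 0], 1))
print("Voigt-average  C_1212 [GPa]:", round(C_voigt[0, 1, 0, 1], 1))
```

The gap between such simple averages (Voigt, Reuss) and the true response of a particular non-periodic microstructure is exactly the kind of model-form error the adaptive approach described above is meant to estimate and control.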

Integrated computational materials engineering with monotonic Gaussian processes

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.; Maupin, Kathryn A.; Rodgers, Theron R.

Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on two different materials datasets: one experimental and one computational. The monotonic GP is compared against the regular GP, and a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonic effect fades as one moves beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy or the dimensionality is high, and where monotonicity is supported by strong physical reasoning.
