Laser powder bed fusion (LPBF) additive manufacturing produces near-net-shape parts with reduced material cost and lead time, making it a promising technology for fabricating Ti-6Al-4V, a titanium alloy widely used in the aerospace and medical industries. However, LPBF Ti-6Al-4V parts produced with a 67° rotation between layers, a scan strategy commonly used to reduce microstructure and property inhomogeneity, exhibit grain morphologies and weak crystallographic textures that vary with processing parameters. This study predicts LPBF Ti-6Al-4V solidification at three energy levels using a coupled finite-difference/Monte Carlo method and validates the simulations against large-area electron backscatter diffraction (EBSD) scans. The model correctly predicts that a 〈001〉 texture parallel to the build direction forms at low energy and a 〈111〉 texture forms at higher energies, although the predicted textures are weaker than those observed by EBSD. A method combining spatial correlations with a generalized spherical harmonics (GSH) representation of texture is developed to calculate a difference score between simulations and experiments. This quantitative comparison enables effective fine-tuning of the nucleation density (N0) input, which shows a nonlinear relationship with increasing energy level. Future improvements to the texture prediction code and a more comprehensive study of N0 across energy levels will further advance the optimization of LPBF Ti-6Al-4V components. These developments contribute new understanding of crystallographic texture formation in LPBF Ti-6Al-4V, establish a robust model validation and calibration pipeline, and provide a platform for mechanical property prediction and process parameter optimization.
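To make the comparison metric concrete, here is a minimal sketch of a difference score built from FFT-based spatial autocorrelations of per-pixel GSH coefficient fields. The stand-in random fields, the 15-coefficient truncation, and the plain L2 norm are illustrative assumptions; the actual pipeline computes GSH coefficients from measured and simulated orientations.

```python
import numpy as np

def correlation_vector(gsh_field):
    """Stack FFT-based spatial autocorrelations of each GSH coefficient
    channel into one feature vector (2-point statistics of the
    orientation field)."""
    nx, ny, k = gsh_field.shape
    feats = []
    for c in range(k):
        F = np.fft.fft2(gsh_field[:, :, c])
        feats.append((np.fft.ifft2(F * np.conj(F)) / (nx * ny)).ravel())
    return np.concatenate(feats)

def texture_difference(sim_field, exp_field):
    """Scalar difference score between two orientation maps."""
    d = correlation_vector(sim_field) - correlation_vector(exp_field)
    return float(np.linalg.norm(d))

# Stand-in data: in practice, per-pixel GSH coefficients are computed
# from EBSD or simulated Euler angles with a texture package.
rng = np.random.default_rng(0)
sim_f = rng.standard_normal((64, 64, 15)) + 1j * rng.standard_normal((64, 64, 15))
exp_f = rng.standard_normal((64, 64, 15)) + 1j * rng.standard_normal((64, 64, 15))
print(texture_difference(sim_f, exp_f))
```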
The crystal plasticity finite element method (CPFEM) has been a workhorse of integrated computational materials engineering (ICME) for studying material behavior and structure-property relationships over the last few decades. These relationships are mappings from the microstructure space to the materials property space. Due to the stochastic nature of microstructures, there is always some uncertainty associated with the predicted properties, for example in homogenized stress-strain curves. For critical applications with strong reliability needs, it is often desirable to quantify this microstructure-induced uncertainty in the context of structure-property relationships. However, this uncertainty quantification (UQ) problem often incurs a large computational cost, because many statistically equivalent representative volume elements (SERVEs) are needed. In this article, we apply a multi-level Monte Carlo (MLMC) method to CPFEM to study the uncertainty in stress-strain curves, given an ensemble of SERVEs at multiple mesh resolutions. By exploiting information from coarse meshes, we show that it is possible to approximate the response at fine meshes at a much reduced computational cost. We focus on problems where the model output is multi-dimensional, which requires tracking multiple quantities of interest (QoIs) simultaneously. Our numerical results show that MLMC accelerates the UQ task by roughly 2.23× compared to the classical Monte Carlo (MC) method, widely known as ensemble averaging in the CPFEM literature.
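A minimal sketch of the MLMC telescoping estimator for a vector-valued QoI is shown below. The `sample_qoi` callable is a placeholder for a CPFEM solve on a SERVE at a given mesh level; `toy_qoi` and the sample counts are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mlmc_estimate(sample_qoi, n_samples):
    """Multi-level Monte Carlo estimate of a vector-valued QoI via the
    telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].

    sample_qoi(level, rng) -> QoI array from one SERVE realization at
    mesh `level` (placeholder for a CPFEM solve); n_samples lists the
    sample count per level, coarse to fine."""
    seeder = np.random.default_rng(0)
    est = 0.0
    for level, n in enumerate(n_samples):
        diffs = []
        for _ in range(n):
            seed = seeder.integers(2**32)
            fine = sample_qoi(level, np.random.default_rng(seed))
            if level == 0:
                diffs.append(fine)
            else:
                # Same seed -> same SERVE realization on the coarser
                # mesh, so the level difference has small variance.
                coarse = sample_qoi(level - 1, np.random.default_rng(seed))
                diffs.append(fine - coarse)
        est = est + np.mean(diffs, axis=0)
    return est

# Toy stand-in for a CPFEM solve: bias shrinks as the mesh refines.
def toy_qoi(level, rng):
    return np.array([1.0, 2.0]) + 2.0**-level + 0.1 * rng.standard_normal(2)

print(mlmc_estimate(toy_qoi, n_samples=[1000, 100, 10]))  # few fine-mesh solves
```

The cost savings come from the last line: most samples are taken on the cheap coarse mesh, while only a handful of expensive fine-mesh solves are needed to correct the bias.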
Recent experimental studies suggest the use of spatially extended laser beam profiles as a strategy to control the melt pool during laser powder bed fusion (LPBF) additive manufacturing. However, linkages connecting laser beam profiles to thermal fields and the resulting microstructures have not been established. Herein, we employ a coupled thermal transport-Monte Carlo model to predict the evolution of temperature fields and grain microstructures during LPBF with Gaussian, ring, and Bessel beam profiles. Simulation results reveal that the ring-shaped beam yields lower temperatures than the Gaussian beam. Owing to the small melt pool produced by the Bessel beam, the resulting grains are smaller and more equiaxed than those produced by the Gaussian and ring beams. Our approach opens future avenues for predicting the impact of laser beam shaping on microstructure development during LPBF.
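For concreteness, the three idealized beam intensity profiles can be written as below. The functional forms are textbook approximations (a Gaussian, an annular Gaussian, and a J0² Bessel profile), and the radii and radial wavenumber are assumed values, not the parameters used in the simulations.

```python
import numpy as np
from scipy.special import j0

def gaussian_beam(r, w0):
    """Gaussian intensity profile with 1/e^2 radius w0."""
    return np.exp(-2.0 * r**2 / w0**2)

def ring_beam(r, r0, w0):
    """Annular (ring) profile: a Gaussian shifted to radius r0."""
    return np.exp(-2.0 * (r - r0)**2 / w0**2)

def bessel_beam(r, kr):
    """Ideal Bessel beam intensity, proportional to J0(kr*r)^2."""
    return j0(kr * r)**2

r = np.linspace(0.0, 200e-6, 400)  # radial coordinate, m (assumed range)
profiles = {
    "gaussian": gaussian_beam(r, 50e-6),
    "ring": ring_beam(r, 75e-6, 25e-6),
    "bessel": bessel_beam(r, 1.0e5),
}
# Normalize each profile to unit power over the annular area elements,
# so the same laser power is distributed very differently in space.
dr = r[1] - r[0]
for name, I in profiles.items():
    profiles[name] = I / (I * 2.0 * np.pi * r * dr).sum()
```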
Thermal spray deposition is an inherently stochastic manufacturing process used to generate thick coatings of metals, ceramics, and composites. The resulting coatings exhibit hierarchically complex internal structures that affect their overall properties. The deposition process can be adequately simulated using rules-based process simulations; however, for a simulation to accurately model particle spreading upon deposition, a set of predefined rules and parameters needs to be calibrated to the specific material and processing conditions of interest. The calibration is not trivial, given that many parameters do not correspond directly to experimentally measurable quantities. This work presents a protocol that automatically calibrates the parameters and rules of a given simulation to generate synthetic microstructures whose statistics most closely match an experimentally produced coating; specifically, the protocol is demonstrated on tantalum coatings prepared by air plasma spray. The protocol starts by quantifying the internal structure using 2-point statistics and representing it in a low-dimensional space via Principal Component Analysis. It then leverages Bayesian optimization to determine the parameters that minimize the distance, in this low-dimensional space, between the synthetic microstructure and the experimental coating.
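A minimal sketch of the calibration loop follows, under stated assumptions: `run_spray_sim` is a hypothetical stand-in for the rules-based deposition simulation, the experimental micrographs are random placeholders, and scikit-optimize's `gp_minimize` stands in for the Bayesian optimizer.

```python
import numpy as np
from sklearn.decomposition import PCA
from skopt import gp_minimize  # scikit-optimize

def two_point_autocorr(img):
    """FFT-based 2-point autocorrelation of a segmented micrograph."""
    f = np.fft.fftn(img.astype(float))
    return np.fft.ifftn(f * np.conj(f)).real.ravel() / img.size

def run_spray_sim(params):
    """Hypothetical stand-in for the rules-based deposition simulation;
    a real run would return a segmented image of the simulated coating."""
    rng = np.random.default_rng(int(1e6 * sum(params)) % 2**32)
    return rng.random((64, 64)) > 0.5

# Fit a PCA basis to 2-point statistics of experimental micrographs
# (random placeholders here) and take their centroid as the target.
rng = np.random.default_rng(0)
exp_imgs = [rng.random((64, 64)) > 0.5 for _ in range(10)]
stats = np.array([two_point_autocorr(im) for im in exp_imgs])
pca = PCA(n_components=5).fit(stats)
target = pca.transform(stats).mean(axis=0)

def objective(params):
    """Distance in PC space between a synthetic coating generated
    with `params` and the experimental target."""
    z = pca.transform(two_point_autocorr(run_spray_sim(params))[None, :])[0]
    return float(np.linalg.norm(z - target))

res = gp_minimize(objective, dimensions=[(0.1, 2.0)] * 3, n_calls=15, random_state=0)
print(res.x, res.fun)
```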
Heterogeneous materials under shock compression can be expected to reach different shock states throughout the material, according to local differences in microstructure and the history of wave propagation. Here, a compact, multiple-beam focusing optic assembly is used with high-speed velocimetry to interrogate the shock response of porous tantalum films prepared through thermal spray deposition. The distribution of particle velocities across a shocked interface is compared to results obtained using a set of defocused interferometric beams that sampled the shock response over larger areas. The two methods produced velocity distributions along the shock plateau with the same mean, while a larger variance was measured with the narrower beams. This finding was replicated, across several impact velocities, by three-dimensional, mesoscopically resolved hydrodynamics simulations of solid tantalum with a pore structure mimicking statistical attributes of the material and accounting for the radial divergence of the beams. Accounting for pore morphology in the simulations was found to be necessary to replicate the rise time of the shock plateau. The validated simulations were then used to show that the average velocity along the shock plateau can be determined accurately with only a few interferometric beams. Accurately determining the width of the velocity distribution, which here was approximately Gaussian, instead required a beam dimension much smaller than the spatial correlation length scale of the velocity field (here by a factor of ∼30), with implications for the study of other porous materials.
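The beam-size effect can be illustrated numerically: averaging a correlated random field over circular footprints of increasing radius preserves the mean but shrinks the measured variance once the footprint exceeds the correlation length. The sketch below uses a toy Gaussian random field, not the hydrodynamics simulations used in the study; the grid size, correlation length, and probe counts are assumed values.

```python
import numpy as np

def gaussian_random_field(n, corr_len, rng):
    """Periodic 2-D Gaussian random field with a Gaussian correlation
    function of length `corr_len` (grid units), unit variance."""
    k2 = np.fft.fftfreq(n)[:, None]**2 + np.fft.fftfreq(n)[None, :]**2
    amp = np.exp(-(np.pi * corr_len)**2 * k2)  # sqrt of Gaussian spectrum
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.fft.ifft2(noise * amp).real
    return field / field.std()

def beam_averaged_std(field, beam_radius, n_probes, rng):
    """Std. dev. of the field averaged over circular beam footprints."""
    n = field.shape[0]
    y, x = np.ogrid[:n, :n]
    samples = []
    for _ in range(n_probes):
        cx, cy = rng.integers(beam_radius, n - beam_radius, size=2)
        mask = (x - cx)**2 + (y - cy)**2 <= beam_radius**2
        samples.append(field[mask].mean())
    return float(np.std(samples))

rng = np.random.default_rng(1)
field = gaussian_random_field(512, corr_len=30, rng=rng)
for radius in (2, 10, 30, 90):  # beam radius in grid units
    # The full variance is recovered only when the beam is much smaller
    # than the correlation length; large beams average it away.
    print(radius, beam_averaged_std(field, radius, 500, rng))
```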
Carbon capture is essential to meeting climate change mitigation goals. One approach currently being commercialized uses liquid solvents to capture CO2 directly from the atmosphere, but it is limited by the slow absorption of CO2 into the liquid. Improved air/solvent mixing increases the CO2 absorption rate, and the improved absorption efficiency allows for smaller carbon capture systems with lower capital costs and better economic viability. In this project, we study the use of passive micromixers fabricated by metal additive manufacturing: the micromixer's small-scale surface geometric features perturb and mix the liquid film to enhance mass transfer and CO2 absorption. We evaluated this hypothesis through computational and experimental studies. The computational investigations focused on developing capabilities to simulate thin-film (~100 μm) fluid flow on rough surfaces; such thin films lie in a surface-tension-dominated regime, where simulations are prone to instabilities. Improvements to the Nalu code completed in this project yielded a 10× improvement in time-step stability for these problems.
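As a rough illustration of why this regime is stiff, the commonly used Brackbill-Kothe-Zemach surface-tension time-step restriction can be evaluated for grid spacings fine enough to resolve a ~100 μm film. The water/air property values below are generic assumptions, not the project's solvent, and this is a back-of-the-envelope estimate rather than the Nalu stability analysis.

```python
import numpy as np

# Brackbill-Kothe-Zemach explicit surface-tension time-step restriction:
#   dt <= sqrt((rho1 + rho2) * dx**3 / (4 * pi * sigma))
# Generic water/air property values (assumed, not the project solvent).
rho_liq, rho_gas = 1000.0, 1.2  # kg/m^3
sigma = 0.072                   # N/m
for dx in (10e-6, 5e-6, 2e-6):  # grid spacings resolving a ~100 um film
    dt = np.sqrt((rho_liq + rho_gas) * dx**3 / (4.0 * np.pi * sigma))
    print(f"dx = {dx*1e6:4.0f} um  ->  dt <= {dt:.2e} s")
```

The sub-microsecond time steps this yields explain why even a modest improvement in stable step size matters for these simulations.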
SPPARKS is an open-source parallel simulation code for developing and running various kinds of on-lattice Monte Carlo models at the atomic or mesoscale. It can be used to study the properties of solid-state materials as well as to model their dynamic evolution during processing. The modular nature of the code allows new models and diagnostic computations to be added without modification to its core functionality, including its parallel algorithms. A variety of models for microstructural evolution (grain growth), solid-state diffusion, thin-film deposition, and additive manufacturing (AM) processes are included in the code. SPPARKS can also be used to implement grid-based algorithms such as phase field or cellular automata models, run either in tandem with a Monte Carlo method or independently. For very large systems, such as AM applications, the included Stitch I/O library enables only a small portion of a huge system to be resident in memory. In this paper we describe SPPARKS and its parallel algorithms and performance, explain how new Monte Carlo models can be added, and highlight a variety of applications that have been developed within the code.
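To illustrate the class of model SPPARKS runs, here is a serial toy implementation of one Metropolis sweep of a 2-D Potts grain-growth model. This is not SPPARKS code (SPPARKS uses its own input scripts and parallelizes via domain decomposition); the lattice size, grain count, and temperature are arbitrary choices.

```python
import numpy as np

def potts_sweep(spins, temperature, rng):
    """One Metropolis sweep of a 2-D Potts grain-growth model on a
    periodic square lattice; energy counts unlike nearest neighbors."""
    n = spins.shape[0]
    q = int(spins.max()) + 1
    for _ in range(spins.size):
        i, j = rng.integers(0, n, size=2)
        nbrs = (spins[(i - 1) % n, j], spins[(i + 1) % n, j],
                spins[i, (j - 1) % n], spins[i, (j + 1) % n])
        old, new = spins[i, j], rng.integers(0, q)
        dE = sum(nb != new for nb in nbrs) - sum(nb != old for nb in nbrs)
        # Metropolis acceptance; at T = 0 only downhill/neutral flips.
        if dE <= 0 or (temperature > 0 and rng.random() < np.exp(-dE / temperature)):
            spins[i, j] = new
    return spins

rng = np.random.default_rng(42)
spins = rng.integers(0, 50, size=(64, 64))  # 50 candidate grain IDs
for _ in range(10):
    potts_sweep(spins, temperature=0.0, rng=rng)  # T = 0 coarsening
```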
Physics-constrained machine learning is emerging as an important topic in machine learning for physics. One of the most significant advantages of incorporating physics constraints is that the resulting model requires significantly less training data; because physical rules are built into the formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three material datasets: one experimental and two computational. The monotonic GP is compared against the regular GP, and a significant reduction in posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime; in the extrapolation regime, however, the monotonic effect fades as one moves beyond the training data. Imposing monotonicity comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy and monotonicity is supported by strong physical evidence.
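One simple way to realize a monotonic GP, sketched below, is rejection sampling of posterior paths. This is an illustrative construction and may differ from the paper's formulation (monotonic GPs are often built from virtual derivative observations instead); the dataset is a synthetic stand-in.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Noisy monotone training data (synthetic stand-in for a material dataset).
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 12))[:, None]
y = np.log1p(5.0 * X[:, 0]) + 0.05 * rng.standard_normal(12)

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.05**2).fit(X, y)

# Enforce monotonicity by rejection: keep only posterior sample paths
# that are non-decreasing on a grid, then summarize the survivors.
# (Rejection is simple but can be inefficient for wiggly posteriors.)
Xs = np.linspace(0.0, 1.0, 50)[:, None]
paths = gp.sample_y(Xs, n_samples=4000, random_state=1)  # shape (50, 4000)
keep = np.all(np.diff(paths, axis=0) >= -1e-6, axis=0)
mono = paths[:, keep]
print(f"accepted {keep.sum()} / {keep.size} paths")
mean, std = mono.mean(axis=1), mono.std(axis=1)  # constrained posterior
```

Discarding non-monotone paths narrows the posterior envelope, which mirrors the variance reduction reported for the monotonic GP.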