Laser powder bed fusion (LPBF) additive manufacturing produces near-net-shape parts with reduced material cost and lead time, making it a promising technology for fabricating Ti-6Al-4V, a titanium alloy widely used in the aerospace and medical industries. However, LPBF Ti-6Al-4V parts produced with a 67° rotation between layers, a scan strategy commonly used to reduce microstructure and property inhomogeneity, exhibit varying grain morphologies and weak crystallographic textures that change with processing parameters. This study predicts LPBF Ti-6Al-4V solidification at three energy levels using a finite difference-Monte Carlo method and validates the simulations against large-area electron backscatter diffraction (EBSD) scans. The model correctly predicts that, parallel to the build direction, a 〈001〉 texture forms at low energy and a 〈111〉 texture forms at higher energies, although with lower strength than the textures observed by EBSD. A method combining spatial correlations with a generalized spherical harmonics representation of texture is developed and validated to calculate a difference score between simulations and experiments. This quantitative comparison enables effective fine-tuning of the nucleation density (N0) input, which shows a nonlinear relationship with increasing energy level. Future improvements to the texture prediction code and a more comprehensive study of N0 across energy levels will further advance the optimization of LPBF Ti-6Al-4V components. These developments contribute a novel understanding of crystallographic texture formation in LPBF Ti-6Al-4V, establish robust model validation and calibration pipelines, and provide a platform for mechanical property prediction and process parameter optimization.
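The abstract does not specify the exact coefficient set or normalization used for the difference score, so the following is a minimal sketch of the idea only: each microstructure is assumed to already be reduced to a per-voxel field of generalized spherical harmonics (GSH) coefficients (mocked here with random complex arrays), spatial autocorrelations are computed by FFT, and the score is the Euclidean distance between the two correlation sets.

```python
# Hedged sketch of a simulation-vs-experiment texture difference score.
# Shapes, coefficient counts, and weighting are illustrative assumptions,
# not the paper's exact formulation.
import numpy as np

def autocorrelations(gsh_field):
    """FFT-based spatial autocorrelation of each GSH coefficient channel.

    gsh_field: complex array of shape (nx, ny, n_coeff).
    Returns an array of the same shape holding periodic autocorrelations.
    """
    nx, ny, n_coeff = gsh_field.shape
    corr = np.empty_like(gsh_field)
    for k in range(n_coeff):
        f = np.fft.fftn(gsh_field[:, :, k])
        corr[:, :, k] = np.fft.ifftn(f * np.conj(f)) / (nx * ny)
    return corr

def difference_score(sim_field, exp_field):
    """Euclidean distance between the spatial correlations of two fields."""
    diff = autocorrelations(sim_field) - autocorrelations(exp_field)
    return float(np.sqrt(np.sum(np.abs(diff) ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sim = rng.standard_normal((64, 64, 15)) + 1j * rng.standard_normal((64, 64, 15))
    exp = rng.standard_normal((64, 64, 15)) + 1j * rng.standard_normal((64, 64, 15))
    print("difference score:", difference_score(sim, exp))
```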
The current present in a galvanic couple can define its resistance or susceptibility to corrosion. However, because the current depends on environmental, material, and geometric parameters, it is experimentally costly to measure. To reduce these costs, finite element (FE) simulations can be used to assess the cathodic current, but they in turn require experimental inputs to define boundary conditions. Given these challenges, it is crucial to accelerate predictions of the current output for the different environments and geometries representative of in-service conditions. Machine-learned surrogate models provide a means to accelerate corrosion predictions; however, a one-time cost is incurred in procuring the simulation and experimental dataset necessary to calibrate the surrogate model. Therefore, an active learning protocol is developed through calibration of a low-cost surrogate model for the cathodic current of an exemplar galvanic couple (AA7075-SS304) as a function of environmental and geometric parameters. The surrogate model is calibrated on a dataset of FE simulations and evaluates an acquisition function that identifies the additional inputs with the maximum potential to improve the current predictions. This is accomplished through a staggered workflow that not only improves and refines predictions but also identifies the points at which the most information is gained, thus enabling expansion to a larger parameter space. The protocols developed and demonstrated in this work provide a powerful tool for screening various forms of corrosion under in-service conditions.
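As a concrete illustration of the active-learning loop described above, the sketch below fits a Gaussian process surrogate (one plausible low-cost choice; the work does not prescribe scikit-learn) to existing FE results and uses a maximum-predictive-variance acquisition to pick the next input to simulate. The function `fe_simulation` is a hypothetical stand-in for an expensive FE cathodic-current evaluation.

```python
# Hedged sketch of surrogate calibration plus an acquisition step.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fe_simulation(x):
    # Placeholder for an expensive FE cathodic-current computation.
    return np.sin(3 * x[0]) * np.exp(-x[1])

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(10, 2))          # initial environmental/geometric inputs
y = np.array([fe_simulation(x) for x in X])  # initial FE cathodic currents

for step in range(5):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True).fit(X, y)
    candidates = rng.uniform(0, 1, size=(500, 2))
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]       # acquisition: maximum predictive variance
    X = np.vstack([X, x_new])
    y = np.append(y, fe_simulation(x_new))
    print(f"step {step}: queried {x_new}, max std {std.max():.3f}")
```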
This is the poster I will present at the GRC Aqueous Corrosion meeting, detailing our latest work on integrating machine learning into computational calculations of galvanic corrosion.
This is the seminar I will present at the WCCM conference, highlighting our latest research on incorporating genetic programming to obtain data-driven strength models for complex materials.
Thermal spray deposition is an inherently stochastic manufacturing process used to generate thick coatings of metals, ceramics, and composites. The resulting coatings exhibit hierarchically complex internal structures that affect their overall properties. The deposition process can be adequately simulated using rules-based process simulations. However, for such a simulation to accurately model particle spreading upon deposition, a set of predefined rules and parameters must be calibrated to the specific material and processing conditions of interest. This calibration is not trivial because many parameters do not correspond directly to experimentally measurable quantities. This work presents a protocol that automatically calibrates the parameters and rules of a given simulation to generate synthetic microstructures whose statistics most closely match those of an experimentally produced coating, demonstrated here for tantalum coatings prepared by air plasma spray. The protocol first quantifies the internal structure using 2-point statistics and represents it in a low-dimensional space using principal component analysis (PCA). It then leverages Bayesian optimization to determine the parameters that minimize the distance between the synthetic microstructure and the experimental coating in this low-dimensional space.
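A minimal sketch of that calibration loop is given below: 2-point autocorrelations are computed by FFT, projected with PCA, and a Bayesian optimizer (here scikit-optimize's `gp_minimize`, one possible backend not named in the abstract) searches the simulation parameters for the smallest PC-space distance to the experimental coating. The function `run_spray_simulation` is a hypothetical stand-in for the rules-based deposition simulator.

```python
# Hedged sketch of 2-point statistics + PCA + Bayesian optimization calibration.
import numpy as np
from sklearn.decomposition import PCA
from skopt import gp_minimize

def two_point_stats(micro):
    """Periodic 2-point autocorrelation of a binary microstructure (FFT-based)."""
    f = np.fft.fftn(micro)
    return np.real(np.fft.ifftn(f * np.conj(f))) / micro.size

def run_spray_simulation(params, rng):
    # Stand-in: threshold a random field; real code would call the
    # rules-based thermal spray simulator with these parameters.
    porosity, roughness = params
    field = rng.standard_normal((64, 64))
    return (field < porosity + 0.1 * roughness).astype(float)

rng = np.random.default_rng(0)
experimental = run_spray_simulation([0.3, 0.5], rng)

# Fit PCA on exploratory simulations plus the experimental statistics.
samples = [run_spray_simulation(rng.uniform(0, 1, 2), rng) for _ in range(20)]
stats = np.array([two_point_stats(m).ravel() for m in samples + [experimental]])
pca = PCA(n_components=3).fit(stats)
target = pca.transform(two_point_stats(experimental).ravel()[None, :])

def objective(params):
    synth = two_point_stats(run_spray_simulation(params, rng)).ravel()[None, :]
    return float(np.linalg.norm(pca.transform(synth) - target))

result = gp_minimize(objective, [(0.0, 1.0), (0.0, 1.0)], n_calls=25, random_state=0)
print("calibrated parameters:", result.x, "distance:", result.fun)
```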
Highlights:
- Novel protocol for extracting knowledge from previously performed finite element corrosion simulations using machine learning.
- Accurate predictions of corrosion current obtained 5 orders of magnitude faster than finite element simulations.
- Accurate machine-learning-based model capable of performing an effective and efficient search over the multi-dimensional input space to identify areas where corrosion is more (or less) noticeable.
The finite element method (FEM) is widely used to simulate a variety of physical phenomena. Approaches that integrate FEM with neural networks (NNs) are typically leveraged as an alternative to conducting expensive FEM simulations, reducing computational cost without significantly sacrificing accuracy. However, these hybrid FEM-NN methods can produce biased predictions that deviate from those obtained with FEM, since they rely on approximations trained using physically relevant quantities. In this work, an uncertainty estimation framework is introduced that leverages ensembles of Bayesian neural networks to produce diverse sets of predictions using a hybrid FEM-NN approach that approximates internal forces on a deforming solid body. The uncertainty estimator developed herein reliably infers upper bounds on the bias and variance of the predictions for a wide range of interpolation and extrapolation cases using a three-element FEM-NN model of a bar undergoing plastic deformation. The proposed framework offers a powerful tool for assessing the reliability of physics-based surrogate models by establishing uncertainty estimates for predictions spanning a wide range of possible load cases.
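The sketch below illustrates the ensemble idea in simplified form. The work uses ensembles of Bayesian neural networks; this stand-in uses an ensemble of independently initialized plain MLPs (a deep-ensemble approximation) to produce a mean prediction and a spread for a learned force response, with the data, model sizes, and the 0.05 training cutoff all being illustrative assumptions.

```python
# Hedged sketch of ensemble-based uncertainty estimation for a learned
# internal-force response, queried in and out of the training range.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
strain = rng.uniform(0.0, 0.05, size=(200, 1))                     # training inputs
force = 1e3 * np.tanh(60 * strain[:, 0]) + rng.normal(0, 5, 200)   # mock "FEM" forces

ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed)
    .fit(strain, force)
    for seed in range(10)
]

# Query inside (interpolation) and outside (extrapolation) the training range.
test = np.linspace(0.0, 0.10, 11).reshape(-1, 1)
preds = np.stack([m.predict(test) for m in ensemble])              # (n_models, n_test)
mean, std = preds.mean(axis=0), preds.std(axis=0)
for x, m, s in zip(test[:, 0], mean, std):
    flag = "extrapolation" if x > 0.05 else "interpolation"
    print(f"strain {x:.3f}: force {m:8.1f} +/- {s:6.1f}  ({flag})")
```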
Advances in machine learning (ML) have enabled the development of interatomic potentials that promise the accuracy of first-principles methods together with the low cost and parallel efficiency of empirical potentials. However, ML-based potentials struggle to achieve transferability, i.e., to provide consistent accuracy across configurations that differ from those used during training. To realize the promise of ML-based potentials, systematic and scalable approaches to generating diverse training sets must be developed. This work creates a diverse training set for tungsten in an automated manner using an entropy optimization approach. Multiple polynomial and neural network potentials are then trained on the entropy-optimized dataset, and a corresponding set of potentials is trained on an expert-curated dataset for tungsten for comparison. The models trained on the entropy-optimized data exhibit superior transferability compared to the expert-curated models; furthermore, the models trained on the expert-curated set show a significant decrease in performance when evaluated on out-of-sample configurations.
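The abstract does not detail the entropy optimization itself, so the following is only a simplified proxy for the idea: configurations are greedily selected from a candidate pool to maximize a nearest-neighbor entropy estimate over descriptor space. The random descriptors stand in for, e.g., per-atom fingerprint features; the estimator and greedy strategy are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of diversity-driven training-set selection via an
# entropy-like objective on configuration descriptors.
import numpy as np
from scipy.spatial.distance import cdist

def nn_entropy(X):
    """Kozachenko-Leonenko style entropy estimate from nearest-neighbor distances."""
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(np.log(d.min(axis=1) + 1e-12)))

rng = np.random.default_rng(0)
pool = rng.standard_normal((500, 8))       # candidate configuration descriptors
selected = [int(rng.integers(len(pool)))]  # seed the set with a random candidate

while len(selected) < 30:
    best_gain, best_idx = -np.inf, None
    for i in range(len(pool)):
        if i in selected:
            continue
        h = nn_entropy(pool[selected + [i]])  # entropy if candidate i were added
        if h > best_gain:
            best_gain, best_idx = h, i
    selected.append(best_idx)

print("selected configurations:", selected)
print("final entropy estimate:", nn_entropy(pool[selected]))
```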
This report is the final documentation for the one-year LDRD project 226360: Simulated X-ray Diffraction and Machine Learning for Optimizing Dynamic Experiment Analysis. As Sandia has successfully developed in-house X-ray diffraction (XRD) tools for the study of atomic structure in experiments, it has become increasingly important to develop computational analysis methods to support these experiments. When dynamically compressed lattices and orientations are not known a priori, identification requires a cumbersome and sometimes intractable search of possible final states, which can include phase transitions, deformation, and mixed or evolving states. Our work consists of three parts: (1) development of an XRD simulation tool and use of traditional data science methods to match simulated XRD patterns to experiments; (2) development of ML-based models capable of decomposing and identifying the lattice and orientation components of multicomponent experimental diffraction patterns; and (3) conducting experiments that showcase these new analysis tools in the study of phase transition mechanisms. Our target material has been cadmium sulfide (CdS), which exhibits complex orientation-dependent phase transformation mechanisms. In this one-year LDRD, we have begun the analysis of high-quality c-axis CdS diffraction data from DCS and Thor experiments, which had until recently eluded orientation identification.
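One simple form the "traditional data science" matching step could take is sketched below: a library of simulated 1D diffraction patterns, built as Gaussian peaks at candidate peak positions, is scored against an experimental pattern with cosine similarity. The peak lists and phase names are invented placeholders, not actual CdS phases, and the report's real tool is certainly more sophisticated.

```python
# Hedged sketch of library-based XRD pattern matching by cosine similarity.
import numpy as np

two_theta = np.linspace(10, 90, 2000)

def simulate_pattern(peaks, width=0.3):
    """Sum of unit-intensity Gaussians at the given 2-theta peak positions."""
    pattern = sum(np.exp(-0.5 * ((two_theta - p) / width) ** 2) for p in peaks)
    return pattern / np.linalg.norm(pattern)

library = {
    "candidate_phase_A": simulate_pattern([26.5, 43.9, 52.1]),
    "candidate_phase_B": simulate_pattern([28.2, 47.0, 56.4]),
    "candidate_phase_C": simulate_pattern([26.5, 28.2, 47.0]),
}

# Mock "experimental" pattern: phase B plus measurement noise.
rng = np.random.default_rng(0)
experiment = library["candidate_phase_B"] + 0.02 * rng.standard_normal(two_theta.size)
experiment /= np.linalg.norm(experiment)

scores = {name: float(pattern @ experiment) for name, pattern in library.items()}
print("cosine similarities:", scores)
print("best match:", max(scores, key=scores.get))
```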
This report includes a compilation of several slide presentations:
1) Interatomic Potentials for Materials Science and Beyond: Advances in Machine Learned Spectral Neighborhood Analysis Potentials (Wood);
2) Agile Materials Science and Advanced Manufacturing through AI/ML (de Oca Zapiain);
3) Machine Learning for DFT Calculations (Rajamanickam);
4) Structure-preserving ML discovery of a quantum-to-continuum codesign stack (Trask);
5) IBM Overview of Accelerated Discovery Technology (Pitera).
Predicting the properties of grain boundaries poses a challenge because of the complex relationships between structural and chemical attributes at both the atomic and continuum scales. Grain boundary systems are typically characterized by parameters that classify local atomic arrangements in order to extract properties such as grain boundary energy or strength. The present work utilizes a combination of high-throughput atomistic simulations, macroscopic and microscopic descriptors, and machine-learning techniques to characterize the energy and strength of silicon carbide grain boundaries. A diverse dataset of symmetric tilt and twist grain boundaries is described using macroscopic metrics, such as misorientation, the alignment of critical low-index planes, and the Schmid factor, as well as microscopic metrics that quantify the local atomic structure and chemistry at the interface. These descriptors are used to build random-forest regression models, allowing their relative importance to the grain boundary energy and decohesion stress to be better understood. Results show that, while the energetics of the grain boundary were best described using the microscopic descriptors, the ability of the macroscopic descriptors to reasonably predict grain boundaries with low energy suggests a link between the crystallographic orientation and the atomic structure that forms at the grain boundary within this regime. For grain boundary strength, neither the microscopic nor the macroscopic descriptors were able to fully capture the response individually; however, when both descriptor sets were utilized, the decohesion stress of the grain boundary could be accurately predicted. These results highlight the importance of considering both macroscopic and microscopic factors when constructing constitutive models for grain boundary systems, with significant implications both for understanding the fundamental mechanisms at work and for bridging length scales.
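The sketch below shows the shape of this descriptor-importance analysis: a random forest is trained on combined macroscopic and microscopic grain boundary descriptors, and its impurity-based feature importances indicate each descriptor's relative contribution. The descriptor names and synthetic target are illustrative placeholders, not the paper's dataset.

```python
# Hedged sketch of random-forest regression over mixed GB descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
features = {
    "misorientation": rng.uniform(0, 62.8, n),      # macroscopic
    "schmid_factor": rng.uniform(0.3, 0.5, n),      # macroscopic
    "local_coordination": rng.normal(4.0, 0.3, n),  # microscopic
    "excess_volume": rng.normal(0.1, 0.02, n),      # microscopic
}
X = np.column_stack(list(features.values()))

# Synthetic "GB energy" target dominated by the microscopic descriptors.
y = (2.0 * features["excess_volume"]
     + 0.1 * np.abs(features["local_coordination"] - 4)
     + 0.001 * features["misorientation"]
     + rng.normal(0, 0.005, n))

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name:>20s}: {importance:.3f}")
```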
The phase-field method is a powerful and versatile computational approach for modeling the evolution of microstructures and associated properties for a wide variety of physical, chemical, and biological systems. However, existing high-fidelity phase-field models are inherently computationally expensive, requiring high-performance computing resources and sophisticated numerical integration schemes to achieve a useful degree of accuracy. In this paper, we present a computationally inexpensive, accurate, data-driven surrogate model that directly learns the microstructural evolution of targeted systems by combining phase-field and history-dependent machine-learning techniques. We integrate a statistically representative, low-dimensional description of the microstructure, obtained directly from phase-field simulations, with either a time-series multivariate adaptive regression splines autoregressive algorithm or a long short-term memory neural network. The neural-network-trained surrogate model shows the best performance and accurately predicts the nonlinear microstructure evolution of a two-phase mixture during spinodal decomposition in seconds, without the need for “on-the-fly” solutions of the phase-field equations of motion. We also show that the predictions from our machine-learned surrogate model can be fed directly as input into a classical high-fidelity phase-field model in order to accelerate the high-fidelity phase-field simulations by leaping in time. Such a machine-learned phase-field framework opens a promising path forward to using accelerated phase-field simulations for discovering, understanding, and predicting processing–microstructure–performance relationships.
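A minimal sketch of the surrogate's core idea follows: evolve the low-dimensional principal component (PC) scores of the microstructure autoregressively with an LSTM instead of integrating the phase-field equations, then roll the trained model forward in time. The trajectories below are synthetic stand-ins for PC scores extracted from phase-field frames, and the network size and training setup are illustrative assumptions.

```python
# Hedged sketch of an LSTM surrogate over PC-score trajectories.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_steps, n_pc = 100, 3
t = torch.linspace(0, 1, n_steps).unsqueeze(1)
# Mock PC trajectories standing in for projected phase-field frames.
scores = torch.cat([torch.exp(-3 * t), t * torch.exp(-t), 0.5 * t**2], dim=1)

class PCSurrogate(nn.Module):
    def __init__(self, n_pc, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_pc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pc)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)  # next-step PC scores at every position

model = PCSurrogate(n_pc)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
inputs, targets = scores[None, :-1], scores[None, 1:]
for epoch in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()

# Roll the trained model forward from the first half of the trajectory
# ("leaping in time" past the phase-field solver).
with torch.no_grad():
    seq = scores[:50].clone()
    for _ in range(50):
        nxt = model(seq[None])[0, -1]
        seq = torch.cat([seq, nxt[None]])
print("final predicted PC scores:", seq[-1].tolist())
```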