An interpretable machine learning method, physics-informed genetic programming-based symbolic regression (P-GPSR), is integrated into a continuum thermodynamic approach to developing constitutive models. The proposed strategy for combining a thermodynamic analysis with P-GPSR is demonstrated by generating a yield function for an idealized material with voids, i.e., the Gurson yield function. First, a thermodynamics-based analysis is used to derive model requirements that are exploited in a custom P-GPSR implementation as fitness criteria or are strongly enforced in the solution. The P-GPSR implementation improved accuracy and generalizability and reduced training time compared to the same GPSR code without physics-informed fitness criteria. The yield function generated through the P-GPSR framework takes the form of a composite function that describes a class of materials and is characteristically more interpretable than GPSR-derived equations. The physical significance of the input functions learned by P-GPSR within the composite function follows from the thermodynamic analysis. Fundamental explanations of why the implemented P-GPSR capabilities improve results over a conventional GPSR algorithm are provided.
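To make the fitness-criteria idea concrete, the sketch below scores a candidate yield function against data together with penalties for violating thermodynamics-derived requirements. The specific requirements shown (recovery of von Mises at zero porosity, symmetry in mean stress), the function names, and the weighting are illustrative assumptions rather than the paper's implementation; stresses are assumed normalized by the matrix yield stress.

```python
import numpy as np

def physics_informed_fitness(phi, inputs, targets, w_phys=1.0):
    """Hypothetical GPSR fitness: data misfit plus physics penalties for a
    candidate yield function phi(sig_eq, sig_m, f)."""
    # Data misfit against sampled yield-surface points.
    preds = np.array([phi(se, sm, f) for se, sm, f in inputs])
    misfit = np.mean((preds - targets) ** 2)

    se = np.linspace(0.1, 2.0, 20)
    sm = np.linspace(-1.0, 1.0, 20)

    # Assumed requirement 1: at zero porosity the candidate should recover
    # the von Mises condition, phi(se, sm, 0) = se**2 - 1, for any sm.
    vm_pen = np.mean([(phi(a, b, 0.0) - (a**2 - 1.0)) ** 2
                      for a, b in zip(se, sm)])

    # Assumed requirement 2: symmetry in mean stress (tension/compression).
    sym_pen = np.mean([(phi(a, b, 0.1) - phi(a, -b, 0.1)) ** 2
                       for a, b in zip(se, sm)])

    return misfit + w_phys * (vm_pen + sym_pen)

# The Gurson function itself incurs (near-)zero physics penalty.
gurson = lambda se, sm, f: se**2 + 2.0 * f * np.cosh(1.5 * sm) - 1.0 - f**2
```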
Computational simulation is increasingly relied upon for high-consequence engineering decisions, which necessitates high confidence in the calibration of and predictions from complex material models. However, the calibration and validation of material models is often a discrete, multi-stage process that is decoupled from material characterization activities, which means the data collected does not always align with the data that is needed. To address this issue, an integrated workflow for delivering an enhanced characterization and calibration procedure—Interlaced Characterization and Calibration (ICC)—is introduced and demonstrated. Further, this framework leverages Bayesian optimal experimental design (BOED), which creates a line of communication between model calibration needs and data collection capabilities in order to optimize the information content gathered from the experiments for model calibration. Eventually, the ICC framework will be used in quasi real-time to actively control experiments on complex specimens for the calibration of a high-fidelity material model. This work presents the critical first piece of algorithm development and a demonstration of determining the optimal load path of a cruciform specimen with simulated data. Calibration results, obtained via Bayesian inference, from the integrated ICC approach are compared to calibrations performed by choosing the load path a priori based on human intuition, as is traditionally done. The calibration results are communicated through parameter uncertainties, which are propagated to the model output space (i.e., stress–strain). In these exemplar problems, data generated within the ICC framework resulted in calibrated model parameters with reduced measures of uncertainty compared to the traditional approaches.
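As a concrete illustration of the BOED utility that drives such load-path selection, the sketch below estimates the expected information gain (EIG) of a candidate design with a nested Monte Carlo estimator. The scalar forward model, the Gaussian noise model, and all names here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_information_gain(design, prior_samples, simulate, noise_sd):
    """Nested Monte Carlo EIG estimate for a candidate design, reusing the
    prior samples for the inner (evidence) average."""
    preds = np.array([simulate(th, design) for th in prior_samples])
    eig = 0.0
    for mu in preds:
        y = mu + rng.normal(0.0, noise_sd)               # synthetic datum
        log_lik = -0.5 * ((y - mu) / noise_sd) ** 2      # log p(y|theta), up to a constant
        evidence = np.mean(np.exp(-0.5 * ((y - preds) / noise_sd) ** 2))
        eig += log_lik - np.log(evidence)                # shared constant cancels
    return eig / len(preds)

# Toy usage: choose a hypothetical load-path parameter d for y = theta * d.
thetas = rng.normal(1.0, 0.2, size=200)
best = max([0.5, 1.0, 2.0],
           key=lambda d: expected_information_gain(d, thetas, lambda t, d_: t * d_, 0.05))
```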
This is an update to the prior 5.14 user manual. The updates are minor and mostly confined to the Johnson-Cook section, where they are primarily editorial rather than technical changes.
This paper introduces a publicly available PyTorch-ABAQUS deep-learning framework for a family of plasticity models in which the yield surface is implicitly represented by a scalar-valued function. In particular, our focus is on a practical framework that can be deployed for engineering analysis through a user-defined material subroutine (UMAT/VUMAT) for ABAQUS, which is written in FORTRAN. To accomplish this task while leveraging the back-propagation learning algorithm to speed up neural-network training, we introduce an interface code that automatically converts the weights and biases of neural networks trained with the PyTorch library into generic FORTRAN code that can become part of the UMAT/VUMAT algorithm. To enable third-party validation, we purposely make all the data sets, the source code used to train the neural-network-based constitutive models, and the trained models available in a public repository. Furthermore, the practicality of the workflow is tested on a dataset for an anisotropic yield function to showcase the extensibility of the proposed framework. A number of representative numerical experiments are used to examine the accuracy, robustness, and reproducibility of the results generated by the neural network models.
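The interface-code step can be pictured with the minimal sketch below, which writes a trained PyTorch model's weights and biases out as Fortran parameter arrays. The module name, array naming, and file layout are assumptions for illustration; this is not the repository's actual converter.

```python
import torch

def export_weights_to_fortran(model, path="nn_weights.f90"):
    """Dump state_dict tensors as Fortran real(8) parameter arrays.
    (A real converter would also emit line continuations for long arrays.)"""
    lines = ["module nn_weights", "  implicit none"]
    for name, tensor in model.state_dict().items():
        flat = tensor.detach().numpy().ravel(order="F")   # column-major, as Fortran expects
        vals = ", ".join(f"{v:.8e}".replace("e", "d") for v in flat)  # d-exponent literals
        fname = "w_" + name.replace(".", "_")             # valid Fortran identifier
        lines.append(f"  real(8), parameter :: {fname}({flat.size}) = (/ {vals} /)")
    lines.append("end module nn_weights")
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Example with a small stand-in network for a learned yield function.
net = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
export_weights_to_fortran(net)
```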
The tearing parameter criterion and material softening failure method currently used in the multilinear elastic-plastic constitutive model were added as an option to the modular failure capabilities. The modular failure implementation was integrated with the multilevel solver for multi-element simulations. Currently, this implementation is only available for the J2 plasticity model due to the formulation of the material softening approach. The implementation compared well with multilinear elastic-plastic model results for a uniaxial tension test, a simple shear test, and a representative structural problem. Generalizations of the failure method necessary to extend it as a modular option for all plasticity models are highlighted.
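A minimal one-dimensional sketch of this kind of coupling is given below: a J2-style elastic-predictor/plastic-corrector update whose flow strength is degraded by a softening variable that a tearing-parameter criterion would drive. The linear degradation law and all names are illustrative assumptions, not the model's actual formulation.

```python
import numpy as np

def j2_update_with_softening(eps, eps_p, damage, E=200.0e3, sig_y0=350.0):
    """1D stress update with strength scaled by (1 - damage); units MPa."""
    if damage >= 1.0:
        return 0.0, eps, True                 # fully failed: carries no load
    sig_y = (1.0 - damage) * sig_y0           # softened flow strength
    sig_trial = E * (eps - eps_p)             # elastic predictor
    f = abs(sig_trial) - sig_y                # yield check
    if f > 0.0:                               # plastic corrector (perfect plasticity)
        d_eps_p = (f / E) * np.sign(sig_trial)
        eps_p += d_eps_p
        sig_trial -= E * d_eps_p
    return sig_trial, eps_p, False
```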
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAMÉ advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility, however, comes at the cost of complexity in the resulting implementation, with its many capabilities, features, and responses. Therefore, to enhance confidence and enable use of the LAMÉ library in applications, this effort seeks to document and verify the various models in the library. Specifically, the broader strategy, organization, and interface of the library itself are first presented. The physical theory, numerical implementation, and user guide for a large set of models are then discussed. Importantly, a number of verification tests are performed with each model, not only to build confidence in the model itself but also to highlight important response characteristics and features that may be of interest to end-users. Finally, looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
Plate puncture simulations are challenging computational tasks that require advanced material models, including high strain rate and thermal-mechanical effects on both deformation and failure, plus finite element techniques capable of representing large deformations and material failure. The focus of this work is on the material issues, which require large sets of experiments, flexible material models, and challenging calibration procedures. In this study, we consider the puncture of 12.7 mm thick 7075-T651 aluminum alloy plates by a cylindrical punch with a hemispherical nose and a diameter of 12.7 mm. The plasticity and ductile failure models were isotropic, with calibration data obtained from uniaxial tension tests at different temperatures and strain rates plus the quasi-static notched tension tests and shear-dominated tests described here. Sixteen puncture experiments were conducted to identify the threshold penetration energy, mode of puncture, and punch acceleration during impact. The punch was mounted on a 139 kg mass and dropped on the plates at different impact speeds. Since the mass was the same in all tests, the quantity of interest was the impact speed. The axis and velocity of the punch were perpendicular to the plate surface. The mean threshold punch speed was 3.05 m/s, and the mode of failure was plugging by thermal-mechanical shear banding accompanied by scabbing fragments. Application of the material models in simulations of the tests yielded accurate estimates of the threshold puncture speed and of the mode of failure. Time histories of the punch acceleration compared well between simulation and test. Remarkably, the simulations succeeded even though the smallest element used was larger than the width of the shear bands.
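For reference, the kind of rate- and temperature-dependent flow stress such simulations require can be sketched with a Johnson-Cook form, shown below. The functional form is a common choice for this material class, and the parameter values are placeholders, not the calibration used in this study.

```python
import numpy as np

def jc_flow_stress(eps_p, rate, T, A=520.0, B=477.0, n=0.52, C=0.025,
                   m=1.0, rate0=1.0, T_ref=293.0, T_melt=893.0):
    """Johnson-Cook flow stress (MPa): strain hardening x rate hardening x
    thermal softening. Placeholder parameters, not this study's values."""
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    rate_term = 1.0 + C * np.log(max(rate / rate0, 1.0))  # clamp below reference rate
    return (A + B * eps_p**n) * rate_term * (1.0 - T_star**m)
```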