Composite materials with different microstructural material symmetries are common in engineering applications where grain structure, alloying, and particle/fiber packing are optimized via controlled manufacturing. In fact, these microstructural tunings can be applied throughout a part to achieve functional gradation and optimization at the structural level. Predicting the performance of a particular microstructural configuration, and thereby overall part performance, requires constitutive models of materials with microstructure. In this work we develop neural network architectures that serve as effective homogenization models of materials with anisotropic components. These models satisfy equivariance and material symmetry principles inherently through a combination of equivariant and tensor basis operations. We demonstrate them on datasets of stochastic volume elements with different textures and phases where the material undergoes elastic and plastic deformation, and show that these architectures provide significant gains in predictive performance.
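As a minimal sketch of how a tensor basis operation enforces equivariance (shown here for the isotropic special case; the anisotropic models described above would additionally carry structure tensors in the basis and invariants, and coeff_net below is a hypothetical stand-in for a trained network):

```python
import numpy as np

def tensor_basis_stress(E, coeff_net):
    """Map a symmetric strain tensor E to a stress tensor via a tensor basis.

    The coefficient network sees only rotation-invariant scalars, and the
    output is a linear combination of tensors that rotate with E, so
    S(Q E Q^T) = Q S(E) Q^T holds by construction.
    """
    I = np.eye(3)
    basis = [I, E, E @ E]  # integrity basis for an isotropic function of E
    invariants = np.array([np.trace(E), np.trace(E @ E), np.trace(E @ E @ E)])
    coeffs = coeff_net(invariants)  # one scalar coefficient per basis tensor
    return sum(c * B for c, B in zip(coeffs, basis))

# stand-in coefficient "network"; a trained MLP would be used in practice
coeff_net = lambda inv: np.tanh(inv)
E = np.random.randn(3, 3)
E = 0.5 * (E + E.T)  # symmetrize to obtain a valid strain tensor
S = tensor_basis_stress(E, coeff_net)
```

Because the symmetry enters through the structure of the map rather than through penalties or data augmentation, it holds exactly for every input rather than approximately on the training set.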
Numerical simulations are used to study the dynamics of a developing suspension Poiseuille flow with monodisperse and bidisperse neutrally buoyant particles in a planar channel, and machine learning is applied to learn the evolving stresses of the developing suspension. The particle stresses and pressure develop on a slower timescale than the volume fraction, indicating that once the particles reach a steady volume fraction profile, they rearrange to minimize the contact pressure on each particle. We examine the timescale of stress development and how it connects to particle migration. For developing monodisperse suspensions, we present a new physics-informed Galerkin neural network that learns the particle stresses when direct measurements are not possible. We show that when a training set of stress measurements is available, the MOR-Physics operator learning method can also capture the particle stresses accurately.
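To illustrate the Galerkin construction, consider a hypothetical 1D momentum balance d/dy[sigma(y)] = G in a channel: integrating against test functions that vanish at the walls moves the derivative off the (possibly noisy) learned stress. The stress profile, test functions, and pressure gradient below are placeholders, not the configuration used in the study:

```python
import numpy as np

ny = 200
y, dy = np.linspace(-1.0, 1.0, ny, retstep=True)

def integrate(f):
    # trapezoid rule on the uniform grid
    return dy * (f.sum() - 0.5 * (f[0] + f[-1]))

def galerkin_residuals(sigma, body_force, test_fns):
    """Weak-form residuals r_i = -∫ w_i' sigma dy - ∫ w_i G dy.

    Integration by parts is valid because each w_i vanishes at the walls,
    so the learned stress is never differentiated.
    """
    return np.array([-integrate(np.gradient(w, dy) * sigma) - integrate(w * body_force)
                     for w in test_fns])

# sine test functions satisfy the wall conditions w(±1) = 0
test_fns = [np.sin(k * np.pi * (y + 1) / 2) for k in range(1, 6)]
sigma = -0.5 * y            # placeholder total stress; a neural network in practice
G = -0.5 * np.ones_like(y)  # constant driving pressure gradient
loss = np.sum(galerkin_residuals(sigma, G, test_fns) ** 2)  # ~0 for this exact solution
```

Training would then minimize this residual loss over the parameters of the stress network, with no stress labels required.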
This project applies Bayesian inference and modern statistical methods to quantify the value of new experimental data, in the form of new or modified diagnostic configurations and/or experiment designs. We demonstrate experiment design methods that identify the highest-priority diagnostic improvements or experimental data to obtain in order to reduce uncertainties on critical inferred experimental quantities, and that select the best course of action to distinguish between competing physical models. Bayesian statistics and information theory provide the foundation for the necessary metrics, with two high-impact experimental platforms on Z serving as exemplars to develop and illustrate the technique. We emphasize that the general methodology is extensible to new diagnostics (provided synthetic models are available) as well as additional platforms. We also discuss initial scoping of additional applications whose development began in the last year of this LDRD.
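The core information-theoretic metric can be made concrete with a toy linear-Gaussian experiment y = d*theta + noise (illustrative only; the Z platforms involve far richer synthetic diagnostic models). A nested Monte Carlo estimate of the expected information gain for a candidate design d might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_nested_mc(d, sigma=0.1, n_outer=2000, n_inner=2000):
    """Expected information gain E_{theta,y}[log p(y|theta,d) - log p(y|d)]."""
    theta = rng.normal(0.0, 1.0, n_outer)            # prior draws
    y = d * theta + rng.normal(0.0, sigma, n_outer)  # simulated measurements
    # Gaussian normalizing constants cancel in the likelihood-evidence difference
    log_like = -0.5 * ((y - d * theta) / sigma) ** 2
    # evidence p(y|d) by marginalizing over fresh inner prior draws
    theta_in = rng.normal(0.0, 1.0, n_inner)
    ll_in = -0.5 * ((y[:, None] - d * theta_in[None, :]) / sigma) ** 2
    log_evidence = np.logaddexp.reduce(ll_in, axis=1) - np.log(n_inner)
    return np.mean(log_like - log_evidence)

# a more sensitive design (larger |d|) is more informative about theta
print(eig_nested_mc(0.5), eig_nested_mc(2.0))
```

Ranking candidate diagnostics or designs by this quantity identifies where new data would reduce posterior uncertainty the most.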
Deep operator learning has emerged as a promising tool for reduced-order modeling and PDE model discovery. Leveraging the expressive power of deep neural networks, especially in high dimensions, such methods learn the mapping between functional state variables. While proposed methods have assumed noise only in the dependent variables, experimental and numerical data for operator learning typically exhibit noise in the independent variables as well, since both variables represent signals subject to measurement error. In regression on scalar data, failure to account for noisy independent variables biases parameter estimates: linear models fitted via ordinary least squares (OLS) exhibit attenuation bias, wherein the slope is underestimated. In this work, we derive an analogue of attenuation bias for linear operator regression with white noise in both the independent and dependent variables, showing that the norm upper bound of the operator learned via OLS decreases with increasing noise in the independent variable. In the nonlinear setting, we computationally demonstrate underprediction of the action of the Burgers operator in the presence of noise in the independent variable. We propose errors-in-variables (EiV) models for two operator regression methods, MOR-Physics and DeepONet, and demonstrate that these new models reduce bias in the presence of noisy independent variables for a variety of operator learning problems. Considering the Burgers operator in 1D and 2D, we demonstrate that EiV operator learning robustly recovers operators in high-noise regimes that defeat OLS operator learning. We also introduce an EiV model for time-evolving PDE discovery and show that OLS and EiV perform similarly in learning the Kuramoto-Sivashinsky evolution operator from corrupted data, suggesting that the effect of bias in OLS operator learning depends on the regularity of the target operator.
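The scalar phenomenon referenced above is easy to reproduce; this sketch (with illustrative numbers) shows the OLS slope shrinking by the classical attenuation factor sigma_x^2 / (sigma_x^2 + sigma_eta^2):

```python
import numpy as np

rng = np.random.default_rng(1)

n, beta = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)             # true input, sigma_x = 1
y = beta * x + rng.normal(0.0, 0.1, n)  # output with small dependent-variable noise
x_obs = x + rng.normal(0.0, 1.0, n)     # observed input with noise, sigma_eta = 1

slope = np.polyfit(x_obs, y, 1)[0]  # OLS on the noisy input
print(slope)  # ≈ beta * 1 / (1 + 1) = 1.0, not the true slope of 2.0
```

EiV regression avoids this bias by modeling the latent noise-free input rather than regressing directly on the corrupted one, which is the idea the operator-learning models above carry to the functional setting.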