We introduce physics-informed multimodal autoencoders (PIMA), a variational inference framework for discovering shared information in multimodal datasets. Individual modalities are embedded into a shared latent space and fused through a product-of-experts formulation, enabling a Gaussian mixture prior to identify shared features. Sampling from clusters allows cross-modal generative modeling, with a mixture-of-experts decoder that imposes inductive biases from prior scientific knowledge and thereby imparts structured disentanglement of the latent space. The approach enables cross-modal inference and feature discovery in high-dimensional, heterogeneous datasets, providing a means to discover fingerprints in multimodal scientific data while avoiding traditional bottlenecks related to high-fidelity measurement and characterization.
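The product-of-experts fusion mentioned above has a closed form when each per-modality posterior is Gaussian: precisions add, and the fused mean is the precision-weighted average of the modality means. A minimal sketch of that fusion rule (function names are illustrative, not the paper's code):

```python
import numpy as np

def product_of_experts(mus, sigmas):
    """Fuse per-modality Gaussian posteriors q_i = N(mu_i, sigma_i^2)
    into one Gaussian: precisions add, and the fused mean is the
    precision-weighted average of the modality means."""
    mus = np.asarray(mus, dtype=float)
    precisions = 1.0 / np.square(np.asarray(sigmas, dtype=float))
    prec = precisions.sum(axis=0)                     # fused precision
    mu = (precisions * mus).sum(axis=0) / prec        # fused mean
    return mu, np.sqrt(1.0 / prec)

# Two modalities that agree near 1.0 with equal uncertainty:
mu, sigma = product_of_experts([[0.9], [1.1]], [[0.5], [0.5]])
# fused mean is 1.0; fused std shrinks to sqrt(1/8)
```

Because the fused distribution is again Gaussian, it composes directly with a Gaussian mixture prior over the shared latent space.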
Control volume analysis models physics via the exchange of generalized fluxes between subdomains. We introduce a scientific machine learning framework adopting a partition of unity architecture to identify physically relevant control volumes, with generalized fluxes between subdomains encoded via Whitney forms. The approach provides a differentiable parameterization of geometry which may be trained in an end-to-end fashion to extract reduced models from full-field data while exactly preserving physics. The architecture admits a data-driven finite element exterior calculus, allowing discovery of mixed finite element spaces with closed-form quadrature rules. An equivalence between Whitney forms and graph networks reveals that the geometric problem of control volume learning is equivalent to an unsupervised graph discovery problem. The framework is developed for manifolds in arbitrary dimension, with examples provided for H(div) problems in R^2 establishing convergence and structure-preservation properties. Finally, we consider a lithium-ion battery problem in which we discover a reduced finite element space encoding transport pathways from high-fidelity, microstructure-resolved simulations. The approach reduces a 5.89M-element finite element simulation to 136 elements while reproducing pressure to under 0.1% error and preserving conservation.
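The trainable, differentiable geometry rests on a partition of unity: nonnegative cell functions that sum to one everywhere, so moving their parameters moves the "control volume" boundaries smoothly. A minimal sketch of one common construction (softmax over distances to trainable centers; the specific form here is an assumption for illustration, not the paper's architecture):

```python
import numpy as np

def partition_of_unity(x, centers, scale=1.0):
    """Softmax partition of unity: phi_k(x) >= 0 and sum_k phi_k(x) = 1.
    The (trainable) centers define differentiable cells whose boundaries
    move continuously as the centers move."""
    # negative squared distance to each center acts as a logit
    d2 = np.square(x[:, None, :] - centers[None, :, :]).sum(-1)
    logits = -d2 / scale
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

x = np.random.default_rng(0).random((5, 2))         # points in the unit square
centers = np.array([[0.25, 0.5], [0.75, 0.5]])      # two candidate cells
phi = partition_of_unity(x, centers)                # rows sum to exactly 1
```

Because each phi_k is smooth in the centers, the cell geometry can be trained end-to-end by gradient descent alongside the flux degrees of freedom.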
High-reliability (Hi-Rel) electronics for mission-critical applications are handled with extreme care; stress testing upon full assembly can increase the likelihood of degrading these systems before their deployment. Moreover, parts based on novel materials, such as wide-bandgap semiconductor devices, tend to have more complicated fabrication processes, which can result in larger part variability or potential defects. This paper therefore presents an intelligent, machine-learning-based, non-intrusive screening and inspection technique for electronic parts, in particular gallium nitride (GaN) power transistors, that enhances part-selection decisions by categorizing part samples against the population's expected electrical characteristics. The technique provides relevant information about GaN HEMT device characteristics without operating every device in the high-current region of the transfer and output characteristics, lowering the risk of damaging parts prematurely. Small-signal pulse-width-modulated (PWM) waveforms at frequencies ranging from 10 kHz to 500 kHz are injected into the transistor terminals, and the corresponding output signals are recorded as the training dataset. Unsupervised K-means clustering, combined with dimensionality reduction through principal component analysis (PCA), is used to correlate a population of GaN HEMT transistors with the expected mean of the devices' electrical characteristic performance.
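The PCA-then-K-means pipeline described above can be sketched with NumPy alone. The feature matrix below is a synthetic stand-in for per-device PWM response features (two device populations); the helper names and the farthest-pair initialization are illustrative choices, not the paper's implementation:

```python
import numpy as np

def pca(X, n_components=2):
    """Project centered features onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans2(Z, iters=50):
    """Plain Lloyd's algorithm with k=2, deterministically initialized
    at the two mutually farthest points."""
    d = ((Z[:, None] - Z[None]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    centers = Z[[i, j]].astype(float)
    for _ in range(iters):
        labels = ((Z[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    return labels

# Synthetic stand-in: 20 "nominal" and 20 "outlier" devices, 6 features each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 6)),
               rng.normal(1.0, 0.1, (20, 6))])
labels = kmeans2(pca(X, 2))   # separates the two populations
```

Devices clustered with the population mean would pass screening; the outlier cluster would be flagged for closer inspection.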
Using neural networks to solve variational problems, and other scientific machine learning tasks, has been limited by a lack of consistency and an inability to exactly integrate expressions involving neural network architectures. We address these limitations by formulating a polynomial-spline network, a novel shallow multilayer perceptron (MLP) architecture incorporating free-knot B-spline basis functions into a polynomial mixture-of-experts model. Effectively, our architecture performs piecewise polynomial approximation on each cell of a trainable partition of unity while ensuring the MLP and its derivatives can be integrated exactly, obviating a reliance on sampling or quadrature and enabling error-free computation of variational forms. We demonstrate hp-convergence for regression problems at convergence rates expected from approximation theory and solve elliptic problems in one and two dimensions, with a favorable comparison to adaptive finite elements.
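The key property is that a piecewise polynomial ansatz has a closed-form antiderivative on each knot interval, so integrals in a variational form need no sampling or quadrature. A minimal sketch of such exact integration (the representation here is illustrative, not the paper's network):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def exact_integral(knots, coeffs):
    """Integrate a piecewise polynomial exactly over its knot intervals.
    coeffs[i] holds low-to-high-degree polynomial coefficients on
    [knots[i], knots[i+1]]; each piece is integrated via its closed-form
    antiderivative, so the result is exact (no quadrature error)."""
    total = 0.0
    for i, c in enumerate(coeffs):
        antideriv = P.polyint(c)  # coefficient vector of the antiderivative
        total += P.polyval(knots[i + 1], antideriv) - P.polyval(knots[i], antideriv)
    return total

# f(x) = x^2 represented on two cells of [0, 1]; integral is exactly 1/3.
value = exact_integral([0.0, 0.5, 1.0], [[0, 0, 1], [0, 0, 1]])
```

In the polynomial-spline setting, moving a knot changes the partition but each cell remains a polynomial, so gradients of such exactly computed variational forms are available for training.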