Publications

Results 51–75 of 132

Simultaneous inversion of shear modulus and traction boundary conditions in biomechanical imaging

Inverse Problems in Science and Engineering

Seidl, Daniel T.; van Bloemen Waanders, Bart G.; Wildey, Timothy M.

We present a formulation to simultaneously invert for a heterogeneous shear modulus field and traction boundary conditions in an incompressible linear elastic plane stress model. Our approach uses scalable deterministic methods, including adjoint-based sensitivities and quasi-Newton optimization, to reduce the computational requirements of large-scale inversion with partial differential equation (PDE) constraints. We address regularization for such formulations and explore different types of regularization for the shear modulus and the boundary traction. We apply this PDE-constrained optimization algorithm to a synthetic dataset to verify the accuracy of the reconstructed parameters, and to experimental data from a tissue-mimicking ultrasound phantom. In all of these examples, we compare inversion results from full-field and sparse data measurements.
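
A minimal sketch of the adjoint-plus-quasi-Newton ingredients (a hypothetical 1D diffusion stand-in, not the paper's plane-stress elasticity model): recover a piecewise-constant "modulus" from full-field data, computing the gradient with one adjoint solve per objective evaluation and optimizing with L-BFGS.

```python
import numpy as np
from scipy.optimize import minimize

n = 20                                  # interior finite-difference points
h = 1.0 / (n + 1)
f = np.ones(n)                          # body force

# Split the stiffness matrix by region so A(mu) = mu1*K1 + mu2*K2.
K1, K2 = np.zeros((n, n)), np.zeros((n, n))
for i in range(n):
    K = K1 if i < n // 2 else K2
    K[i, i] = 2.0 / h**2
    if i > 0:
        K[i, i - 1] = -1.0 / h**2
    if i < n - 1:
        K[i, i + 1] = -1.0 / h**2

def forward(mu):
    A = mu[0] * K1 + mu[1] * K2
    return A, np.linalg.solve(A, f)

_, d = forward([1.0, 3.0])              # synthetic "measured" full-field data

def obj_and_grad(mu):
    A, u = forward(mu)
    r = u - d
    lam = np.linalg.solve(A.T, r)       # single adjoint solve for the gradient
    grad = np.array([-lam @ (K1 @ u), -lam @ (K2 @ u)])
    return 0.5 * r @ r, grad

res = minimize(obj_and_grad, x0=[2.0, 2.0], jac=True, method="L-BFGS-B",
               bounds=[(0.1, 10.0)] * 2)
mu_hat = res.x                          # should recover (1, 3)
```

The adjoint solve makes the gradient cost independent of the number of inversion parameters, which is what makes the approach scale to heterogeneous fields.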

WearGP: A computationally efficient machine learning framework for local erosive wear predictions via nodal Gaussian processes

Wear

Laros, James H.; Furlan, John M.; Pagalthivarthi, Krishnan V.; Visintainer, Robert J.; Wildey, Timothy M.; Wang, Yan

Computational fluid dynamics (CFD)-based wear predictions are expensive to evaluate, even with a high-performance computing infrastructure, so it is difficult to provide accurate local wear predictions in a timely manner. Data-driven approaches offer a more efficient way to approximate CFD wear predictions without running the actual CFD wear models. In this paper, a machine learning (ML) approach, termed WearGP, is presented to approximate 3D local wear predictions, using numerical wear predictions from steady-state CFD simulations as training and testing datasets. The proposed framework is built on Gaussian process (GP) regression and predicts wear in a much shorter time. The WearGP framework has three stages. In the first stage, the training dataset is built using a number of CFD simulations on the order of O(10²). In the second stage, data cleansing and data mining are performed, extracting the nodal wear solutions from the solution database to form a training dataset. In the third stage, wear predictions are made using the trained GP models. Two CFD case studies, a 3D slurry pump impeller and a casing, are used to demonstrate the WearGP framework, with 144 training and 40 testing data points used to train and test the proposed method, respectively. The numerical accuracy, computational efficiency, and effectiveness of the WearGP framework are compared against the CFD wear model for both slurry pump impellers and casings. It is shown that the WearGP framework achieves highly accurate results, comparable with the CFD results, from a relatively small training dataset, with a computational time reduction on the order of 10⁵ to 10⁶.
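
The core of such a framework is ordinary GP regression, which can be sketched in a few lines (a 1D toy response standing in for a nodal wear solution; the kernel, length scale, and data here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

X_train = np.linspace(0.0, 1.0, 12)[:, None]    # e.g. an operating condition
y_train = np.sin(2.0 * np.pi * X_train[:, 0])   # stand-in "wear" response

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel between column vectors A and B."""
    return np.exp(-0.5 * (A - B.T) ** 2 / ell**2)

K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))  # jitter for stability
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

X_test = np.array([[0.25], [0.75]])
y_pred = rbf(X_test, X_train) @ alpha           # GP posterior mean prediction
```

In the nodal setting of the paper, one such GP would be fit per mesh node, so predicting the wear field at a new operating point reduces to kernel evaluations rather than a full CFD run.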

Unified geometric multigrid algorithm for hybridized high-order finite element methods

SIAM Journal on Scientific Computing

Wildey, Timothy M.; Muralikrishnan, Sriramkrishnan; Bui-Thanh, Tan

We consider a standard elliptic partial differential equation and propose a geometric multigrid algorithm based on Dirichlet-to-Neumann (DtN) maps for hybridized high-order finite element methods. The proposed unified approach is applicable to any locally conservative hybridized finite element method, including multinumerics with different hybridized methods in different parts of the domain. For these methods, the linear system involves only the unknowns residing on the mesh skeleton, so constructing intergrid transfer operators is not trivial. The key to our geometric multigrid algorithm is the physics-based, energy-preserving intergrid transfer operators, which depend only on the fine-scale DtN maps. Thanks to these operators, we completely avoid upscaling of parameters, and no information regarding subgrid physics is explicitly required on coarse meshes. Moreover, our algorithm is agglomeration-based and can straightforwardly handle unstructured meshes. We perform extensive numerical studies with hybridized mixed methods, hybridized discontinuous Galerkin methods, weak Galerkin methods, and hybridized versions of interior penalty discontinuous Galerkin methods on a range of elliptic problems, including subsurface flow through highly heterogeneous porous media. We compare the performance of different smoothers and analyze the effect of stabilization parameters on the scalability of the multigrid algorithm.
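
For readers unfamiliar with the smooth/restrict/correct/smooth structure, here is a generic two-grid cycle for a 1D Poisson problem (a textbook rediscretization scheme; the paper's distinguishing contribution, DtN-based transfer operators for skeleton unknowns, is not reproduced in this toy):

```python
import numpy as np

n, nc = 63, 31                      # nested fine/coarse interior points
h, hc = 1.0 / (n + 1), 2.0 / (n + 1)
A  = (2*np.eye(n)  - np.eye(n,  k=1) - np.eye(n,  k=-1)) / h**2
Ac = (2*np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
b = np.ones(n)

def jacobi(v, rhs, iters=3, w=2.0/3.0):
    for _ in range(iters):
        v = v + w * (rhs - A @ v) * h**2 / 2.0   # damped Jacobi smoothing
    return v

def restrict(r):                    # full-weighting restriction
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e):                     # linear interpolation back to fine grid
    v = np.zeros(n)
    v[1::2] = e
    v[2:-1:2] = 0.5 * (e[:-1] + e[1:])
    v[0], v[-1] = 0.5 * e[0], 0.5 * e[-1]
    return v

v = np.zeros(n)
for _ in range(10):                 # two-grid V-cycles
    v = jacobi(v, b)                                            # pre-smooth
    v = v + prolong(np.linalg.solve(Ac, restrict(b - A @ v)))   # coarse correction
    v = jacobi(v, b)                                            # post-smooth
rel_res = np.linalg.norm(b - A @ v) / np.linalg.norm(b)
```

The paper's contribution replaces the rediscretized coarse operator and interpolation above with energy-preserving operators built from fine-scale DtN maps, so the scheme applies directly to skeleton-only hybridized systems.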

Robust uncertainty quantification using response surface approximations of discontinuous functions

International Journal for Uncertainty Quantification

Wildey, Timothy M.; Gorodetsky, A.A.; Belme, A.C.; Shadid, John N.

This paper considers response surface approximations for discontinuous quantities of interest. Our objective is not to adaptively characterize the interface defining the discontinuity. Instead, we utilize an epistemic description of the uncertainty in the location of a discontinuity to produce robust bounds on sample-based estimates of probabilistic quantities of interest. We demonstrate that two common machine learning strategies for classification, one based on nearest neighbors (Voronoi cells) and one based on support vector machines, provide reasonable descriptions of the region where the discontinuity may reside. In higher dimensional spaces, we demonstrate that support vector machines are more accurate for discontinuities defined by smooth interfaces. We also show how gradient information, often available via adjoint-based approaches, can be used to define indicators to effectively detect a discontinuity and to decompose the samples into clusters using an unsupervised learning technique. Numerical results demonstrate the epistemic bounds on probabilistic quantities of interest for simplistic models and for a compressible fluid model with a shock-induced discontinuity.
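
The nearest-neighbor (Voronoi) classification idea can be sketched in a 2D toy (the interface, sample sizes, and event below are illustrative assumptions): expensive model samples label the two sides of the discontinuity, and cheap 1-NN lookups then estimate a probability of interest.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda X: (X[:, 0] + X[:, 1] > 1.0).astype(int)   # jump across x0 + x1 = 1

X_train = rng.uniform(size=(200, 2))      # "expensive" model evaluations
y_train = f(X_train)

X_test = rng.uniform(size=(4000, 2))      # cheap classification samples
d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
y_pred = y_train[d2.argmin(axis=1)]       # 1-nearest-neighbor (Voronoi) label

accuracy = (y_pred == f(X_test)).mean()
p_event = y_pred.mean()                   # estimate of P(x0 + x1 > 1) = 0.5
```

Samples whose Voronoi cell straddles the interface are exactly the ones the paper treats epistemically, yielding robust upper/lower bounds on the probability rather than a single point estimate; the paper reports that SVM classifiers describe smooth interfaces more accurately in higher dimensions.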

sBF-BO-2CoGP: A sequential bi-fidelity constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.; Wildey, Timothy M.; Mccann, Scott

Bayesian optimization is an effective surrogate-based optimization method that has been widely used for simulation-based applications. However, the traditional Bayesian optimization (BO) method is only applicable to single-fidelity applications, whereas multiple levels of fidelity exist in reality. In this work, we propose a bi-fidelity known/unknown constrained Bayesian optimization method for design applications. The proposed framework, called sBF-BO-2CoGP, is built on a two-level CoKriging method to predict the objective function. An external binary classifier, which is also another CoKriging model, is used to distinguish between feasible and infeasible regions. The sBF-BO-2CoGP method is demonstrated using a numerical example and a flip-chip application for design optimization to minimize the warpage deformation under thermal loading conditions.
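
A single-fidelity, unconstrained Bayesian-optimization loop with a GP surrogate and expected improvement illustrates the baseline the paper extends (the bi-fidelity CoKriging surrogate and the CoKriging constraint classifier are omitted; the objective, kernel, and candidate grid are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

f = lambda x: (x - 0.3) ** 2                 # "expensive" objective to minimize
X = np.array([0.0, 0.33, 0.66, 1.0])         # initial designs
y = f(X)
cand = np.linspace(0.0, 1.0, 201)            # candidate grid

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

for _ in range(8):                           # BO iterations
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(cand, X)
    mu = Ks @ np.linalg.solve(K, y)          # GP posterior mean
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    sd = np.sqrt(np.maximum(var, 1e-12))
    imp = y.min() - mu                       # expected improvement (minimization)
    z = imp / sd
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    x_new = cand[ei.argmax()]                # acquisition maximizer
    X, y = np.append(X, x_new), np.append(y, f(x_new))

x_best, y_best = X[y.argmin()], y.min()
```

In the paper's setting, a second (cheaper) fidelity level informs the CoKriging surrogate, and a separate CoKriging classifier screens candidate designs for feasibility before the acquisition is maximized.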

Convergence of Probability Densities Using Approximate Models for Forward and Inverse Problems in Uncertainty Quantification

SIAM Journal on Scientific Computing

Butler, T.; Jakeman, John D.; Wildey, Timothy M.

A previous study analyzed the convergence of probability densities for forward and inverse problems when a sequence of approximate maps between model inputs and outputs converges in L^∞. This work generalizes the analysis to cases where the approximate maps converge in L^p for any 1 ≤ p < ∞. In particular, under the assumption that the approximate maps converge in L^p, the convergence of probability density functions solving either forward or inverse problems is proven in L^q, where the value of 1 ≤ q < ∞ may even be greater than p in certain cases. This greatly expands the applicability of the previous results to commonly used methods for approximating models (such as polynomial chaos expansions) that only guarantee L^p convergence for some 1 ≤ p < ∞. Several numerical examples are also included, along with numerical diagnostics of solutions and verification of assumptions made in the analysis.
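
A small numerical illustration of the phenomenon (toy maps and a histogram density estimator chosen here for illustration, not taken from the paper): as approximate maps Q_n converge to Q, the push-forward densities of a uniform input converge as well.

```python
import numpy as np

lam = (np.arange(200000) + 0.5) / 200000          # deterministic U(0,1) samples
Q = lam**2                                        # exact input-to-output map
bins = np.linspace(-0.3, 1.3, 65)
w = bins[1] - bins[0]
p_exact, _ = np.histogram(Q, bins=bins, density=True)

def l1_density_error(n):
    Qn = lam**2 + np.sin(n * lam) / n             # approximate map, -> Q as n grows
    pn, _ = np.histogram(Qn, bins=bins, density=True)
    return np.sum(np.abs(pn - p_exact)) * w       # L^1 distance of push-forwards

err4, err64 = l1_density_error(4), l1_density_error(64)
```

Here the map error shrinks like 1/n in the sup norm, and the L^1 distance between the corresponding push-forward densities shrinks with it, which is the forward-problem half of the convergence result.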

Optimal experimental design using a consistent Bayesian approach

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Walsh, Scott N.; Wildey, Timothy M.; Jakeman, John D.

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
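
The expected-information-gain criterion itself can be demonstrated on a linear-Gaussian toy with an analytic answer (this sketch uses the classical Bayesian EIG via nested Monte Carlo for simplicity, not the paper's consistent-Bayes formulation; the model and "gain" designs are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                                    # observation noise std

def eig(g, n_outer=2000, n_inner=2000):
    """E_y[ KL(posterior || prior) ] for y = g*theta + noise, theta ~ N(0,1)."""
    theta = rng.normal(size=n_outer)
    y = g * theta + sigma * rng.normal(size=n_outer)
    loglik = -0.5 * ((y - g * theta) / sigma) ** 2      # constants cancel below
    th_in = rng.normal(size=n_inner)
    inner = np.exp(-0.5 * ((y[:, None] - g * th_in[None, :]) / sigma) ** 2)
    return np.mean(loglik - np.log(inner.mean(axis=1)))  # log-lik minus log-evidence

gains = [0.5, 1.0, 2.0]                        # candidate experimental designs
estimates = [eig(g) for g in gains]
exact = [0.5 * np.log(1.0 + g**2 / sigma**2) for g in gains]  # analytic EIG
```

The OED then simply selects the design with the largest expected gain; in the paper this criterion is evaluated with the consistent Bayesian posterior rather than the classical one.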

Utilizing adjoint-based error estimates for surrogate models to accurately predict probabilities of events

International Journal for Uncertainty Quantification

Wildey, Timothy M.; Butler, T.

We develop a procedure that utilizes error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluating the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the event of interest than standard response surface approximation methods at a lower computational cost.
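
The reliable/unreliable split can be sketched directly (toy models; the paper obtains the per-sample error estimate via adjoints, whereas here it is exact by construction). A sample is reliable when the error estimate cannot flip the surrogate's verdict about the event, here f(x) > 0.5:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sin(2.0 * np.pi * x)                 # high-fidelity model
s = lambda x: f(x) + 0.05 * np.cos(5.0 * x)           # surrogate model
e = lambda x: np.abs(0.05 * np.cos(5.0 * x)) + 1e-12  # error estimate >= |f - s|

x = rng.uniform(size=20000)
sv, ev = s(x), e(x)
reliable = np.abs(sv - 0.5) > ev        # error cannot flip the event verdict
pred = sv > 0.5
pred[~reliable] = f(x[~reliable]) > 0.5 # high-fidelity only where needed

p_mixed = pred.mean()
p_hifi = (f(x) > 0.5).mean()            # all-high-fidelity reference estimate
frac_hifi = (~reliable).mean()          # fraction of expensive evaluations
```

Because the verdict of every reliable sample is provably correct, the mixed estimate equals the all-high-fidelity estimate exactly while evaluating the expensive model only near the limit state.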

Combining push-forward measures and Bayes' rule to construct consistent solutions to stochastic inverse problems

SIAM Journal on Scientific Computing

Wildey, Timothy M.; Butler, T.; Jakeman, John D.

We formulate, and present a numerical method for solving, an inverse problem for inferring parameters of a deterministic model from stochastic observational data on quantities of interest. The solution, given as a probability measure, is derived using a Bayesian updating approach for measurable maps that finds a posterior probability measure that when propagated through the deterministic model produces a push-forward measure that exactly matches the observed probability measure on the data. Our approach for finding such posterior measures, which we call consistent Bayesian inference or push-forward based inference, is simple and only requires the computation of the push-forward probability measure induced by the combination of a prior probability measure and the deterministic model. We establish existence and uniqueness of observation-consistent posteriors and present both stability and error analyses. We also discuss the relationships between consistent Bayesian inference, classical/statistical Bayesian inference, and a recently developed measure-theoretic approach for inference. Finally, analytical and numerical results are presented to highlight certain properties of the consistent Bayesian approach and the differences between this approach and the two aforementioned alternatives for inference.
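
A rejection-sampling sketch of the push-forward based update (the prior, map, and observed density below are illustrative choices, not an example from the paper): prior samples are accepted with probability proportional to r(λ) = p_obs(Q(λ)) / p_push(Q(λ)), and the push-forward of the accepted samples then matches the observed density.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
lam = rng.normal(size=200000)                  # prior N(0,1) samples
q = lam**2                                     # QoI map; push-forward is chi^2_1
p_push = np.exp(-q / 2.0) / np.sqrt(2.0 * np.pi * q)   # analytic chi^2_1 density
p_obs = norm.pdf(q, loc=1.0, scale=0.1)        # observed density on the QoI
r = p_obs / p_push                             # consistent-Bayes update ratio
accept = rng.uniform(size=lam.size) < r / r.max()
post = lam[accept]                             # posterior samples

q_post = post**2                               # push-forward of the posterior
```

The acceptance ratio is exactly the density update of the consistent Bayesian posterior, so q_post is (approximately) distributed according to the observed N(1, 0.1²) density, which is the defining consistency property.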
