Publications

Results 51–100 of 137

Simultaneous inversion of shear modulus and traction boundary conditions in biomechanical imaging

Inverse Problems in Science and Engineering

Seidl, D.T.; Van Bloemen Waanders, Bart; Wildey, Timothy

We present a formulation to simultaneously invert for a heterogeneous shear modulus field and traction boundary conditions in an incompressible linear elastic plane stress model. Our approach utilizes scalable deterministic methods, including adjoint-based sensitivities and quasi-Newton optimization, to reduce the computational requirements for large-scale inversion with partial differential equation (PDE) constraints. We address regularization for such formulations and explore different types of regularization for the shear modulus and the boundary traction. We apply this PDE-constrained optimization algorithm to a synthetic dataset, to verify the accuracy of the reconstructed parameters, and to experimental data from a tissue-mimicking ultrasound phantom. In all of these examples, we compare inversion results from full-field and sparse data measurements.

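The full plane-stress formulation is beyond the scope of a listing like this, but the adjoint-gradient pattern the abstract describes (forward solve, adjoint solve, parameter gradient, quasi-Newton update with regularization) can be sketched on a 1D analogue. The following is a minimal illustration assuming a hypothetical 1D stiffness-inversion problem with synthetic data and Tikhonov-type regularization; it is not the paper's elasticity model.

```python
import numpy as np
from scipy.optimize import minimize

n, h = 50, 1.0 / 50                       # elements, mesh size
alpha = 1e-6                              # Tikhonov regularization weight
f = np.full(n - 1, h)                     # uniform load on interior nodes

def assemble(mu):
    """Stiffness matrix for -(mu u')' = f with u(0) = u(1) = 0."""
    A = np.zeros((n + 1, n + 1))
    for e in range(n):
        A[e:e + 2, e:e + 2] += mu[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A[1:-1, 1:-1]                  # eliminate Dirichlet boundary nodes

# synthetic "measured" displacement from a hypothetical true modulus
xe = (np.arange(n) + 0.5) * h
mu_true = 1.0 + 0.5 * np.exp(-((xe - 0.5) / 0.1) ** 2)
u_obs = np.linalg.solve(assemble(mu_true), f)

D = np.diff(np.eye(n), axis=0)            # first-difference regularization

def obj_and_grad(mu):
    A = assemble(mu)
    u = np.linalg.solve(A, f)             # forward solve
    r = u - u_obs
    J = 0.5 * r @ r + 0.5 * alpha * np.sum((D @ mu) ** 2)
    p = np.linalg.solve(A.T, -r)          # adjoint solve
    U, P = np.zeros(n + 1), np.zeros(n + 1)
    U[1:-1], P[1:-1] = u, p
    # g_e = p^T (dA/dmu_e) u, assembled element by element
    g = np.diff(U) * np.diff(P) / h + alpha * (D.T @ (D @ mu))
    return J, g

res = minimize(obj_and_grad, np.ones(n), jac=True, method="L-BFGS-B",
               bounds=[(0.1, None)] * n)
print("max modulus error:", np.abs(res.x - mu_true).max())
```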

WearGP: A computationally efficient machine learning framework for local erosive wear predictions via nodal Gaussian processes

Wear

Foulk, James W.; Furlan, John M.; Pagalthivarthi, Krishnan V.; Visintainer, Robert J.; Wildey, Timothy; Wang, Yan

Computational fluid dynamics (CFD)-based wear predictions are computationally expensive to evaluate, even with a high-performance computing infrastructure, which makes it difficult to provide accurate local wear predictions in a timely manner. Data-driven approaches provide a more computationally efficient way to approximate the CFD wear predictions without running the actual CFD wear models. In this paper, a machine learning (ML) approach, termed WearGP, is presented to approximate 3D local wear predictions, using numerical wear predictions from steady-state CFD simulations as training and testing datasets. The proposed framework is built on Gaussian process (GP) regression and is used to predict wear in a much shorter time. The WearGP framework consists of three stages. In the first stage, the training dataset is built using a number of CFD simulations on the order of O(10^2). In the second stage, data cleansing and data mining are performed, and the nodal wear solutions are extracted from the solution database to build the training dataset. In the third stage, wear predictions are made using the trained GP models. Two CFD case studies, a 3D slurry pump impeller and a casing, are used to demonstrate the WearGP framework, in which 144 training and 40 testing data points are used to train and test the proposed method, respectively. The numerical accuracy, computational efficiency, and effectiveness of the WearGP framework and the CFD wear model are compared for both slurry pump impellers and casings. It is shown that the WearGP framework can achieve highly accurate results, comparable with the CFD results, from a relatively small training dataset, with a computational time reduction on the order of 10^5 to 10^6.

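The core idea, replacing a CFD solve with GP predictions of the nodal wear field trained on a modest set of simulations, is straightforward to sketch. The snippet below is a hypothetical stand-in: three scalar operating conditions and a synthetic function take the place of the CFD nodal wear solutions, and a single multi-output GP takes the place of the paper's nodal GPs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
n_train, n_test, n_nodes = 144, 40, 200   # sizes mirror the paper's study

# hypothetical operating conditions (e.g., flow rate, concentration, speed)
X_train = rng.uniform(0.0, 1.0, size=(n_train, 3))
X_test = rng.uniform(0.0, 1.0, size=(n_test, 3))

def fake_cfd_wear(X):
    """Synthetic stand-in for the nodal wear field from a CFD solve."""
    s = np.linspace(0.0, 1.0, n_nodes)    # node coordinate along the surface
    return (X[:, [0]] ** 2 * np.sin(2 * np.pi * s)
            + X[:, [1]] * np.cos(np.pi * s) + 0.5 * X[:, [2]])

Y_train = fake_cfd_wear(X_train)          # shape (n_train, n_nodes)

gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2, 0.2]),
    normalize_y=True)
gp.fit(X_train, Y_train)

Y_pred = gp.predict(X_test)               # milliseconds versus a CFD solve
print("max nodal wear error:", np.abs(Y_pred - fake_cfd_wear(X_test)).max())
```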

Unified geometric multigrid algorithm for hybridized high-order finite element methods

SIAM Journal on Scientific Computing

Wildey, Timothy; Muralikrishnan, Sriramkrishnan; Bui-Thanh, Tan

We consider a standard elliptic partial differential equation and propose a geometric multigrid algorithm based on Dirichlet-to-Neumann (DtN) maps for hybridized high-order finite element methods. The proposed unified approach is applicable to any locally conservative hybridized finite element method, including multinumerics with different hybridized methods in different parts of the domain. For these methods, the linear system involves only the unknowns residing on the mesh skeleton, so constructing intergrid transfer operators is not trivial. The key to our geometric multigrid algorithm is a set of physics-based, energy-preserving intergrid transfer operators that depend only on the fine-scale DtN maps. Thanks to these operators, we completely avoid upscaling of parameters, and no information regarding subgrid physics is explicitly required on coarse meshes. Moreover, our algorithm is agglomeration-based and can straightforwardly handle unstructured meshes. We perform extensive numerical studies with hybridized mixed methods, hybridized discontinuous Galerkin methods, weak Galerkin methods, and hybridized versions of interior penalty discontinuous Galerkin methods on a range of elliptic problems, including subsurface flow through highly heterogeneous porous media. We compare the performance of different smoothers and analyze the effect of stabilization parameters on the scalability of the multigrid algorithm.

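The DtN-based transfer operators are specific to hybridized skeleton systems, but the surrounding machinery follows the standard two-grid pattern: smooth, restrict the residual, correct from the coarse grid, smooth again. Below is a generic geometric two-grid cycle for a 1D Poisson problem, with linear interpolation standing in for the paper's energy-preserving DtN-based operators; it illustrates only the algorithmic skeleton.

```python
import numpy as np

def poisson(n):
    """1D Poisson stiffness matrix on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolong(nc):
    """Linear interpolation from nc coarse to 2*nc+1 fine interior points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    return P

def two_grid(A, b, u, P, Ac, nu=3, omega=2.0 / 3.0):
    Dinv = 1.0 / np.diag(A)
    for _ in range(nu):                    # pre-smooth (damped Jacobi)
        u = u + omega * Dinv * (b - A @ u)
    rc = 0.5 * P.T @ (b - A @ u)           # restrict the residual
    u = u + P @ np.linalg.solve(Ac, rc)    # coarse-grid correction
    for _ in range(nu):                    # post-smooth
        u = u + omega * Dinv * (b - A @ u)
    return u

nf, nc = 63, 31
A, P = poisson(nf), prolong(nc)
Ac = P.T @ A @ P                           # Galerkin coarse operator
b = np.ones(nf)
u = np.zeros(nf)
for it in range(10):
    u = two_grid(A, b, u, P, Ac)
    print(it, np.linalg.norm(b - A @ u))
```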

SBF-BO-2CoGP: A sequential bi-fidelity constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Foulk, James W.; Wildey, Timothy; McCann, Scott

Bayesian optimization is an effective surrogate-based optimization method that has been widely used for simulation-based applications. However, the traditional Bayesian optimization (BO) method is only applicable to single-fidelity applications, whereas in practice models are often available at multiple levels of fidelity. In this work, we propose a bi-fidelity known/unknown constrained Bayesian optimization method for design applications. The proposed framework, called sBF-BO-2CoGP, is built on a two-level CoKriging method to predict the objective function. An external binary classifier, which is itself another CoKriging model, is used to distinguish between feasible and infeasible regions. The sBF-BO-2CoGP method is demonstrated using a numerical example and a flip-chip design optimization application, minimizing warpage deformation under thermal loading conditions.

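A single-fidelity simplification of the loop described above can be sketched with off-the-shelf pieces: a GP surrogate for the objective, a probabilistic classifier for feasibility, and an expected-improvement acquisition weighted by the feasibility probability. The example below assumes a hypothetical 1D objective and constraint and replaces the paper's two-level CoKriging models with ordinary GP regression and a support vector classifier.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.svm import SVC

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x            # hypothetical objective
feasible = lambda x: np.cos(2 * x) > -0.3        # hypothetical constraint

X = rng.uniform(0, 4, size=(6, 1))               # initial design
y = f(X).ravel()
lab = feasible(X).ravel()

for it in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 4, size=(512, 1))
    mu, sd = gp.predict(cand, return_std=True)
    best = y[lab].min() if lab.any() else y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    if lab.min() != lab.max():                   # both classes observed so far
        svc = SVC(probability=True).fit(X, lab)
        p_feas = svc.predict_proba(cand)[:, list(svc.classes_).index(True)]
        ei *= p_feas                             # feasibility-weighted EI
    x_new = cand[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new)[0])
    lab = np.append(lab, feasible(x_new)[0])

print("best feasible x, f(x):", X[lab][np.argmin(y[lab])], y[lab].min())
```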

Robust Uncertainty Quantification using Response Surface Approximations of Discontinuous Functions

International Journal for Uncertainty Quantification

Wildey, Timothy; Gorodetsky, Alex; Belme, Anca; Shadid, John N.

This paper considers response surface approximations for discontinuous quantities of interest. Our objective is not to adaptively characterize the manifold defining the discontinuity. Instead, we utilize an epistemic description of the uncertainty in the location of a discontinuity to produce robust bounds on sample-based estimates of probabilistic quantities of interest. We demonstrate that two common machine learning strategies for classification, one based on nearest neighbors (Voronoi cells) and one based on support vector machines, provide reasonable descriptions of the region where the discontinuity may reside. In higher dimensional spaces, we demonstrate that support vector machines are more accurate for discontinuities defined by smooth manifolds. We also show how gradient information, often available via adjoint-based approaches, can be used to define indicators to effectively detect a discontinuity and to decompose the samples into clusters using an unsupervised learning technique. Numerical results demonstrate the epistemic bounds on probabilistic quantities of interest for simplistic models and for a compressible fluid model with a shock-induced discontinuity.

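The bounding idea is compact in code: classify samples by which side of the discontinuity they fall on, then treat the region where the classifiers disagree as epistemic, counting it toward neither side for the lower bound and toward the event for the upper bound. The sketch below assumes a hypothetical circular discontinuity in 2D and assumes the side labels are already available, e.g., from a jump indicator.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
inside = lambda x: (x[:, 0] - 0.5) ** 2 + (x[:, 1] - 0.5) ** 2 < 0.09

# labeled training samples (side of the discontinuity)
Xtr = rng.uniform(0, 1, size=(200, 2))
side = inside(Xtr)

knn = KNeighborsClassifier(n_neighbors=1).fit(Xtr, side)   # Voronoi cells
svm = SVC(kernel="rbf", gamma=20.0).fit(Xtr, side)

# Monte Carlo samples; the event here is "sample lies inside the jump region"
Xmc = rng.uniform(0, 1, size=(100000, 2))
pk, ps = knn.predict(Xmc), svm.predict(Xmc)
agree = pk == ps

p_low = np.mean(pk & agree)          # confident 'event' samples only
p_up = p_low + np.mean(~agree)       # undecided region counted as event
print(f"bounds on P(event): [{p_low:.4f}, {p_up:.4f}]  "
      f"(truth {np.mean(inside(Xmc)):.4f})")
```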

Data-Consistent Solutions to Stochastic Inverse Problems Using a Probabilistic Multi-Fidelity Method Based on Conditional Densities

International Journal for Uncertainty Quantification

Wildey, Timothy; Bruder, L.; Gee, M.W.

In this work, we build upon a recently developed approach for solving stochastic inverse problems based on a combination of measure-theoretic principles and Bayes' rule. We propose a multi-fidelity method to reduce the computational burden of performing uncertainty quantification using high-fidelity models. This approach is based on a Monte Carlo framework for uncertainty quantification that combines information from solvers of various fidelities to obtain statistics on the quantities of interest of the problem. In particular, our goal is to generate samples from a high-fidelity push-forward density at a fraction of the cost of standard Monte Carlo methods, while maintaining flexibility in the number of random model input parameters. Key to this methodology is the construction of a regression model to represent the stochastic mapping between the low- and high-fidelity models, such that most of the computational effort can be shifted to the low-fidelity model. To that end, we employ Gaussian process regression and present extensions to multi-level-type hierarchies as well as to the case of multiple quantities of interest. Finally, we demonstrate the feasibility of the framework in several numerical examples.

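The central construction, a GP regression of the high-fidelity QoI on the low-fidelity QoI so that most samples only ever touch the cheap model, can be sketched directly. The example below assumes hypothetical scalar low- and high-fidelity maps and draws from the GP predictive distribution to retain the stochastic character of the mapping.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
q_low = lambda lam: lam**3                           # cheap model QoI
q_high = lambda lam: lam**3 + 0.1 * np.sin(3 * lam)  # expensive model QoI

lam = rng.uniform(-1, 1, 20000)     # prior samples, all run at low fidelity
ql = q_low(lam)

idx = rng.choice(lam.size, 50, replace=False)        # only 50 high-fi runs
gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-6),
                              normalize_y=True)
gp.fit(ql[idx, None], q_high(lam[idx]))

# sample the GP predictive distribution to approximate the high-fidelity
# push-forward density using only low-fidelity evaluations
mu, sd = gp.predict(ql[:, None], return_std=True)
qh_approx = mu + sd * rng.normal(size=mu.shape)

grid = np.linspace(-1.3, 1.3, 400)
kde_ref = gaussian_kde(q_high(lam))(grid)   # reference (normally unaffordable)
kde_mf = gaussian_kde(qh_approx)(grid)
print("L1 density error:", np.abs(kde_mf - kde_ref).sum() * (grid[1] - grid[0]))
```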

Convergence of Probability Densities Using Approximate Models for Forward and Inverse Problems in Uncertainty Quantification

SIAM Journal on Scientific Computing

Butler, Troy; Jakeman, John D.; Wildey, Timothy

Here, we analyze the convergence of probability density functions utilizing approximate models for both forward and inverse problems. We consider the standard forward uncertainty quantification problem where an assumed probability density on parameters is propagated through the approximate model to produce a probability density, often called a push-forward probability density, on a set of quantities of interest (QoI). The inverse problem considered in this paper seeks to update an initial probability density assumed on model input parameters such that the subsequent push-forward of this updated density through the parameter-to-QoI map matches a given probability density on the QoI. We prove that the densities obtained from solving the forward and inverse problems, using approximate models, converge to the true densities as the approximate models converge to the true models. Numerical results are presented to demonstrate convergence rates of densities for sparse grid approximations of parameter-to-QoI maps and standard spatial and temporal discretizations of PDEs and ODEs.

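The forward-problem statement can be checked numerically in a few lines: push a fixed parameter density through a sequence of approximate maps and watch the push-forward densities converge in L1 as the maps converge. A minimal sketch, assuming a hypothetical map Q(lambda) = lambda^2 with a perturbation of size h standing in for discretization error:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
lam = rng.uniform(-1, 1, 50000)            # samples of the parameter density

grid = np.linspace(-0.2, 1.4, 600)
dq = grid[1] - grid[0]
ref = gaussian_kde(lam**2)(grid)           # push-forward through the true map

for h in [0.2, 0.1, 0.05, 0.025]:
    q_h = lam**2 + h * np.sin(5 * lam)     # approximate map, error O(h)
    err = np.abs(gaussian_kde(q_h)(grid) - ref).sum() * dq
    print(f"h = {h:<6} L1 density error = {err:.4f}")
```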

Optimal experimental design using a consistent Bayesian approach

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Walsh, Scott N.; Wildey, Timothy; Jakeman, John D.

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.

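For the consistent Bayesian posterior, the information gain has a convenient sample-based form: with r(lambda) = pi_obs(Q(lambda)) / pi_Q(Q(lambda)), the KL divergence of the posterior from the prior is the prior average of r log r, so candidate observation maps can be ranked by averaging this quantity over plausible observed densities. A toy sketch, assuming two hypothetical candidate QoI maps and Gaussian observed densities centered on synthetic truths:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(5)
lam = rng.uniform(0, 1, 20000)                     # prior samples
qoi = {"Q1": lambda t: t**2, "Q2": lambda t: np.sin(0.2 * np.pi * t)}
sigma = 0.05                                       # measurement noise level

for name, Q in qoi.items():
    q = Q(lam)
    push = gaussian_kde(q[::10])(q)    # push-forward density at the samples
    eig = 0.0
    for lam_true in rng.uniform(0, 1, 20):   # average over synthetic truths
        r = norm.pdf(q, Q(lam_true), sigma) / push   # consistent-Bayes ratio
        eig += np.mean(r * np.log(np.maximum(r, 1e-300))) / 20
    print(name, "expected information gain ~", round(float(eig), 3))
```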

Combining push-forward measures and Bayes' rule to construct consistent solutions to stochastic inverse problems

SIAM Journal on Scientific Computing

Wildey, Timothy; Butler, T.; Jakeman, John D.

We formulate, and present a numerical method for solving, an inverse problem for inferring parameters of a deterministic model from stochastic observational data on quantities of interest. The solution, given as a probability measure, is derived using a Bayesian updating approach for measurable maps that finds a posterior probability measure that when propagated through the deterministic model produces a push-forward measure that exactly matches the observed probability measure on the data. Our approach for finding such posterior measures, which we call consistent Bayesian inference or push-forward based inference, is simple and only requires the computation of the push-forward probability measure induced by the combination of a prior probability measure and the deterministic model. We establish existence and uniqueness of observation-consistent posteriors and present both stability and error analyses. We also discuss the relationships between consistent Bayesian inference, classical/statistical Bayesian inference, and a recently developed measure-theoretic approach for inference. Finally, analytical and numerical results are presented to highlight certain properties of the consistent Bayesian approach and the differences between this approach and the two aforementioned alternatives for inference.

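The update itself is a one-liner: the posterior reweights the prior by r(lambda) = pi_obs(Q(lambda)) / pi_Q(Q(lambda)), where pi_Q is the push-forward of the prior, and samples accepted with probability proportional to r push forward to the observed density. A minimal rejection-sampling sketch with a hypothetical map and observed density:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(6)
lam = rng.normal(0.0, 1.0, 20000)          # prior samples
q = lam**2                                 # hypothetical QoI map
obs = norm(0.8, 0.1)                       # observed density on the QoI

push = gaussian_kde(q[::10])               # push-forward of the prior
r = obs.pdf(q) / push(q)                   # consistency ratio r(lambda)
keep = rng.uniform(0, 1, lam.size) < r / r.max()   # rejection sampling
post = lam[keep]                           # samples from the posterior

# check: the posterior push-forward should match the observed density
print("observed mean/std:    ", obs.mean(), obs.std())
print("push-forward mean/std:", (post**2).mean().round(3),
      (post**2).std().round(3))
```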

Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

International Journal for Uncertainty Quantification

Wildey, Timothy; Butler, Troy

In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

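The reliability criterion is simple to state in code: a sample is reliable when the error bar around the surrogate value cannot move it across the limit state, and only the unreliable samples require high-fidelity evaluations. The sketch below assumes a hypothetical 1D model, a crude Taylor-type surrogate, and an exact error indicator standing in for the paper's adjoint-based error estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
hi_fi = lambda x: np.sin(4 * x) + x                   # "expensive" model
surrogate = lambda x: 4 * x - (4 * x) ** 3 / 6 + x    # crude Taylor surrogate
c = 1.2                                               # event: hi_fi(x) > c

x = rng.uniform(0.0, 0.5, 100000)
s = surrogate(x)
err = np.abs(hi_fi(x) - s)     # stand-in for an adjoint-based error estimate

reliable = np.abs(s - c) > err             # error bar clears the limit state
p_low = np.mean((s > c) & reliable)        # robust bounds from the surrogate
p_up = p_low + np.mean(~reliable)

# high-fidelity model evaluated only on the unreliable samples
event = np.where(reliable, s > c, hi_fi(x) > c)
print(f"bounds [{p_low:.4f}, {p_up:.4f}], hybrid estimate {event.mean():.4f}, "
      f"truth {np.mean(hi_fi(x) > c):.4f}")
```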

Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

Journal of Computational Physics

Smith, Thomas M.; Shadid, John N.; Cyr, Eric C.; Pawlowski, Roger; Wildey, Timothy

A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, the understanding of numerical error and of the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer, using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques for nuclear reactor relevant prototype thermal-hydraulics problems.

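The adjoint ingredient referred to above can be illustrated at the algebraic level: for a discretized steady problem A(theta) u = b with QoI J = g^T u, a single adjoint solve A^T p = g yields the sensitivity dJ/dtheta = -p^T (dA/dtheta) u for every parameter at once. A small sketch with a hypothetical parameter-dependent matrix, verified against finite differences:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20
A0 = np.eye(n) * 4 + rng.normal(0, 0.1, (n, n))   # baseline operator
A1 = rng.normal(0, 0.2, (n, n))                   # dA/dtheta (hypothetical)
b = rng.normal(0, 1, n)
g = rng.normal(0, 1, n)                           # QoI weights: J = g @ u

def J(theta):
    return g @ np.linalg.solve(A0 + theta * A1, b)

theta = 0.3
u = np.linalg.solve(A0 + theta * A1, b)           # one forward solve
p = np.linalg.solve((A0 + theta * A1).T, g)       # one adjoint solve
dJ_adj = -p @ (A1 @ u)                            # adjoint sensitivity

eps = 1e-6
dJ_fd = (J(theta + eps) - J(theta - eps)) / (2 * eps)
print("adjoint:", dJ_adj, " finite difference:", dJ_fd)
```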