Publications

Results 26–50 of 147

The Future of Sensitivity Analysis: An essential discipline for systems modeling and policy support

Environmental Modelling and Software

Razavi, Saman; Jakeman, Anthony; Saltelli, Andrea; Prieur, Clémentine; Iooss, Bertrand; Borgonovo, Emanuele; Plischke, Elmar; Lo Piano, Samuele; Iwanaga, Takuya; Becker, William; Tarantola, Stefano; Guillaume, Joseph H.A.; Jakeman, John D.; Gupta, Hoshin; Melillo, Nicola; Rabitti, Giovanni; Chabridon, Vincent; Duan, Qingyun; Sun, Xifu; Smith, Stefán; Sheikholeslami, Razi; Hosseini, Nasim; Asadzadeh, Masoud; Puy, Arnald; Kucherenko, Sergei; Maier, Holger R.

Sensitivity analysis (SA) is en route to becoming an integral part of mathematical modeling. The tremendous potential benefits of SA are, however, yet to be fully realized, both for advancing mechanistic and data-driven modeling of human and natural systems, and in support of decision making. In this perspective paper, a multidisciplinary group of researchers and practitioners revisit the current status of SA and outline research challenges with regard to both theoretical frameworks and their applications to solve real-world problems. Six areas are discussed that warrant further attention, including (1) structuring and standardizing SA as a discipline, (2) realizing the untapped potential of SA for systems modeling, (3) addressing the computational burden of SA, (4) progressing SA in the context of machine learning, (5) clarifying the relationship and role of SA with respect to uncertainty quantification, and (6) evolving the use of SA in support of decision making. An outlook for the future of SA is provided that underlines how SA must underpin a wide variety of activities to better serve science and society.

Deep learning of parameterized equations with applications to uncertainty quantification

International Journal for Uncertainty Quantification

Qin, Tong; Chen, Zhen; Jakeman, John D.; Xiu, Dongbin

We propose a learning algorithm for discovering unknown parameterized dynamical systems by using observational data of the state variables. Our method builds upon and extends recent work on discovering unknown dynamical systems, in particular work using deep neural networks (DNNs). We propose a DNN structure, largely based on the residual network (ResNet), that not only learns the unknown form of the governing equation but also accounts for the random effects embedded in the system, which are generated by the random parameters. Once the DNN model is successfully constructed, it can produce system predictions over longer time horizons and for arbitrary parameter values, enabling uncertainty quantification via the evaluation of solution statistics over the parameter space.
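
As a toy illustration of the residual-learning idea described above (not the paper's DNN), the sketch below fits the one-step update x_{n+1} = x_n + N(x_n, a) with a small polynomial least-squares model standing in for the network; the dynamical system dx/dt = -a*x, the sample sizes, and all names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1  # fixed time step between observations

def true_step(x, a):
    # exact one-step flow map of dx/dt = -a*x (stands in for observed data)
    return x * np.exp(-a * dt)

# observational data: states x_n, random parameters a, and next states
x = rng.uniform(0.5, 2.0, 200)
a = rng.uniform(0.5, 1.5, 200)
x_next = true_step(x, a)

# residual model x_{n+1} = x_n + N(x_n, a); here N is a small polynomial
# least-squares fit standing in for the ResNet-style DNN of the paper
def feats(x, a):
    return np.stack([x, a * x, a**2 * x, a**3 * x], axis=-1)

coef, *_ = np.linalg.lstsq(feats(x, a), x_next - x, rcond=None)

def learned_step(x, a):
    return x + feats(x, a) @ coef

# once trained, the model predicts over longer horizons and new parameters
a_new, xk = 1.1, 1.0
for _ in range(20):
    xk = learned_step(xk, a_new)
exact = 1.0 * np.exp(-a_new * dt * 20)
print(xk, exact)
```

Because the learned map takes the parameter as an input, statistics over the parameter space can then be estimated by sampling `a` and rolling the map forward, which is the uncertainty-analysis use sketched in the abstract.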

Non-destructive simulation of node defects in additively manufactured lattice structures

Additive Manufacturing

Lozanovski, Bill; Downing, David; Tino, Rance; du Plessis, Anton; Tran, Phuong; Jakeman, John D.; Shidid, Darpan; Emmelmann, Claus; Qian, Ma; Choong, Peter; Brandt, Milan; Leary, Martin

Additive Manufacturing (AM), commonly referred to as 3D printing, offers the ability not only to fabricate geometrically complex lattice structures but also to produce parts in which lattice topologies in-fill volumes bounded by complex surface geometries. However, current AM processes produce defects on the strut and node elements that make up the lattice structure. This creates an inherent difference between the as-designed and as-fabricated geometries, which negatively affects predictions (via numerical simulation) of the lattice's mechanical performance. Although experimental and numerical analyses of an AM lattice's bulk structure, unit cells, and struts have been performed, there is almost no research data on the mechanical response of the individual as-manufactured lattice node elements. This research proposes a methodology that, for the first time, allows non-destructive quantification of the mechanical response of node elements within an as-manufactured lattice structure. A custom-developed tool is used to extract and classify each individual node geometry from micro-computed tomography scans of an AM-fabricated lattice. Voxel-based finite element meshes are generated for numerical simulation, and the distribution of mechanical responses is compared to that of the idealised computer-aided design model. The method is compatible with uncertainty quantification methods, which provide opportunities for efficient prediction of a population of nodal responses from sampled data. Overall, the non-destructive and automated nature of the node extraction and response evaluation is promising for application in the qualification and certification of additively manufactured lattice structures.

A generalized approximate control variate framework for multifidelity uncertainty quantification

Journal of Computational Physics

Gorodetsky, Alex A.; Geraci, Gianluca G.; Eldred, Michael S.; Jakeman, John D.

We describe and analyze a variance reduction approach for Monte Carlo (MC) sampling that accelerates the estimation of statistics of computationally expensive simulation models using an ensemble of lower-cost models. These lower-cost models, which are typically of lower fidelity and have unknown statistics, are used to reduce the variance of statistical estimators relative to an MC estimator of equivalent cost. We derive the conditions under which our proposed approximate control variate framework recovers existing multifidelity variance reduction schemes as special cases. We demonstrate that existing recursive/nested strategies are suboptimal because they use the additional low-fidelity models only to efficiently estimate the unknown mean of the first low-fidelity model. As a result, they cannot achieve variance reduction beyond that of a control variate estimator that uses a single low-fidelity model with known mean. However, there is often roughly an order-of-magnitude gap between the maximum variance reduction achievable using all low-fidelity models and that achieved by a single low-fidelity model with known mean. We show that our proposed approach can exploit this gap to achieve greater variance reduction through non-recursive sampling schemes. The proposed strategy reduces the total cost of accurately estimating statistics, especially in cases where only low-fidelity simulation models are accessible for additional evaluations. Several analytic examples and an example with a hyperbolic PDE describing elastic wave propagation in heterogeneous media illustrate the main features of the methodology.
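
A minimal numerical sketch of the approximate-control-variate idea follows: the unknown mean of a cheap low-fidelity model is itself estimated from many extra cheap samples. The model functions, sample sizes, and the fixed weight below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def f(z):          # high-fidelity model (illustrative)
    return np.sin(z) + 0.1 * z**2

def g(z):          # cheap low-fidelity model, correlated with f, mean unknown
    return np.sin(z)

N, M = 50, 5000    # high-fidelity budget vs. extra cheap evaluations

def acv_estimate(rng):
    z = rng.normal(size=N)        # shared samples: evaluate both f and g
    z_extra = rng.normal(size=M)  # extra cheap samples to estimate E[g]
    alpha = 1.0                   # fixed weight for simplicity; the optimal
                                  # weight involves cov(f, g) / var(g)
    return f(z).mean() - alpha * (g(z).mean() - g(z_extra).mean())

def mc_estimate(rng):
    return f(rng.normal(size=N)).mean()

acv = [acv_estimate(np.random.default_rng(s)) for s in range(200)]
mc = [mc_estimate(np.random.default_rng(s)) for s in range(200)]
print(np.var(acv), np.var(mc))  # ACV variance is far below plain MC
```

At equal high-fidelity cost (N evaluations of f), the control-variate estimator's variance is governed by var(f - g)/N plus the cheap-to-shrink var(g)/M term, which is the gap the paper's framework optimizes over.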

Adaptive multi-index collocation for uncertainty quantification and sensitivity analysis

International Journal for Numerical Methods in Engineering

Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca; Gorodetsky, Alex

In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and a complex multiphysics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that, when the input-output response is sufficiently smooth, our algorithm produces approximations that can be over two orders of magnitude more accurate than single fidelity approximations for a fixed computational budget.
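
The central multifidelity idea behind such algorithms, correcting a densely sampled low-fidelity surrogate with a sparsely sampled discrepancy term, can be sketched in one dimension. The model functions, polynomial degrees, and sample counts below are invented for illustration and are not the paper's multi-index collocation scheme.

```python
import numpy as np

def f_hi(x):  # "expensive" high-fidelity model (illustrative)
    return np.cos(3 * x)

def f_lo(x):  # cheap low-fidelity model with a smooth discretization error
    return np.cos(3 * x) + 0.05 * x

x_lo = np.linspace(-1, 1, 50)  # many cheap samples
x_hi = np.linspace(-1, 1, 5)   # few expensive samples

# surrogate of the low-fidelity model, plus a low-order surrogate of the
# (smooth, hence easy) discrepancy f_hi - f_lo built from few HF samples
p_lo = np.polyfit(x_lo, f_lo(x_lo), 10)
p_d = np.polyfit(x_hi, f_hi(x_hi) - f_lo(x_hi), 2)

xt = np.linspace(-1, 1, 201)
mf = np.polyval(p_lo, xt) + np.polyval(p_d, xt)        # multifidelity
sf = np.polyval(np.polyfit(x_hi, f_hi(x_hi), 4), xt)   # same HF budget alone

err_mf = np.max(np.abs(mf - f_hi(xt)))
err_sf = np.max(np.abs(sf - f_hi(xt)))
print(err_mf, err_sf)
```

When the discrepancy between fidelities is smoother than the models themselves, the corrected surrogate is far more accurate than a single-fidelity fit built from the same number of high-fidelity evaluations, which is the mechanism behind the accuracy gains reported in the abstract.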

MFNets: Multi-fidelity data-driven networks for Bayesian learning and prediction

International Journal for Uncertainty Quantification

Gorodetsky, Alex A.; Jakeman, John D.; Geraci, Gianluca G.; Eldred, Michael S.

This paper presents a Bayesian multifidelity uncertainty quantification framework, called MFNets, which can be used to overcome three of the major challenges that arise when data from different sources are used to enhance statistical estimation and prediction with quantified uncertainty. Specifically, we demonstrate that MFNets can (1) fuse heterogeneous data sources arising from simulations with different parameterizations, e.g., simulation models with different uncertain parameters or data sets collected under different environmental conditions; (2) encode known relationships among data sources to reduce data requirements; and (3) improve the robustness of existing multifidelity approaches to corrupted data. In this paper we use MFNets to construct linear-subspace surrogates and estimate statistics using Monte Carlo sampling. In addition to numerical examples highlighting the efficacy of MFNets, we also provide a number of theoretical results. First, we provide a mechanism to assess the quality of the posterior mean of an MFNets Monte Carlo estimator as a frequentist estimator. We then use this result to compare MFNets estimators to existing single fidelity, multilevel, and control variate Monte Carlo estimators. In this context, we show that the Monte Carlo-based control variate estimator can be derived entirely from Bayes' rule and linear-Gaussian models, which is, to our knowledge, the first such derivation. Finally, we demonstrate the ability to work with different uncertain parameters across different models.
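
One ingredient highlighted above, fusing data sources through linear-Gaussian models and Bayes' rule, can be sketched with a conjugate Gaussian update for a scalar quantity of interest. The two-source setup, noise levels, and known offset below are invented for illustration and are not the MFNets construction itself.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0  # unknown quantity of interest (ground truth here)

# high-fidelity data: few accurate observations of theta
s_hi, n_hi = 0.1, 5
y_hi = theta + s_hi * rng.normal(size=n_hi)

# low-fidelity data: many cheap observations with a known linear offset,
# an example of encoding a known relationship between data sources
b, s_lo, n_lo = 0.5, 0.3, 200
y_lo = theta + b + s_lo * rng.normal(size=n_lo)

def posterior(data_terms, prior_var=100.0):
    # conjugate Gaussian update: precisions add, means are precision-weighted
    prec = 1.0 / prior_var
    mean_num = 0.0
    for y, sigma in data_terms:
        prec += y.size / sigma**2
        mean_num += y.sum() / sigma**2
    return mean_num / prec, 1.0 / prec

m_hi, v_hi = posterior([(y_hi, s_hi)])                    # high-fidelity only
m_mf, v_mf = posterior([(y_hi, s_hi), (y_lo - b, s_lo)])  # fused sources
print(m_hi, v_hi, m_mf, v_mf)
```

Fusing the offset-corrected low-fidelity data shrinks the posterior variance well below what the few high-fidelity samples give alone, the same data-requirement reduction the abstract attributes to encoding known relationships among sources.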

Polynomial chaos expansions for dependent random variables

Computer Methods in Applied Mechanics and Engineering

Jakeman, John D.; Franzelin, Fabian; Narayan, Akil; Eldred, Michael; Pflüger, Dirk

Polynomial chaos expansions (PCE) are well-suited to quantifying uncertainty in models parameterized by independent random variables. The assumption of independence leads to simple strategies for building multivariate orthonormal bases and for sampling strategies to evaluate PCE coefficients. In contrast, the application of PCE to models of dependent variables is much more challenging. Three approaches can be used to construct PCE of models of dependent variables. The first approach uses mapping methods, where measure transformations such as the Nataf and Rosenblatt transformations can be used to map dependent random variables to independent ones; however, we show that this can significantly degrade performance since the Jacobian of the map must be approximated. A second strategy is the class of dominating support methods. In these approaches a PCE is built using independent random variables whose distributional support dominates the support of the true dependent joint density; we provide evidence that this approach appears to produce approximations with suboptimal accuracy. A third approach, the novel method proposed here, uses Gram–Schmidt orthogonalization (GSO) to numerically compute orthonormal polynomials for the dependent random variables. This approach has been used successfully when solving differential equations using the intrusive stochastic Galerkin method, and in this paper we use GSO to build PCE using a non-intrusive stochastic collocation method. The stochastic collocation method treats the model as a black box and builds approximations of the input–output map from a set of samples. Building PCE from samples can introduce ill-conditioning that does not plague stochastic Galerkin methods. To mitigate this ill-conditioning we generate weighted Leja sequences, which are nested sample sets, to build accurate polynomial interpolants. We show that our proposed approach, GSO with weighted Leja sequences, produces PCE that are orders of magnitude more accurate than PCE constructed using mapping or dominating support methods.
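
A minimal sketch of the sample-based Gram–Schmidt construction follows, using a QR factorization for numerical stability. The correlated-Gaussian density, monomial basis, and sample count are assumptions for illustration, not the paper's setup (which also uses weighted Leja sequences).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# samples of dependent variables: correlated Gaussians (illustrative density)
cov = np.array([[1.0, 0.7], [0.7, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# monomial basis evaluated at the samples: 1, z1, z2, z1*z2
V = np.column_stack([np.ones(n), z[:, 0], z[:, 1], z[:, 0] * z[:, 1]])

# numerically stable Gram-Schmidt via QR on the sample matrix: the columns
# of Q*sqrt(n) are polynomials orthonormal under the empirical inner
# product <p, q> = (1/n) * sum_i p(z_i) q(z_i)
Q, R = np.linalg.qr(V)
P = Q * np.sqrt(n)

G = (P.T @ P) / n  # empirical Gram matrix; ~ identity by construction
print(np.max(np.abs(G - np.eye(4))))
```

The resulting basis is orthonormal with respect to the dependent joint density as seen through the samples, so PCE coefficients can be computed without mapping the variables to independent ones.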
