Multifidelity strategies for optimization under uncertainty of wind power plants
International Journal for Uncertainty Quantification
This paper presents a multifidelity uncertainty quantification framework called MFNets. We seek to address three existing challenges that arise when experimental and simulation data from different sources are used to enhance statistical estimation and prediction with quantified uncertainty. Specifically, we demonstrate that MFNets can (1) fuse heterogeneous data sources arising from simulations with different parameterizations, e.g., simulation models with different uncertain parameters or data sets collected under different environmental conditions; (2) encode known relationships among data sources to reduce data requirements; and (3) improve the robustness of existing multifidelity approaches to corrupted data. MFNets construct a network of latent variables (LVs) to facilitate the fusion of data from an ensemble of sources of varying credibility and cost. These LVs are posited as explanatory variables that provide the source of correlation in the observed data. Furthermore, MFNets provide a way to encode prior physical knowledge to enable efficient estimation of statistics and/or construction of surrogates via conditional independence relations on the LVs. We highlight the utility of our framework with a number of theoretical results that assess the quality of the posterior mean as a frequentist estimator and compare it to standard sampling approaches that use single fidelity, multilevel, and control variate Monte Carlo estimators. We also use the proposed framework to derive the Monte Carlo-based control variate estimator entirely from Bayes' rule and linear-Gaussian models; to our knowledge this is the first such derivation. Finally, we demonstrate the ability to work with different uncertain parameters across different models.
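For reference, the classical control variate estimator that the abstract refers to, for a high-fidelity Monte Carlo estimator \(\hat{Q}_0\) and a single low-fidelity estimator \(\hat{Q}_1\) with known mean \(\mu_1\), takes the standard form
\[
\hat{Q}^{\mathrm{CV}} = \hat{Q}_0 + \alpha\,\bigl(\hat{Q}_1 - \mu_1\bigr),
\qquad
\alpha^{\star} = -\frac{\operatorname{Cov}\bigl(\hat{Q}_0,\hat{Q}_1\bigr)}{\operatorname{Var}\bigl(\hat{Q}_1\bigr)},
\]
and at the optimal weight \(\alpha^{\star}\) its variance is \((1-\rho^2)\operatorname{Var}(\hat{Q}_0)\), where \(\rho\) is the correlation between the two estimators. The contribution described above is recovering this estimator from a linear-Gaussian Bayesian model; that derivation is not reproduced here.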
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Journal of Computational Physics
We describe and analyze a variance reduction approach for Monte Carlo (MC) sampling that accelerates the estimation of statistics of computationally expensive simulation models using an ensemble of models with lower cost. These lower-cost models, which are typically lower fidelity with unknown statistics, are used to reduce the variance in statistical estimators relative to an MC estimator of equivalent cost. We derive the conditions under which our proposed approximate control variate framework recovers existing multifidelity variance reduction schemes as special cases. We demonstrate that existing recursive/nested strategies are suboptimal because they use the additional low-fidelity models only to efficiently estimate the unknown mean of the first low-fidelity model. As a result, they cannot achieve variance reduction beyond that of a control variate estimator that uses a single low-fidelity model with known mean. However, there is often an order-of-magnitude gap between the maximum achievable variance reduction using all low-fidelity models and that achieved by a single low-fidelity model with known mean. We show that our proposed approach can exploit this gap to achieve greater variance reduction by using non-recursive sampling schemes. The proposed strategy reduces the total cost of accurately estimating statistics, especially in cases where only low-fidelity simulation models are accessible for additional evaluations. Several analytic examples and an example with a hyperbolic PDE describing elastic wave propagation in heterogeneous media are used to illustrate the main features of the methodology.
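A schematic form of the approximate control variate estimator described above, with the known low-fidelity means replaced by Monte Carlo estimates, is
\[
\hat{Q}^{\mathrm{ACV}} = \hat{Q}_0 + \sum_{i=1}^{M} \alpha_i\,\bigl(\hat{Q}_i - \hat{\mu}_i\bigr),
\]
where \(\hat{Q}_i\) and \(\hat{\mu}_i\) are two MC estimators of the mean of the \(i\)-th low-fidelity model built from different sample sets. How those sample sets are allocated across models determines how much of the ideal, known-mean variance reduction is recovered; the specific allocation strategies analyzed in the paper are not reproduced here.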
Proceedings of the ASME Design Engineering Technical Conference
Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. Many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMOBO-3GP, to solve MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each GP is assigned a different task: the first GP approximates a single objective computed from the MO definition, the second GP learns the unknown constraints, and the third GP learns the uncertain Pareto frontier. At each iteration, an augmented Tchebycheff function that converts the MO problem into a single objective is adopted and extended with a regularized ridge term, where the regularization is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the richness and diversity of the Pareto frontier through exploitation and exploration acquisition functions. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
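For context, the (unregularized) augmented Tchebycheff scalarization that the work builds on is commonly written as
\[
f_{\lambda}(x) = \max_{i=1,\dots,m} \lambda_i \,\bigl| f_i(x) - z_i^{\ast} \bigr| \;+\; \rho \sum_{i=1}^{m} \bigl| f_i(x) - z_i^{\ast} \bigr|,
\]
where \(z^{\ast}\) is a reference (ideal) point, \(\lambda\) is a weight vector, and \(\rho > 0\) is a small augmentation coefficient. The exact form of the additional ridge regularization used in srMOBO-3GP is not specified in the abstract and is therefore not shown.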
Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018
SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) is a method for stochastic nonlinear constrained derivative-free optimization. For such problems, it extends the path-augmented constraints framework introduced by the deterministic optimization method NOWPAC and uses a noise-adapted trust region approach and Gaussian processes for noise reduction. SNOWPAC is now available in the DAKOTA framework, which offers a highly flexible interface to couple the optimizer with different sampling strategies or surrogate models. In this paper we discuss details of SNOWPAC and demonstrate the coupling with DAKOTA. We showcase the approach by presenting design optimization results for a shape in a 2D supersonic duct. This simulation is intended to imitate the behavior of the flow in a SCRAMJET simulation at a much lower computational cost. Additionally, different mesh or model fidelities can be tested. Thus, it serves as a convenient test case before moving to costly SCRAMJET computations. Here, we study deterministic results and results obtained by introducing uncertainty in the inflow parameters. As sampling strategies, we compare classical Monte Carlo sampling with multilevel Monte Carlo approaches, for which we developed new error estimators. All approaches show a reasonable optimization of the design over the objective while maintaining or seeking feasibility. Furthermore, we achieve significant reductions in computational cost by using multilevel approaches that combine solutions from different grid resolutions.
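The multilevel Monte Carlo estimator referenced above has, in its standard form, the telescoping structure
\[
\mathbb{E}\bigl[Q_L\bigr] = \sum_{\ell=0}^{L} \mathbb{E}\bigl[Q_\ell - Q_{\ell-1}\bigr],
\qquad
\hat{Q}^{\mathrm{ML}} = \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{n=1}^{N_\ell} \Bigl( Q_\ell^{(n)} - Q_{\ell-1}^{(n)} \Bigr),
\quad Q_{-1} \equiv 0,
\]
where \(Q_\ell\) denotes the quantity of interest computed on grid level \(\ell\) and most samples are taken on the cheap coarse levels. The new error estimators mentioned in the abstract are specific to this work and are not reproduced here.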
AIAA Scitech 2020 Forum
Truly predictive numerical simulations can only be obtained by performing uncertainty quantification. However, many realistic engineering applications require extremely complex and computationally expensive high-fidelity numerical simulations for their accurate performance characterization. Very often the combination of complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity uncertainty quantification approach, i.e., a workflow that only uses high-fidelity simulations, is infeasible due to its prohibitive overall computational cost. To overcome this difficulty, multifidelity strategies have emerged and gained popularity in recent years. Their core idea is to combine simulations with varying levels of fidelity/accuracy in order to obtain estimators or surrogates that can yield the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical model realization and thus its computational cost. Less attention has been dedicated to low-fidelity models that can be built directly from a small number of available high-fidelity simulations. In this work we focus our attention on reduced order models (ROMs). Our main goal is to investigate the combination of multifidelity uncertainty quantification and ROMs in order to evaluate the possibility of obtaining an efficient framework for propagating uncertainties through expensive numerical codes. We focus on sampling-based multifidelity approaches, like the multifidelity control variate, and we consider several scenarios for a numerical test problem, namely the Kuramoto-Sivashinsky equation, for which the efficiency of the multifidelity-ROM estimator is compared to the standard (single-fidelity) Monte Carlo approach.
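The Kuramoto-Sivashinsky test problem mentioned above is, in its standard one-dimensional form,
\[
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + \frac{\partial^2 u}{\partial x^2} + \frac{\partial^4 u}{\partial x^4} = 0,
\]
typically posed on a periodic spatial domain. The particular parameterization, uncertain inputs, and quantities of interest used in this study are not specified in the abstract.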
Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018
Wind energy is stochastic in nature; the prediction of aerodynamic quantities and loads relevant to wind energy applications involves modeling the interaction of a range of physics over many scales for many different cases. These predictions require a range of model fidelity, as predictive models that include the interaction of atmospheric and wind turbine wake physics can take weeks to solve on institutional high-performance computing systems. In order to quantify the uncertainty in predictions of wind energy quantities with multiple models, researchers at Sandia National Laboratories have applied multilevel-multifidelity methods. A demonstration study was completed using simulations of an NREL 5 MW rotor in an atmospheric boundary layer with wake interaction. The flow was simulated with two models of disparate fidelity: an actuator-line wind plant large-eddy simulation model, Nalu, using several mesh resolutions, in combination with a lower-fidelity model, OpenFAST. Uncertainties in the flow conditions and actuator forces were propagated through the models using Monte Carlo sampling to estimate the velocity defect in the wake and the forces on the rotor. Coarse-mesh simulations were leveraged along with the lower-fidelity flow model to reduce the variance of the estimator, and the resulting multilevel-multifidelity strategy demonstrated a substantial improvement in estimator efficiency compared to the standard Monte Carlo method.
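As an illustration of the kind of estimator used in such a study, the sketch below assembles a two-model control variate Monte Carlo estimate from paired high- and low-fidelity evaluations. It is a minimal sketch, not the Nalu/OpenFAST workflow; the model and sampler callables are hypothetical placeholders.

```python
import numpy as np

def control_variate_estimate(hf_model, lf_model, sample_inputs, n_hf, n_lf, rng):
    """Two-model control variate Monte Carlo estimate of a scalar QoI
    (e.g., wake velocity defect or a rotor force component).

    hf_model, lf_model : callables mapping one input sample to a scalar QoI
    sample_inputs      : callable(rng, n) drawing n input samples
    n_hf, n_lf         : numbers of high- and low-fidelity runs (n_lf >= n_hf)
    """
    x = sample_inputs(rng, n_lf)
    q_hf = np.array([hf_model(xi) for xi in x[:n_hf]])  # shared samples
    q_lf = np.array([lf_model(xi) for xi in x])          # all samples

    # Control variate weight estimated from the paired evaluations.
    c = np.cov(q_hf, q_lf[:n_hf])
    alpha = -c[0, 1] / c[1, 1]

    # High-fidelity mean corrected by the discrepancy between the
    # small-sample and large-sample low-fidelity means.
    return q_hf.mean() + alpha * (q_lf[:n_hf].mean() - q_lf.mean())
```

When the two models are well correlated, the correction term cancels much of the sampling error in the high-fidelity mean at the cost of additional, cheaper low-fidelity runs.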
Uncertainty quantification is recognized as a fundamental task for obtaining predictive numerical simulations. However, many realistic engineering applications require complex and computationally expensive high-fidelity numerical simulations for the accurate characterization of the system responses. Moreover, complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity approach, i.e., a workflow that only uses high-fidelity simulations to perform the uncertainty quantification task, is infeasible due to the prohibitive overall computational cost. In recent years, multifidelity strategies have been introduced to overcome this issue. The core idea of this family of methods is to combine simulations with varying levels of fidelity/accuracy in order to obtain multifidelity estimators or surrogates with the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical realization and thus its computational cost. However, less attention has been dedicated to low-fidelity models that can be built directly from the small number of high-fidelity simulations available. In this work we focus our attention on Reduced-Order Models, which can be considered a particular class of data-driven approaches. Our main goal is to explore the combination of multifidelity uncertainty quantification and reduced-order models to obtain an efficient framework for propagating uncertainties through expensive numerical codes.
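The abstracts do not specify how the reduced-order models are constructed; a common projection-based ingredient is a proper orthogonal decomposition (POD) basis built from a handful of high-fidelity snapshots, sketched below under that assumption.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition basis from high-fidelity snapshots.

    snapshots : (n_dof, n_snapshots) array, one column per high-fidelity solution
    energy    : fraction of snapshot energy retained by the reduced basis
    """
    # Center the snapshots and take a thin SVD.
    mean = snapshots.mean(axis=1, keepdims=True)
    modes, sing_vals, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

    # Keep the smallest number of modes capturing the requested energy fraction.
    cumulative = np.cumsum(sing_vals**2) / np.sum(sing_vals**2)
    n_modes = int(np.searchsorted(cumulative, energy)) + 1
    return mean, modes[:, :n_modes]
```

The reduced model then evolves only the coefficients of these modes, which is what makes its evaluations cheap enough to serve as the low-fidelity member of a multifidelity estimator.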
Computer Methods in Applied Mechanics and Engineering
Polynomial chaos expansions (PCE) are well-suited to quantifying uncertainty in models parameterized by independent random variables. The assumption of independence leads to simple strategies for building multivariate orthonormal bases and for sampling to evaluate PCE coefficients. In contrast, the application of PCE to models of dependent variables is much more challenging. Three approaches can be used to construct PCE of models of dependent variables. The first approach uses mapping methods, where measure transformations such as the Nataf and Rosenblatt transformations can be used to map dependent random variables to independent ones; however, we show that this can significantly degrade performance since the Jacobian of the map must be approximated. A second strategy is the class of dominating support methods. In these approaches a PCE is built using independent random variables whose distributional support dominates the support of the true dependent joint density; we provide evidence that this approach appears to produce approximations with suboptimal accuracy. A third approach, the novel method proposed here, uses Gram–Schmidt orthogonalization (GSO) to numerically compute orthonormal polynomials for the dependent random variables. This approach has been used successfully when solving differential equations using the intrusive stochastic Galerkin method, and in this paper we use GSO to build PCE using a non-intrusive stochastic collocation method. The stochastic collocation method treats the model as a black box and builds approximations of the input–output map from a set of samples. Building PCE from samples can introduce ill-conditioning which does not plague stochastic Galerkin methods. To mitigate this ill-conditioning we generate weighted Leja sequences, which are nested sample sets, to build accurate polynomial interpolants. We show that our proposed approach, GSO with weighted Leja sequences, produces PCE which are orders of magnitude more accurate than PCE constructed using mapping or dominating support methods.
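As a sketch of the numerical Gram–Schmidt orthogonalization idea (not the paper's implementation, which also relies on weighted Leja sequences), the snippet below orthonormalizes a user-supplied polynomial basis with respect to the empirical measure of samples drawn from a dependent joint density; the function and argument names are illustrative.

```python
import numpy as np

def gram_schmidt_basis(basis, samples):
    """Orthonormalize a polynomial basis w.r.t. the empirical measure of `samples`.

    basis   : list of callables, each mapping an (n, d) sample array to an (n,)
              array of basis-function values (e.g., multivariate monomials)
    samples : (n, d) array of samples from the dependent random variables
    Returns the orthonormalized basis values at the samples and the
    change-of-basis coefficient matrix.
    """
    V = np.column_stack([phi(samples) for phi in basis])  # raw basis values
    m = V.shape[1]
    C = np.eye(m)
    for k in range(m):
        for j in range(k):
            # Empirical inner product <p_k, p_j> ~ (1/n) * sum over samples.
            proj = np.mean((V @ C[:, k]) * (V @ C[:, j]))
            C[:, k] -= proj * C[:, j]
        # Normalize in the sample-averaged L2 inner product.
        C[:, k] /= np.sqrt(np.mean((V @ C[:, k]) ** 2))
    return V @ C, C
```

With, e.g., basis = [lambda x: np.ones(len(x)), lambda x: x[:, 0], lambda x: x[:, 1]] and samples from a correlated bivariate density, the returned columns are orthonormal in the sample-averaged inner product.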
AIAA Scitech 2019 Forum
In the context of the DARPA-funded project SEQUOIA we are interested in the design under uncertainty of a jet engine nozzle subject to the performance requirements of a reconnaissance mission for a small unmanned military aircraft. This design task involves complex and expensive aero-thermo-structural computational analyses, where it is of paramount importance to also include the effect of the uncertain variables in order to obtain reliable predictions of the device's performance. In this work we focus on the forward propagation analysis, which is a key part of the design under uncertainty workflow. This task cannot be tackled directly by means of single-fidelity approaches due to the prohibitive computational cost associated with each realization. We report here a summary of our latest advances regarding several multilevel and multifidelity strategies designed to alleviate these challenges. The overall goal of these techniques is to reduce the computational cost of analyzing a high-fidelity model by resorting to less accurate, but less computationally demanding, lower-fidelity models. The features of these multifidelity UQ approaches are initially illustrated and demonstrated on several model problems and subsequently applied to the aero-thermo-structural analysis of the jet engine nozzle.
This study explores a Bayesian calibration framework for the RAMPAGE alloy potential model for the Cu-Ni and Cu-Zr systems. In RAMPAGE potentials, it is proposed that once calibrated potentials for individual elements are available, the inter-species interactions can be described by fitting a Morse potential for pair interactions with three parameters, while the densities for the embedding function can be scaled by two parameters from the elemental densities. Global sensitivity analysis tools were employed to understand the impact each parameter has on the MD simulation results. A transitional Markov chain Monte Carlo algorithm was used to generate samples from the multimodal posterior distribution consistent with the discrepancy between MD simulation results and DFT data. For the Cu-Ni system, the posterior predictive tests indicate that the fitted interatomic potential model agrees well with the DFT data, justifying the basic RAMPAGE assumptions. For the Cu-Zr system, where the phase diagram suggests more complicated atomic interactions than in the case of Cu-Ni, the RAMPAGE potential captured only a subset of the DFT data. The resulting posterior distribution for the five model parameters exhibited several modes, with each mode corresponding to specific simulation data and a suboptimal agreement with the DFT results.
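For reference, the three-parameter Morse pair interaction referred to above is commonly written as
\[
\phi(r) = D_e\!\left[\left(1 - e^{-a\,(r - r_e)}\right)^2 - 1\right],
\]
with well depth \(D_e\), inverse width \(a\), and equilibrium separation \(r_e\); the two embedding-density scaling parameters mentioned in the abstract complete the five calibrated parameters. The specific functional conventions used in RAMPAGE may differ from this textbook form.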
In this work we propose an approach for accelerating uncertainty quantification (UQ) analysis in the context of multifidelity applications. In the presence of complex multiphysics applications, which often require a prohibitive computational cost for each evaluation, multifidelity UQ techniques try to accelerate the convergence of statistics by leveraging the information collected from a larger number of lower-fidelity model realizations. However, at the state of the art, the performance of virtually all multifidelity UQ techniques is tied to the correlation between the high- and low-fidelity models. In this work we propose to design a multifidelity UQ framework based on the identification of independent important directions for each model. The main idea is that if the responses of each model can be represented in a common space, this space can be shared to enhance the correlation when the samples are drawn with respect to it instead of the original variables. Two additional advantages follow from this approach. First, the models might be correlated even if their original parameterizations are chosen independently. Second, if the shared space between models has a lower dimensionality than the original spaces, the UQ analysis might benefit from a dimension-reduction standpoint. In this work we design this general framework and test it on several test problems, ranging from analytical functions for verification purposes up to more challenging applications such as an aero-thermo-structural analysis and a scramjet flow analysis.
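The abstract does not specify how the important directions are computed; a common gradient-based construction (in the spirit of active subspaces) is sketched below as an assumption, with illustrative names.

```python
import numpy as np

def important_directions(gradients, n_dirs):
    """Dominant input directions of one model, estimated from gradient samples.

    gradients : (n_samples, d) array of model gradients at sampled inputs
    n_dirs    : number of important directions to retain
    """
    # Eigenvectors of the averaged outer product of gradients span the
    # directions along which the model output varies most strongly.
    c = gradients.T @ gradients / gradients.shape[0]
    eigvals, eigvecs = np.linalg.eigh(c)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_dirs]]
```

Applying this to each model and aligning the resulting subspaces is one way to define the shared low-dimensional space in which samples are drawn, per the idea described above.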
AIAA Journal
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
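The global sensitivity analysis step described above is typically based on variance decomposition; the first-order and total Sobol' indices for an input \(\xi_i\) and quantity of interest \(Q\) are
\[
S_i = \frac{\operatorname{Var}_{\xi_i}\!\bigl(\mathbb{E}[\,Q \mid \xi_i\,]\bigr)}{\operatorname{Var}(Q)},
\qquad
S_i^{T} = 1 - \frac{\operatorname{Var}_{\xi_{\sim i}}\!\bigl(\mathbb{E}[\,Q \mid \xi_{\sim i}\,]\bigr)}{\operatorname{Var}(Q)},
\]
where \(\xi_{\sim i}\) denotes all inputs except \(\xi_i\). Whether the paper uses exactly these indices or another variance-based measure is not stated in the abstract; they are shown here only to make the sensitivity-screening step concrete.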