Sampling Complex Distributions with Transitional Markov Chain Monte Carlo
Abstract not provided.
IEEE Transactions on Power Systems
Stochastic economic dispatch models address uncertainties in forecasts of renewable generation output by considering a finite number of realizations drawn from a stochastic process model, typically via Monte Carlo sampling. Accurate evaluation of expectations or higher-order moments for quantities of interest, e.g., generating cost, can require a prohibitively large number of samples. We propose an alternative to Monte Carlo sampling based on polynomial chaos expansions. These representations enable efficient and accurate propagation of uncertainties in model parameters using sparse quadrature methods. We also use Karhunen-Loève expansions for efficient representation of uncertain renewable energy generation that exhibits geographical and temporal correlations derived from historical data at each wind farm. Considering expected production cost, we demonstrate that, for a given target error threshold, the proposed approach can reduce the computational cost of solving stochastic economic dispatch by several orders of magnitude relative to Monte Carlo sampling.
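A minimal illustration of the quadrature-based alternative to Monte Carlo sampling described above: the sketch compares plain MC against Gauss-Hermite quadrature (the one-dimensional building block of sparse-quadrature polynomial chaos methods) for the mean of a cost function of a single Gaussian input. The cost model and all numbers are invented stand-ins, not the paper's dispatch model.

```python
import numpy as np

# Hypothetical "production cost" as a function of uncertain wind output w.
def cost(w):
    return (w - 1.0) ** 2 + 0.5 * np.abs(w)

mu, sigma = 2.0, 0.4  # made-up N(mu, sigma^2) wind forecast

# Gauss-Hermite quadrature: nodes/weights for weight exp(-x^2); the change
# of variables w = mu + sigma*sqrt(2)*x turns it into an expectation
# under N(mu, sigma^2), normalized by sqrt(pi).
x, wts = np.polynomial.hermite.hermgauss(8)
quad_mean = np.sum(wts * cost(mu + sigma * np.sqrt(2.0) * x)) / np.sqrt(np.pi)

# Plain Monte Carlo with many samples, for comparison.
rng = np.random.default_rng(0)
mc_mean = cost(rng.normal(mu, sigma, 200_000)).mean()

print(quad_mean, mc_mean)  # 8 quadrature points match 200k MC samples
```

For this smooth, effectively polynomial integrand, eight quadrature points recover the mean that MC needs hundreds of thousands of samples to pin down, which is the source of the cost reduction claimed in the abstract.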
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
International Journal for Numerical Methods in Fluids
In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for. Copyright © 2016 John Wiley & Sons, Ltd.
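The calibration step described above can be sketched with a minimal Metropolis sampler on a toy linear model. The model y = c·x, the noise level, and all constants are invented stand-ins for illustration, not the paper's sub-grid scale model.

```python
import numpy as np

# Synthetic "observations" from a toy model y = c_true * x with noise.
rng = np.random.default_rng(3)
x = np.linspace(0.1, 1.0, 20)
c_true, noise = 0.17, 0.02
y_obs = c_true * x + rng.normal(0.0, noise, x.size)

def log_post(c):
    # Flat prior, Gaussian likelihood: log posterior up to a constant.
    return -0.5 * np.sum((y_obs - c * x) ** 2) / noise**2

# Random-walk Metropolis: propose, accept with prob min(1, ratio).
chain, c = [], 0.5
lp = log_post(c)
for _ in range(20_000):
    c_new = c + rng.normal(0.0, 0.02)
    lp_new = log_post(c_new)
    if np.log(rng.random()) < lp_new - lp:
        c, lp = c_new, lp_new
    chain.append(c)

post = np.array(chain[5000:])  # discard burn-in
print(post.mean(), post.std())  # posterior concentrates near c_true
```

The post-burn-in samples form the posterior density that, in the paper's setting, would then be propagated forward through LES channel-flow predictions.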
19th AIAA Non-Deterministic Approaches Conference, 2017
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress towards optimal engine designs requires both accurate flow simulations and uncertainty quantification (UQ). However, performing UQ for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. We address these difficulties by applying UQ algorithms and numerical methods to the large eddy simulation of the HIFiRE scramjet configuration. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, helping reduce the stochastic dimension of the problem and discover sparse representations. Second, as models of different fidelity are available and inevitably used in the overall UQ assessment, a framework for quantifying and propagating the uncertainty due to model error is introduced. These methods are demonstrated on a non-reacting scramjet unit problem with a parameter space of up to 24 dimensions, using 2D and 3D geometries with static and dynamic treatments of the turbulence subgrid model.
Proceedings of the Combustion Institute
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics, in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters, and are sufficiently efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty.
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
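The closing point about correlation among Arrhenius parameters can be illustrated numerically: with k(T) = A·exp(-E/(RT)), a strong positive correlation between ln A and E keeps ln k(T) tightly constrained even when the marginal uncertainties are wide. All means, standard deviations, and the correlation below are hypothetical.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
mean = np.array([20.0, 60_000.0])  # hypothetical [ln A, E] posterior means
sd = np.array([0.5, 2_500.0])      # hypothetical marginal std devs

def lnk_spread(rho, T=1500.0, n=50_000, seed=0):
    # Sample correlated (ln A, E) and return the std of ln k(T),
    # where ln k = ln A - E / (R T).
    cov = np.array([[sd[0]**2,          rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    lnA, E = np.random.default_rng(seed).multivariate_normal(mean, cov, n).T
    return np.std(lnA - E / (R * T))

print(lnk_spread(0.0), lnk_spread(0.95))  # correlated case is much tighter
```

Ignoring the correlation (rho = 0) overstates the rate-coefficient uncertainty, which is precisely why the pooled joint density, rather than independent marginals, must be propagated through the auto-ignition computations.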
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
Monte Carlo (MC) sampling is a common method for randomly sampling a range of scenarios. The associated error converges at a predictable rate of $1/\sqrt{N}$, such that quadrupling the sample size halves the error. This method is often employed in global sensitivity analysis, which computes sensitivity indices measuring the fractional contributions of uncertain model inputs to the total output variance. In this study, several models are used to observe the rate of decay in the MC error in the estimation of the conditional variance, the total output variance, and the global sensitivity indices. The purpose is to examine the rate of convergence of the error in existing specialized, albeit MC-based, sampling methods for estimation of the sensitivity indices. It was found that the conditional variances and sensitivity indices all follow the $1/\sqrt{N}$ convergence rate. Future work will test the convergence of observables from more complex models, such as ignition time in combustion.
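As an illustration of the MC-based sensitivity index estimators discussed above, the sketch below uses a pick-freeze (Saltelli-type) estimator on a simple additive test model. The model f(x1, x2) = x1 + 0.5·x2, for which the first-order indices are analytically S1 = 0.8 and S2 = 0.2, is an invented example, not one of the study's models.

```python
import numpy as np

def f(x):
    # Additive test model with independent U(0,1) inputs.
    return x[:, 0] + 0.5 * x[:, 1]

rng = np.random.default_rng(1)
n = 200_000
A, B = rng.random((n, 2)), rng.random((n, 2))
yA = f(A)
var = yA.var()

S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # "freeze" input i at its A-sample values
    # Pick-freeze estimator of the first-order Sobol index S_i.
    S.append(np.mean(yA * (f(ABi) - f(B))) / var)

print(S)  # analytically S1 = 0.8, S2 = 0.2 for this model
```

Repeating the computation over a range of sample sizes n and averaging the squared error over replicates reproduces the $1/\sqrt{N}$ decay that the abstract reports for the sensitivity indices.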
We demonstrate algorithm-based resilience to silent data corruption (SDC) and hard faults in a task-based domain-decomposition preconditioner for elliptic PDEs.