Publications

Results 151–175 of 424

Inference and combination of missing data sets for investigation of H2O2 thermal decomposition rate uncertainty

11th Asia-Pacific Conference on Combustion, ASPACC 2017

Casey, Tiernan A.; Najm, H.N.

Prescribing uncertainty measures to rate expressions is crucial for performing useful predictive combustion computations. The raw experimental measurement data, with their associated noise and uncertainty, are unavailable for most reported investigations of elementary reaction rates, making direct derivation of the desired joint uncertainty structure of the parameters in rate expressions difficult. To approximate this uncertainty structure, we construct an inference procedure, relying on maximum entropy and approximate Bayesian computation methods and using a two-level nested Markov Chain Monte Carlo algorithm, to arrive at a joint density on rate parameters and missing data. This method employs the reported context of a specific experiment to construct a set of hypothetical experimental data profiles consistent with the reported statistics of the data, in the form of error bars on rate constants at the experimental temperatures. Bayesian inference can then be performed using these consistent data sets as evidence to determine the joint posterior density on the rate parameters for any choice of chemical model. The method is also used to demonstrate the combination of missing data from different experiments for the generation of consensus rate expressions using these multiple sources of experimental evidence.
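As a concrete illustration of the missing-data idea, the sketch below generates hypothetical rate-constant data sets consistent with reported nominal values and error bars, fits Arrhenius parameters to each set, and pools the results. This is a minimal sketch, not the authors' code: it substitutes a least-squares fit for the full nested MCMC inference, and the temperatures, nominal values, and error bars are illustrative placeholders.

```python
# Minimal sketch (not the paper's implementation): draw data sets consistent
# with reported error bars on ln k(T), infer Arrhenius parameters for each,
# and pool the resulting parameter samples. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = np.array([1000.0, 1200.0, 1400.0])   # temperatures [K], hypothetical
lnk_nom = np.array([4.0, 6.0, 7.5])      # reported nominal ln k, illustrative
lnk_err = np.array([0.3, 0.2, 0.2])      # reported 1-sigma error bars, illustrative

def fit_arrhenius(lnk):
    """Least-squares fit of ln k = ln A - (E/R)/T; a stand-in for the
    Bayesian inference step described in the abstract."""
    X = np.column_stack([np.ones_like(T), -1.0 / T])
    coef, *_ = np.linalg.lstsq(X, lnk, rcond=None)
    return coef  # [ln A, E/R]

# Each consistent data set yields a parameter estimate; pooling the
# estimates approximates a joint density on the Arrhenius parameters.
pooled = np.array([fit_arrhenius(rng.normal(lnk_nom, lnk_err))
                   for _ in range(2000)])
print("pooled mean [ln A, E/R]:", pooled.mean(axis=0))
print("pooled covariance:\n", np.cov(pooled.T))
```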

Inference of reaction rate parameters based on summary statistics from experiments

Proceedings of the Combustion Institute

Khalil, Mohammad K.; Chowdhary, Kamaljit S.; Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.

Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
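The Gauss-Hermite moment computation mentioned above can be illustrated with a small, self-contained example: estimating the mean of a function of a Gaussian variable by quadrature rather than random sampling. This is a hedged sketch of the general technique only; the integrand below is a placeholder for the paper's data-likelihood evaluations.

```python
# Sketch of Gauss-Hermite moment computation: E[f(X)] for X ~ N(mu, sigma^2)
# via an n-point quadrature rule. The integrand here is a simple placeholder,
# not the data likelihood used in the paper.
import numpy as np

def gauss_hermite_mean(f, mu, sigma, order=20):
    """E[f(X)], X ~ N(mu, sigma^2), using the change of variables
    x = mu + sqrt(2)*sigma*t against the weight exp(-t^2)."""
    t, w = np.polynomial.hermite.hermgauss(order)
    return np.sum(w * f(mu + np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)

# Check against a closed form: E[exp(X)] = exp(mu + sigma^2 / 2).
print(gauss_hermite_mean(np.exp, mu=0.0, sigma=0.5))  # ~1.1331
print(np.exp(0.0 + 0.5**2 / 2))                       # exact: ~1.1331
```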

Bayesian estimation of Karhunen-Loève expansions; A random subspace approach

Journal of Computational Physics

Chowdhary, Kamaljit S.; Najm, H.N.

One of the most widely used procedures for dimensionality reduction of high dimensional data is Principal Component Analysis (PCA). More broadly, a low-dimensional stochastic representation of random fields with finite variance is provided by the well known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. This error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure that yields a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, in which the posterior takes the form of the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample from this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
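For context, the standard sample-based KLE/PCA step that the paper's Bayesian treatment builds on can be sketched in a few lines: a truncated SVD of centered sample paths yields the empirical KL modes. This is only the deterministic baseline; the paper's contribution, sampling bases from a matrix Bingham posterior via Gibbs sampling, is not shown here.

```python
# Sketch of the standard (non-Bayesian) empirical KLE via truncated SVD of
# centered sample paths of a Brownian-motion-like process. The Bayesian
# random-subspace step from the paper is intentionally omitted.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_grid, n_modes = 50, 200, 5

# Sample paths of standard Brownian motion on [0, 1] (illustrative data).
dt = 1.0 / n_grid
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_grid)), axis=1)

# Center the data and take the truncated SVD; rows of Vt are the KL modes.
U, s, Vt = np.linalg.svd(paths - paths.mean(axis=0), full_matrices=False)
modes = Vt[:n_modes]                        # orthonormal basis functions
eigvals = s[:n_modes]**2 / (n_samples - 1)  # sample-covariance eigenvalues

print("variance captured by", n_modes, "modes:",
      eigvals.sum() / (s**2 / (n_samples - 1)).sum())
```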

Convergence Study in Global Sensitivity Analysis

Harmon, Rebecca H.; Khalil, Mohammad K.; Najm, H.N.; Safta, Cosmin S.

Monte Carlo (MC) sampling is a common method for estimating statistics of a model output by randomly sampling its uncertain inputs. The associated error follows a predictable convergence rate of $1/\sqrt{N}$, such that quadrupling the sample size halves the error. This method is often employed in global sensitivity analysis, which computes sensitivity indices that measure the fractional contributions of uncertain model inputs to the total output variance. In this study, several models are used to observe the rate of decay of the MC error in the estimation of the conditional variances, the total output variance, and the global sensitivity indices. The purpose is to examine the convergence rate of the error in existing specialized, albeit MC-based, sampling methods for estimation of the sensitivity indices. It was found that the conditional variances and sensitivity indices all follow the $1/\sqrt{N}$ convergence rate. Future work will test the convergence of observables from more complex models, such as ignition time in combustion.
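The $1/\sqrt{N}$ behavior described above can be reproduced with a pick-freeze (Saltelli-type) estimator of a first-order Sobol index on a simple test model with a known analytic index. This is an illustrative sketch, not the study's code; the linear model and its coefficients are assumptions chosen so the exact index is available for comparison.

```python
# Sketch: MC error decay of a pick-freeze estimator of the first-order Sobol
# index for X1 in the linear model Y = a1*X1 + a2*X2, X_i ~ N(0,1). The
# analytic index is a1^2 / (a1^2 + a2^2); the model is illustrative only.
import numpy as np

rng = np.random.default_rng(2)
a = np.array([1.0, 0.5])
model = lambda x: x @ a
S1_exact = a[0]**2 / (a**2).sum()

def sobol_S1(N):
    """Saltelli-style pick-freeze estimate of the first-order index of X1."""
    A = rng.normal(size=(N, 2))
    B = rng.normal(size=(N, 2))
    AB = A.copy()
    AB[:, 0] = B[:, 0]                  # replace the X1 column of A by B's
    fA, fB, fAB = model(A), model(B), model(AB)
    return np.mean(fB * (fAB - fA)) / np.var(np.concatenate([fA, fB]))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, abs(sobol_S1(N) - S1_exact))  # error shrinks roughly as 1/sqrt(N)
```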
