Missing experimental data and rate parameter inference for H2+OH=H2O+H
Abstract not provided.
Molecular Physics
A new method is proposed for the fast evaluation of the high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral to a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm and requires only a small number of single-point energy evaluations. It therefore eliminates force-constant evaluation as the hotspot of many quantum dynamics simulations and may also lift the curse of dimensionality. This general method is applied to anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, the high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over the low-rank PES and Green's functions as sums of low-dimensional integrals by Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm−1 or better and nearly an order-of-magnitude speedup compared with the original force-constant-based algorithm for water and formaldehyde.
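The core cost reduction can be sketched in a few lines: once an integrand has a canonical (CP) low-rank form, a d-dimensional Gauss–Hermite sum factorizes into a short sum over ranks of products of one-dimensional quadratures. The rank terms below are hypothetical analytic stand-ins, not an actual PES, and the ALS fitting step is omitted.

```python
import numpy as np

# Gauss-Hermite nodes and weights for the weight function exp(-x^2)
x, w = np.polynomial.hermite.hermgauss(20)

# Hypothetical rank-2 canonical (CP) representation of a 3-D integrand:
# f(x1, x2, x3) = sum_r g_r1(x1) * g_r2(x2) * g_r3(x3)
terms = [
    [np.cos, np.sin, lambda t: t**2],
    [lambda t: 1.0 / (1.0 + t**2), np.cos, lambda t: 1.0 + 0.5 * t**2],
]

# Brute-force reference: evaluate f on the full 20^3 tensor-product grid
F = sum(np.einsum('i,j,k->ijk', g1(x), g2(x), g3(x)) for g1, g2, g3 in terms)
full = np.einsum('ijk,i,j,k->', F, w, w, w)

# Low-rank evaluation: a sum over ranks of products of 1-D quadratures,
# replacing the O(n^d) tensor-product sum by O(R * d * n) work
factored = sum(np.prod([w @ g(x) for g in gs]) for gs in terms)
```

In CT-XVH2 the one-dimensional factors would come from an ALS fit to single-point energies rather than being given in closed form; the factorization of the quadrature sum is the same.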
IEEE Transactions on Power Systems
Stochastic economic dispatch models address uncertainties in forecasts of renewable generation output by considering a finite number of realizations drawn from a stochastic process model, typically via Monte Carlo sampling. Accurate evaluation of expectations or higher-order moments of quantities of interest, e.g., generation cost, can require a prohibitively large number of samples. We propose an alternative to Monte Carlo sampling based on polynomial chaos expansions. These representations enable efficient and accurate propagation of uncertainties in model parameters using sparse quadrature methods. We also use Karhunen–Loève expansions for efficient representation of uncertain renewable energy generation that follows the geographical and temporal correlations derived from historical data at each wind farm. Considering expected production cost, we demonstrate that the proposed approach can yield several orders of magnitude reduction in computational cost for solving stochastic economic dispatch relative to Monte Carlo sampling, for a given target error threshold.
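A minimal one-dimensional sketch of the non-intrusive projection step, assuming a hypothetical cost response of a single standard-normal input; the paper's setting is multi-dimensional with sparse quadrature, but the mean and variance fall out of the expansion coefficients the same way.

```python
import numpy as np
from math import factorial

# Probabilists' Gauss-Hermite rule (weight exp(-x^2/2)), normalized so the
# weights form a probability measure for xi ~ N(0, 1)
xi, wq = np.polynomial.hermite_e.hermegauss(12)
wq = wq / wq.sum()

def cost(x):
    # hypothetical stand-in for a production-cost response
    return np.exp(0.3 * x) + 0.5 * x**2

# Project onto Hermite polynomials He_k: c_k = E[cost(xi) He_k(xi)] / k!
P = 6
c = [np.sum(wq * cost(xi) * np.polynomial.hermite_e.hermeval(xi, np.eye(P + 1)[k]))
     / factorial(k) for k in range(P + 1)]

mean = c[0]                                  # PCE mean is the zeroth coefficient
var = sum(ck**2 * factorial(k) for k, ck in enumerate(c) if k > 0)  # PCE variance
```

Once the coefficients are in hand, any number of output statistics come essentially for free, which is the source of the speedup over resampling the dispatch model.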
Journal of Computational Physics
Computational singular perturbation (CSP) is a useful method for the analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found particular utility in chemical reaction systems with a large range of time scales at the continuum, deterministic level. CSP is not directly applicable, however, to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurate and efficient numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
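The deterministic kernel of CSP can be seen on a hypothetical two-variable linear stiff system: an eigendecomposition separates fast and slow modes, and after a few fast time scales the slow-mode projection reproduces the full solution. This is only the classical starting point, not the stochastic extension developed in the paper.

```python
import numpy as np

# Hypothetical linear stiff system dy/dt = A y with eigenvalues -1 (slow), -1000 (fast)
A = np.array([[-2.0, 1.0],
              [998.0, -999.0]])
vals, V = np.linalg.eig(A)
vals, V = vals.real, V.real
slow = np.argmax(vals)                     # least negative eigenvalue = slow mode

y0 = np.array([1.0, 0.0])
a = np.linalg.solve(V, y0)                 # modal amplitudes of the initial condition

def y_full(t):
    return V @ (a * np.exp(vals * t))      # exact solution, both modes retained

def y_slow(t):
    # CSP-style reduction: keep only the slow mode
    return V[:, slow] * a[slow] * np.exp(vals[slow] * t)

t = 0.05  # roughly 50 fast time scales (1/1000) after the initial transient
```

At t = 0 the slow projection misses the fast transient, but by t = 0.05 the fast mode has decayed to round-off and the reduced model is indistinguishable from the full one.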
International Journal for Numerical Methods in Fluids
In this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for. Copyright © 2016 John Wiley & Sons, Ltd.
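A toy illustration of one ingredient of such a framework, accounting for noise in the independent variables: a grid-based posterior for a single hypothetical Smagorinsky-type coefficient, with the input noise propagated into an inflated likelihood variance. All values are invented stand-ins; the paper's framework calibrates joint densities against DNS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linearized relation y = Cs * x between a filtered DNS
# quantity x and the modeled sub-grid response y
Cs_true = 0.16
x_true = np.linspace(0.5, 2.0, 30)
x_obs = x_true + rng.normal(0.0, 0.05, x_true.size)  # noisy independent variable
y_obs = Cs_true * x_true + rng.normal(0.0, 0.01, x_true.size)

# Grid posterior with a flat prior; input noise is propagated by inflating
# the output variance by (Cs * sigma_x)^2 for each candidate Cs
grid = np.linspace(0.05, 0.30, 501)[:, None]
sig2 = 0.01**2 + (grid * 0.05)**2
logpost = -0.5 * np.sum((y_obs - grid * x_obs)**2 / sig2 + np.log(sig2), axis=1)
post = np.exp(logpost - logpost.max())
post /= post.sum() * (grid[1, 0] - grid[0, 0])       # normalize the density
Cs_map = grid[np.argmax(logpost), 0]
```

Ignoring the input noise (dropping the `(grid * 0.05)**2` term) would overstate the confidence in the recovered coefficient, which is the practical motivation for the errors-in-variables treatment.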
10th U.S. National Combustion Meeting
The thermal decomposition of H2O2 is an important process in hydrocarbon combustion, playing a particularly crucial role in providing a source of radicals at high pressure, where it controls the third explosion limit in the H2-O2 system, and also serving as a branching reaction in intermediate-temperature hydrocarbon oxidation. As such, understanding the uncertainty in the rate expression for this reaction is crucial for predictive combustion computations. Raw experimental measurement data, with their associated noise and uncertainty, are typically unreported in most investigations of elementary reaction rates, making the direct derivation of the joint uncertainty structure of the parameters in rate expressions difficult. To overcome this, we employ a statistical inference procedure, relying on maximum entropy and approximate Bayesian computation methods and using a two-level nested Markov chain Monte Carlo algorithm, to arrive at a posterior density on rate parameters for a selected case of laser absorption measurements in a shock tube study, subject to the constraints imposed by the reported experimental statistics. The procedure constructs a set of H2O2 concentration decay profiles consistent with these reported statistics. These consistent data sets are then used to determine the joint posterior density on the rate parameters through straightforward Bayesian inference. More broadly, the method provides a framework for the replication and comparison of missing data from different experiments, based on reported statistics, for the generation of consensus rate expressions.
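The consistent-data idea, with the nested-MCMC machinery stripped down to plain rejection ABC: propose hypothetical noisy decay profiles, refit a rate from each, and keep only candidates whose refit reproduces the reported statistics. The reported rate, error bar, and noise level here are invented numbers, not values from the shock-tube study.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
k_rep, k_err = 2.0, 0.1        # hypothetical reported rate constant and error bar
sigma = 0.02                   # hypothetical measurement noise level

accepted = []
for _ in range(5000):
    # propose a candidate "true" rate and simulate one noisy decay profile
    k_cand = rng.normal(k_rep, 3.0 * k_err)
    y = np.exp(-k_cand * t) + rng.normal(0.0, sigma, t.size)
    # refit a rate from the hypothetical profile (log-linear least squares)
    mask = y > 0
    k_fit = -np.polyfit(t[mask], np.log(y[mask]), 1)[0]
    # ABC acceptance: the refit rate must be consistent with the reported statistics
    if abs(k_fit - k_rep) < k_err:
        accepted.append(k_cand)

posterior = np.array(accepted)  # accepted candidates approximate the posterior on k
```

The accepted profiles play the role of the "missing" raw data; in the paper each such data set then feeds a full Bayesian fit of the Arrhenius parameters rather than a single scalar rate.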
11th Asia-Pacific Conference on Combustion, ASPACC 2017
Prescribing uncertainty measures to rate expressions is crucial for performing useful predictive combustion computations. Raw experimental measurement data, with their associated noise and uncertainty, are typically unavailable for most reported investigations of elementary reaction rates, making the direct derivation of the desired joint uncertainty structure of the parameters in rate expressions difficult. To approximate this uncertainty structure we construct an inference procedure, relying on maximum entropy and approximate Bayesian computation methods and using a two-level nested Markov chain Monte Carlo algorithm, to arrive at a joint density on rate parameters and missing data. This method employs the reported context of a specific experiment to construct a set of hypothetical experimental data profiles consistent with the reported statistics of the data, in the form of error bars on rate constants at the experimental temperatures. Bayesian inference can then be performed using these consistent data sets as evidence to determine the joint posterior density on the rate parameters for any choice of chemical model. The method is also used to demonstrate the combination of missing data from different experiments for the generation of consensus rate expressions using these multiple sources of experimental evidence.
19th AIAA Non-Deterministic Approaches Conference, 2017
The development of scramjet engines is an important research area for advancing hypersonic and orbital flight. Progress towards optimal engine designs requires both accurate flow simulations and uncertainty quantification (UQ). However, performing UQ for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. We address these difficulties by applying UQ algorithms and numerical methods to the large eddy simulation of the HIFiRE scramjet configuration. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, helping reduce the stochastic dimension of the problem and discover sparse representations. Second, as models of different fidelity are available and inevitably used in the overall UQ assessment, a framework for quantifying and propagating the uncertainty due to model error is introduced. These methods are demonstrated on a non-reacting scramjet unit problem with a parameter space of up to 24 dimensions, using 2D and 3D geometries with static and dynamic treatments of the turbulence subgrid model.
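Variance-based global sensitivity analysis of the kind described above can be sketched with first-order Sobol indices and a pick-freeze (Saltelli-type) estimator. The three-parameter model below is a hypothetical surrogate with one dominant input, standing in for the actual LES response.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 200_000

def model(x):
    # hypothetical surrogate: input 0 dominates the output variance
    return 4.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2]**2

# Two independent sample blocks on the input hypercube
A = rng.uniform(-1.0, 1.0, (N, d))
B = rng.uniform(-1.0, 1.0, (N, d))
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))

# First-order Sobol indices via the Saltelli (2010) pick-freeze estimator
S = []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]          # replace only the i-th column of A by B's
    S.append(np.mean(fB * (model(AB) - fA)) / V)
```

Inputs with negligible indices can be frozen at nominal values, which is how the stochastic dimension of the scramjet problem is reduced before building sparse representations.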
2017 Fall Technical Meeting of the Western States Section of the Combustion Institute, WSSCI 2017
The reaction of OH with H2 is a crucial chain-propagating step in the H2-O2 system, making the specification of its rate, and its uncertainty, important for predicting the high-temperature combustion of hydrocarbons. In order to obtain an uncertain representation of this reaction rate in the absence of actual experimental data, we perform an inference procedure employing maximum entropy and approximate Bayesian computation methods to discover hypothetical data from a target shock-tube experiment designed to measure the reverse reaction rate. This method attempts to invert the fitting procedure from noisy measurement data to parameters, with associated uncertainty specifications, to arrive at candidate noisy data sets consistent with these reported parameters and their uncertainties. The uncertainty structure of the Arrhenius parameters is obtained by fitting each hypothetical data set in a Bayesian framework and pooling the resulting joint parameter posterior densities to arrive at a consensus density. We highlight the advantages of working with a data-centric representation of the experimental uncertainty with regard to model choice and consistency, and the ability to combine experimental evidence from multiple sources. Finally, we demonstrate the utility of knowledge of the joint Arrhenius parameter density for performing predictive modeling of combustion systems of interest.
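The pooling step can be sketched as follows: fit ln k = ln A − Ea/(RT) to each hypothetical data set by least squares and pool the resulting (ln A, Ea) samples, which exposes the strong correlation structure the abstract refers to. Temperatures, noise level, and parameter values below are hypothetical, and simple least-squares fits stand in for the full Bayesian fits.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314                                    # gas constant, J/(mol K)
T = np.array([1000.0, 1200.0, 1500.0, 1800.0, 2100.0])  # hypothetical temperatures
lnA_true, Ea_true = 20.0, 8.0e4              # hypothetical Arrhenius parameters

samples = []
for _ in range(2000):
    # one hypothetical data set: noisy ln k "measurements" at each temperature
    lnk = lnA_true - Ea_true / (R * T) + rng.normal(0.0, 0.05, T.size)
    # linear fit of ln k against 1/T recovers the Arrhenius parameters
    slope, intercept = np.polyfit(1.0 / T, lnk, 1)
    samples.append((intercept, -slope * R))  # (ln A, Ea) for this data set

samples = np.array(samples)
corr = np.corrcoef(samples[:, 0], samples[:, 1])[0, 1]  # lnA-Ea correlation
```

The pooled samples recover the nominal parameters, and the near-unity lnA–Ea correlation is exactly why a joint density, rather than independent error bars on A and Ea, is needed for predictive modeling.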