Integration of renewable power sources into grids remains an active research and development area, particularly for less developed renewable energy technologies such as wave energy converters (WECs). WECs are projected to have strong early market penetration for remote communities, which serve as natural microgrids. Hence, accurate wave predictions to manage the interactions of a WEC array with microgrids are especially important. Recently developed, low-cost wave measurement buoys allow for operational assimilation of wave data at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. The availability of real-time wave spectral components from low-cost wave measurement buoys allows for operational data assimilation with the ensemble Kalman filter technique, whereby measured wave conditions within the numerical wave forecast model domain are assimilated onto the combined set of internal and boundary grid points while taking into account model and observation error covariances. The updated model state and boundary conditions allow for more accurate wave characteristic predictions at the locations of interest. Initial deployment data indicated that measured wave data from one buoy that were assimilated into the wave modeling framework resulted in improved forecast skill for a case where a traditional numerical forecast model (e.g., Simulating WAves Nearshore, SWAN) did not represent the measured conditions well. On average, the wave power forecast error was reduced from 73% to 43% using the data assimilation modeling with real-time wave observations.
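The analysis step described above can be sketched compactly. Below is a minimal perturbed-observation ensemble Kalman filter update in Python (NumPy), assuming a linear observation operator H; the variable names and the simple gain computation are illustrative placeholders, not the framework's actual implementation.

```python
# Minimal perturbed-observation EnKF analysis step (illustrative sketch).
import numpy as np

def enkf_analysis(X, y, H, R, seed=0):
    """Update an ensemble of model states with observations.

    X : (n, N) ensemble of n state variables over N members (e.g., wave
        heights at interior and boundary grid points)
    y : (m,) observation vector (e.g., buoy wave heights)
    H : (m, n) linear observation operator mapping state to observations
    R : (m, m) observation error covariance
    """
    rng = np.random.default_rng(seed)
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
    HA = H @ A                                 # anomalies in observation space
    PHt = A @ HA.T / (N - 1)                   # state-observation covariance
    S = HA @ HA.T / (N - 1) + R                # innovation covariance
    K = PHt @ np.linalg.inv(S)                 # Kalman gain
    # Perturb observations so the analysis ensemble keeps the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                 # analysis ensemble
```

Because the state vector includes boundary grid points, the same gain matrix updates both the interior model state and the incoming boundary conditions, which is what allows the correction to persist through the forecast window.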
This project has developed models of variability of performance to enable robust design and certification. Material variability originating from heterogeneous microstructural features, such as grain and pore morphologies, has significant effects on component behavior and creates uncertainty in material response. The outcomes of this project are uncertainty quantification (UQ) enabled analysis of material variability effects on performance and, more generally, methods to evaluate the consequences of microstructural variability on material response. Current engineering material models typically do not incorporate microstructural variability explicitly; rather, functional forms are chosen based on intuition and parameters are selected to reflect mean behavior. Conversely, mesoscale models that capture the microstructural physics, and its inherent variability, are impractical to use at the engineering scale. Current efforts therefore ignore physical characteristics of systems that may be the predominant factors for quantifying system reliability. To address this gap, we have developed explicit connections between models of microstructural variability and component/system performance. Our focus on variability of mechanical response due to grain and pore distributions enabled us to fully probe these influences on performance and to develop a methodology to propagate input variability to output performance. This project is at the forefront of data science and material modeling. We adapted and extended progressive techniques in machine learning and uncertainty quantification to develop a new, physically based methodology that addresses the core issue of the Engineering Materials Reliability (EMR) research challenge: modeling the constitutive response of materials with significant inherent variability and length scales.
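As a minimal illustration of propagating input variability to output performance, the following Python sketch samples material parameters whose scatter stands in for grain/pore variability and pushes them through a simple response model; the power-law hardening form and the lognormal distributions are assumptions for illustration, not the project's calibrated models.

```python
# Monte Carlo propagation of microstructure-induced parameter variability
# to a performance output (illustrative sketch, hypothetical distributions).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Parameter scatter standing in for grain/pore microstructural variability
yield_stress = rng.lognormal(mean=np.log(300.0), sigma=0.08, size=N)  # MPa
hardening_K = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=N)   # MPa

def flow_stress(eps_p, sy, K, n=0.2):
    """Simple power-law hardening: sigma = sy + K * eps_p**n."""
    return sy + K * eps_p**n

# Propagate to a quantity of interest: flow stress at 5% plastic strain
qoi = flow_stress(0.05, yield_stress, hardening_K)
print(f"mean = {qoi.mean():.1f} MPa, std = {qoi.std():.1f} MPa, "
      f"5th-95th pct = [{np.percentile(qoi, 5):.1f}, "
      f"{np.percentile(qoi, 95):.1f}] MPa")
```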
The advent of fabrication techniques such as additive manufacturing has focused attention on the considerable variability of material response due to defects and other microstructural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. To account for material response variability through variations in physical parameters, we adapt a recent Bayesian embedded modeling error calibration technique. We use Bayesian model selection to determine the most plausible of a variety of plasticity models and the optimal embedding of parameter variability. To expedite model selection, we develop an adaptive importance-sampling-based numerical integration scheme to compute the Bayesian model evidence. We demonstrate that the new framework provides predictive realizations that are superior to those of more traditional approaches, and we show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.
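The model evidence that drives Bayesian model selection, Z = ∫ p(D|θ) p(θ) dθ, can be estimated by importance sampling. The sketch below uses a fixed Gaussian proposal centered on a posterior-mode estimate, a simplification of the adaptive scheme described above; the log-likelihood and log-prior callables are placeholders for the plasticity model terms.

```python
# Importance-sampling estimate of the (log) Bayesian model evidence
# (illustrative sketch with a fixed Gaussian proposal, not the adaptive scheme).
import numpy as np
from scipy import stats

def log_evidence(log_like, log_prior, theta_map, cov, n=20_000, seed=0):
    """log Z = log ∫ p(D|θ) p(θ) dθ via samples from a Gaussian proposal q."""
    q = stats.multivariate_normal(mean=theta_map, cov=cov)
    thetas = q.rvs(size=n, random_state=seed)
    log_w = (np.array([log_like(t) + log_prior(t) for t in thetas])
             - q.logpdf(thetas))               # importance weights (log)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # log-sum-exp for stability
```

Comparing log_evidence across candidate plasticity models (and across choices of which parameters carry embedded variability) ranks them by plausibility given the data.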
Integration of renewable power sources into electrical grids remains an active research and development area, particularly for less developed renewable energy technologies, such as wave energy converters (WECs). High spatio-temporal resolution and accurate wave forecasts at a potential WEC (or WEC array) lease area are needed to improve WEC power prediction and to facilitate grid integration, particularly for microgrid locations. The availability of high-quality measurement data from recently developed low-cost buoys allows for operational assimilation of wave data into forecast models at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. Spoondrift wave measurement buoys were deployed off the coast of Yakutat, Alaska, a microgrid site with high wave energy resource potential. A wave modeling framework with data assimilation was developed and assessed; assimilation was most effective when the incoming forecasted boundary conditions did not represent the observations well. For that case, assimilation of the wave height data using the ensemble Kalman filter reduced the wave height forecast normalized root mean square error from 27% to an average of 16% over a 12-hour period, which in turn reduced the wave power forecast error from 73% to 43%. In summary, the use of the low-cost wave buoy data assimilated into the wave modeling framework improved the forecast skill and will provide a useful development tool for the integration of WECs into electrical grids.
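For reference, the deep-water wave power flux commonly used to convert forecast wave parameters into a WEC resource estimate, and one common definition of the normalized RMSE skill metric, are sketched below; the normalization by the observation mean is an assumption, since the paper's exact convention is not restated here.

```python
# Deep-water wave power flux and a normalized RMSE skill metric
# (illustrative sketch; NRMSE normalization conventions vary).
import numpy as np

RHO, G = 1025.0, 9.81  # seawater density (kg/m^3), gravity (m/s^2)

def wave_power_flux(Hs, Te):
    """Deep-water energy flux per unit crest length (W/m), from significant
    wave height Hs (m) and energy period Te (s): P = rho g^2 Hs^2 Te / (64 pi).
    """
    return RHO * G**2 * Hs**2 * Te / (64.0 * np.pi)

def nrmse(forecast, observed):
    """RMSE normalized by the mean of the observations."""
    forecast, observed = np.asarray(forecast), np.asarray(observed)
    return np.sqrt(np.mean((forecast - observed) ** 2)) / observed.mean()
```

The quadratic dependence of power on Hs is why a modest wave height forecast error amplifies into a much larger wave power error, consistent with the 27%-to-73% gap reported above.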
This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics, approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on the Arrhenius parameters of the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature in estimating relevant integrals needed for data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realizations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.
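The simplest member of the approximate Bayesian computation family, rejection ABC, conveys the core idea: draw parameters from the prior, simulate the summary statistic, and keep draws that land within a tolerance of the reported value. The priors, tolerance, and single-statistic forward model below are illustrative stand-ins, not the paper's shock-tube model.

```python
# Rejection-ABC sketch for Arrhenius parameters (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(logA, Ea):
    """Placeholder forward model returning the summary statistic, here the
    log rate coefficient ln k = ln A - Ea/(R T) at a reference T = 1500 K."""
    R, T = 8.314, 1500.0
    return logA - Ea / (R * T)

reported, tol = 7.0, 0.05     # reported summary statistic and ABC tolerance
accepted = []
for _ in range(100_000):
    logA = rng.uniform(8.0, 14.0)   # prior on log pre-exponential factor
    Ea = rng.uniform(5e4, 9e4)      # prior on activation energy (J/mol)
    if abs(simulate_summary(logA, Ea) - reported) < tol:
        accepted.append((logA, Ea))
posterior = np.array(accepted)      # samples from the ABC posterior
```

The paper's approach is considerably more elaborate (maximum entropy constraints, MCMC, and surrogate-accelerated likelihoods), but every variant shares this accept-if-consistent-with-summary-statistics structure.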
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for nonlinear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel, and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain-decomposition-based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multilevel iteration. Parallel sparse matrix-vector operations are used to reduce the floating-point operations and memory requirements. Numerical and parallel scalability of these algorithms is presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
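A global-level solve of the kind described above reduces to a preconditioned Krylov iteration. The sketch below shows a preconditioned conjugate gradient loop with a block-Jacobi preconditioner assembled from per-subdomain diagonal blocks, a deliberately simple serial stand-in for the multilevel parallel preconditioners used in the actual solver.

```python
# Preconditioned conjugate gradient with a block-Jacobi (per-subdomain)
# preconditioner (illustrative serial sketch of the solver structure).
import numpy as np

def block_jacobi(A, blocks):
    """Return y -> M^{-1} y, where M is the block-diagonal part of A over
    the index sets in `blocks` (one index array per subdomain)."""
    invs = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in blocks]
    def apply_Minv(r):
        y = np.zeros_like(r)
        for idx, Binv in invs:
            y[idx] = Binv @ r[idx]
        return y
    return apply_Minv

def pcg(A, b, apply_Minv, tol=1e-8, maxit=500):
    """Solve A x = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

In the intrusive polynomial chaos setting, each "unknown" carries a vector of chaos coefficients, so both the matrix-vector products and the subdomain block solves grow with the number of random variables, which is precisely the stochastic scalability dimension investigated above.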
We investigate the feasibility of constructing a data-driven distance metric for use in null-hypothesis testing in the context of arms-control treaty verification. The distance metric is used to test the hypothesis that the available data are representative of a certain object, as opposed to the binary-classification tasks studied previously. The metric, being of strictly quadratic form, is essentially computed using projections of the data onto a set of optimal vectors. These projections can be accumulated in list mode. The relatively low number of projections hampers the possible reconstruction of the object and thus access to sensitive information. The projection vectors that channelize the data are optimal in capturing the Mahalanobis squared distance of the data associated with a given object under varying nuisance parameters. The vectors are also chosen such that the resulting metric is insensitive to the difference between the trusted object and another object that is deemed to contain sensitive information. Data used in this study were generated using the GEANT4 toolkit to model gamma transport with a Monte Carlo method. For numerical illustration, the methodology is applied to synthetic data obtained using custom models for plutonium inspection objects. The resulting metric, based on a relatively low number of channels, shows moderate agreement with the Mahalanobis distance metric for the trusted object while enabling a capability to obscure sensitive information.
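A simple version of such a projection-based quadratic metric can be built from the leading eigenvectors of the trusted object's data covariance; the actual construction additionally optimizes the vectors for insensitivity to the sensitive-information difference, which this sketch omits.

```python
# Low-rank quadratic (Mahalanobis-style) metric from k projection channels
# (illustrative sketch; omits the insensitivity constraint described above).
import numpy as np

def fit_projection_metric(X, k):
    """X: (N, d) training data for the trusted object under varying nuisance
    parameters; k: number of projection channels to retain."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    V, lam = evecs[:, -k:], evals[-k:]       # top-k eigenpairs
    def metric(x):
        proj = V.T @ (x - mu)                # k channel values (list-mode sums)
        return np.sum(proj**2 / lam)         # quadratic form in the channels
    return metric
```

Because only the k channel values are ever formed, the full d-dimensional measurement never needs to be stored, which is what limits reconstruction of the underlying object.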
Our study details the derivation of the nonlinear equations of motion for the axial, biaxial bending, and torsional vibrations of an aeroelastic cantilever undergoing rigid body (pitch) rotation at the base. Primary attention is focused on the geometric nonlinearities of the system, whereby the aeroelastic load is modeled by the theory of linear quasi-steady aerodynamics. This modeling effort is intended to mimic the wind-tunnel experimental setup at the Royal Military College of Canada. While the derivation closely follows the work of Hodges and Dowell [1] for rotor blades, this aeroelastic system contains new inertial terms that stem from kinematics fundamentally different from those exhibited by helicopter or wind turbine blades. Using Hamilton's principle, a set of coupled nonlinear partial differential equations (PDEs) and an ordinary differential equation (ODE) are derived that describe the coupled axial-bending-bending-torsion-pitch motion of the aeroelastic cantilever with the pitch rotation. A finite-dimensional approximation of the coupled system of PDEs is obtained using the Galerkin projection, leading to a coupled system of ODEs. Subsequently, these nonlinear ODEs are solved numerically using the built-in MATLAB implicit ODE solver, and the associated numerical results are compared with those obtained using Houbolt's method. It is demonstrated that the system undergoes coalescence flutter, leading to a limit cycle oscillation (LCO) due to coupling between the rigid body pitching mode and the flexible mode arising from the flapwise bending motion.
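Houbolt's method, used here as the comparison integrator, is an implicit backward-difference scheme for structural dynamics. A Python sketch for a linear(ized) system M u'' + C u' + K u = f(t) follows; the matrices, forcing, and seed states are placeholders rather than the cantilever's actual Galerkin matrices.

```python
# Houbolt's implicit time-integration scheme for M u'' + C u' + K u = f(t)
# (illustrative sketch; system matrices and forcing are placeholders).
import numpy as np

def houbolt(M, C, K, f, u0, u1, u2, dt, nsteps):
    """March nsteps given three seed states u0, u1, u2 at t0, t0+dt, t0+2dt
    (obtained, e.g., from a self-starting scheme). Uses the Houbolt
    differences  u''_{n+1} ~ (2u_{n+1} - 5u_n + 4u_{n-1} - u_{n-2})/dt^2
    and          u'_{n+1} ~ (11u_{n+1} - 18u_n + 9u_{n-1} - 2u_{n-2})/(6 dt).
    """
    LHS = 2.0 / dt**2 * M + 11.0 / (6.0 * dt) * C + K
    hist = [u0, u1, u2]
    for step in range(3, nsteps):
        um3, um2, um1 = hist[-3], hist[-2], hist[-1]   # u_{n-2}, u_{n-1}, u_n
        rhs = (f(step * dt)
               + M @ (5 * um1 - 4 * um2 + um3) / dt**2
               + C @ (18 * um1 - 9 * um2 + 2 * um3) / (6 * dt))
        hist.append(np.linalg.solve(LHS, rhs))
    return np.array(hist)
```

For the nonlinear ODEs of the cantilever, the same structure applies with LHS and rhs re-evaluated (or iterated) at each step; the linear version above suffices to show why the scheme is unconditionally stable and convenient for stiff aeroelastic systems.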
The thermal decomposition of H2O2 is an important process in hydrocarbon combustion, playing a particularly crucial role in providing a source of radicals at high pressure, where it controls the third explosion limit in the H2-O2 system, and also acting as a branching reaction in intermediate-temperature hydrocarbon oxidation. As such, understanding the uncertainty in the rate expression for this reaction is crucial for predictive combustion computations. Raw experimental measurement data, and their associated noise and uncertainty, are typically unreported in most investigations of elementary reaction rates, making the direct derivation of the joint uncertainty structure of the parameters in rate expressions difficult. To overcome this, we employ a statistical inference procedure, relying on maximum entropy and approximate Bayesian computation methods and using a two-level nested Markov chain Monte Carlo algorithm, to arrive at a posterior density on rate parameters for a selected case of laser absorption measurements in a shock-tube study, subject to the constraints imposed by the reported experimental statistics. The procedure constructs a set of H2O2 concentration decay profiles consistent with these reported statistics. These consistent data sets are then used to determine the joint posterior density on the rate parameters through straightforward Bayesian inference. Broadly, the method also provides a framework for the replication and comparison of missing data from different experiments, based on reported statistics, for the generation of consensus rate expressions.
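The inner level of the nested sampler is ordinary Bayesian inference on rate parameters given one reconstructed data set. A minimal random-walk Metropolis sketch follows, with a first-order decay model, flat priors, synthetic data, and proposal scales that are all illustrative stand-ins for a reconstructed H2O2 absorption profile.

```python
# Random-walk Metropolis sketch for Arrhenius parameters (log A, Ea) given
# one concentration-decay data set (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-3, 50)                    # time grid (s)
R, T, c0, sigma = 8.314, 1000.0, 1e-3, 1e-5       # gas const., K, mol/cm^3, noise

def model(logA, Ea):
    k = np.exp(logA) * np.exp(-Ea / (R * T))      # Arrhenius rate (1/s)
    return c0 * np.exp(-k * t)                    # first-order decay profile

# Synthetic stand-in for one reconstructed data realization
data = model(12.0, 5.0e4) + rng.normal(0.0, sigma, t.size)

def log_post(theta):
    r = data - model(*theta)
    return -0.5 * np.sum(r**2) / sigma**2         # Gaussian likelihood, flat prior

theta = np.array([11.5, 4.5e4])                   # initial guess
chain, lp = [theta], log_post(theta)
for _ in range(20_000):
    prop = theta + rng.normal(0.0, [0.05, 500.0]) # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
```

The outer level of the nested algorithm wraps this inference inside a second chain over candidate data reconstructions, keeping only those consistent with the reported experimental statistics.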
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data are in the form of summary statistics, namely nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using approximate Bayesian computation methods and a Markov chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
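The Gauss-Hermite trick for moment computation amounts to a change of variables from the Gaussian proposal to the standard Hermite weight. A minimal sketch, assuming a scalar parameter θ ~ N(μ, σ²):

```python
# Gauss-Hermite quadrature for E[f(theta)] with theta ~ N(mu, sigma^2),
# via the substitution theta = mu + sqrt(2)*sigma*x (illustrative sketch).
import numpy as np

def gauss_hermite_mean(f, mu, sigma, order=12):
    x, w = np.polynomial.hermite.hermgauss(order)  # nodes/weights for e^{-x^2}
    theta = mu + np.sqrt(2.0) * sigma * x
    return np.sum(w * f(theta)) / np.sqrt(np.pi)

# Example: E[exp(theta)] for theta ~ N(0, 0.5^2) is exp(sigma^2/2) ~ 1.1331
print(gauss_hermite_mean(np.exp, 0.0, 0.5))
```

With a dozen nodes replacing thousands of Monte Carlo samples per moment, this is where the orders-of-magnitude speedup in data likelihood evaluation comes from; the multidimensional case uses tensorized nodes over the Gaussian proposal.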